De Sá Teixeira, Nuno Alexandre
2016-09-01
The memory for the final position of a moving object which suddenly disappears has been found to be displaced forward, in the direction of motion, and downwards, in the direction of gravity. These phenomena were termed, respectively, Representational Momentum and Representational Gravity. Although both these and similar effects have been systematically linked with the functioning of internal representations of physical variables (e.g. momentum and gravity), serious doubts have been raised about a cognitively based interpretation, favouring instead a major role of oculomotor and perceptual factors which, more often than not, were left uncontrolled or even ignored. The present work aims to determine the degree to which Representational Momentum and Representational Gravity are epiphenomenal to smooth pursuit eye movements. Observers were required to indicate the offset locations of targets moving along systematically varied directions after a variable imposed retention interval. Each participant completed the task twice under different eye-movement instructions: gaze was either constrained or left free to track the targets. A Fourier decomposition analysis of the localization responses was used to disentangle the two phenomena. The results show unambiguously that constraining eye movements eliminates the harmonic components that index Representational Momentum, but has no effect on Representational Gravity or its time course. These outcomes offer promising prospects for the study of the visual representation of gravity and its neurological substrates.
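For illustration only (this sketch is not from the study): one plausible way to implement such a Fourier decomposition of localization errors over motion direction in Python. The direction grid and error values are placeholders.

import numpy as np

rng = np.random.default_rng(0)
directions = np.deg2rad(np.arange(0, 360, 30))   # assumed grid of motion directions
errors = rng.normal(size=directions.size)        # placeholder signed localization errors

def fourier_coeffs(theta, y, order=2):
    # Cosine/sine coefficients of the error profile over motion direction.
    coeffs = {0: (y.mean(), 0.0)}
    for k in range(1, order + 1):
        a_k = 2.0 / len(y) * np.sum(y * np.cos(k * theta))
        b_k = 2.0 / len(y) * np.sum(y * np.sin(k * theta))
        coeffs[k] = (a_k, b_k)
    return coeffs

print(fourier_coeffs(directions, errors))
# In analyses of this kind, different harmonics are read as indices of direction-locked
# (forward) versus gravity-locked (downward) displacement components.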
Eye-movements intervening between two successive sounds disrupt comparisons of auditory location.
Pavani, Francesco; Husain, Masud; Driver, Jon
2008-08-01
Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here, we studied, instead, whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array) or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d') for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location.
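As a side note, not part of the abstract: the sensitivity measure d' mentioned above is conventionally computed from hit and false-alarm rates. A minimal Python sketch with made-up counts follows; the same/different design used in the study would normally call for a model-specific correction, which is omitted here.

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Standard yes/no signal-detection sensitivity from hit and false-alarm rates.
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(40, 10, 15, 35))   # illustrative counts, not the study's data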
Initial Scene Representations Facilitate Eye Movement Guidance in Visual Search
ERIC Educational Resources Information Center
Castelhano, Monica S.; Henderson, John M.
2007-01-01
What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a…
Higher Level Visual Cortex Represents Retinotopic, Not Spatiotopic, Object Location
Kanwisher, Nancy
2012-01-01
The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex—important for stable object recognition and action—contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a “searchlight” analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates.
Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash
2014-01-01
How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
The relation between body semantics and spatial body representations.
van Elk, Michiel; Blanke, Olaf
2011-11-01
The present study addressed the relation between body semantics (i.e. semantic knowledge about the human body) and spatial body representations, by presenting participants with word pairs, one below the other, referring to body parts. The spatial position of the word pairs could be congruent (e.g. EYE / MOUTH) or incongruent (MOUTH / EYE) with respect to the spatial position of the words' referents. In addition, the spatial distance between the words' referents was varied, resulting in word pairs referring to body parts that are close (e.g. EYE / MOUTH) or far in space (e.g. EYE / FOOT). A spatial congruency effect was observed when subjects made an iconicity judgment (Experiments 2 and 3) but not when making a semantic relatedness judgment (Experiment 1). In addition, when making a semantic relatedness judgment (Experiment 1) reaction times increased with increased distance between the body parts but when making an iconicity judgment (Experiments 2 and 3) reaction times decreased with increased distance. These findings suggest that the processing of body-semantics results in the activation of a detailed visuo-spatial body representation that is modulated by the specific task requirements. We discuss these new data with respect to theories of embodied cognition and body semantics.
Chang, Hung-Cheng; Grossberg, Stephen; Cao, Yongqiang
2014-01-01
The Where’s Waldo problem concerns how individuals can rapidly learn to search a scene to detect, attend, recognize, and look at a valued target object in it. This article develops the ARTSCAN Search neural model to clarify how brain mechanisms across the What and Where cortical streams are coordinated to solve the Where’s Waldo problem. The What stream learns positionally-invariant object representations, whereas the Where stream controls positionally-selective spatial and action representations. The model overcomes deficiencies of these computationally complementary properties through What and Where stream interactions. Where stream processes of spatial attention and predictive eye movement control modulate What stream processes whereby multiple view- and positionally-specific object categories are learned and associatively linked to view- and positionally-invariant object categories through bottom-up and attentive top-down interactions. Gain fields control the coordinate transformations that enable spatial attention and predictive eye movements to carry out this role. What stream cognitive-emotional learning processes enable the focusing of motivated attention upon the invariant object categories of desired objects. What stream cognitive names or motivational drives can prime a view- and positionally-invariant object category of a desired target object. A volitional signal can convert these primes into top-down activations that can, in turn, prime What stream view- and positionally-specific categories. When it also receives bottom-up activation from a target, such a positionally-specific category can cause an attentional shift in the Where stream to the positional representation of the target, and an eye movement can then be elicited to foveate it. These processes describe interactions among brain regions that include visual cortex, parietal cortex, inferotemporal cortex, prefrontal cortex (PFC), amygdala, basal ganglia (BG), and superior colliculus (SC).
Extraocular muscle proprioception and eye position.
Pettorossi, V E; Ferraresi, A; Draicchio, F; Errico, P; Santarelli, R; Manni, E
1995-03-01
In the lamb, acute unilateral section of the ophthalmic branch induced occasional oscillations of the resting position of the ipsilateral eye and misalignment of the horizontal vestibulo-ocular reflex (HVOR) with respect to the stimulus. Additional electrolytic lesion of the cells innervating the proprioceptors of the medial rectus muscle, or of the lateral rectus muscle in the contralateral semilunar ganglion, provoked a 4-7 degree consensual eye deviation towards and away from the lesioned side, respectively. The optokinetic beating field was similarly deviated. Under these experimental conditions, the HVOR showed enhanced gain and marked misalignment in both eyes. Therefore, the selective suppression of muscular proprioceptive input deviated both eyes in the direction opposite to the muscle whose ganglionic proprioceptive representation had been destroyed.
Barta, András; Horváth, Gábor
2003-12-01
The apparent position, size, and shape of aerial objects viewed binocularly from water change as a result of the refraction of light at the water surface. Earlier studies of the refraction-distorted structure of the aerial binocular visual field of underwater observers were restricted to either vertically or horizontally oriented eyes. Here we calculate the position of the binocular image point of an aerial object point viewed by two arbitrarily positioned underwater eyes when the water surface is flat. Assuming that binocular image fusion is performed by appropriate vergent eye movements to bring the object's image onto the foveae, the structure of the aerial binocular visual field is computed and visualized as a function of the relative positions of the eyes. We also analyze two erroneous representations of the underwater imaging of aerial objects that have occurred in the literature. It is demonstrated that the structure of the aerial binocular visual field of underwater observers distorted by refraction is more complex than has been thought previously.
NASA Astrophysics Data System (ADS)
Chen, Yanjie; Zhang, Quanqi; Qi, Jie; Sun, Yeying; Zhong, Qiwang; Wang, Xubo; Wang, Zhigang; Li, Shuo; Li, Chunmei
2009-02-01
Flatfish (flounder) migrate one eye during metamorphosis, changing the bilaterally symmetrical body into an asymmetrical one; some individuals become sinistral while others become dextral. However, the mechanism behind eye position has not been well understood. In this research, hybrids between Japanese flounder (♀) and stone flounder (♂) showed mixed eye locations, with both dextral and sinistral types, and thus provide good material for studying eye migration. mRNAs from pro-metamorphosis sinistral and dextral hybrid larvae were screened with classical differential display RT-PCR (DD-RT-PCR) and representational difference analysis of cDNA (cDNA-RDA); 30 and 47 putative fragments were isolated, respectively. The cDNA fragments of the creatine kinase and trypsinogen 2 precursor genes isolated by cDNA-RDA exhibited eye-position-related expression patterns during metamorphosis. However, none of the fragments was shown to be specifically related to flatfish eye position. Therefore, further studies and more sensitive gene isolation methods are needed to solve this problem.
Crossing the “Uncanny Valley”: adaptation to cartoon faces can influence perception of human faces
Chen, Haiwen; Russell, Richard; Nakayama, Ken; Livingstone, Margaret
2013-01-01
Adaptation can shift what individuals identify to be a prototypical or attractive face. Past work suggests that low-level shape adaptation can affect high-level face processing but is position dependent. Adaptation to distorted images of faces can also affect face processing, but only within sub-categories of faces, such as gender, age, and race/ethnicity. This study assesses whether there is a face representation that is specific to faces (as opposed to all shapes) but general to all kinds of faces (as opposed to subcategories) by testing whether adaptation to one type of face can affect perception of another. Participants were shown cartoon videos containing faces with abnormally large eyes. Using animated videos allowed us to simulate naturalistic exposure and avoid positional shape adaptation. Results suggest that adaptation to cartoon faces with large eyes shifts preferences for human faces toward larger eyes, supporting the existence of general face representations.
[Eyes test performance among unaffected mothers of patients with schizophrenia].
Birdal, Seval; Yıldırım, Ejder Akgün; Arslan Delice, Mehtap; Yavuz, Kasım Fatih; Kurt, Erhan
2015-01-01
Theory of Mind (ToM) deficit is a widely accepted feature of schizophrenia. A number of studies have examined ToM deficits in first-degree relatives of schizophrenic patients as genetic markers of schizophrenia. Examination of mentalization capacity among mothers of schizophrenia patients may improve our understanding of theory of mind impairments in schizophrenia. The aim of this study is to use the Reading the Mind in the Eyes test to examine theory of mind capacity among mothers of schizophrenic patients. Performance on the "Reading the Mind in the Eyes" test (Eyes Test) was compared between mothers of schizophrenic patients (n=47) and mothers whose children have no psychotic mental illness (n=47). Test results were analyzed based on the categorization of test items as positive, negative, and neutral. Mothers of schizophrenic patients displayed poorer performance on the Eyes Test compared to mothers in the control group, particularly in the recognition of positive and neutral mental representations. There was no statistically significant difference in the recognition of negative mental representations between mothers of patients and the control group. The results of this study indicate that mothers of schizophrenic patients differ in some theory of mind patterns. Theory of mind may be an important developmental or endophenotypic factor in the pathogenesis of schizophrenia and should be further evaluated using other biological markers.
NASA Astrophysics Data System (ADS)
Reutterer, Bernd; Traxler, Lukas; Bayer, Natascha; Drauschke, Andreas
2017-04-01
To evaluate the performance of intraocular lenses (IOLs) used to treat cataract, an optomechanical eye model was developed. One of the most crucial components is the IOL holder, which should guarantee a physiological representation of the capsular bag and a stable position during measurement sequences. Individual holders are required because every IOL has different geometric parameters. A method for obtaining the correct holder dimensions for a specific IOL was developed and tested by verifying the position of the IOL before and after a measurement sequence. Results of telecentric measurements and MTF measurements show that the IOL position does not change during the displacement sequence induced by the stepper motors of the eye model.
Updating visual memory across eye movements for ocular and arm motor control.
Thompson, Aidan A; Henriques, Denise Y P
2008-11-01
Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.
Olsen, Rosanna K; Sebanayagam, Vinoja; Lee, Yunjo; Moscovitch, Morris; Grady, Cheryl L; Rosenbaum, R Shayna; Ryan, Jennifer D
2016-12-01
There is consistent agreement regarding the positive relationship between cumulative eye movement sampling and subsequent recognition, but the role of the hippocampus in this sampling behavior is currently unknown. It is also unclear whether the eye movement repetition effect, i.e., fewer fixations to repeated, compared to novel, stimuli, depends on explicit recognition and/or an intact hippocampal system. We investigated the relationship between cumulative sampling, the eye movement repetition effect, subsequent memory, and the hippocampal system. Eye movements were monitored in a developmental amnesic case (H.C.), whose hippocampal system is compromised, and in a group of typically developing participants while they studied single faces across multiple blocks. The faces were studied from the same viewpoint or different viewpoints and were subsequently tested with the same or different viewpoint. Our previous work suggested that hippocampal representations support explicit recognition for information that changes viewpoint across repetitions (Olsen et al., 2015). Here, examination of eye movements during encoding indicated that greater cumulative sampling was associated with better memory among controls. Increased sampling, however, was not associated with better explicit memory in H.C., suggesting that increased sampling only improves memory when the hippocampal system is intact. The magnitude of the repetition effect was not correlated with cumulative sampling, nor was it related reliably to subsequent recognition. These findings indicate that eye movements collect information that can be used to strengthen memory representations that are later available for conscious remembering, whereas eye movement repetition effects reflect a processing change due to experience that does not necessarily reflect a memory representation that is available for conscious appraisal. Lastly, H.C. demonstrated a repetition effect for fixed viewpoint faces but not for variable viewpoint faces, which suggests that repetition effects are differentially supported by neocortical and hippocampal systems, depending upon the representational nature of the underlying memory trace.
NASA Astrophysics Data System (ADS)
Klein, P.; Viiri, J.; Mozaffari, S.; Dengel, A.; Kuhn, J.
2018-06-01
Relating mathematical concepts to graphical representations is a challenging task for students. In this paper, we introduce two visual strategies to qualitatively interpret the divergence of graphical vector field representations. One strategy is based on the graphical interpretation of partial derivatives, while the other is based on the flux concept. We test the effectiveness of both strategies in an instruction-based eye-tracking study with N = 41 physics majors. We found that students' performance improved when both strategies were introduced (74% correct) instead of only one strategy (64% correct), and students performed best when they were free to choose between the two strategies (88% correct). This finding supports the idea of introducing multiple representations of a physical concept to foster student understanding. Relevant eye-tracking measures demonstrate that both strategies imply different visual processing of the vector field plots, therefore reflecting conceptual differences between the strategies. Advanced analysis methods further reveal significant differences in eye movements between the best and worst performing students. For instance, the best students performed predominantly horizontal and vertical saccades, indicating correct interpretation of partial derivatives. They also focused on smaller regions when they balanced positive and negative flux. This mixed-method research leads to new insights into student visual processing of vector field representations, highlights the advantages and limitations of eye-tracking methodologies in this context, and discusses implications for teaching and for future research. The introduction of saccadic direction analysis expands traditional methods, and shows the potential to discover new insights into student understanding and learning difficulties.
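Purely as an illustration of the two strategies named above (not material from the study), the following Python sketch computes the divergence of a sampled 2D vector field both from partial derivatives and from the net flux through a small box around a point; the example field is an assumption.

import numpy as np

# Example field F = (x, y), whose true divergence is 2 everywhere.
x = np.linspace(-1, 1, 101)
y = np.linspace(-1, 1, 101)
X, Y = np.meshgrid(x, y, indexing="xy")
Fx, Fy = X, Y

# Strategy 1 (partial derivatives): div F = dFx/dx + dFy/dy.
div_deriv = np.gradient(Fx, x, axis=1) + np.gradient(Fy, y, axis=0)

# Strategy 2 (flux): net outward flux through a small box around a point, divided by its area.
i = j = 50
h = x[1] - x[0]
side = 2 * h
flux = (Fx[j, i + 1] - Fx[j, i - 1]) * side + (Fy[j + 1, i] - Fy[j - 1, i]) * side
div_flux = flux / side**2

print(div_deriv[j, i], div_flux)   # both should be close to 2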
Parietal stimulation destabilizes spatial updating across saccadic eye movements.
Morris, Adam P; Chambers, Christopher D; Mattingley, Jason B
2007-05-22
Saccadic eye movements cause sudden and global shifts in the retinal image. Rather than causing confusion, however, eye movements expand our sense of space and detail. In macaques, a stable representation of space is embodied by neural populations in intraparietal cortex that redistribute activity with each saccade to compensate for eye displacement, but little is known about equivalent updating mechanisms in humans. We combined noninvasive cortical stimulation with a double-step saccade task to examine the contribution of two human intraparietal areas to transsaccadic spatial updating. Right hemisphere stimulation over the posterior termination of the intraparietal sulcus (IPSp) broadened and shifted the distribution of second-saccade endpoints, but only when the first-saccade was directed into the contralateral hemifield. By interleaving trials with and without cortical stimulation, we show that the shift in endpoints was caused by an enduring effect of stimulation on neural functioning (e.g., modulation of neuronal gain). By varying the onset time of stimulation, we show that the representation of space in IPSp is updated immediately after the first-saccade. In contrast, stimulation of an adjacent IPS site had no such effects on second-saccades. These experiments suggest that stimulation of IPSp distorts an eye position or displacement signal that updates the representation of space at the completion of a saccade. Such sensory-motor integration in IPSp is crucial for the ongoing control of action, and may contribute to visual stability across saccades.
The Dorsal Visual System Predicts Future and Remembers Past Eye Position
Morris, Adam P.; Bremmer, Frank; Krekelberg, Bart
2016-01-01
Eye movements are essential to primate vision but introduce potentially disruptive displacements of the retinal image. To maintain stable vision, the brain is thought to rely on neurons that carry both visual signals and information about the current direction of gaze in their firing rates. We have shown previously that these neurons provide an accurate representation of eye position during fixation, but whether they are updated fast enough during saccadic eye movements to support real-time vision remains controversial. Here we show that not only do these neurons carry a fast and accurate eye-position signal, but also that they support in parallel a range of time-lagged variants, including predictive and postdictive signals. We recorded extracellular activity in four areas of the macaque dorsal visual cortex during a saccade task, including the lateral and ventral intraparietal areas (LIP, VIP), and the middle temporal (MT) and medial superior temporal (MST) areas. As reported previously, neurons showed tonic eye-position-related activity during fixation. In addition, they showed a variety of transient changes in activity around the time of saccades, including relative suppression, enhancement, and pre-saccadic bursts for one saccade direction over another. We show that a hypothetical neuron that pools this rich population activity through a weighted sum can produce an output that mimics the true spatiotemporal dynamics of the eye. Further, with different pooling weights, this downstream eye position signal (EPS) could be updated long before (<100 ms) or after (<200 ms) an eye movement. The results suggest a flexible coding scheme in which downstream computations have access to past, current, and future eye positions simultaneously, providing a basis for visual stability and delay-free visually-guided behavior.
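Not from the paper, only to make the "weighted sum with a time lag" idea concrete: a minimal Python sketch that fits a pooled linear readout of placeholder population activity to an eye-position trace at different lags. All data and lag values here are assumptions.

import numpy as np

rng = np.random.default_rng(1)
# Placeholder data: population firing rates (time x neurons) and an eye-position trace.
rates = rng.poisson(5.0, size=(1000, 50)).astype(float)
eye = np.cumsum(rng.normal(0.0, 0.5, size=1000))

def lagged_readout_r(rates, eye, lag):
    # Fit a weighted sum of population activity at time t to eye position at time t + lag
    # (lag > 0: predictive readout; lag < 0: postdictive readout); return the fit correlation.
    if lag >= 0:
        X, y = rates[:len(eye) - lag], eye[lag:]
    else:
        X, y = rates[-lag:], eye[:lag]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.corrcoef(X @ w, y)[0, 1]

for lag in (-20, 0, 20):
    print(lag, lagged_readout_r(rates, eye, lag))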
Centripetal force draws the eyes, not memory of the target, toward the center.
Kerzel, Dirk
2003-05-01
Many observers believe that a target will continue on a curved trajectory after exiting a spiral tube. Similarly, when observers were asked to localize the final position of a target moving on a circular orbit, displacement of the judged position in the direction of forward motion ("representational momentum") and toward the center of the orbit was observed (cf. T. L. Hubbard, 1996). The present study shows that memory displacement of targets on a circular orbit is affected by eye movements. Forward displacement was larger with ocular pursuit of the target, whereas inward displacement was larger with motionless eyes. The results challenge an account attributing forward and inward displacement to mental analogues of momentum and centripetal force, respectively.
Martarelli, Corinna S; Mast, Fred W; Hartmann, Matthias
2017-01-01
Time is grounded in various ways, and previous studies point to a "mental time line" with the past associated with the left and the future with the right side. In this study, we investigated whether spontaneous eye movements on a blank screen would follow a mental time line during encoding, free recall, and recognition of past and future items. In all three stages of processing, gaze position was more rightward during future items than during past items. Moreover, horizontal gaze position during encoding predicted horizontal gaze position during free recall and recognition. We conclude that the mental time line and the stored gaze position during encoding assist memory retrieval of past versus future items. Our findings highlight the spatial nature of temporal representations.
Eye movement-invariant representations in the human visual system.
Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L
2017-01-01
During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
Horváth, Gábor; Buchta, Krisztián; Varjú, Dezsö
2003-06-01
It is a well-known phenomenon that when we look into the water with two aerial eyes, both the apparent position and the apparent shape of underwater objects are different from the real ones because of refraction at the water surface. Earlier studies of the refraction-distorted structure of the underwater binocular visual field of aerial observers were restricted to either vertically or horizontally oriented eyes. We investigate a generalized version of this problem: We calculate the position of the binocular image point of an underwater object point viewed by two arbitrarily positioned aerial eyes, including oblique orientations of the eyes relative to the flat water surface. Assuming that binocular image fusion is performed by appropriate vergent eye movements to bring the object's image onto the foveas, the structure of the underwater binocular visual field is computed and visualized in different ways as a function of the relative positions of the eyes. We show that a revision of certain earlier treatments of the aerial imaging of underwater objects is necessary. We analyze and correct some widespread erroneous or incomplete representations of this classical geometric optical problem that occur in different textbooks. Improving the theory of aerial binocular imaging of underwater objects, we demonstrate that the structure of the underwater binocular visual field of aerial observers distorted by refraction is more complex than has been thought previously.
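For orientation only (the paper treats the general case of obliquely oriented, binocular eyes, not this special case): in the familiar paraxial approximation for near-vertical viewing through a flat surface, the apparent depth of an underwater object scales with the ratio of refractive indices. A three-line Python sketch with assumed values:

n_air, n_water = 1.0, 1.33
real_depth = 2.0                                  # metres below the flat surface (assumed)
apparent_depth = real_depth * n_air / n_water     # ~1.5 m; the object appears closer to the surface
print(apparent_depth)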
Interaction between gaze and visual and proprioceptive position judgements.
Fiehler, Katja; Rösler, Frank; Henriques, Denise Y P
2010-06-01
There is considerable evidence that targets for action are represented in a dynamic gaze-centered frame of reference, such that each gaze shift requires an internal updating of the target. Here, we investigated the effect of eye movements on the spatial representation of targets used for position judgements. Participants had their hand passively placed to a location, and then judged whether this location was left or right of a remembered visual or remembered proprioceptive target, while gaze direction was varied. Estimates of position of the remembered targets relative to the unseen position of the hand were assessed with an adaptive psychophysical procedure. These positional judgements significantly varied relative to gaze for both remembered visual and remembered proprioceptive targets. Our results suggest that relative target positions may also be represented in eye-centered coordinates. This implies similar spatial reference frames for action control and space perception when positions are coded relative to the hand.
Thaler, Lore; Todd, James T
2009-04-01
Two experiments are reported that were designed to measure the accuracy and reliability of both visually guided hand movements (Exp. 1) and perceptual matching judgments (Exp. 2). The specific procedure for informing subjects of the required response on each trial was manipulated so that some tasks could only be performed using an allocentric representation of the visual target; others could be performed using either an allocentric or hand-centered representation; still others could be performed based on an allocentric, hand-centered or head/eye-centered representation. Both head/eye and hand centered representations are egocentric because they specify visual coordinates with respect to the subject. The results reveal that accuracy and reliability of both motor and perceptual responses are highest when subjects direct their response towards a visible target location, which allows them to rely on a representation of the target in head/eye-centered coordinates. Systematic changes in averages and standard deviations of responses are observed when subjects cannot direct their response towards a visible target location, but have to represent target distance and direction in either hand-centered or allocentric visual coordinates instead. Subjects' motor and perceptual performance agree quantitatively well. These results strongly suggest that subjects process head/eye-centered representations differently from hand-centered or allocentric representations, but that they process visual information for motor actions and perceptual judgments together.
Using eye movements to explore mental representations of space.
Fourtassi, Maryam; Rode, Gilles; Pisella, Laure
2017-06-01
Visual mental imagery is a cognitive experience characterised by the activation of the mental representation of an object or scene in the absence of the corresponding stimulus. According to the analogical theory, mental representations have a pictorial nature that preserves the spatial characteristics of the environment that is mentally represented. This cognitive experience shares many similarities with the experience of visual perception, including eye movements. The mental visualisation of a scene is accompanied by eye movements that reflect the spatial content of the mental image, and which can mirror the deformations of this mental image with respect to the real image, such as asymmetries or size reduction. The present article offers a concise overview of the main theories explaining the interactions between eye movements and mental representations, with some examples of the studies supporting them. It also aims to explain how ocular-tracking could be a useful tool in exploring the dynamics of spatial mental representations, especially in pathological situations where these representations can be altered, for instance in unilateral spatial neglect.
Maximum entropy perception-action space: a Bayesian model of eye movement selection
NASA Astrophysics Data System (ADS)
Colas, Francis; Bessière, Pierre; Girard, Benoît
2011-03-01
In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on the representation. We compare different decision models based on different features of the representation and we show that taking into account uncertainty helps predict the eye movements of subjects recorded in a psychophysics experiment. Finally, based on experimental data, we postulate that the complex logarithmic mapping has a functional relevance, as the density of objects in this space is more uniform than expected. This may indicate that the representation space and control strategies are such that the object density is of maximum entropy.
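Illustration only (the offset parameter and object positions below are assumptions, not the model's values): a small Python sketch applies a complex logarithmic mapping to retinal positions and computes the entropy of the resulting object density, the quantity the final sentence refers to.

import numpy as np

rng = np.random.default_rng(0)
# Placeholder object positions in retinal coordinates (degrees); not the experiment's data.
x = rng.uniform(-20, 20, 500)
y = rng.uniform(-20, 20, 500)

a = 0.5                      # assumed foveal offset parameter of the complex-log map
z = x + 1j * y
w = np.log(z + a)            # complex logarithmic ("cortical-like") mapping

# Entropy of the object density in the mapped space; a flatter (higher-entropy) density
# is the property the abstract describes as functionally relevant.
hist, _, _ = np.histogram2d(w.real, w.imag, bins=10)
p = hist[hist > 0]
p = p / p.sum()
print(-(p * np.log(p)).sum())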
Reading Mathematics Representations: An Eye-Tracking Study
ERIC Educational Resources Information Center
Andrá, Chiara; Lindström, Paulina; Arzarello, Ferdinando; Holmqvist, Kenneth; Robutti, Ornella; Sabena, Cristina
2015-01-01
We use eye tracking as a method to examine how different mathematical representations of the same mathematical object are attended to by students. The results of this study show that there is a meaningful difference in the eye movements between formulas and graphs. This difference can be understood in terms of the cultural and social shaping of…
ERIC Educational Resources Information Center
Luke, Steven G.; Henderson, John M.; Ferreira, Fernanda
2015-01-01
The lexical quality hypothesis (Perfetti & Hart, 2002) suggests that skilled reading requires high-quality lexical representations. In children, these representations are still developing, and it has been suggested that this development leads to more adult-like eye-movement behavior during the reading of connected text. To test this idea, a…
McIlwain, J T
1990-03-01
Saccades evoked electrically from the deep layers of the superior colliculus have been examined in the alert cat with its head fixed. Amplitudes of the vertical and horizontal components varied linearly with the starting position of the eye. The slopes of the linear-regression lines provided an estimate of the sensitivity of these components to initial eye position. In observations on 29 sites in nine cats, the vertical and horizontal components of saccades evoked from a given site were rarely influenced to the same degree by initial eye position. For most sites, the horizontal component was more sensitive than the vertical component. Sensitivities of vertical and horizontal components were lowest near the representations of the horizontal and vertical meridians, respectively, of the collicular retinotopic map, but otherwise exhibited no systematic retinotopic dependence. Estimates of component amplitudes for saccades evoked from the center of the oculomotor range also diverged significantly from those predicted from the retinotopic map. The results of this and previous studies indicate that electrical stimulation of the cat's superior colliculus cannot yield a unique oculomotor map or one that is in register everywhere with the sensory retinotopic map. Several features of these observations suggest that electrical stimulation of the colliculus produces faulty activation of a saccadic control system that computes target position with respect to the head and that small and large saccades are controlled differently.
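Not the authors' analysis code, only an illustration of the regression described above: the slope of a linear fit of evoked-saccade component amplitude against initial eye position estimates that component's sensitivity to eye position. The numbers below are made up.

import numpy as np

# Illustrative values (deg): initial horizontal eye position and the horizontal
# component amplitude of the electrically evoked saccade.
init_pos = np.array([-15.0, -10.0, -5.0, 0.0, 5.0, 10.0, 15.0])
h_amp = np.array([12.0, 10.5, 9.0, 7.5, 6.0, 4.5, 3.0])

slope, intercept = np.polyfit(init_pos, h_amp, 1)
print(slope)   # the slope estimates the component's sensitivity to initial eye position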
Rapid formation of spatiotopic representations as revealed by inhibition of return.
Pertzov, Yoni; Zohary, Ehud; Avidan, Galia
2010-06-30
Inhibition of return (IOR), a performance decrement for stimuli appearing at recently cued locations, occurs when the target and cue share the same screen position. This is in contrast to cue-based attention facilitation effects that were recently suggested to be mapped in a retinotopic reference frame, the prevailing representation throughout early visual processing stages. Here, we investigate the dynamics of IOR in both reference frames, using a modified cued-location saccadic reaction time task with an intervening saccade between cue and target presentation. Thus, on different trials, the target was present either at the same retinotopic location as the cue, or at the same screen position (e.g., spatiotopic location). IOR was primarily found for targets appearing at the same spatiotopic position as the initial cue, when the cue and target were presented at the same hemifield. This suggests that there is restricted information transfer of cue position across the two hemispheres. Moreover, the effect was maximal when the target was presented 10 ms after the intervening saccade ended and was attenuated in longer delays. In our case, therefore, the representation of previously attended locations (as revealed by IOR) is not remapped slowly after the execution of a saccade. Rather, either a retinotopic representation is remapped rapidly, adjacent to the end of the saccade (using a prospective motor command), or the positions of the cue and target are encoded in a spatiotopic reference frame, regardless of eye position. Spatial attention can therefore be allocated to target positions defined in extraretinal coordinates.
The Influence of Different Representations on Solving Concentration Problems at Elementary School
NASA Astrophysics Data System (ADS)
Liu, Chia-Ju; Shen, Ming-Hsun
2011-10-01
This study investigated the students' learning process of the concept of concentration at the elementary school level in Taiwan. The influence of different representational types on the process of proportional reasoning was also explored. The participants included nineteen third-grade and eighteen fifth-grade students. Eye-tracking technology was used in conducting the experiment. The materials were adapted from Noelting's (1980a) "orange juice test" experiment. All problems on concentration included three stages (the intuitive, the concrete operational, and the formal operational), and each problem was displayed in iconic and symbolic representations. The data were collected through eye-tracking technology and post-test interviews. The results showed that the representational types influenced students' solving of concentration problems. Furthermore, the data on eye movement indicated that students used different strategies or rules to solve concentration problems at the different stages of the problems with different representational types. This study is intended to contribute to the understanding of elementary school students' problem-solving strategies and the usability of eye-tracking technology in related studies.
Fazl, Arash; Grossberg, Stephen; Mingolla, Ennio
2009-02-01
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.
Eye Contact Affects Object Representation in 9-Month-Old Infants.
Okumura, Yuko; Kobayashi, Tessei; Itakura, Shoji
2016-01-01
Social cues in interaction with others enable infants to extract useful information from their environment. Although previous research has shown that infants process and retain different information about an object depending on the presence of social cues, the effect of eye contact as an isolated independent variable has not been investigated. The present study investigated how eye contact affects infants' object processing. Nine-month-olds engaged in two types of social interactions with an experimenter. When the experimenter showed an object without eye contact, the infants processed and remembered both the object's location and its identity. In contrast, when the experimenter showed the object while making eye contact with the infant, the infant preferentially processed object's identity but not its location. Such effects might assist infants to selectively attend to useful information. Our findings revealed that 9-month-olds' object representations are modulated in accordance with the context, thus elucidating the function of eye contact for infants' object representation.
Frank, Cornelia; Land, William M.; Schack, Thomas
2016-01-01
Despite the wealth of research on differences between experts and novices with respect to their perceptual-cognitive background (e.g., mental representations, gaze behavior), little is known about the change of these perceptual-cognitive components over the course of motor learning. In the present study, changes in one’s mental representation, quiet eye behavior, and outcome performance were examined over the course of skill acquisition as it related to physical and mental practice. Novices (N = 45) were assigned to one of three conditions: physical practice, combined physical plus mental practice, and no practice. Participants in the practice groups trained on a golf putting task over the course of 3 days, either by repeatedly executing the putt, or by both executing and imaging the putt. Findings revealed improvements in putting performance across both practice conditions. Regarding the perceptual-cognitive changes, participants practicing mentally and physically revealed longer quiet eye durations as well as more elaborate representation structures in comparison to the control group, while this was not the case for participants who underwent physical practice only. Thus, in the present study, combined mental and physical practice led to both formation of mental representations in long-term memory and longer quiet eye durations. Interestingly, the length of the quiet eye was directly related to the degree of elaborateness of the underlying mental representation, supporting the notion that the quiet eye reflects cognitive processing. This study is the first to show that the quiet eye becomes longer in novices practicing a motor action. Moreover, the findings of the present study suggest that perceptual and cognitive adaptations co-occur over the course of motor learning.
Chen, Ying; Byrne, Patrick; Crawford, J Douglas
2011-01-01
Allocentric cues can be used to encode locations in visuospatial memory, but it is not known how and when these representations are converted into egocentric commands for behaviour. Here, we tested the influence of different memory intervals on reach performance toward targets defined in either egocentric or allocentric coordinates, and then compared this to performance in a task where subjects were implicitly free to choose when to convert from allocentric to egocentric representations. Reach and eye positions were measured using Optotrak and Eyelink Systems, respectively, in fourteen subjects. Our results confirm that egocentric representations degrade over a delay of several seconds, whereas allocentric representations remained relatively stable over the same time scale. Moreover, when subjects were free to choose, they converted allocentric representations into egocentric representations as soon as possible, despite the apparent cost in reach precision in our experimental paradigm. This suggests that humans convert allocentric representations into egocentric commands at the first opportunity, perhaps to optimize motor noise and movement timing in real-world conditions.
Deep generative learning of location-invariant visual word recognition.
Di Bono, Maria Grazia; Zorzi, Marco
2013-01-01
It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words, which was the model's learning objective, is largely based on letter-level information.
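To make the "linear decoding from the deepest hidden layer" step concrete, here is a hedged Python sketch with placeholder activations and labels; it is not the authors' pipeline, and the held-out-location scheme and sklearn readout are assumptions about how such a test might be run.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_items, n_units = 500, 200
acts = rng.normal(size=(n_items, n_units))     # placeholder deepest-layer activations
words = rng.integers(0, 30, size=n_items)      # word-identity labels
locs = rng.integers(0, 5, size=n_items)        # retinal location of each presentation

# Fit a linear readout on four locations and test on the held-out fifth one;
# above-chance accuracy on the held-out location would indicate a location-invariant word code.
train = locs != 4
test = locs == 4
clf = LogisticRegression(max_iter=1000).fit(acts[train], words[train])
print(clf.score(acts[test], words[test]))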
Cao, Yongqiang; Grossberg, Stephen; Markowitz, Jeffrey
2011-12-01
All primates depend for their survival on being able to rapidly learn about and recognize objects. Objects may be visually detected at multiple positions, sizes, and viewpoints. How does the brain rapidly learn and recognize objects while scanning a scene with eye movements, without causing a combinatorial explosion in the number of cells that are needed? How does the brain avoid the problem of erroneously classifying parts of different objects together at the same or different positions in a visual scene? In monkeys and humans, a key area for such invariant object category learning and recognition is the inferotemporal cortex (IT). A neural model is proposed to explain how spatial and object attention coordinate the ability of IT to learn invariant category representations of objects that are seen at multiple positions, sizes, and viewpoints. The model clarifies how interactions within a hierarchy of processing stages in the visual brain accomplish this. These stages include the retina, lateral geniculate nucleus, and cortical areas V1, V2, V4, and IT in the brain's What cortical stream, as they interact with spatial attention processes within the parietal cortex of the Where cortical stream. The model builds upon the ARTSCAN model, which proposed how view-invariant object representations are generated. The positional ARTSCAN (pARTSCAN) model proposes how the following additional processes in the What cortical processing stream also enable position-invariant object representations to be learned: IT cells with persistent activity, and a combination of normalizing object category competition and a view-to-object learning law which together ensure that unambiguous views have a larger effect on object recognition than ambiguous views. The model explains how such invariant learning can be fooled when monkeys, or other primates, are presented with an object that is swapped with another object during eye movements to foveate the original object. The swapping procedure is predicted to prevent the reset of spatial attention, which would otherwise keep the representations of multiple objects from being combined by learning. Li and DiCarlo (2008) have presented neurophysiological data from monkeys showing how unsupervised natural experience in a target swapping experiment can rapidly alter object representations in IT. The model quantitatively simulates the swapping data by showing how the swapping procedure fools the spatial attention mechanism. More generally, the model provides a unifying framework, and testable predictions in both monkeys and humans, for understanding object learning data using neurophysiological methods in monkeys, and spatial attention, episodic learning, and memory retrieval data using functional imaging methods in humans.
NASA Astrophysics Data System (ADS)
Zawadzki, Robert J.; Rowe, T. Scott; Fuller, Alfred R.; Hamann, Bernd; Werner, John S.
2010-02-01
An accurate solid eye model (with volumetric retinal morphology) has many applications in the field of ophthalmology, including evaluation of ophthalmic instruments and optometry/ophthalmology training. We present a method that uses volumetric OCT retinal data sets to produce an anatomically correct representation of three-dimensional (3D) retinal layers. This information is exported to a laser scanning system to re-create, within a solid eye model, the retinal morphology of the eye used in OCT testing. The solid optical model eye is constructed from PMMA acrylic, with equivalent optical power to that of the human eye (~58D). Additionally, we tested a water bath eye model from Eyetech Ltd. with a customized retina consisting of five layers of ~60 μm thick biaxial polypropylene film and hot melt rubber adhesive.
The embodiment of emotional words in a second language: An eye-movement study.
Sheikh, Naveed A; Titone, Debra
2016-01-01
The hypothesis that word representations are emotionally impoverished in a second language (L2) has variable support. However, this hypothesis has only been tested using tasks that present words in isolation or that require laboratory-specific decisions. Here, we recorded eye movements for 34 bilinguals who read sentences in their L2 with no goal other than comprehension, and compared them to 43 first language readers taken from our prior study. Positive words were read more quickly than neutral words in the L2 across first-pass reading time measures. However, this emotional advantage was absent for negative words for the earliest measures. Moreover, negative words but not positive words were influenced by concreteness, frequency and L2 proficiency in a manner similar to neutral words. Taken together, the findings suggest that only negative words are at risk of emotional disembodiment during L2 reading, perhaps because a positivity bias in L2 experiences ensures that positive words are emotionally grounded.
Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm
NASA Astrophysics Data System (ADS)
Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.
2003-10-01
We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of the model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. The Zernike polynomial representation of the interfaces allows quantitative wave aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the algorithm performance. The system allows us to measure intra-ocular distances.
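The core geometric operation behind a Snell's-law interface correction can be sketched as follows. The vector form of Snell's law and the index values are standard optics, but this is a minimal illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): refract a unit ray direction d at a
# surface with unit normal n, going from refractive index n1 to n2.
import numpy as np

def refract(d, n, n1, n2):
    """Vector form of Snell's law; d and n must be unit vectors."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    cos_i = -np.dot(n, d)                      # cosine of the incidence angle
    r = n1 / n2
    sin2_t = r**2 * (1.0 - cos_i**2)           # sin^2 of the transmission angle
    if sin2_t > 1.0:
        return None                            # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d + (r * cos_i - cos_t) * n     # refracted unit direction

# Example: ray entering from air (n1 = 1.0) into the aqueous (~1.336),
# hitting a surface whose normal is tilted relative to the beam.
d = np.array([0.0, np.sin(np.radians(10)), -np.cos(np.radians(10))])
normal = np.array([0.0, 0.0, 1.0])
print(refract(d, normal, 1.0, 1.336))
```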
How Young Children View Mathematical Representations: A Study Using Eye-Tracking Technology
ERIC Educational Resources Information Center
Bolden, David; Barmby, Patrick; Raine, Stephanie; Gardner, Matthew
2015-01-01
Background: It has been shown that mathematical representations can aid children's understanding of mathematical concepts but that children can sometimes have difficulty in interpreting them correctly. New advances in eye-tracking technology can help in this respect because it allows data to be gathered concerning children's focus of attention and…
Structural Correlates of Reading the Mind in the Eyes in Autism Spectrum Disorder.
Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Yoshimura, Sayaka; Sawada, Reiko; Kubota, Yasutaka; Sakihama, Morimitsu; Toichi, Motomi
2017-01-01
Behavioral studies have shown that individuals with autism spectrum disorder (ASD) have impaired ability to read the mind in the eyes. Although this impairment is central to their social malfunctioning, its structural neural correlates remain unclear. To investigate this issue, we assessed Reading the Mind in the Eyes Test, revised version (Eyes Test) and acquired structural magnetic resonance images in adults with high-functioning ASD ( n = 19) and age-, sex- and intelligence quotient-matched typically developing (TD) controls ( n = 19). On the behavioral level, the Eyes Test scores were lower in the ASD group than in the control group. On the neural level, an interaction between group and Eyes Test score was found in the left temporoparietal junction (TPJ). A positive association between the Eyes Test score and gray matter volume of this region was evident in the control group, but not in the ASD group. This finding suggests that the failure to develop appropriate structural neural representations in the TPJ may underlie the impaired ability of individuals with ASD to read the mind in the eyes. These behavioral and neural findings provide support for the theories that impairments in processing eyes and the ability to infer others' mental states are the core symptoms of ASD, and that atypical features in the social brain network underlie such impairments.
Art in the eye of the beholder: the perception of art during monocular viewing.
Finney, Glen Raymond; Heilman, Kenneth M
2008-03-01
To explore whether monocular viewing affects judgment of art. Each superior colliculus receives optic nerve fibers primarily from the contralateral eye, and visual input to each colliculus activates the ipsilateral hemisphere. In previous studies, monocular viewing influenced performance on visual-spatial and verbal memory tasks. Eight college-educated subjects, 6 men and 2 women, monocularly viewed 10 paintings with the right eye and another 10 with the left. Subjects had not previously seen the paintings. Each time, 5 paintings were abstract expressionist and 5 were impressionist. The orders of eye viewing and painting viewed were pseudorandomized and counterbalanced. Subjects rated on a 1 to 10 scale 4 qualities of the paintings: representation, aesthetics (beauty), novelty, and closure (completeness). Paintings in the abstract expressionist style showed a significant difference in the rating of novelty; the paintings were rated more novel when viewed with the left eye than with the right eye. There was a trend for rating paintings as having more closure when viewing with the right eye than with the left. Impressionist paintings showed no differences. Monocular viewing influences artistic judgments, with novelty rated higher when viewing with the left eye. Asymmetric projections from each eye and hemispheric specialization are posited to explain these differences.
ERIC Educational Resources Information Center
Jian, Yu-Cin; Wu, Chao-Jung
2015-01-01
We investigated strategies used by readers when reading a science article with a diagram and assessed whether semantic and spatial representations were constructed while reading the diagram. Seventy-one undergraduate participants read a scientific article while tracking their eye movements and then completed a reading comprehension test. Our…
NASA Astrophysics Data System (ADS)
Huang, Alex S.; Belghith, Akram; Dastiridou, Anna; Chopra, Vikas; Zangwill, Linda M.; Weinreb, Robert N.
2017-06-01
The purpose was to create a three-dimensional (3-D) model of circumferential aqueous humor outflow (AHO) in a living human eye with an automated detection algorithm for Schlemm's canal (SC) and first-order collector channels (CC) applied to spectral-domain optical coherence tomography (SD-OCT). Anterior segment SD-OCT scans from a subject were acquired circumferentially around the limbus. A Bayesian Ridge method was used to approximate the location of the SC on infrared confocal laser scanning ophthalmoscopic images with a cross multiplication tool developed to initiate SC/CC detection automated through a fuzzy hidden Markov Chain approach. Automatic segmentation of SC and initial CC's was manually confirmed by two masked graders. Outflow pathways detected by the segmentation algorithm were reconstructed into a 3-D representation of AHO. Overall, only <1% of images (5114 total B-scans) were ungradable. Automatic segmentation algorithm performed well with SC detection 98.3% of the time and <0.1% false positive detection compared to expert grader consensus. CC was detected 84.2% of the time with 1.4% false positive detection. 3-D representation of AHO pathways demonstrated variably thicker and thinner SC with some clear CC roots. Circumferential (360 deg), automated, and validated AHO detection of angle structures in the living human eye with reconstruction was possible.
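As a rough illustration of the Bayesian Ridge step mentioned above, the sketch below fits a smooth circumferential trend to candidate Schlemm's canal depths with scikit-learn's BayesianRidge. The simulated depths and the periodic feature set are assumptions, not the published pipeline.

```python
# Illustrative sketch only: smooth a noisy circumferential series of candidate
# Schlemm's canal depths with Bayesian Ridge regression.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(1)
angle = np.linspace(0, 2 * np.pi, 180)                              # position around the limbus
depth = 800 + 60 * np.sin(angle) + rng.normal(0, 15, angle.size)    # noisy SC depth (um), invented

# Periodic (Fourier-style) features keep the fit smooth around 360 degrees.
X = np.column_stack([np.sin(angle), np.cos(angle),
                     np.sin(2 * angle), np.cos(2 * angle)])
model = BayesianRidge().fit(X, depth)
smoothed = model.predict(X)
print(f"residual SD after smoothing: {np.std(depth - smoothed):.1f} um")
```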
Joint representation of translational and rotational components of optic flow in parietal cortex
Sunkara, Adhira; DeAngelis, Gregory C.; Angelaki, Dora E.
2016-01-01
Terrestrial navigation naturally involves translations within the horizontal plane and eye rotations about a vertical (yaw) axis to track and fixate targets of interest. Neurons in the macaque ventral intraparietal (VIP) area are known to represent heading (the direction of self-translation) from optic flow in a manner that is tolerant to rotational visual cues generated during pursuit eye movements. Previous studies have also reported that eye rotations modulate the response gain of heading tuning curves in VIP neurons. We tested the hypothesis that VIP neurons simultaneously represent both heading and horizontal (yaw) eye rotation velocity by measuring heading tuning curves for a range of rotational velocities of either real or simulated eye movements. Three findings support the hypothesis of a joint representation. First, we show that rotation velocity selectivity based on gain modulations of visual heading tuning is similar to that measured during pure rotations. Second, gain modulations of heading tuning are similar for self-generated eye rotations and visually simulated rotations, indicating that the representation of rotation velocity in VIP is multimodal, driven by both visual and extraretinal signals. Third, we show that roughly one-half of VIP neurons jointly represent heading and rotation velocity in a multiplicatively separable manner. These results provide the first evidence, to our knowledge, for a joint representation of translation direction and rotation velocity in parietal cortex and show that rotation velocity can be represented based on visual cues, even in the absence of efference copy signals. PMID:27095846
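One common way to test whether a joint heading-by-rotation tuning matrix is multiplicatively separable is to ask how close it is to rank one. The sketch below does this with a singular value decomposition on simulated tuning data; the tuning shapes and gain field are invented for illustration and are not the authors' analysis.

```python
# Sketch (assumed data): a multiplicatively separable tuning matrix is rank 1,
# so the first singular value should capture nearly all of its variance.
import numpy as np

rng = np.random.default_rng(2)
headings = np.linspace(-180, 180, 16)        # deg
rotations = np.linspace(-20, 20, 5)          # deg/s

heading_tuning = np.exp(-0.5 * ((headings - 30) / 45) ** 2)   # assumed heading tuning
rotation_gain = 1.0 + 0.03 * rotations                        # assumed rotation gain field
tuning = np.outer(heading_tuning, rotation_gain)              # separable by construction
tuning += rng.normal(0, 0.02, tuning.shape)                   # measurement noise

s = np.linalg.svd(tuning, compute_uv=False)
separability_index = s[0] ** 2 / np.sum(s ** 2)
print(f"fraction of variance in the rank-1 component: {separability_index:.3f}")
```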
Poirier, Frédéric J A M; Faubert, Jocelyn
2012-06-22
Facial expressions are important for human communication. Face perception studies often measure the impact of major degradation (e.g., noise, inversion, short presentations, masking, alterations) on natural expression recognition performance. Here, we introduce a novel face perception technique using rich and undegraded stimuli. Participants modified faces to create optimal representations of given expressions. Using sliders, participants adjusted 53 face components (37 of them dynamic), covering head, eye, eyebrow, mouth, and nose shape and position. Data were collected from six participants across 10 conditions (six emotions + pain + gender + neutral). Some expressions had unique features (e.g., frown for anger, upward-curved mouth for happiness), whereas others had shared features (e.g., open eyes and mouth for surprise and fear). Happiness was different from other emotions. Surprise was different from other emotions except fear. Weighted-sum morphing provides acceptable gender-neutral and dynamic stimuli. Many features were correlated, including (1) head size with internal feature sizes as related to gender, (2) internal feature scaling, and (3) eyebrow height and eye openness as related to surprise and fear. These findings demonstrate the method's validity for measuring optimal facial expressions, which we argue is a more direct measure of their internal representations.
NASA Astrophysics Data System (ADS)
Dong, Weihua; Liao, Hua
2016-06-01
Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influences of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload compared to the traditional symbolic 2D maps remains unknown. This study aims to explore whether the photorealistic 3D representation can facilitate processes of map reading and navigation in digital environments, using a lab-based eye tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users' eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective, less efficient, and required a higher cognitive workload for map reading than those using the 2D map. However, participants using the 3D representation performed more efficiently in self-localization and orientation at the complex decision points. The empirical results can be helpful for improving the usability of pedestrian navigation maps in future designs.
Kerzel, Dirk
2003-05-01
Observers' judgments of the final position of a moving target are typically shifted in the direction of implied motion ("representational momentum"). The role of attention is unclear: visual attention may be necessary to maintain or halt target displacement. When attention was captured by irrelevant distractors presented during the retention interval, forward displacement after implied target motion disappeared, suggesting that attention may be necessary to maintain mental extrapolation of target motion. In a further corroborative experiment, the deployment of attention was measured after a sequence of implied motion, and faster responses were observed to stimuli appearing in the direction of motion. Thus, attention may guide the mental extrapolation of target motion. Additionally, eye movements were measured during stimulus presentation and retention interval. The results showed that forward displacement with implied motion does not depend on eye movements. Differences between implied and smooth motion are discussed with respect to recent neurophysiological findings.
Experimental Test of Spatial Updating Models for Monkey Eye-Head Gaze Shifts
Van Grootel, Tom J.; Van der Willigen, Robert F.; Van Opstal, A. John
2012-01-01
How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static), or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye- and head positions rather than relative eye- and head displacements. PMID:23118883
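The logic that rules out a purely retinal scheme in the double-step task can be made concrete with a small numerical example (all values invented): reusing the stored retinal vector of the second flash misses the target by exactly the first gaze displacement, whereas subtracting an accurate gaze-displacement feedback signal lands on target.

```python
# Numerical sketch of two spatial-updating schemes in a double-step task.
import numpy as np

gaze_start = np.array([0.0, 0.0])
target1 = np.array([20.0, 0.0])      # deg, first flash
target2 = np.array([10.0, 15.0])     # deg, second flash (seen before gaze moves)

retinal_t2 = target2 - gaze_start    # what the retina encoded at flash time
gaze_after_first = target1           # assume an accurate first gaze shift

# Retina-only model: reuse the stored retinal vector as the second command.
second_shift_retinal_only = retinal_t2
# Feedback-updating model: subtract the actual first gaze displacement.
second_shift_updated = retinal_t2 - (gaze_after_first - gaze_start)

landing_retinal_only = gaze_after_first + second_shift_retinal_only
landing_updated = gaze_after_first + second_shift_updated
print("retina-only landing error:", landing_retinal_only - target2)  # [20.  0.]
print("updated landing error:   ", landing_updated - target2)        # [0. 0.]
```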
Monocular focal retinal lesions induce short-term topographic plasticity in adult cat visual cortex.
Calford, M B; Schmid, L M; Rosa, M G
1999-01-01
Electrophysiological recording in primary visual cortex (V1) was performed both prior to and in the hours immediately following the creation of a discrete retinal lesion in one eye with an argon laser. Lesion projection zones (LPZs; 21-64 mm2) were defined in the visual cortex by mapping the extent of the lesion onto the topographic representation in cortex. There was no effect on neuronal responses to the unlesioned eye or on its topographic representation. However, within hours of producing the retinal lesion, receptive fields obtained from stimulation of the lesioned eye were displaced onto areas surrounding the scotoma and were enlarged compared with the corresponding field obtained through the normal eye. The proportion of such responsive recording sites increased during the experiment such that 8-11 hours post-lesion, 56% of recording sites displayed neurons responsive to the lesioned eye. This is an equivalent proportion to that previously reported with long-term recovery (three weeks to three months). Responsive neurons were evident as far as 2.5 mm inside the border of the LPZ. The reorganization of the lesioned eye representation produced binocular disparities as great as 15 degrees, suggesting interactions between sites in V1 up to 5.5 mm apart. PMID:10189714
Huang, Alex S; Belghith, Akram; Dastiridou, Anna; Chopra, Vikas; Zangwill, Linda M; Weinreb, Robert N
2017-06-01
The purpose was to create a three-dimensional (3-D) model of circumferential aqueous humor outflow (AHO) in a living human eye with an automated detection algorithm for Schlemm’s canal (SC) and first-order collector channels (CC) applied to spectral-domain optical coherence tomography (SD-OCT). Anterior segment SD-OCT scans from a subject were acquired circumferentially around the limbus. A Bayesian Ridge method was used to approximate the location of the SC on infrared confocal laser scanning ophthalmoscopic images with a cross multiplication tool developed to initiate SC/CC detection automated through a fuzzy hidden Markov Chain approach. Automatic segmentation of SC and initial CC’s was manually confirmed by two masked graders. Outflow pathways detected by the segmentation algorithm were reconstructed into a 3-D representation of AHO. Overall, only <1% of images (5114 total B-scans) were ungradable. Automatic segmentation algorithm performed well with SC detection 98.3% of the time and <0.1% false positive detection compared to expert grader consensus. CC was detected 84.2% of the time with 1.4% false positive detection. 3-D representation of AHO pathways demonstrated variably thicker and thinner SC with some clear CC roots. Circumferential (360 deg), automated, and validated AHO detection of angle structures in the living human eye with reconstruction was possible.
Retinal image registration for eye movement estimation.
Kolar, Radim; Tornow, Ralf P; Odstrcilik, Jan
2015-01-01
This paper describes a novel methodology for eye fixation measurement using a unique video-ophthalmoscope setup and an advanced image registration approach. A representation of the eye movements via the Poincaré plot is also introduced. The properties, limitations and perspectives of this methodology are discussed.
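The Poincaré-plot representation mentioned above can be sketched as follows, using simulated fixation data and the usual SD1/SD2 scatter descriptors. This is an illustration of the plotting idea only, not the authors' registration pipeline.

```python
# A minimal sketch: plot each eye-position sample against the next one and
# summarise the scatter with the standard SD1/SD2 descriptors.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(0, 0.02, 500)) + rng.normal(0, 0.05, 500)  # drift + noise (deg), invented

x_n, x_n1 = x[:-1], x[1:]
sd1 = np.std((x_n1 - x_n) / np.sqrt(2))   # short-term variability
sd2 = np.std((x_n1 + x_n) / np.sqrt(2))   # long-term variability

plt.scatter(x_n, x_n1, s=4)
plt.xlabel("eye position at sample n (deg)")
plt.ylabel("eye position at sample n+1 (deg)")
plt.title(f"Poincare plot of fixation: SD1={sd1:.3f}, SD2={sd2:.3f}")
plt.show()
```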
Marino, Alexandria C.; Mazer, James A.
2016-01-01
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron’s spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed. PMID:26903820
Feature-Selective Attentional Modulations in Human Frontoparietal Cortex.
Ester, Edward F; Sutterer, David W; Serences, John T; Awh, Edward
2016-08-03
Control over visual selection has long been framed in terms of a dichotomy between "source" and "site," where top-down feedback signals originating in frontoparietal cortical areas modulate or bias sensory processing in posterior visual areas. This distinction is motivated in part by observations that frontoparietal cortical areas encode task-level variables (e.g., what stimulus is currently relevant or what motor outputs are appropriate), while posterior sensory areas encode continuous or analog feature representations. Here, we present evidence that challenges this distinction. We used fMRI, a roving searchlight analysis, and an inverted encoding model to examine representations of an elementary feature property (orientation) across the entire human cortical sheet while participants attended either the orientation or luminance of a peripheral grating. Orientation-selective representations were present in a multitude of visual, parietal, and prefrontal cortical areas, including portions of the medial occipital cortex, the lateral parietal cortex, and the superior precentral sulcus (thought to contain the human homolog of the macaque frontal eye fields). Additionally, representations in many-but not all-of these regions were stronger when participants were instructed to attend orientation relative to luminance. Collectively, these findings challenge models that posit a strict segregation between sources and sites of attentional control on the basis of representational properties by demonstrating that simple feature values are encoded by cortical regions throughout the visual processing hierarchy, and that representations in many of these areas are modulated by attention. Influential models of visual attention posit a distinction between top-down control and bottom-up sensory processing networks. These models are motivated in part by demonstrations showing that frontoparietal cortical areas associated with top-down control represent abstract or categorical stimulus information, while visual areas encode parametric feature information. Here, we show that multivariate activity in human visual, parietal, and frontal cortical areas encode representations of a simple feature property (orientation). Moreover, representations in several (though not all) of these areas were modulated by feature-based attention in a similar fashion. These results provide an important challenge to models that posit dissociable top-down control and sensory processing networks on the basis of representational properties. Copyright © 2016 the authors 0270-6474/16/368188-12$15.00/0.
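A minimal version of the inverted encoding model used in this kind of analysis looks roughly like the sketch below: a set of orientation channels is defined, channel-to-voxel weights are estimated on training data by least squares, and the weights are inverted on held-out data to reconstruct channel responses. The simulated data and the particular channel basis are assumptions, not the authors' exact model.

```python
# Hedged sketch of a basic inverted encoding model (IEM) for orientation.
import numpy as np

rng = np.random.default_rng(4)
n_channels, n_voxels, n_trials = 9, 120, 360
centers = np.arange(0, 180, 180 / n_channels)             # channel preferred orientations (deg)

def channel_responses(orientations):
    # Half-rectified sinusoid basis raised to a power (180-deg period).
    d = np.deg2rad(orientations[:, None] - centers[None, :])
    return np.maximum(np.cos(2 * d), 0) ** 5               # trials x channels

orientations = rng.uniform(0, 180, n_trials)
C = channel_responses(orientations)                        # trials x channels
W = rng.normal(size=(n_channels, n_voxels))                # assumed voxel weights
B = C @ W + rng.normal(0, 0.5, (n_trials, n_voxels))       # simulated voxel data

train, test = slice(0, 300), slice(300, None)
W_hat = np.linalg.lstsq(C[train], B[train], rcond=None)[0]        # estimate weights
C_hat = np.linalg.lstsq(W_hat.T, B[test].T, rcond=None)[0].T      # invert on held-out data
print("reconstructed channel responses:", C_hat.shape)            # trials x channels
```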
[A new system of testing visual performance based on the cylindrical lens screen].
Doege, E; Krause, O
1983-09-01
Using a special microoptical screen as a test-picture coating, a method for testing binocular function was developed. It offers the advantage of providing a separate visual impression to each eye from a diagnostic picture without using any device in front of the eyes. The person tested is unaware of this procedure, of which the diagnostic plate gives no hint. In addition to a description of its numerous uses and diagnostic possibilities, fusion pictures suitable for screening tests are described: Each eye is offered a separate impression with a completely different content. If fusion occurs correctly, a third motif with an entirely new meaning emerges. Several years of experience with this effective system (naked-eye tests) resulted in aids which are listed in the final section of the paper: exercise aids used for preparing the persons tested (especially infants) in the waiting room, recognition aids for the examination, and a partially kinetic picture for rapid, simple and very convincing representation of adjusting movements and of the squint position in cases of concomitant squint.
The Role of Eyes and Mouth in the Memory of a Face
ERIC Educational Resources Information Center
McKelvie, Stuart J.
1976-01-01
Investigates the relative importance that the eyes and mouth play in the representation in memory of a human face. Systematically applies two kinds of transformation--masking the eyes or the mouths on photographs of faces--and observes the effects on recognition. (Author/RK)
Statistical virtual eye model based on wavefront aberration
Wang, Jie-Mei; Liu, Chun-Ling; Luo, Yi-Ning; Liu, Yi-Guang; Hu, Bing-Jie
2012-01-01
Wavefront aberration directly affects the quality of the retinal image. This paper reviews the representation and reconstruction of wavefront aberration, as well as the construction of a virtual eye model based on Zernike polynomial coefficients. In addition, the promising prospects of the virtual eye model are emphasized. PMID:23173112
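The Zernike-coefficient representation discussed above lends itself to a short sketch: given a handful of low-order coefficients, the wavefront over the pupil is a weighted sum of the corresponding Zernike modes. The coefficient values below are invented for illustration and this is not the paper's code.

```python
# Minimal sketch: reconstruct a pupil wavefront map from a few low-order
# Zernike coefficients (OSA/ANSI normalisation).
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order Zernike modes evaluated on the pupil grid."""
    return {
        "defocus":       np.sqrt(3) * (2 * rho**2 - 1),
        "astig_0":       np.sqrt(6) * rho**2 * np.cos(2 * theta),
        "astig_45":      np.sqrt(6) * rho**2 * np.sin(2 * theta),
        "coma_vertical": np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.sin(theta),
        "spherical":     np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),
    }

# Assumed coefficients, in micrometres.
coeffs = {"defocus": 0.35, "astig_0": -0.12, "astig_45": 0.05,
          "coma_vertical": 0.08, "spherical": 0.04}

y, x = np.mgrid[-1:1:128j, -1:1:128j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
modes = zernike_basis(rho, theta)

wavefront = sum(coeffs[k] * modes[k] for k in coeffs)
wavefront[rho > 1] = np.nan                      # restrict to the unit pupil
print(f"peak-to-valley aberration: {np.nanmax(wavefront) - np.nanmin(wavefront):.2f} um")
```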
ERIC Educational Resources Information Center
Fisher, Sara P.; Hartmann, Christopher
2005-01-01
This article describes the importance of representations, which are fundamental to understanding and applying mathematics, with emphasis on how individuals who cannot see employ them. Teachers at the Hadley School for the Blind showed how blind people use representations when learning mathematics, with some accommodations,…
Heiser, Laura M; Berman, Rebecca A; Saunders, Richard C; Colby, Carol L
2005-11-01
With each eye movement, a new image impinges on the retina, yet we do not notice any shift in visual perception. This perceptual stability indicates that the brain must be able to update visual representations to take our eye movements into account. Neurons in the lateral intraparietal area (LIP) update visual representations when the eyes move. The circuitry that supports these updated representations remains unknown, however. In this experiment, we asked whether the forebrain commissures are necessary for updating in area LIP when stimulus representations must be updated from one visual hemifield to the other. We addressed this question by recording from LIP neurons in split-brain monkeys during two conditions: stimulus traces were updated either across or within hemifields. Our expectation was that across-hemifield updating activity in LIP would be reduced or abolished after transection of the forebrain commissures. Our principal finding is that LIP neurons can update stimulus traces from one hemifield to the other even in the absence of the forebrain commissures. This finding provides the first evidence that representations in parietal cortex can be updated without the use of direct cortico-cortical links. The second main finding is that updating activity in LIP is modified in the split-brain monkey: across-hemifield signals are reduced in magnitude and delayed in onset compared with within-hemifield signals, which indicates that the pathways for across-hemifield updating are less effective in the absence of the forebrain commissures. Together these findings reveal a dynamic circuit that contributes to updating spatial representations.
Johansson, Roger; Oren, Franziska; Holmqvist, Kenneth
2018-06-01
When recalling something you have previously read, to what degree will such episodic remembering activate a situation model of described events versus a memory representation of the text itself? The present study was designed to address this question by recording eye movements of participants who recalled previously read texts while looking at a blank screen. An accumulating body of research has demonstrated that spontaneous eye movements occur during episodic memory retrieval and that fixation locations from such gaze patterns to a large degree overlap with the visuospatial layout of the recalled information. Here we used this phenomenon to investigate to what degree participants' gaze patterns corresponded with the visuospatial configuration of the text itself versus a visuospatial configuration described in it. The texts to be recalled were scene descriptions, where the spatial configuration of the scene content was manipulated to be either congruent or incongruent with the spatial configuration of the text itself. Results show that participants' gaze patterns were more likely to correspond with a visuospatial representation of the described scene than with a visuospatial representation of the text itself, but also that the contribution of those representations of space is sensitive to the text content. This is the first demonstration that eye movements can be used to discriminate on which representational level texts are remembered and the findings provide novel insight into the underlying dynamics in play. Copyright © 2018 Elsevier B.V. All rights reserved.
Attention Priority Map of Face Images in Human Early Visual Cortex.
Mo, Ce; He, Dongjun; Fang, Fang
2018-01-03
Attention priority maps are topographic representations that are used for attention selection and guidance of task-related behavior during visual processing. Previous studies have identified attention priority maps of simple artificial stimuli in multiple cortical and subcortical areas, but investigating neural correlates of priority maps of natural stimuli is complicated by the complexity of their spatial structure and the difficulty of behaviorally characterizing their priority map. To overcome these challenges, we reconstructed the topographic representations of upright/inverted face images from fMRI BOLD signals in human early visual areas primary visual cortex (V1) and the extrastriate cortex (V2 and V3) based on a voxelwise population receptive field model. We characterized the priority map behaviorally as the first saccadic eye movement pattern when subjects performed a face-matching task relative to the condition in which subjects performed a phase-scrambled face-matching task. We found that the differential first saccadic eye movement pattern between upright/inverted and scrambled faces could be predicted from the reconstructed topographic representations in V1-V3 in humans of either sex. The coupling between the reconstructed representation and the eye movement pattern increased from V1 to V2/3 for the upright faces, whereas no such effect was found for the inverted faces. Moreover, face inversion modulated the coupling in V2/3, but not in V1. Our findings provide new evidence for priority maps of natural stimuli in early visual areas and extend traditional attention priority map theories by revealing another critical factor that affects priority maps in extrastriate cortex in addition to physical salience and task goal relevance: image configuration. SIGNIFICANCE STATEMENT Prominent theories of attention posit that attention sampling of visual information is mediated by a series of interacting topographic representations of visual space known as attention priority maps. Until now, neural evidence of attention priority maps has been limited to studies involving simple artificial stimuli and much remains unknown about the neural correlates of priority maps of natural stimuli. Here, we show that attention priority maps of face stimuli could be found in primary visual cortex (V1) and the extrastriate cortex (V2 and V3). Moreover, representations in extrastriate visual areas are strongly modulated by image configuration. These findings extend our understanding of attention priority maps significantly by showing that they are modulated, not only by physical salience and task-goal relevance, but also by the configuration of stimuli images. Copyright © 2018 the authors 0270-6474/18/380149-09$15.00/0.
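The voxelwise population receptive field (pRF) model underlying the reconstruction can be sketched in a few lines: each voxel's response to a stimulus frame is modelled as the overlap between a 2-D Gaussian receptive field and the stimulus aperture. The toy stimulus and parameter values below are assumptions, and the haemodynamic convolution step is omitted for brevity.

```python
# Rough sketch of a voxelwise pRF forward model with a toy sweeping-bar stimulus.
import numpy as np

def prf_prediction(stimulus_frames, x0, y0, sigma, grid):
    """stimulus_frames: (n_frames, ny, nx) binary apertures; grid: (X, Y) in deg."""
    X, Y = grid
    rf = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    rf /= rf.sum()
    # Predicted response per frame = overlap of aperture with the Gaussian RF.
    return stimulus_frames.reshape(len(stimulus_frames), -1) @ rf.ravel()

# Toy stimulus: a vertical bar sweeping across a 10 x 10 deg field of view.
x = np.linspace(-5, 5, 64)
X, Y = np.meshgrid(x, x)
frames = np.stack([(np.abs(X - pos) < 1).astype(float) for pos in np.linspace(-5, 5, 20)])

pred = prf_prediction(frames, x0=2.0, y0=-1.0, sigma=1.5, grid=(X, Y))
print("frame of maximal predicted response:", int(np.argmax(pred)))
```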
Visual Representations of DNA Replication: Middle Grades Students' Perceptions and Interpretations
NASA Astrophysics Data System (ADS)
Patrick, Michelle D.; Carter, Glenda; Wiebe, Eric N.
2005-09-01
Visual representations play a critical role in the communication of science concepts for scientists and students alike. However, recent research suggests that novice students experience difficulty extracting relevant information from representations. This study examined students' interpretations of visual representations of DNA replication. Each of the four steps of DNA replication included in the instructional presentation was represented as a text slide, a simple 2D graphic, and a rich 3D graphic. Participants were middle grade girls ( n = 21) attending a summer math and science program. Students' eye movements were measured as they viewed the representations. Participants were interviewed following instruction to assess their perceived salient features. Eye tracking fixation counts indicated that the same features (look zones) in the corresponding 2D and 3D graphics had different salience. The interviews revealed that students used different characteristics such as color, shape, and complexity to make sense of the graphics. The results of this study have implications for the design of instructional representations. Since many students have difficulty distinguishing between relevant and irrelevant information, cueing and directing student attention through the instructional representation could allow cognitive resources to be directed to the most relevant material.
Spatiotopic coding during dynamic head tilt
Turi, Marco; Burr, David C.
2016-01-01
Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding. NEW & NOTEWORTHY Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation. PMID:27903636
ERIC Educational Resources Information Center
das Dores Guerreiro, Maria; Caetano, Ana; Rodrigues, Eduardo
2014-01-01
This article examines gender representations of family and parental roles among young people aged 11 to 14 years. It is based on the qualitative analysis of 792 essays written by Portuguese girls and boys attending compulsory education. The adolescents' texts express normative images and cultural representations about gender that are plural and…
Retinotopic maps and foveal suppression in the visual cortex of amblyopic adults.
Conner, Ian P; Odom, J Vernon; Schwartz, Terry L; Mendola, Janine D
2007-08-15
Amblyopia is a developmental visual disorder associated with loss of monocular acuity and sensitivity as well as profound alterations in binocular integration. Abnormal connections in visual cortex are known to underlie this loss, but the extent to which these abnormalities are regionally or retinotopically specific has not been fully determined. This functional magnetic resonance imaging (fMRI) study compared the retinotopic maps in visual cortex produced by each individual eye in 19 adults (7 esotropic strabismics, 6 anisometropes and 6 controls). In our standard viewing condition, the non-tested eye viewed a dichoptic homogeneous mid-level grey stimulus, thereby permitting some degree of binocular interaction. Regions-of-interest analysis was performed for extrafoveal V1, extrafoveal V2 and the foveal representation at the occipital pole. In general, the blood oxygenation level-dependent (BOLD) signal was reduced for the amblyopic eye. At the occipital pole, population receptive fields were shifted to represent more parafoveal locations for the amblyopic eye, compared with the fellow eye, in some subjects. Interestingly, occluding the fellow eye caused an expanded foveal representation for the amblyopic eye in one early-onset strabismic subject with binocular suppression, indicating real-time cortical remapping. In addition, a few subjects actually showed increased activity in parietal and temporal cortex when viewing with the amblyopic eye. We conclude that, even in a heterogeneous population, abnormal early visual experience commonly leads to regionally specific cortical adaptations.
ERIC Educational Resources Information Center
Godfroid, Aline; Boers, Frank; Housen, Alex
2013-01-01
This eye-tracking study tests the hypothesis that more attention leads to more learning, following claims that attention to new language elements in the input results in their initial representation in long-term memory (i.e., intake; Robinson, 2003; Schmidt, 1990, 2001). Twenty-eight advanced learners of English read English texts that contained…
Maplike representation of celestial E-vector orientations in the brain of an insect.
Heinze, Stanley; Homberg, Uwe
2007-02-16
For many insects, the polarization pattern of the blue sky serves as a compass cue for spatial navigation. E-vector orientations are detected by photoreceptors in a dorsal rim area of the eye. Polarized-light signals from both eyes are finally integrated in the central complex, a brain area consisting of two subunits, the protocerebral bridge and the central body. Here we show that a topographic representation of zenithal E-vector orientations underlies the columnar organization of the protocerebral bridge in a locust. The maplike arrangement is highly suited to signal head orientation under the open sky.
When intelligence is in control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bellman, K.L.
Each time a discipline redefines itself, I look at it as a sign of growth, because often such redefinition means that there is new theory, new methods, or new "disciples" from other disciplines who are stretching, enlarging, and deepening the field. Such is the case with semiotics. Deeply entwined with the concepts of "intelligent systems", "intelligent control", and complex systems theory, semiotics struggles to develop representations, notations (systems of representations), and models (functionally-oriented sets of related representations) to study systems that may or may not be usefully described as employing representations, notations, and models themselves. That last, of course, is the main problem that semiotics faces. Semiotics, like psychology, philosophy, or any other self-referential discipline, is burdened by the eye attempting to study the eye or the mind studying the mind, or, more to the point here, the modeler studying the modeling acts of others.
Michael, Neethu; Löwel, Siegrid; Bischof, Hans-Joachim
2015-01-01
The visual wulst of the zebra finch comprises at least two retinotopic maps of the contralateral eye. As yet, it is not known how much of the visual field is represented in the wulst neuronal maps, how the organization of the maps is related to the retinal architecture, and how information from the ipsilateral eye is involved in the activation of the wulst. Here, we have used autofluorescent flavoprotein imaging and classical anatomical methods to investigate such characteristics of the most posterior map of the multiple retinotopic representations. We found that the visual wulst can be activated by visual stimuli from a large part of the visual field of the contralateral eye. Horizontally, the visual field representation extended from -5° beyond the beak tip up to +125° laterally. Vertically, a small strip from -10° below to about +25° above the horizon activated the visual wulst. Although retinal ganglion cells had a much higher density around the fovea and along a strip extending from the fovea towards the beak tip, these areas were not overrepresented in the wulst map. The wulst area activated from the foveal region of the ipsilateral eye, overlapped substantially with the middle of the three contralaterally activated regions in the visual wulst, and partially with the other two. Visual wulst activity evoked by stimulation of the frontal visual field was stronger with contralateral than with binocular stimulation. This confirms earlier electrophysiological studies indicating an inhibitory influence of the activation of the ipsilateral eye on wulst activity elicited by stimulating the contralateral eye. The lack of a foveal overrepresentation suggests that identification of objects may not be the primary task of the zebra finch visual wulst. Instead, this brain area may be involved in the processing of visual information necessary for spatial orientation. PMID:25853253
Can gaze-contingent mirror-feedback from unfamiliar faces alter self-recognition?
Estudillo, Alejandro J; Bindemann, Markus
2017-05-01
This study focuses on learning of the self, by examining how human observers update internal representations of their own face. For this purpose, we present a novel gaze-contingent paradigm, in which an onscreen face mimics observers' own eye-gaze behaviour (in the congruent condition), moves its eyes in different directions to that of the observers (incongruent condition), or remains static and unresponsive (neutral condition). Across three experiments, the mimicry of the onscreen face did not affect observers' perceptual self-representations. However, this paradigm influenced observers' reports of their own face. This effect was such that observers felt the onscreen face to be their own and that, if the onscreen gaze had moved on its own accord, observers expected their own eyes to move too. The theoretical implications of these findings are discussed.
Predictive encoding of moving target trajectory by neurons in the parabigeminal nucleus
Ma, Rui; Cui, He; Lee, Sang-Hun; Anastasio, Thomas J.
2013-01-01
Intercepting momentarily invisible moving objects requires internally generated estimations of target trajectory. We demonstrate here that the parabigeminal nucleus (PBN) encodes such estimations, combining sensory representations of target location, extrapolated positions of briefly obscured targets, and eye position information. Cui and Malpeli (Cui H, Malpeli JG. J Neurophysiol 89: 3128–3142, 2003) reported that PBN activity for continuously visible tracked targets is determined by retinotopic target position. Here we show that when cats tracked moving, blinking targets the relationship between activity and target position was similar for ON and OFF phases (400 ms for each phase). The dynamic range of activity evoked by virtual targets was 94% of that of real targets for the first 200 ms after target offset and 64% for the next 200 ms. Activity peaked at about the same best target position for both real and virtual targets. PBN encoding of target position takes into account changes in eye position resulting from saccades, even without visual feedback. Since PBN response fields are retinotopically organized, our results suggest that activity foci associated with real and virtual targets at a given target position lie in the same physical location in the PBN, i.e., a retinotopic as well as a rate encoding of virtual-target position. We also confirm that PBN activity is specific to the intended target of a saccade and is predictive of which target will be chosen if two are offered. A Bayesian predictor-corrector model is presented that conceptually explains the differences in the dynamic ranges of PBN neuronal activity evoked during tracking of real and virtual targets. PMID:23365185
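The predictor-corrector idea invoked above can be illustrated with a simple alpha-beta (Kalman-style) filter: the estimate keeps extrapolating at its current velocity while the target is off, and is corrected by the measurement only while the target is visible. This is a conceptual sketch with invented parameters, not the published Bayesian model.

```python
# Conceptual sketch: predictor-corrector tracking of a blinking moving target.
import numpy as np

dt, speed = 0.01, 20.0                        # s, deg/s
t = np.arange(0, 1.6, dt)
true_pos = speed * t
visible = (t // 0.4) % 2 == 0                 # 400 ms ON / 400 ms OFF cycles

alpha, beta = 0.4, 0.1                        # filter gains (invented)
est_pos, est_vel = 0.0, 0.0
estimates = []
for k in range(len(t)):
    est_pos += est_vel * dt                   # predict (constant-velocity model)
    if visible[k]:                            # correct only when a measurement exists
        residual = true_pos[k] - est_pos
        est_pos += alpha * residual
        est_vel += beta * residual / dt
    estimates.append(est_pos)

print(f"final error after a blink cycle: {estimates[-1] - true_pos[-1]:.2f} deg")
```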
Effect of fixation positions on perception of lightness
NASA Astrophysics Data System (ADS)
Toscani, Matteo; Valsecchi, Matteo; Gegenfurtner, Karl R.
2015-03-01
Visual acuity, luminance sensitivity, contrast sensitivity, and color sensitivity are maximal in the fovea and decrease with retinal eccentricity. Therefore every scene is perceived by integrating the small, high resolution samples collected by moving the eyes around. Moreover, when viewing ambiguous figures the fixated position influences the dominance of the possible percepts. Therefore fixations could serve as a selection mechanism whose function is not confined to finely resolve the selected detail of the scene. Here this hypothesis is tested in the lightness perception domain. In a first series of experiments we demonstrated that when observers matched the color of natural objects they based their lightness judgments on objects' brightest parts. During this task the observers tended to fixate points with above average luminance, suggesting a relationship between perception and fixations that we causally proved using a gaze contingent display in a subsequent experiment. Simulations with rendered physical lighting show that higher values in an object's luminance distribution are particularly informative about reflectance. In a second series of experiments we considered a high level strategy that the visual system uses to segment the visual scene in a layered representation. We demonstrated that eye movement sampling mediates between the layer segregation and its effects on lightness perception. Together these studies show that eye fixations are partially responsible for the selection of information from a scene that allows the visual system to estimate the reflectance of a surface.
Eye Detection and Tracking for Intelligent Human Computer Interaction
2006-02-01
P. Meer and I. Weiss, "Smoothed Differentiation Filters for Images", Journal of Visual Communication and Image Representation, 3(1):58-72, 1992.
Retinotopic maps and foveal suppression in the visual cortex of amblyopic adults
Conner, Ian P; Odom, J Vernon; Schwartz, Terry L; Mendola, Janine D
2007-01-01
Amblyopia is a developmental visual disorder associated with loss of monocular acuity and sensitivity as well as profound alterations in binocular integration. Abnormal connections in visual cortex are known to underlie this loss, but the extent to which these abnormalities are regionally or retinotopically specific has not been fully determined. This functional magnetic resonance imaging (fMRI) study compared the retinotopic maps in visual cortex produced by each individual eye in 19 adults (7 esotropic strabismics, 6 anisometropes and 6 controls). In our standard viewing condition, the non-tested eye viewed a dichoptic homogeneous mid-level grey stimulus, thereby permitting some degree of binocular interaction. Regions-of-interest analysis was performed for extrafoveal V1, extrafoveal V2 and the foveal representation at the occipital pole. In general, the blood oxygenation level-dependent (BOLD) signal was reduced for the amblyopic eye. At the occipital pole, population receptive fields were shifted to represent more parafoveal locations for the amblyopic eye, compared with the fellow eye, in some subjects. Interestingly, occluding the fellow eye caused an expanded foveal representation for the amblyopic eye in one early–onset strabismic subject with binocular suppression, indicating real-time cortical remapping. In addition, a few subjects actually showed increased activity in parietal and temporal cortex when viewing with the amblyopic eye. We conclude that, even in a heterogeneous population, abnormal early visual experience commonly leads to regionally specific cortical adaptations. PMID:17627994
Laidlaw, Kaitlin E W; Zhu, Mona J H; Kingstone, Alan
2016-06-01
Successful target selection often occurs concurrently with distractor inhibition. A better understanding of the former thus requires a thorough study of the competition that arises between target and distractor representations. In the present study, we explore whether the presence of a distractor influences saccade processing via interfering with visual target and/or saccade goal representations. To do this, we asked participants to make either pro- or antisaccade eye movements to a target and measured the change in their saccade trajectory and landing position (collectively referred to as deviation) in response to distractors placed near or far from the saccade goal. The use of an antisaccade paradigm may help to distinguish between stimulus- and goal-related distractor interference, as unlike with prosaccades, these two features are dissociated in space when making a goal-directed antisaccade response away from a visual target stimulus. The present results demonstrate that for both pro- and antisaccades, distractors near the saccade goal elicited the strongest competition, as indicated by greater saccade trajectory deviation and landing position error. Though distractors far from the saccade goal elicited, on average, greater deviation away in antisaccades than in prosaccades, a time-course analysis revealed a significant effect of far-from-goal distractors in prosaccades as well. Considered together, the present findings support the view that goal-related representations most strongly influence the saccade metrics tested, though stimulus-related representations may play a smaller role in determining distractor-based interference effects on saccade execution under certain circumstances. Further, the results highlight the advantage of considering temporal changes in distractor-based interference.
Medendorp, W. P.
2015-01-01
It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms. PMID:26490289
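The reliability-weighted combination at the heart of the model can be written as a standard inverse-variance average. The sketch below works through one made-up example with an eye-centred and a body-centred estimate; the numbers are illustrative only.

```python
# Worked sketch: inverse-variance (reliability-weighted) combination of two
# updated estimates of a remembered target location.
import numpy as np

# Updated estimates, in the same spatial units, from an eye-centred and a
# body-centred representation after body motion (values invented).
estimate_eye, var_eye = 4.2, 1.0 ** 2        # deg; variance reflects updating noise
estimate_body, var_body = 5.1, 2.0 ** 2

w_eye = (1 / var_eye) / (1 / var_eye + 1 / var_body)
w_body = 1 - w_eye
combined = w_eye * estimate_eye + w_body * estimate_body
combined_var = 1 / (1 / var_eye + 1 / var_body)

print(f"combined estimate: {combined:.2f} deg "
      f"(SD {np.sqrt(combined_var):.2f}, vs {np.sqrt(var_eye):.2f} for eye-only)")
```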
REM Dreaming and Cognitive Skills at Ages 5-8: A Cross-Sectional Study.
ERIC Educational Resources Information Center
Foulkes, David; And Others
1990-01-01
Describes laboratory research on REM (rapid eye movement) sleep in children ages five to eight. Image quality, self-representation, and narrative complexity of dreams all develop as age progresses. Children's representational intelligence predicts their rate of dream production, but language skills do not. (GH)
Persistent spatial information in the frontal eye field during object-based short-term memory.
Clark, Kelsey L; Noudoost, Behrad; Moore, Tirin
2012-08-08
Spatial attention is known to gate entry into visual short-term memory, and some evidence suggests that spatial signals may also play a role in binding features or protecting object representations during memory maintenance. To examine the persistence of spatial signals during object short-term memory, the activity of neurons in the frontal eye field (FEF) of macaque monkeys was recorded during an object-based delayed match-to-sample task. In this task, monkeys were trained to remember an object image over a brief delay, regardless of the locations of the sample or target presentation. FEF neurons exhibited visual, delay, and target period activity, including selectivity for sample location and target location. Delay period activity represented the sample location throughout the delay, despite the irrelevance of spatial information for successful task completion. Furthermore, neurons continued to encode sample position in a variant of the task in which the matching stimulus never appeared in their response field, confirming that FEF maintains sample location independent of subsequent behavioral relevance. FEF neurons also exhibited target-position-dependent anticipatory activity immediately before target onset, suggesting that monkeys predicted target position within blocks. These results show that FEF neurons maintain spatial information during short-term memory, even when that information is irrelevant for task performance.
NASA Astrophysics Data System (ADS)
Susac, Ana; Bubic, Andreja; Martinjak, Petra; Planinic, Maja; Palmovic, Marijan
2017-12-01
Developing a better understanding of the measurement process and measurement uncertainty is one of the main goals of university physics laboratory courses. This study investigated the influence of graphical representation of data on student understanding and interpreting of measurement results. A sample of 101 undergraduate students (48 first year students and 53 third and fifth year students) from the Department of Physics, University of Zagreb were tested with a paper-and-pencil test consisting of eight multiple-choice test items about measurement uncertainties. One version of the test items included graphical representations of the measurement data. About half of the students solved that version of the test while the remaining students solved the same test without graphical representations. The results have shown that the students who had the graphical representation of data scored higher than their colleagues without graphical representation. In the second part of the study, measurements of eye movements were carried out on a sample of thirty undergraduate students from the Department of Physics, University of Zagreb while students were solving the same test on a computer screen. The results revealed that students who had the graphical representation of data spent considerably less time viewing the numerical data than the other group of students. These results indicate that graphical representation may be beneficial for data processing and data comparison. Graphical representation helps with visualization of data and therefore reduces the cognitive load on students while performing measurement data analysis, so students should be encouraged to use it.
Locations of serial reach targets are coded in multiple reference frames.
Thompson, Aidan A; Henriques, Denise Y P
2010-12-01
Previous work from our lab, and elsewhere, has demonstrated that remembered target locations are stored and updated in an eye-fixed reference frame. That is, reach errors systematically vary as a function of gaze direction relative to a remembered target location, not only when the target is viewed in the periphery (Bock, 1986, known as the retinal magnification effect), but also when the target has been foveated, and the eyes subsequently move after the target has disappeared but prior to reaching (e.g., Henriques, Klier, Smith, Lowy, & Crawford, 1998; Sorrento & Henriques, 2008; Thompson & Henriques, 2008). These gaze-dependent errors, following intervening eye movements, cannot be explained by representations whose frame is fixed to the head, body or even the world. However, it is unknown whether targets presented sequentially would all be coded relative to gaze (i.e., egocentrically/absolutely), or if they would be coded relative to the previous target (i.e., allocentrically/relatively). It might be expected that the reaching movements to two targets separated by 5° would differ by that distance. But, if gaze were to shift between the first and second reaches, would the movement amplitude between the targets differ? If the target locations are coded allocentrically (i.e., the location of the second target coded relative to the first) then the movement amplitude should be about 5°. But, if the second target is coded egocentrically (i.e., relative to current gaze direction), then the reaches to this target and the distances between the subsequent movements should vary systematically with gaze as described above. We found that requiring an intervening saccade to the opposite side of 2 briefly presented targets between reaches to them resulted in a pattern of reaching error that systematically varied as a function of the distance between current gaze and target, and led to a systematic change in the distance between the sequential reach endpoints as predicted by an egocentric frame anchored to the eye. However, the amount of change in this distance was smaller than predicted by a pure eye-fixed representation, suggesting that relative positions of the targets or allocentric coding was also used in sequential reach planning. The spatial coding and updating of sequential reach target locations seems to rely on a combined weighting of multiple reference frames, with one of them centered on the eye. Copyright © 2010 Elsevier Ltd. All rights reserved.
Buildup of spatial information over time and across eye-movements.
Zimmermann, Eckart; Morrone, M Concetta; Burr, David C
2014-12-15
To interact rapidly and effectively with our environment, our brain needs access to a neural representation of the spatial layout of the external world. However, the construction of such a map poses major challenges, as the images on our retinae depend on where the eyes are looking, and shift each time we move our eyes, head and body to explore the world. Research from many laboratories including our own suggests that the visual system does compute spatial maps that are anchored to real-world coordinates. However, the construction of these maps takes time (up to 500ms) and also attentional resources. We discuss research investigating how retinotopic reference frames are transformed into spatiotopic reference-frames, and how this transformation takes time to complete. These results have implications for theories about visual space coordinates and particularly for the current debate about the existence of spatiotopic representations. Copyright © 2014 Elsevier B.V. All rights reserved.
Staring, A B P; van den Berg, D P G; Cath, D C; Schoorl, M; Engelhard, I M; Korrelboom, C W
2016-07-01
Little is known about treating low self-esteem in anxiety disorders. This study evaluated two treatments targeting different mechanisms: (1) Eye Movement Desensitization and Reprocessing (EMDR), which aims to desensitize negative memory representations that are proposed to maintain low self-esteem; and (2) Competitive Memory Training (COMET), which aims to activate positive representations for enhancing self-esteem. A Randomized Controlled Trial (RCT) was used with a crossover design. Group 1 received six sessions EMDR first and then six sessions COMET; group 2 vice versa. Assessments were made at baseline (T0), end of first treatment (T1), and end of second treatment (T2). Main outcome was self-esteem. We included 47 patients and performed Linear Mixed Models. COMET showed more improvements in self-esteem than EMDR: effect-sizes 1.25 versus 0.46 post-treatment. Unexpectedly, when EMDR was given first, subsequent effects of COMET were significantly reduced in comparison to COMET as the first intervention. For EMDR, sequence made no difference. Reductions in anxiety and depression were mediated by better self-esteem. COMET was associated with significantly greater improvements in self-esteem than EMDR in patients with anxiety disorders. EMDR treatment reduced the effectiveness of subsequent COMET. Improved self-esteem mediated reductions in anxiety and depression symptoms. Copyright © 2016 Elsevier Ltd. All rights reserved.
"What Are You Looking At?" An Eye Movement Exploration in Science Text Reading
ERIC Educational Resources Information Center
Hung, Yueh-Nu
2014-01-01
The main purpose of this research was to investigate how Taiwanese grade 6 readers selected and used information from different print (main text, headings, captions) and visual elements (decorational, representational, interpretational) to comprehend a science text through tracking their eye movement behaviors. Six grade 6 students read a double…
Relationship among Environmental Pointing Accuracy, Mental Rotation, Sex, and Hormones
ERIC Educational Resources Information Center
Bell, Scott; Saucier, Deborah
2004-01-01
Humans rely on internal representations to solve a variety of spatial problems including navigation. Navigation employs specific information to compose a representation of space that is distinct from that obtained through static bird's-eye or horizontal perspectives. The ability to point to on-route locations, off-route locations, and the route…
The Nature of Change Detection and Online Representations of Scenes
ERIC Educational Resources Information Center
Ryan, Jennifer D.; Cohen, Neal J.
2004-01-01
This article provides evidence for implicit change detection and for the contribution of multiple memory sources to online representations. Multiple eye-movement measures distinguished original from changed scenes, even when college students had no conscious awareness for the change. Patients with amnesia showed a systematic deficit on 1 class of…
ERIC Educational Resources Information Center
Dorn, Fred J.; And Others
1983-01-01
Reviews the inconsistent findings of studies on neurolinguistic programing and recommends some areas that should be examined to verify various claims. Discusses methods of assessing client's primary representational systems, including predicate usage and eye movements, and suggests that more reliable methods of assessing PRS must be found. (JAC)
Short-term Action Intentions Overrule Long-Term Semantic Knowledge
ERIC Educational Resources Information Center
van Elk, M.; van Schie, H.T.; Bekkering, H.
2009-01-01
In the present study, we investigated whether the preparation of an unusual action with an object (e.g. bringing a cup towards the eye) could selectively overrule long-term semantic representations. In the first experiment it was found that unusual action intentions activated short-term semantic goal representations, rather than long-term…
The Influence of Different Representations on Solving Concentration Problems at Elementary School
ERIC Educational Resources Information Center
Liu, Chia-Ju; Shen, Ming-Hsun
2011-01-01
This study investigated the students' learning process of the concept of concentration at the elementary school level in Taiwan. The influence of different representational types on the process of proportional reasoning was also explored. The participants included nineteen third-grade and eighteen fifth-grade students. Eye-tracking technology was…
Pluciennicka, Ewa; Wamain, Yannick; Coello, Yann; Kalénine, Solène
2016-07-01
The aim of this study was to specify the role of action representations in thematic and functional similarity relations between manipulable artifact objects. Recent behavioral and neurophysiological evidence indicates that while they are all relevant for manipulable artifact concepts, semantic relations based on thematic (e.g., saw-wood), specific function similarity (e.g., saw-axe), and general function similarity (e.g., saw-knife) are differently processed, and may relate to different levels of action representation. Point-light displays of object-related actions previously encoded at the gesture level (e.g., "sawing") or at the higher level of action representation (e.g., "cutting") were used as primes before participants identified target objects (e.g., saw) among semantically related and unrelated distractors (e.g., wood, feather, piano). Analysis of eye movements on the different objects during target identification informed about the amplitude and the timing of implicit activation of the different semantic relations. Results showed that action prime encoding impacted the processing of thematic relations, but not that of functional similarity relations. Semantic competition with thematic distractors was greater and earlier following action primes encoded at the gesture level compared to action primes encoded at higher level. As a whole, these findings highlight the direct influence of action representations on thematic relation processing, and suggest that thematic relations involve gesture-level representations rather than intention-level representations.
Ophthalmologic diagnostic tool using MR images for biomechanically-based muscle volume deformation
NASA Astrophysics Data System (ADS)
Buchberger, Michael; Kaltofen, Thomas
2003-05-01
We would like to give a work-in-progress report on our ophthalmologic diagnostic software system which performs biomechanically-based muscle volume deformations using MR images. For reconstructing a three-dimensional representation of an extraocular eye muscle, a sufficient amount of high resolution MR images is used, each representing a slice of the muscle. In addition, threshold values are given, which restrict the amount of data used from the MR images. The Marching Cubes algorithm is then applied, resulting in a polygonal 3D representation of the muscle, which can be rendered efficiently. A transformation to a dynamic, deformable model is applied by calculating the center of gravity of each muscle slice, approximating the muscle path and subsequently adding Hermite splines through the centers of gravity of all slices. Then, a radius function is defined for each slice, completing the transformation of the static 3D polygon model. Finally, this paper describes future extensions to our system. One of these extensions is the support for additional calculations and measurements within the reconstructed 3D muscle representation. Globe translation, localization of muscle pulleys by analyzing the 3D reconstruction in two different gaze positions and other diagnostic measurements will be available.
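The slice-wise reconstruction described above (a per-slice centre of gravity, a Hermite spline through the centroids, and a per-slice radius function) can be illustrated with a minimal Python sketch. The threshold value, the equal-area circular radius, and the finite-difference (Catmull-Rom-like) tangents are illustrative assumptions, not details of the authors' system.

import numpy as np

def slice_centroid(slice_img, threshold):
    # Binarize one MR slice and return the centre of gravity (row, col)
    # of the segmented muscle cross-section.
    rows, cols = np.nonzero(slice_img > threshold)
    return np.array([rows.mean(), cols.mean()])

def slice_radius(slice_img, threshold, pixel_area=1.0):
    # Approximate the cross-section by a circle of equal area (a toy
    # stand-in for the per-slice radius function).
    area = (slice_img > threshold).sum() * pixel_area
    return np.sqrt(area / np.pi)

def hermite_path(points, n_samples=50):
    # Piecewise cubic Hermite interpolation through the slice centroids,
    # with finite-difference tangents.
    pts = np.asarray(points, dtype=float)
    tangents = np.gradient(pts, axis=0)
    path = []
    for i in range(len(pts) - 1):
        p0, p1, m0, m1 = pts[i], pts[i + 1], tangents[i], tangents[i + 1]
        for t in np.linspace(0.0, 1.0, n_samples, endpoint=False):
            h00 = 2*t**3 - 3*t**2 + 1
            h10 = t**3 - 2*t**2 + t
            h01 = -2*t**3 + 3*t**2
            h11 = t**3 - t**2
            path.append(h00*p0 + h10*m0 + h01*p1 + h11*m1)
    path.append(pts[-1])
    return np.array(path)

def muscle_path(slices, threshold, slice_spacing=1.0):
    # One 3D point per slice: in-plane centroid plus position along the scan axis.
    pts = []
    for z, img in enumerate(slices):
        cy, cx = slice_centroid(img, threshold)
        pts.append([cx, cy, z * slice_spacing])
    return hermite_path(pts)

# Toy usage with random stand-ins for thresholded MR slices.
slices = [np.random.rand(64, 64) for _ in range(12)]
path = muscle_path(slices, threshold=0.8)
radii = [slice_radius(s, 0.8) for s in slices]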
Gomes, Karine de Oliveira; Cotta, Rosângela Minardi Mitre; Araújo, Raquel Maria Amaral; Cherchiglia, Mariângela Leal; Martins, Tatiana de Castro Pereira
2011-01-01
Primary Health Care (PHC) is the first level of contact with the health system. In Brazil, the Family Health Program (PSF) is the main strategy for implementing and organizing PHC. The objective of this study was to evaluate the health actions and services offered by the PSF, based on the interviewees' social representations of the exclusive dimensions of PHC: first-contact care, longitudinality, integrality and coordination. It is a quali-quantitative study carried out in Cajuri, Minas Gerais State. Municipal managers, PSF professionals and pregnant women assisted by the PSF were interviewed. Regarding social representations of the SUS, an inadequate level of apprehension and knowledge of its principles and guidelines was observed. As for the PSF, several positive connotations were expressed, and the protagonists' perceptions identified it as a strategy for restructuring PHC in the municipality. In spite of this, strong influences of the biomedical model and the challenge of integration with the other levels of care were noticed, indicating the need for investment in professional training and in the organization of the other levels of health care.
ERIC Educational Resources Information Center
Jian, Yu-Cin
2016-01-01
Previous research suggests that multiple representations can improve science reading comprehension. This facilitation effect is premised on the observation that readers can efficiently integrate information in text and diagram formats; however, this effect in young readers is still contested. Using eye-tracking technology and sequential analysis,…
ERIC Educational Resources Information Center
Kim, Kinam; Kim, Minsung; Shin, Jungyeop; Ryu, Jaemyong
2015-01-01
This article examined the role of task demand and its effects on transfer in geographic learning. Student performance was measured through eye-movement analysis in two related experiments. In Experiment 1, the participants were told that they would travel through an area depicted in photographs either driving an automobile or observing the…
Eye Movements to Pictures Reveal Transient Semantic Activation during Spoken Word Recognition
ERIC Educational Resources Information Center
Yee, Eiling; Sedivy, Julie C.
2006-01-01
Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an…
Comparison of noncontact infrared and remote sensor thermometry in normal and dry eye patients.
Singh, G; Bhinder, H Singh
2005-01-01
To evaluate the role of closed chamber infrared and remote sensor thermometry in normal and dry eye patients. The study was conducted on 51 dry eye cases (102 eyes), 26 men and 25 women aged 19 to 65 years (35.36+/-14.36), and 51 normal (102 eyes) age- and sex-matched control subjects. The criteria for dry eye were Schirmer-1 (<10 m/5 min), FTBUT (<10 sec), nd lissamine green score (>2). The remote sensor and infrared thermometry was done in closed chamber around the eye in closed and open eye positions. In normal eyes, closed chamber infrared thermometry recorded temperature 34.77+/-0.37 degrees C in closed eye position and 35.02+/-0.39 degrees C in open eye position as compared to 27.91+/-2.46 degrees C in closed eye position and 28.01+/-2.46 degrees C in open position with remote sensor thermometry. The difference in temperature from closed to open position was 0.25+/-0.90 degrees C in infrared thermometry and 0.10+/-0.00 degrees C with remote sensor thermometry, which was statistically significant (p<0.0000). In dry eye, the infrared thermometry recorded 35.08+/-0.61 degrees C temperature in closed eye position and 35.53+/-0.63 degrees C in open eye position as compared to 27.41+/-2.48 degrees C in open and closed eye position with remote sensor thermometry. The difference in temperature from closed to open eye position was 0.45+/-0.14 degrees C (p<0.0000) with infrared thermometry as compared to no change 0.00+/-0.00 degrees C with remote sensor thermometry (p<0.0000). Remote sensor thermometry proved better for diagnosis of dry eye disease as it showed no change in temperature under closed chamber in closed and open position (p=0.0000). Infrared thermometry was better in recording the absolute temperature from any point on the eye.
ERIC Educational Resources Information Center
Susac, Ana; Bubic, Andreja; Martinjak, Petra; Planinic, Maja; Palmovic, Marijan
2017-01-01
Developing a better understanding of the measurement process and measurement uncertainty is one of the main goals of university physics laboratory courses. This study investigated the influence of graphical representation of data on student understanding and interpreting of measurement results. A sample of 101 undergraduate students (48 first year…
A Model of Self-Organizing Head-Centered Visual Responses in Primate Parietal Areas
Mender, Bedeho M. W.; Stringer, Simon M.
2013-01-01
We present a hypothesis for how head-centered visual representations in primate parietal areas could self-organize through visually-guided learning, and test this hypothesis using a neural network model. The model consists of a competitive output layer of neurons that receives afferent synaptic connections from a population of input neurons with eye-position gain-modulated retinal receptive fields. The synaptic connections in the model are trained with an associative trace learning rule which has the effect of encouraging output neurons to learn to respond to subsets of input patterns that tend to occur close together in time. This network architecture and synaptic learning rule are hypothesized to promote the development of head-centered output neurons during periods of time when the head remains fixed while the eyes move. This hypothesis is demonstrated to be feasible, and each of the core model components described is tested and found to be individually necessary for successful self-organization. PMID:24349064
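A minimal numerical sketch of the ingredients named above (gain-modulated retinal inputs feeding a competitive output layer trained with an associative trace rule) might look as follows. The learning rate, trace constant, hard winner-take-all competition and toy tuning curves are illustrative assumptions, not the authors' exact model.

import numpy as np

rng = np.random.default_rng(0)
n_input, n_output = 200, 50
W = rng.random((n_output, n_input))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # normalized afferent weights

lr = 0.05            # learning rate (assumed)
trace = 0.8          # trace constant (assumed)
y_trace = np.zeros(n_output)

def gain_modulated_input(retinal_pos, eye_pos, n=n_input):
    # Toy input population: Gaussian retinal tuning multiplied by a
    # planar eye-position gain field ("gain modulation").
    pref = np.linspace(-40, 40, n)                         # preferred retinal positions (deg)
    retinal = np.exp(-0.5 * ((retinal_pos - pref) / 5.0) ** 2)
    gain = 0.5 + 0.5 * np.clip((eye_pos + 40) / 80, 0, 1)
    return retinal * gain

def train_step(x):
    global y_trace
    y = W @ x
    winner = np.argmax(y)                   # hard competition (the paper's model uses softer competition)
    y_now = np.zeros(n_output); y_now[winner] = 1.0
    y_trace = trace * y_trace + (1 - trace) * y_now
    W[:] += lr * np.outer(y_trace, x)       # associative trace rule: post-synaptic trace times input
    W[:] /= np.linalg.norm(W, axis=1, keepdims=True)

# Head fixed, eyes moving: the same head-centered location is sampled with
# several eye positions close together in time, so the trace links them.
head_centered = 10.0
for eye_pos in (-20.0, 0.0, 20.0):
    train_step(gain_modulated_input(head_centered - eye_pos, eye_pos))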
Hemifield columns co-opt ocular dominance column structure in human achiasma.
Olman, Cheryl A; Bao, Pinglei; Engel, Stephen A; Grant, Andrea N; Purington, Chris; Qiu, Cheng; Schallmo, Michael-Paul; Tjan, Bosco S
2018-01-01
In the absence of an optic chiasm, visual input to the right eye is represented in primary visual cortex (V1) in the right hemisphere, while visual input to the left eye activates V1 in the left hemisphere. Retinotopic mapping in V1 reveals that in each hemisphere left and right visual hemifield representations are overlaid (Hoffmann et al., 2012). To explain how overlapping hemifield representations in V1 do not impair vision, we tested the hypothesis that visual projections from nasal and temporal retina create interdigitated left and right visual hemifield representations in V1, similar to the ocular dominance columns observed in neurotypical subjects (Victor et al., 2000). We used high-resolution fMRI at 7T to measure the spatial distribution of responses to left- and right-hemifield stimulation in one achiasmic subject. T2-weighted 2D spin-echo images were acquired at 0.8 mm isotropic resolution. The left eye was occluded. To the right eye, a presentation of flickering checkerboards alternated between the left and right visual fields in a blocked stimulus design. The participant performed a demanding orientation-discrimination task at fixation. A general linear model was used to estimate the preference of voxels in V1 to left- and right-hemifield stimulation. The spatial distribution of voxels with significant preference for each hemifield showed interdigitated clusters which densely packed V1 in the right hemisphere. The spatial distribution of hemifield-preference voxels in the achiasmic subject was stable between two days of testing and comparable in scale to that of human ocular dominance columns. These results are the first in vivo evidence showing that visual hemifield representations interdigitate in achiasmic V1 following a similar developmental course to that of ocular dominance columns in V1 with intact optic chiasm. Copyright © 2017 Elsevier Inc. All rights reserved.
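The per-voxel preference analysis described above can be illustrated with a small least-squares sketch: regress each voxel's time course on boxcar regressors for left- and right-hemifield blocks and take the contrast of the two betas. The HRF-free boxcar design, block lengths and synthetic data are simplifying assumptions, not the authors' pipeline.

import numpy as np

def hemifield_preference(bold, left_blocks, right_blocks):
    # bold: (n_timepoints, n_voxels); left/right_blocks: boolean regressors
    # marking stimulation blocks. Returns one contrast value per voxel.
    X = np.column_stack([left_blocks.astype(float),
                         right_blocks.astype(float),
                         np.ones(len(left_blocks))])      # [left, right, baseline]
    betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return betas[0] - betas[1]        # >0: prefers left hemifield, <0: prefers right

# Toy usage: 120 volumes alternating 10-volume left/right blocks, 500 voxels.
rng = np.random.default_rng(1)
n_t, n_v = 120, 500
t = np.arange(n_t)
left = (t // 10) % 2 == 0
right = ~left
bold = rng.normal(size=(n_t, n_v)) + np.outer(left, rng.normal(size=n_v))
pref = hemifield_preference(bold, left, right)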
Eye Gaze Metrics Reflect a Shared Motor Representation for Action Observation and Movement Imagery
ERIC Educational Resources Information Center
McCormick, Sheree A.; Causer, Joe; Holmes, Paul S.
2012-01-01
Action observation (AO) and movement imagery (MI) have been reported to share similar neural networks. This study investigated the congruency between AO and MI using the eye gaze metrics, dwell time and fixation number. A simple reach-grasp-place arm movement was observed and, in a second condition, imagined where the movement was presented from…
Sunkara, Adhira
2015-01-01
As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417
Improving information recognition and performance of recycling chimneys.
Durugbo, Christopher
2013-01-01
The aim of this study was to assess and improve how recyclers (individuals carrying out the task of recycling) make use of visual cues to carry out recycling tasks in relation to 'recycling chimneys' (repositories for recycled waste). An initial task analysis was conducted through an activity sampling study and an eye tracking experiment using a mobile eye tracker to capture fixations of recyclers during recycling tasks. Following data collection with the eye tracker, a set of recommendations for improving information representation was identified using the widely researched skills, rules, knowledge framework, and a comparative study was conducted to assess the performance of improved interfaces for recycling chimneys based on Ecological Interface Design principles. Information representation on recycling chimneys determines how we recycle waste. This study describes an eco-ergonomics-based approach to improve the design of interfaces for recycling chimneys. The results are valuable for improving the performance of waste collection processes in terms of minimising contamination and increasing the quantity of recyclables.
ERIC Educational Resources Information Center
Shinebourne, Pnina
2012-01-01
This paper evolved from previous research on women's experience of addiction and recovery. The original study was based on detailed semi-structured interviews analysed using interpretative phenomenological analysis (IPA). In this study a poetic representation of material from participants' accounts was created to explore how a focus on the poetic…
ERIC Educational Resources Information Center
Altmann, Gerry T. M.; Kamide, Yuki
2009-01-01
Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either "The woman will put the glass…
Children's Use of Morphological Cues in Real-Time Event Representation
ERIC Educational Resources Information Center
Zhou, Peng; Ma, Weiyi
2018-01-01
The present study investigated whether and how fast young children can use information encoded in morphological markers during real-time event representation. Using the visual world paradigm, we tested 35 adults, 34 5-year-olds and 33 3-year-olds. The results showed that the adults, the 5-year-olds and the 3-year-olds all exhibited eye gaze…
ERIC Educational Resources Information Center
Klein, P.; Viiri, J.; Mozaffari, S.; Dengel, A.; Kuhn, J.
2018-01-01
Relating mathematical concepts to graphical representations is a challenging task for students. In this paper, we introduce two visual strategies to qualitatively interpret the divergence of graphical vector field representations. One strategy is based on the graphical interpretation of partial derivatives, while the other is based on the flux…
Fundamental Visual Representations of Social Cognition in ASD
2016-12-01
Progress report fragment describing visual adaptation functions in autism, again pointing to basic sensory processing anomalies in this population, and methods the research team is developing for the challenging-to-test ASD pediatric population. Subject terms: Autism, Visual Adaptation, Retinotopy, Social Communication, Eye-movements, fMRI, EEG, ERP. Deficits in social interaction are a hallmark symptom of autism, and the lack of appropriate eye contact during interpersonal interactions is an oft-noted feature.
Impaired Eye Region Search Accuracy in Children with Autistic Spectrum Disorders
Pruett, John R.; Hoertel, Sarah; Constantino, John N.; LaMacchia Moll, Angela; McVey, Kelly; Squire, Emma; Feczko, Eric; Povinelli, Daniel J.; Petersen, Steven E.
2013-01-01
To explore mechanisms underlying reduced fixation of eyes in autism, children with Autistic Spectrum Disorders (ASD) and typically developing children were tested in five visual search experiments: simple color feature; color-shape conjunction; face in non-face objects; mouth region; and eye region. No group differences were found for reaction time profile shapes in any of the five experiments, suggesting intact basic search mechanics in children with ASD. Contrary to early reports in the literature, but consistent with other more recent findings, we observed no superiority for conjunction search in children with ASD. Importantly, children with ASD did show reduced accuracy for eye region search (p = .005), suggesting that eyes contribute less to high-level face representations in ASD or that there is an eye region-specific disruption to attentional processes engaged by search in ASD. PMID:23516446
Impaired eye region search accuracy in children with autistic spectrum disorders.
Pruett, John R; Hoertel, Sarah; Constantino, John N; Moll, Angela LaMacchia; McVey, Kelly; Squire, Emma; Feczko, Eric; Povinelli, Daniel J; Petersen, Steven E
2013-01-01
To explore mechanisms underlying reduced fixation of eyes in autism, children with autistic spectrum disorders (ASD) and typically developing children were tested in five visual search experiments: simple color feature; color-shape conjunction; face in non-face objects; mouth region; and eye region. No group differences were found for reaction time profile shapes in any of the five experiments, suggesting intact basic search mechanics in children with ASD. Contrary to early reports in the literature, but consistent with other more recent findings, we observed no superiority for conjunction search in children with ASD. Importantly, children with ASD did show reduced accuracy for eye region search (p = .005), suggesting that eyes contribute less to high-level face representations in ASD or that there is an eye region-specific disruption to attentional processes engaged by search in ASD.
Graham, Megan E
2016-09-01
Audiences must be critical of film representations of the aged woman living with Alzheimer's disease and of dangerous reinscriptions of stereotypical equations about ageing as deterioration. This paper analyses the representation and decline of the aged woman through the different voices of Iris Murdoch in Richard Eyre's film Iris (2001). Key vocal scenes are considered: On-screen encounters between young and aged Iris, vocal representations of dementia symptoms and silencing Iris as her disease progresses. Further, Iris' recurrent unaccompanied song, "The Lark in the Clear Air," compels audiences to "see" Iris with their ears more than with their eyes, exemplifying the representational power of sound in film. This paper is an appeal for increased debate about sonic representations of aged women, ageing and Alzheimer's disease and dementia in film. The significance of audiences' critical awareness and understanding about the social implications of these representations is discussed. © The Author(s) 2014.
Summation of visual motion across eye movements reflects a nonspatial decision mechanism.
Morris, Adam P; Liu, Charles C; Cropper, Simon J; Forte, Jason D; Krekelberg, Bart; Mattingley, Jason B
2010-07-21
Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g., "spatiotopic" receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.
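One textbook way to see the "statistical advantage" at issue (that merely having two independent motion signals improves a two-alternative decision, with no need for spatiotopic sensory integration) is the following short calculation. The d' value and the assumption of independent, equal-variance Gaussian observations pooled optimally at the decision stage are illustrative, not the authors' model.

from math import sqrt
from statistics import NormalDist

def pc_2afc(dprime):
    # Proportion correct in a two-alternative task for a given sensitivity d'.
    return NormalDist().cdf(dprime / sqrt(2))

d_single = 1.0                              # sensitivity from one motion signal (assumed)
d_two = sqrt(d_single**2 + d_single**2)     # optimal pooling of two independent observations

print(f"one signal:  d'={d_single:.2f}, Pc={pc_2afc(d_single):.3f}")
print(f"two signals: d'={d_two:.2f}, Pc={pc_2afc(d_two):.3f}")
# The improvement arises at the decision stage alone, whether or not the two
# signals share a spatial location or a motion direction.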
Parks, Connie L; Monson, Keith L
2016-09-01
Expanding on research previously reported by the authors, this study further examines the recognizability of ReFace facial approximations generated with the following eye orb positions: (i) centrally within the bony eye socket, (ii) 1.0 mm superior and 2.0 mm lateral relative to center, and (iii) 1.0 mm superior and 2.5 mm lateral relative to center. Overall, 81% of the test subjects' approximation ranks improved with the use of either of the two supero-lateral eye orbs. Highly significant performance differences (p<0.01) were observed between the approximations with centrally positioned eye orbs (i) and approximations with the eye orbs placed in the supero-lateral positions (ii and iii). Noteworthy was the observation that in all cases when the best rank for an approximation was obtained with the eye orbs in position (iii), the second best rank was achieved with the eye orbs in position (ii). A similar pattern was also observed when the best rank was obtained with the eye orbs in position (ii), with 60% of the second best ranks observed in position (iii). It is argued, therefore, that an approximation constructed with the eye orbs placed in either of the two supero-lateral positions may be more effective and operationally informative than centrally positioned orbs. Copyright © 2016. Published by Elsevier Ireland Ltd.
Ocular Vestibular Evoked Myogenic Potentials in Response to Three Test Positions and Two Frequencies
Todai, Janvi K.; Congdon, Sharon L.; Sangi-Haghpeykar, Haleh; Cohen, Helen S.
2014-01-01
Objective To determine how eye closure, test positions, and stimulus frequencies influence ocular vestibular evoked myogenic potentials. Study Design This study used a within-subjects repeated measures design. Methods Twenty asymptomatic subjects were each tested on ocular vestibular evoked myogenic potentials in three head/eye conditions at 500 Hz and 1000 Hz using air-conducted sound: 1) Sitting upright, head erect, eyes open, looking up. 2) Lying supine, neck flexed 30 degrees, eyes open and looking up. 3) Lying supine, neck flexed 30 degrees, eyes closed and relaxed. Four dependent variables measured were n10, p16, amplitude, and threshold. Results The supine position/ eyes open was comparable to sitting/ eyes open and better than supine/ eyes closed. Eyes closed resulted in lower amplitude, higher threshold, and prolonged latency. Significantly fewer subjects provided responses with eyes closed than with eyes open. No significant differences were found between both eyes open conditions. Both n10 and p16 were lower at 1000 Hz than at 500 Hz. Amplitude and threshold were higher at 1000 Hz than at 500 Hz. Conclusion Supine eyes open is a reliable alternative to sitting eyes open in patients who cannot maintain a seated position. Testing at 1000 Hz provides a larger response with a faster onset that fatigues faster than at 500 Hz. The increased variability and decreased response in the eyes closed position suggest that the eyes closed position is not reliable. PMID:24178911
Olsen, Rosanna K; Lee, Yunjo; Kube, Jana; Rosenbaum, R Shayna; Grady, Cheryl L; Moscovitch, Morris; Ryan, Jennifer D
2015-04-01
Current theories state that the hippocampus is responsible for the formation of memory representations regarding relations, whereas extrahippocampal cortical regions support representations for single items. However, findings of impaired item memory in hippocampal amnesics suggest a more nuanced role for the hippocampus in item memory. The hippocampus may be necessary when the item elements need to be bound within and across episodes to form a lasting representation that can be used flexibly. The current investigation was designed to test this hypothesis in face recognition. H.C., an individual who developed with a compromised hippocampal system, and control participants incidentally studied individual faces that either varied in presentation viewpoint across study repetitions or remained in a fixed viewpoint across the study repetitions. Eye movements were recorded during encoding and participants then completed a surprise recognition memory test. H.C. demonstrated altered face viewing during encoding. Although the overall number of fixations made by H.C. was not significantly different from that of controls, the distribution of her viewing was primarily directed to the eye region. Critically, H.C. was significantly impaired in her ability to subsequently recognize faces studied from variable viewpoints, but demonstrated spared performance in recognizing faces she encoded from a fixed viewpoint, implicating a relationship between eye movement behavior and a hippocampal binding function. These findings suggest that a compromised hippocampal system disrupts the ability to bind item features within and across study repetitions, ultimately disrupting recognition when it requires access to flexible relational representations. Copyright © 2015 the authors 0270-6474/15/355342-09$15.00/0.
An Eye Model for Computational Dosimetry Using A Multi-Scale Voxel Phantom
NASA Astrophysics Data System (ADS)
Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek
2014-06-01
The lens of the eye is a radiosensitive tissue with cataract formation being the major concern. Recently reduced recommended dose limits to the lens of the eye have made understanding the dose to this tissue of increased importance. Due to memory limitations, the voxel resolution of computational phantoms used for radiation dose calculations is too large to accurately represent the dimensions of the eye. A revised eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and is then transformed into a high-resolution voxel model. This eye model is combined with an existing set of whole body models to form a multi-scale voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.
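The idea of embedding a high-resolution eye grid inside a coarse whole-body voxel phantom can be sketched as nested numpy grids. The grid dimensions, voxel sizes, tissue ID labels, spherical globe/lens geometry and anatomical placement below are illustrative assumptions, not the phantom described in the paper.

import numpy as np

# Coarse whole-body phantom block: 5 mm voxels, small-integer tissue IDs (assumed labels).
body = np.zeros((40, 40, 80), dtype=np.uint8)
body[:, :, 60:] = 1                              # head region, tissue ID 1

# Fine eye model: 0.25 mm voxels covering a 25 mm cube around one eye.
eye = np.zeros((100, 100, 100), dtype=np.uint8)
zz, yy, xx = np.indices(eye.shape)
globe = (zz - 50)**2 + (yy - 50)**2 + (xx - 50)**2 <= 48**2   # ~24 mm globe
lens  = (zz - 50)**2 + (yy - 50)**2 + (xx - 85)**2 <= 18**2   # ~9 mm lens near the anterior pole
eye[globe] = 2                                   # vitreous / globe, tissue ID 2
eye[globe & lens] = 3                            # radiosensitive lens, tissue ID 3

# "Multi-scale" bookkeeping: record where the fine grid sits inside the coarse grid
# so a transport code can switch resolution when a particle enters the eye region.
eye_origin_in_body_voxels = (18, 10, 72)         # assumed anatomical location
voxel_size_body_mm, voxel_size_eye_mm = 5.0, 0.25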
The representation of the back in idiomatic expressions--do idioms value the body?
Cedraschi, C; Bove, D; Perrin, E; Vischer, T L
2000-01-01
Whilst investigating the influence of patients' representations on the impact of teaching in the back school, we took an interest in 1) the place of the back in the French idioms referring to the body; and 2) the meaning these idioms convey about the back. The idioms including body part terms were sought on the basis of a compilation of French idioms; it has to be noted that such a compilation, however excellent it may be, can only offer a partial view of lay conversation. Occurrence of body parts and of their connotations were assessed. Idioms were classified as positive, negative or neutral, keeping in mind the difficulties of a strict classification in such a field. Drawings were then performed on the basis of the results of the descriptive analysis. Globally, idiomatic expressions offer a rather negative picture of the body or at least suggest that the body is prominently used to express negative ideas and emotions. This is particularly striking for the idioms associated with the back. The analysis of idioms referring to the body allows us to 'see with our own eyes' another aspect of the representations of the body and the back, as they are conveyed in the French language.
Efficient visual coding and the predictability of eye movements on natural movies.
Vig, Eleonora; Dorr, Michael; Barth, Erhardt
2009-01-01
We deal with the analysis of eye movements made on natural movies in free-viewing conditions. Saccades are detected and used to label two classes of movie patches as attended and non-attended. Machine learning techniques are then used to determine how well the two classes can be separated, i.e., how predictable saccade targets are. Although very simple saliency measures are used and then averaged to obtain just one average value per scale, the two classes can be separated with an ROC score of around 0.7, which is higher than previously reported results. Moreover, predictability is analysed for different representations to obtain indirect evidence for the likelihood of a particular representation. It is shown that the predictability correlates with the local intrinsic dimension in a movie.
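The pipeline sketched above (label movie patches as attended saccade targets or non-attended controls, reduce each patch to one average saliency value, and measure separability with an ROC score) can be illustrated as follows. The feature (mean spatiotemporal gradient energy), the rank-based AUC computation and the synthetic patches are assumptions for the sketch, not the authors' exact measures.

import numpy as np

def mean_energy(patch):
    # One number per patch: average squared spatiotemporal gradient,
    # a crude stand-in for the simple saliency measures mentioned above.
    grads = np.gradient(patch.astype(float))
    return np.mean(sum(g**2 for g in grads))

def auc(pos_scores, neg_scores):
    # ROC area via the rank-sum (Mann-Whitney) statistic.
    pos, neg = np.asarray(pos_scores), np.asarray(neg_scores)
    ranks = np.argsort(np.argsort(np.concatenate([pos, neg]))) + 1
    u = ranks[:len(pos)].sum() - len(pos) * (len(pos) + 1) / 2
    return u / (len(pos) * len(neg))

rng = np.random.default_rng(2)
# Toy data: (t, y, x) patches around saccade targets vs control locations.
attended     = [rng.normal(scale=1.2, size=(5, 16, 16)) for _ in range(200)]
non_attended = [rng.normal(scale=1.0, size=(5, 16, 16)) for _ in range(200)]
score = auc([mean_energy(p) for p in attended],
            [mean_energy(p) for p in non_attended])
print("ROC score:", score)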
Arba-Mosquera, Samuel; Aslanides, Ioannis M.
2012-01-01
Purpose: To analyze the effects of eye-tracker performance on pulse positioning errors during refractive surgery. Methods: A comprehensive model has been developed that directly considers eye movements (saccades, vestibular, optokinetic, vergence, and miniature movements), as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay. Results: Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than eye-tracker acquisition rates essentially double pulse-positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions: The proposed model can be used to compare laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than the decentrations observed in clinical settings. There is no single parameter that alone minimizes the positioning error; it is the optimal combination of several parameters that minimizes the error. The results of this analysis are important for understanding the limitations of correcting very irregular ablation patterns.
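A back-of-the-envelope version of how such timing parameters translate into pulse positioning errors is sketched below: during a saccade the eye can move at hundreds of degrees per second, so the error for one pulse is roughly eye velocity times the total dead time (one sample interval plus latency plus scanner positioning time), converted to millimetres on the cornea. The velocity and conversion values are illustrative assumptions, not the paper's full model, which also covers vestibular, optokinetic, vergence and miniature movements.

# Rough single-pulse positioning error during a saccade (illustrative numbers).
saccade_velocity_deg_s = 300.0    # peak saccadic velocity (assumed)
mm_per_degree = 0.2               # approx. corneal displacement per degree of eye rotation (assumed)

acquisition_rate_hz = 100.0       # eye-tracker sampling rate
latency_s = 0.005                 # eye-tracker latency
scanner_time_s = 0.002            # scanner positioning time

# Worst-case staleness of the position estimate used to place the pulse.
dead_time_s = 1.0 / acquisition_rate_hz + latency_s + scanner_time_s

error_mm = saccade_velocity_deg_s * dead_time_s * mm_per_degree
print(f"worst-case pulse positioning error ~ {error_mm:.2f} mm")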
ERIC Educational Resources Information Center
Altmann, Gerry T.M.; Kamide, Yuki
2007-01-01
Two experiments explored the representational basis for anticipatory eye movements. Participants heard "the man will drink ..." or "the man has drunk ..." (Experiment 1) or "the man will drink all of ..." or "the man has drunk all of ..." (Experiment 2). They viewed a concurrent scene depicting a full glass of beer and an empty wine glass (amongst…
Neuro-Linguistic Programming: Eye Movements as Indicators of Representational Systems.
1984-09-01
Fragment of the report's bibliography and vita. Recoverable citations include Elizabeth A. Beck, "Test of the Eye Movement Hypothesis of Neurolinguistic Programming: A Rebuttal of Conclusions," Perceptual and Motor Skills, 58: 175; Davida Maron, "Neurolinguistic Programming: The Answer to Change?," Training and Development; and a further Perceptual and Motor Skills item (51: 230, April 1980).
Neural representation of objects in space: a dual coding account.
Humphreys, G W
1998-01-01
I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on: (i) visual neglect; and (ii) reading and counting, reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification. PMID:9770227
Spatial orientation of the vestibular system
NASA Technical Reports Server (NTRS)
Raphan, T.; Dai, M.; Cohen, B.
1992-01-01
1. A simplified three-dimensional state space model of visual vestibular interaction was formulated. Matrix and dynamical system operators representing coupling from the semicircular canals and the visual system to the velocity storage integrator were incorporated into the model. 2. It was postulated that the system matrix for a tilted position was a composition of two linear transformations of the system matrix for the upright position. One transformation modifies the eigenvalues of the system matrix while another rotates the pitch and roll eigenvectors with the head, while maintaining the yaw axis eigenvector approximately spatially invariant. Using this representation, the response characteristics of the pitch, roll, and yaw eye velocity were obtained in terms of the eigenvalues and associated eigenvectors. 3. Using OKAN data obtained from monkeys and comparing to the model predictions, the eigenvalues and eigenvectors of the system matrix were identified as a function of tilt to the side or of tilt to the prone positions, using a modification of the Marquardt algorithm. The yaw eigenvector for right-side-down tilt and for downward pitch cross-coupling was approximately 30 degrees from the spatial vertical. For the prone position, the eigenvector was computed to be approximately 20 degrees relative to the spatial vertical. For both side-down and prone positions, oblique OKN induced along eigenvector directions generated OKAN which decayed to zero along a straight line with approximately a single time constant. This was verified by a spectral analysis of the residual sequence about the straight line fit to the decaying data. The residual sequence was associated with a narrow autocorrelation function and a wide power spectrum. 4. Parameters found using the Marquardt algorithm were incorporated into the model. Diagonal matrices in a head coordinate frame were introduced to represent the direct pathway and the coupling of the visual system to the integrator. Model simulations predicted the behavior of yaw and pitch OKN and OKAN when the animal was upright, as well as the cross-coupling in the tilted position. The trajectories in velocity space were also accurately simulated. 5. There were similarities between the monkey eigenvectors and human perception of the spatial vertical. For side-down tilts and downward eye velocity cross-coupling, there was only an Aubert (A) effect. For upward eye velocity cross-coupling there were both Muller (E) and Aubert (A) effects. The mean of the eigenvectors for upward and downward eye velocities overlay human 1 x g perceptual data.(ABSTRACT TRUNCATED AT 400 WORDS).
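The core of the model described above, a velocity-storage system matrix whose pitch and roll eigenvectors rotate with the head while the yaw eigenvector stays roughly spatially invariant, can be sketched numerically. The time constants, the 30-degree yaw-eigenvector offset and the initial stored velocity below are illustrative values, not the fitted monkey parameters.

import numpy as np

taus = np.array([15.0, 6.0, 4.0])        # yaw, pitch, roll time constants in s (assumed)
tilt_deg, yaw_offset_deg = 90.0, 30.0    # right-side-down tilt; yaw eigenvector offset from spatial vertical

# Eigenvectors in head coordinates: pitch and roll eigenvectors stay head-fixed,
# while the yaw eigenvector stays close to the spatial vertical, i.e. it is
# rotated away from the head yaw axis toward the pitch axis.
a = np.deg2rad(tilt_deg - yaw_offset_deg)
V = np.column_stack([[np.cos(a), np.sin(a), 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
lam = -1.0 / taus                         # eigenvalues (inverse time constants)

def okan_velocity(t, v0):
    # v(t) = V exp(Lambda t) V^-1 v0 : each stored component decays along its eigenvector.
    return V @ (np.exp(lam * t) * np.linalg.solve(V, v0))

v0 = np.array([30.0, 0.0, 0.0])           # stored yaw velocity at OKAN onset (deg/s)
for t in (0.0, 5.0, 10.0, 20.0):
    yaw, pitch, roll = okan_velocity(t, v0)
    print(f"t={t:4.1f}s  yaw={yaw:6.2f}  pitch={pitch:6.2f}  roll={roll:6.2f}")
# In the tilted position the decaying yaw velocity cross-couples into a vertical
# (pitch) component, the qualitative behaviour the abstract describes.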
Action Anticipation and Interference: A Test of Prospective Gaze
Cannon, Erin N.; Woodward, Amanda L.
2013-01-01
In the current study we investigate the proposal that one aspect of social perception, action anticipation, involves the recruitment of representations for self-produced action. An eye tracking paradigm was implemented to measure prospective gaze to a goal while performing either a motor or working memory task. Results indicate an effect of the motor task, suggesting the interference of a shared motor and action perception representation. PMID:25285317
Wible, Cynthia G.
2012-01-01
A framework is described for understanding the schizophrenic syndrome at the brain systems level. It is hypothesized that over-activation of dynamic gesture and social perceptual processes in the temporal-parietal occipital junction (TPJ), posterior superior temporal sulcus (PSTS) and surrounding regions produces the syndrome (including positive and negative symptoms, their prevalence, prodromal signs, and cognitive deficits). Hippocampal system hyper-activity and atrophy have been consistently found in schizophrenia. Hippocampal activity is highly correlated with activity in the TPJ and may be a source of over-excitation of the TPJ and surrounding regions. Strong evidence for this comes from in-vivo recordings in humans during psychotic episodes. Many positive symptoms of schizophrenia can be reframed as the erroneous sense of a presence or other who is observing, acting, speaking, or controlling; these qualia are similar to those evoked during abnormal activation of the TPJ. The TPJ and PSTS play a key role in the perception (and production) of dynamic social, emotional, and attentional gestures for the self and others (e.g., body/face/eye gestures, audiovisual speech and prosody, and social attentional gestures such as eye gaze). The single cell representation of dynamic gestures is multimodal (auditory, visual, tactile), matching the predominant hallucinatory categories in schizophrenia. Inherent in the single cell perceptual signal of dynamic gesture representations is a computation of intention, agency, and anticipation or expectancy (for the self and others). Stimulation of the TPJ resulting in activation of the self representation has been shown to result in a feeling of a presence or multiple presences (due to heautoscopy) and also in bizarre tactile experiences. Neurons in the TPJ are also tuned, or biased, to detect threat-related emotions. Abnormal over-activation in this system could produce the conscious hallucination of a voice (audiovisual speech), a person or a touch. Over-activation could interfere with attentional/emotional gesture perception and production (negative symptoms). It could produce the unconscious feeling of being watched, followed, or of a social situation unfolding along with accompanying abnormal perception of intent and agency (delusions). Abnormal activity in the TPJ would also be predicted to create several cognitive disturbances that are characteristic of schizophrenia, including abnormalities in attention, predictive social processing, working memory, and a bias to erroneously perceive threat. PMID:22737114
Role of Oculoproprioception in Coding the Locus of Attention.
Odoj, Bartholomaeus; Balslev, Daniela
2016-03-01
The most common neural representations for spatial attention encode locations retinotopically, relative to center of gaze. To keep track of visual objects across saccades or to orient toward sounds, retinotopic representations must be combined with information about the rotation of one's own eyes in the orbits. Although gaze input is critical for a correct allocation of attention, the source of this input has so far remained unidentified. Two main signals are available: corollary discharge (copy of oculomotor command) and oculoproprioception (feedback from extraocular muscles). Here we asked whether the oculoproprioceptive signal relayed from the somatosensory cortex contributes to coding the locus of attention. We used continuous theta burst stimulation (cTBS) over a human oculoproprioceptive area in the postcentral gyrus (S1EYE). S1EYE-cTBS reduces proprioceptive processing, causing ∼1° underestimation of gaze angle. Participants discriminated visual targets whose location was cued in a nonvisual modality. Throughout the visual space, S1EYE-cTBS shifted the locus of attention away from the cue by ∼1°, in the same direction and by the same magnitude as the oculoproprioceptive bias. This systematic shift cannot be attributed to visual mislocalization. Accuracy of open-loop pointing to the same visual targets, a function thought to rely mainly on the corollary discharge, was unchanged. We argue that oculoproprioception is selective for attention maps. By identifying a potential substrate for the coupling between eye and attention, this study contributes to the theoretical models for spatial attention.
Misslisch, H; Hess, B J M
2002-11-01
This study examined two kinematical features of the rotational vestibulo-ocular reflex (VOR) of the monkey in near vision. First, is there an effect of eye position on the axes of eye rotation during yaw, pitch and roll head rotations when the eyes are converged to fixate near targets? Second, do the three-dimensional positions of the left and right eye during yaw and roll head rotations obey the binocular extension of Listing's law (L2), showing eye position planes that rotate temporally by a quarter as far as the angle of horizontal vergence? Animals fixated near visual targets requiring 17 or 8.5 degrees vergence and placed at straight ahead, 20 degrees up, down, left, or right during yaw, pitch, and roll head rotations at 1 Hz. The 17 degrees vergence experiments were performed both with and without a structured visual background, the 8.5 degrees vergence experiments with a visual background only. A 40 degrees horizontal change in eye position never influenced the axis of eye rotation produced by the VOR during pitch head rotation. Eye position did not affect the VOR eye rotation axes, which stayed aligned with the yaw and roll head rotation axes, when torsional gain was high. If torsional gain was low, eccentric eye positions produced yaw and roll VOR eye rotation axes that tilted somewhat in the directions predicted by Listing's law, i.e., with or opposite to gaze during yaw or roll. These findings were seen in both visual conditions and in both vergence experiments. During yaw and roll head rotations with a 40 degrees vertical change in gaze, torsional eye position followed on average the prediction of L2: the left eye showed counterclockwise (ex-) torsion in down gaze and clockwise (in-) torsion in up gaze and vice versa for the right eye. In other words, the left and right eye's position plane rotated temporally by about a quarter of the horizontal vergence angle. Our results indicate that torsional gain is the central mechanism by which the brain adjusts the retinal image stabilizing function of the VOR both in far and near vision and the three dimensional eye positions during yaw and roll head rotations in near vision follow on average the predictions of L2, a kinematic pattern that is maintained by the saccadic/quick phase system.
Visual optics: an engineering approach
NASA Astrophysics Data System (ADS)
Toadere, Florin
2010-11-01
The human visual system interprets information from visible light to build a representation of the world surrounding the body. It derives color by comparing the responses to light from the three types of photoreceptor cones in the eyes. These long-, medium- and short-wavelength cones are sensitive to the red, green and blue portions of the visible spectrum, respectively. We simulate color vision for normal eyes. We show the effects of dyes, filters, glasses and windows on color perception when the test image is illuminated with a D65 light source. Beyond color perception, the human eye can suffer from diseases and disorders. The eye can be seen as an optical instrument with its own "eye print". We present current methods and technologies that can capture and correct the wavefront aberrations of the human eye. We focus on the Seidel aberration formulas, Zernike polynomials, the Shack-Hartmann sensor, LASIK, interferogram fringe aberrations and the Talbot effect.
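As a minimal illustration of the wavefront-description side of this topic, the sketch below evaluates a few low-order Zernike terms over a circular pupil; the particular terms chosen, their OSA/ANSI normalization, and the example coefficient values are illustrative assumptions, not measurements from the paper.

import numpy as np

def zernike_wavefront(defocus, astig_0, coma_y, n=256):
    # Low-order Zernike terms over a unit pupil (OSA/ANSI normalization):
    #   Z(2, 0)  defocus            sqrt(3) * (2 rho^2 - 1)
    #   Z(2, 2)  astigmatism 0/90   sqrt(6) * rho^2 * cos(2 theta)
    #   Z(3,-1)  vertical coma      sqrt(8) * (3 rho^3 - 2 rho) * sin(theta)
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    w = (defocus * np.sqrt(3) * (2 * rho**2 - 1)
         + astig_0 * np.sqrt(6) * rho**2 * np.cos(2 * theta)
         + coma_y * np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.sin(theta))
    w[rho > 1] = np.nan                 # outside the pupil
    return w                            # wavefront error in the same units as the coefficients (e.g. microns)

# Example: 0.5 um defocus with small astigmatism and coma.
wavefront = zernike_wavefront(defocus=0.5, astig_0=0.1, coma_y=0.05)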
ERIC Educational Resources Information Center
Quinn, Paul C.; Doran, Matthew M.; Reiss, Jason E.; Hoffman, James E.
2009-01-01
Previous looking time studies have shown that infants use the heads of cat and dog images to form category representations for these animal classes. The present research used an eye-tracking procedure to determine the time course of attention to the head and whether it reflects a preexisting bias or online learning. Six- to 7-month-olds were…
Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.
Coco, Moreno I; Keller, Frank; Malcolm, George L
2016-11-01
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices. Copyright © 2015 Cognitive Science Society, Inc.
Differential processing of part-to-whole and part-to-part face priming: an ERP study.
Jemel, B; George, N; Chaby, L; Fiori, N; Renault, B
1999-04-06
We provide electrophysiological evidence supporting the hypothesis that part and whole face processing involve distinct functional mechanisms. We used a congruency judgment task and studied part-to-whole and part-to-part priming effects. Neither part-to-whole nor part-to-part conditions elicited early congruency effects on face-specific ERP components, suggesting that activation of the internal representations should occur later on. However, these components showed differential responsiveness to whole faces and isolated eyes. In addition, although late ERP components were affected when the eye targets were not associated with the prime in both conditions, their temporal and topographical features depended on the latter. These differential effects suggest the existence of distributed neural networks in the inferior temporal cortex where part and whole facial representations may be stored.
Enhanced phase synchrony in the electroencephalograph γ band for musicians while listening to music
NASA Astrophysics Data System (ADS)
Bhattacharya, Joydeep; Petsche, Hellmuth
2001-07-01
Multichannel electroencephalograph signals from two broad groups, 10 musicians and 10 nonmusicians, recorded in different states (in resting states or no task condition, with eyes opened and eyes closed, and with two musical tasks, listening to two different pieces of music) were studied. Degrees of phase synchrony in various frequency bands were assessed. No differences in the degree of synchronization in any frequency band were found between the two groups in resting conditions. Yet, while listening to music, significant increases of synchronization were found only in the γ-frequency range (>30 Hz) over large cortical areas for the group of musicians. This high degree of synchronization elicited by music in the group of musicians might be due to their ability to host long-term memory representations of music and mediate access to these stored representations.
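The central quantity here is the degree of phase synchrony between channel pairs within a frequency band. One common estimator is the phase-locking value (PLV) obtained from band-pass-filtered signals and their analytic phase; the sketch below is a generic illustration of that computation (the sampling rate, band limits and synthetic signals are assumptions, not the authors' exact procedure).

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(30.0, 48.0)):
    """Phase-locking value between two equal-length signals in a frequency band.

    PLV = |mean(exp(i * (phase_x - phase_y)))|, ranging from 0 (no phase
    coupling) to 1 (constant phase difference).
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)
    dphi = np.angle(hilbert(xf)) - np.angle(hilbert(yf))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Toy usage: two noisy signals sharing a 40 Hz component (fs and band are assumed).
fs = 250.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 40 * t + 0.8) + 0.5 * np.random.randn(t.size)
print("gamma-band PLV:", phase_locking_value(x, y, fs))
```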
Intrinsic dimensionality predicts the saliency of natural dynamic scenes.
Vig, Eleonora; Dorr, Michael; Martinetz, Thomas; Barth, Erhardt
2012-06-01
Since visual attention-based computer vision applications have gained popularity, ever more complex, biologically inspired models seem to be needed to predict salient locations (or interest points) in naturalistic scenes. In this paper, we explore how far one can go in predicting eye movements by using only basic signal processing, such as image representations derived from efficient coding principles, and machine learning. To this end, we gradually increase the complexity of a model from simple single-scale saliency maps computed on grayscale videos to spatiotemporal multiscale and multispectral representations. Using a large collection of eye movements on high-resolution videos, supervised learning techniques fine-tune the free parameters whose addition is inevitable with increasing complexity. The proposed model, although very simple, demonstrates significant improvement in predicting salient locations in naturalistic videos over four selected baseline models and two distinct data labeling scenarios.
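In this line of work, the intrinsic dimensionality of a local image region is typically estimated from the eigenvalues of a structure tensor: no large eigenvalues means i0D (flat regions), one means i1D (edges), several mean i2D/i3D (corners, transient features). The sketch below is our simplified, purely spatial illustration (the published model is spatiotemporal and multispectral); it computes the two structure-tensor eigenvalues of a grayscale frame.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_eigenvalues(frame, sigma=2.0):
    """Eigenvalues of the 2D structure tensor at every pixel of a grayscale frame.

    Large lambda1 only -> roughly 1D structure (edges); large lambda1 and
    lambda2 -> 2D structure (corners, junctions), the kind of local signal
    that tends to attract eye movements.
    """
    ix, iy = sobel(frame, axis=1), sobel(frame, axis=0)
    jxx = gaussian_filter(ix * ix, sigma)
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    trace, det = jxx + jyy, jxx * jyy - jxy**2
    disc = np.sqrt(np.maximum((trace / 2) ** 2 - det, 0.0))
    return trace / 2 + disc, trace / 2 - disc   # lambda1, lambda2

# Toy usage on a random "frame"; a real model would use video and add time as a third axis.
frame = np.random.rand(128, 128)
lam1, lam2 = structure_tensor_eigenvalues(frame)
saliency_2d = lam2  # crude proxy: high where the local signal is intrinsically 2D
```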
Preferential coding of eye/hand motor actions in the human ventral occipito-temporal cortex.
Tosoni, Annalisa; Guidotti, Roberto; Del Gratta, Cosimo; Committeri, Giorgia; Sestieri, Carlo
2016-12-01
The human ventral occipito-temporal cortex (OTC) contains areas specialized for particular perceptual/semantic categories, such as faces (fusiform face area, FFA) and places (parahippocampal place area, PPA). This organization has been interpreted as reflecting the visual structure of the world, i.e. perceptual similarity and/or eccentricity biases. However, recent functional magnetic resonance imaging (fMRI) studies have shown not only that regions of the OTC are modulated by non-visual, action-related object properties but also by motor planning and execution, although the functional role and specificity of this motor-related activity are still unclear. Here, through a reanalysis of previously published data, we tested whether the selectivity for perceptual/semantic categories in the OTC corresponds to a preference for particular motor actions. The results demonstrate for the first time that face- and place-selective regions of the OTC exhibit preferential BOLD response to the execution of hand pointing and saccadic eye movements, respectively. Moreover, multivariate analyses provide novel evidence for the consistency across neural representations of stimulus category and movement effector in OTC. According to a 'spatial hypothesis', this pattern of results originates from the match between the region eccentricity bias and the typical action space of the motor effectors. Alternatively, the double dissociation may be caused by the different effect produced by hand vs. eye movements on regions coding for body representation. Overall, the present findings offer novel insights on the coupling between visual and motor cortical representations. Copyright © 2016. Published by Elsevier Ltd.
Hadjidimitrakis, K; Moschovakis, A K; Dalezios, Y; Grantyn, A
2007-05-01
Rapid gaze shifts are often accomplished with coordinated movements of the eyes and head, the relative amplitude of which depends on the starting position of the eyes. The size of gaze shifts is determined by the superior colliculus (SC) but additional processing in the lower brain stem is needed to determine the relative contributions of eye and head components. Models of eye-head coordination often assume that the strength of the command sent to the head controllers is modified by a signal indicative of the eye position. Evidence in favor of this hypothesis has been recently obtained in a study of phasic electromyographic (EMG) responses to stimulation of the SC in head-restrained monkeys (Corneil et al. in J Neurophysiol 88:2000-2018, 2002b). Bearing in mind that the patterns of eye-head coordination are not the same in all species and because the eye position sensitivity of phasic EMG responses has not been systematically investigated in cats, in the present study we used cats to address this issue. We stimulated electrically the intermediate and deep layers of the caudal SC in alert cats and recorded the EMG responses of neck muscles with horizontal and vertical pulling directions. Our data demonstrate that phasic, short latency EMG responses can be modulated by the eye position such that they increase as the eye occupies more and more eccentric positions in the pulling direction of the muscle tested. However, the influence of the eye position is rather modest, typically accounting for only 10-50% of the variance of EMG response amplitude. Responses evoked from several SC sites were not modulated by the eye position.
HUMAN EYE OPTICS: Determination of positions of optical elements of the human eye
NASA Astrophysics Data System (ADS)
Galetskii, S. O.; Cherezova, T. Yu
2009-02-01
An original method for noninvasively determining the positions of the elements of intraocular optics is proposed. Within the framework of the proposed method, the analytic dependence of the measurement error on the optical-scheme parameters and on the distance restriction to the element being measured is derived. It is shown that the method can be efficiently used for determining the positions of elements in the classical Gullstrand eye model and in personalised eye models. The positions of six optical surfaces of the Gullstrand eye model and of four optical surfaces of the personalised eye model can be determined with an error of less than 0.25 mm.
Dooley, K O; Farmer, A
1988-08-01
Neurolinguistic programming's hypothesized eye movements were measured independently using videotapes of 10 nonfluent aphasic and 10 control subjects matched for age and sex. Chi-squared analysis indicated that eye-position responses were significantly different for the groups. Although earlier research has not supported the hypothesized eye positions for normal subjects, the present findings support the contention that eye-position responses may differ between neurologically normal and aphasic individuals.
Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures
NASA Astrophysics Data System (ADS)
Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino
2010-05-01
3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it gives them a qualitative representation of their subject of study. In this paper we propose a method for 3D reconstruction of retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. The correspondence problem is solved using correlation, and LMedS analysis together with the Graph Transformation Matching algorithm is used to suppress outliers. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D positions of the retinal tree points by linear triangulation. To increase the power of visualization, the 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. We also propose a complete calibration process covering the fundus camera and the optical properties of the eye, the so-called camera-eye system. On the one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system by assuming that a contact enlarging lens corrects astigmatism, that spherical and coma aberrations are reduced by changing the aperture size, and that eye refractive errors are suppressed by adjusting the camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
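The geometric steps named above (correlation-based correspondences, LMedS outlier rejection, fundamental-matrix estimation, linear triangulation) map onto standard multiple-view-geometry primitives. The sketch below is a generic OpenCV illustration of those final steps only, under the simplifying assumption of a single known intrinsic matrix K; it is not the RISA/camera-eye pipeline itself.

```python
import numpy as np
import cv2

def reconstruct_points(pts1, pts2, K):
    """Two-view linear triangulation from matched image points (Nx2 float arrays).

    Uses LMedS-based fundamental matrix estimation for outlier rejection and
    assumes both views share the same intrinsic matrix K (a simplification).
    """
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)
    inliers = mask.ravel().astype(bool)
    E = K.T @ F @ K                                     # essential matrix
    _, R, t, _ = cv2.recoverPose(E, pts1[inliers], pts2[inliers], K)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
    P2 = K @ np.hstack([R, t])
    X_h = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    return (X_h[:3] / X_h[3]).T                         # Nx3 Euclidean points

# Usage assumes pts1 and pts2 come from an earlier correspondence step
# (e.g. correlation matching on segmented vessel feature points).
```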
Spatial effects of shifting prisms on properties of posterior parietal cortex neurons
Karkhanis, Anushree N; Heider, Barbara; Silva, Fabian Muñoz; Siegel, Ralph M
2014-01-01
The posterior parietal cortex contains neurons that respond to visual stimulation and motor behaviour. The objective of the current study was to test short-term adaptation in neurons in macaque area 7a and the dorsal prelunate during visually guided reaching using Fresnel prisms that displaced the visual field. The visual perturbation shifted the eye position and created a mismatch between perceived and actual reach location. Two non-human primates were trained to reach to visual targets before, during and after prism exposure while fixating the reach target in different locations. They were required to reach to the physical location of the reach target and not the perceived, displaced location. While behavioural adaptation to the prisms occurred within a few trials, the majority of neurons responded to the distortion either with substantial changes in spatial eye position tuning or changes in overall firing rate. These changes persisted even after prism removal. The spatial changes were not correlated with the direction of induced prism shift. The transformation of gain fields between conditions was estimated by calculating the translation and rotation in Euler angles. Rotations and translations of the horizontal and vertical spatial components occurred in a systematic manner for the population of neurons suggesting that the posterior parietal cortex retains a constant representation of the visual field remapping between experimental conditions. PMID:24928956
Biometric recognition via fixation density maps
NASA Astrophysics Data System (ADS)
Rigas, Ioannis; Komogortsev, Oleg V.
2014-05-01
This work introduces and evaluates a novel eye movement-driven biometric approach that employs eye fixation density maps for person identification. The proposed feature offers a dynamic representation of the biometric identity, storing rich information regarding the behavioral and physical eye movement characteristics of the individuals. The innate ability of fixation density maps to capture the spatial layout of eye movements, in conjunction with their probabilistic nature, makes them a particularly suitable option as an eye movement biometric trait in cases when free-viewing stimuli are presented. In order to demonstrate the effectiveness of the proposed approach, the method is evaluated on three different datasets containing a wide gamut of stimulus types, such as static images, video and text segments. The obtained results indicate a minimum EER (Equal Error Rate) of 18.3%, revealing the potential of fixation density maps as an enhancing biometric cue during identification scenarios in dynamic visual environments.
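A fixation density map is essentially a smoothed, normalized 2D histogram of fixation locations. The sketch below is a generic illustration of how such a map could be computed (screen size, grid resolution and smoothing width are assumed values, not those of the study).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fix_x, fix_y, screen=(1024, 768), grid=(64, 48), sigma=1.5):
    """Smoothed, normalized 2D histogram of fixation positions (a density map)."""
    hist, _, _ = np.histogram2d(
        fix_x, fix_y,
        bins=grid,
        range=[[0, screen[0]], [0, screen[1]]],
    )
    density = gaussian_filter(hist, sigma)
    return density / density.sum()      # probabilistic representation

# Toy usage: fixations clustered around the screen centre (assumed data).
rng = np.random.default_rng(0)
fx = rng.normal(512, 120, size=200).clip(0, 1023)
fy = rng.normal(384, 90, size=200).clip(0, 767)
fdm = fixation_density_map(fx, fy)
```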
Color-binding errors during rivalrous suppression of form.
Hong, Sang Wook; Shevell, Steven K
2009-09-01
How does a physical stimulus determine a conscious percept? Binocular rivalry provides useful insights into this question because constant physical stimulation during rivalry causes different visual experiences. For example, presentation of vertical stripes to one eye and horizontal stripes to the other eye results in a percept that alternates between horizontal and vertical stripes. Presentation of a different color to each eye (color rivalry) produces alternating percepts of the two colors or, in some cases, a color mixture. The experiments reported here reveal a novel and instructive resolution of rivalry for stimuli that differ in both form and color: perceptual alternation between the rivalrous forms (e.g., horizontal or vertical stripes), with both eyes' colors seen simultaneously in separate parts of the currently perceived form. Thus, the colors presented to the two eyes (a) maintain their distinct neural representations despite resolution of form rivalry and (b) can bind separately to distinct parts of the perceived form.
Prigent, Elise; Amorim, Michel-Ange; de Oliveira, Armando Mónica
2018-01-01
Humans have developed a specific capacity to rapidly perceive and anticipate other people's facial expressions so as to get an immediate impression of their emotional state of mind. We carried out two experiments to examine the perceptual and memory dynamics of facial expressions of pain. In the first experiment, we investigated how people estimate other people's levels of pain based on the perception of various dynamic facial expressions; these differ both in terms of the amount and intensity of activated action units. A second experiment used a representational momentum (RM) paradigm to study the emotional anticipation (memory bias) elicited by the same facial expressions of pain studied in Experiment 1. Our results highlighted the relationship between the level of perceived pain (in Experiment 1) and the direction and magnitude of memory bias (in Experiment 2): When perceived pain increases, the memory bias tends to be reduced (if positive) and ultimately becomes negative. Dynamic facial expressions of pain may reenact an "immediate perceptual history" in the perceiver before leading to an emotional anticipation of the agent's upcoming state. Thus, a subtle facial expression of pain (i.e., a low contraction around the eyes) that leads to a significant positive anticipation can be considered an adaptive process, one through which we can swiftly and involuntarily detect other people's pain.
Persichetti, Andrew S; Aguirre, Geoffrey K; Thompson-Schill, Sharon L
2015-05-01
A central concern in the study of learning and decision-making is the identification of neural signals associated with the values of choice alternatives. An important factor in understanding the neural correlates of value is the representation of the object itself, separate from the act of choosing. Is it the case that the representation of an object within visual areas will change if it is associated with a particular value? We used fMRI adaptation to measure the neural similarity of a set of novel objects before and after participants learned to associate monetary values with the objects. We used a range of both positive and negative values to allow us to distinguish effects of behavioral salience (i.e., large vs. small values) from effects of valence (i.e., positive vs. negative values). During the scanning session, participants made a perceptual judgment unrelated to value. Crucially, the similarity of the visual features of any pair of objects did not predict the similarity of their value, so we could distinguish adaptation effects due to each dimension of similarity. Within early visual areas, we found that value similarity modulated the neural response to the objects after training. These results show that an abstract dimension, in this case, monetary value, modulates neural response to an object in visual areas of the brain even when attention is diverted.
MC ray-tracing optimization of lobster-eye focusing devices with RESTRAX
NASA Astrophysics Data System (ADS)
Šaroun, Jan; Kulda, Jiří
2006-11-01
The enhanced functionalities of the latest version of the RESTRAX software, providing a high-speed Monte Carlo (MC) ray-tracing code to represent a virtual three-axis neutron spectrometer, include representation of parabolic and elliptic guide profiles and facilities for numerical optimization of parameter values, characterizing the instrument components. As examples, we present simulations of a doubly focusing monochromator in combination with cold neutron guides and lobster-eye supermirror devices, concentrating a monochromatic beam to small sample volumes. A Levenberg-Marquardt minimization algorithm is used to optimize simultaneously several parameters of the monochromator and lobster-eye guides. We compare the performance of optimized configurations in terms of monochromatic neutron flux and energy spread and demonstrate the effect of lobster-eye optics on beam transformations in real and momentum subspaces.
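Levenberg-Marquardt optimization of instrument parameters is, at its core, a nonlinear least-squares fit of a figure of merit. The sketch below illustrates only that optimization step with SciPy on a toy two-parameter objective; the parameter names and merit terms are placeholders, not RESTRAX internals.

```python
import numpy as np
from scipy.optimize import least_squares

def merit(params):
    """Toy figure of merit: residuals to minimize, standing in for quantities
    a ray-tracing run would return (e.g. negative flux, deviation of the
    energy spread from a target value)."""
    curvature, channel_width = params
    simulated_flux = np.exp(-(curvature - 1.2) ** 2) * np.exp(-(channel_width - 0.8) ** 2)
    energy_spread = 0.05 + 0.02 * abs(channel_width - 0.8)
    return [1.0 - simulated_flux, (energy_spread - 0.05) * 10.0]

# Levenberg-Marquardt fit of the two hypothetical instrument parameters.
result = least_squares(merit, x0=[1.0, 1.0], method="lm")
print("optimized parameters:", result.x)
```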
Universality in eye movements and reading: A trilingual investigation.
Liversedge, Simon P; Drieghe, Denis; Li, Xin; Yan, Guoli; Bai, Xuejun; Hyönä, Jukka
2016-02-01
Universality in language has been a core issue in the fields of linguistics and psycholinguistics for many years (e.g., Chomsky, 1965). Recently, Frost (2012) has argued that establishing universals of process is critical to the development of meaningful, theoretically motivated, cross-linguistic models of reading. In contrast, other researchers argue that there is no such thing as universals of reading (e.g., Coltheart & Crain, 2012). Reading is a complex, visually mediated psychological process, and eye movements are the behavioural means by which we encode the visual information required for linguistic processing. To investigate universality of representation and process across languages we examined eye movement behaviour during reading of very comparable stimuli in three languages, Chinese, English and Finnish. These languages differ in numerous respects (character based vs. alphabetic, visual density, informational density, word spacing, orthographic depth, agglutination, etc.). We used linear mixed modelling techniques to identify variables that captured common variance across languages. Despite fundamental visual and linguistic differences in the orthographies, statistical models of reading behaviour were strikingly similar in a number of respects, and thus, we argue that their composition might reflect universality of representation and process in reading. Copyright © 2015 Elsevier B.V. All rights reserved.
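Linear mixed models are the standard tool for this kind of cross-language analysis of eye-movement measures. The sketch below is a minimal illustration with statsmodels; the file name, variable names and random-effects structure are plausible assumptions for reading research, not the exact specification used in the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data file: one row per fixated word with columns
# fix_dur (ms), word_length, log_freq, language, participant.
df = pd.read_csv("reading_fixations.csv")

model = smf.mixedlm(
    "fix_dur ~ word_length + log_freq + language",
    data=df,
    groups=df["participant"],   # random intercept per participant
)
fit = model.fit()
print(fit.summary())
```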
Eye contrast polarity is critical for face recognition by infants.
Otsuka, Yumiko; Motoyoshi, Isamu; Hill, Harold C; Kobayashi, Megumi; Kanazawa, So; Yamaguchi, Masami K
2013-07-01
Just as faces share the same basic arrangement of features, with two eyes above a nose above a mouth, human eyes all share the same basic contrast polarity relations, with a sclera lighter than an iris and a pupil, and this is unique among primates. The current study examined whether this bright-dark relationship of sclera to iris plays a critical role in face recognition from early in development. Specifically, we tested face discrimination in 7- and 8-month-old infants while independently manipulating the contrast polarity of the eye region and of the rest of the face. This gave four face contrast polarity conditions: fully positive condition, fully negative condition, positive face with negated eyes ("negative eyes") condition, and negated face with positive eyes ("positive eyes") condition. In a familiarization and novelty preference procedure, we found that 7- and 8-month-olds could discriminate between faces only when the contrast polarity of the eyes was preserved (positive) and that this did not depend on the contrast polarity of the rest of the face. This demonstrates the critical role of eye contrast polarity for face recognition in 7- and 8-month-olds and is consistent with previous findings for adults. Copyright © 2013 Elsevier Inc. All rights reserved.
Kinematics of Visually-Guided Eye Movements
Hess, Bernhard J. M.; Thomassen, Jakob S.
2014-01-01
One of the hallmarks of an eye movement that follows Listing’s law is the half-angle rule, which says that the angular velocity of the eye tilts by half the angle of eccentricity of the line of sight relative to primary eye position. Since all visually-guided eye movements in the regime of far viewing follow Listing’s law (with the head still and upright), the question about its origin is of considerable importance. Here, we provide theoretical and experimental evidence that Listing’s law results from a unique motor strategy that allows minimizing ocular torsion while smoothly tracking objects of interest along any path in visual space. The strategy consists in compounding conventional ocular rotations in meridian planes, that is in horizontal, vertical and oblique directions (which are all torsion-free), with small linear displacements of the eye in the frontal plane. Such compound rotation-displacements of the eye can explain the kinematic paradox that the fixation point may rotate in one plane while the eye rotates in other planes. Its unique signature is the half-angle law in the position domain, which means that the rotation plane of the eye tilts by half the angle of gaze eccentricity. We show that this law does not readily generalize to the velocity domain of visually-guided eye movements because the angular eye velocity is the sum of two terms, one associated with rotations in meridian planes and one associated with displacements of the eye in the frontal plane. While the first term does not depend on eye position, the second term does. We show that compounded rotation-displacements perfectly predict the average smooth kinematics of the eye during steady-state pursuit in both the position and velocity domains. PMID:24751602
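In rotation-vector notation the half-angle rule has a compact form; this is a standard kinematic identity from the eye-movement literature, added here only as a reading aid and not taken from the abstract. If eye position is written as a rotation vector \(\mathbf{r} = \tan(\theta/2)\,\mathbf{n}\), the angular velocity is

\[ \boldsymbol{\omega} \;=\; \frac{2}{1+\lVert\mathbf{r}\rVert^{2}}\bigl(\dot{\mathbf{r}} + \mathbf{r}\times\dot{\mathbf{r}}\bigr) \]

(the sign of the cross-product term depends on the rotation convention). Even when \(\mathbf{r}\) stays in Listing's plane, the \(\mathbf{r}\times\dot{\mathbf{r}}\) term tilts \(\boldsymbol{\omega}\) out of that plane by half the angle of gaze eccentricity, which is the eye-position-dependent velocity term the abstract distinguishes from the position-domain half-angle law.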
Christophel, Thomas B; Allefeld, Carsten; Endisch, Christian; Haynes, John-Dylan
2018-06-01
Traditional views of visual working memory postulate that memorized contents are stored in dorsolateral prefrontal cortex using an adaptive and flexible code. In contrast, recent studies proposed that contents are maintained by posterior brain areas using codes akin to perceptual representations. An important question is whether this reflects a difference in the level of abstraction between posterior and prefrontal representations. Here, we investigated whether neural representations of visual working memory contents are view-independent, as indicated by rotation-invariance. Using functional magnetic resonance imaging and multivariate pattern analyses, we show that when subjects memorize complex shapes, both posterior and frontal brain regions maintain the memorized contents using a rotation-invariant code. Importantly, we found the representations in frontal cortex to be localized to the frontal eye fields rather than dorsolateral prefrontal cortices. Thus, our results give evidence for the view-independent storage of complex shapes in distributed representations across posterior and frontal brain regions.
Furman, Wyndol; Stephenson, J. Claire; Rhoades, Galena K.
2013-01-01
We examined associations between positive interactions and avoidant and anxious representations in relationships with parents, friends, and romantic partners. Two hundred adolescents completed questionnaires, observations, and attachment interviews. From a between-person perspective, adolescents with more positive interactions overall had less avoidant representations. Within persons, the more positive interactions were relative to one’s own average level across relationships, the less avoidant the representations were for that type of relationship. Adolescents were less anxious about a particular type of relationship if they had positive interactions in their other types of relationships. Finally, representations were primarily predicted by interactions in the same type of relationship; interactions in other relationships contributed little. The findings underscore the importance of examining representations of particular types of relationships. PMID:26346530
Populin, Luis C; Tollin, Daniel J; Yin, Tom C T
2004-10-01
We examined the motor error hypothesis of visual and auditory interaction in the superior colliculus (SC), first tested by Jay and Sparks in the monkey. We trained cats to direct their eyes to the location of acoustic sources and studied the effects of eye position on both the ability of cats to localize sounds and the auditory responses of SC neurons with the head restrained. Sound localization accuracy was generally not affected by initial eye position, i.e., accuracy was not proportionally affected by the deviation of the eyes from the primary position at the time of stimulus presentation, showing that eye position is taken into account when orienting to acoustic targets. The responses of most single SC neurons to acoustic stimuli in the intact cat were modulated by eye position in the direction consistent with the predictions of the "motor error" hypothesis, but the shift accounted for only two-thirds of the initial deviation of the eyes. However, when the average horizontal sound localization error, which was approximately 35% of the target amplitude, was taken into account, the magnitude of the horizontal shifts in the SC auditory receptive fields matched the observed behavior. The modulation by eye position was not due to concomitant movements of the external ears, as confirmed by recordings carried out after immobilizing the pinnae of one cat. However, the pattern of modulation after pinnae immobilization was inconsistent with the observations in the intact cat, suggesting that, in the intact animal, information about the position of the pinnae may be taken into account.
Test of the neurolinguistic programming hypothesis that eye-movements relate to processing imagery.
Wertheim, E H; Habib, C; Cumming, G
1986-04-01
Bandler and Grinder's hypothesis that eye-movements reflect sensory processing was examined. 28 volunteers first memorized and then recalled visual, auditory, and kinesthetic stimuli. Changes in eye-positions during recall were videotaped and categorized by two raters into positions hypothesized by Bandler and Grinder's model to represent visual, auditory, and kinesthetic recall. Planned contrast analyses suggested that visual stimulus items, when recalled, elicited significantly more upward eye-positions and stares than auditory and kinesthetic items. Auditory and kinesthetic items, however, did not elicit more changes in eye-position hypothesized by the model to represent auditory and kinesthetic recall, respectively.
Andersen, Lau M
2018-01-01
An important aim of an analysis pipeline for magnetoencephalographic (MEG) data is that it allows for the researcher spending maximal effort on making the statistical comparisons that will answer his or her questions. The example question being answered here is whether the so-called beta rebound differs between novel and repeated stimulations. Two analyses are presented: going from individual sensor space representations to, respectively, an across-group sensor space representation and an across-group source space representation. The data analyzed are neural responses to tactile stimulations of the right index finger in a group of 20 healthy participants acquired from an Elekta Neuromag System. The processing steps covered for the first analysis are MaxFiltering the raw data, defining, preprocessing and epoching the data, cleaning the data, finding and removing independent components related to eye blinks, eye movements and heart beats, calculating participants' individual evoked responses by averaging over epoched data and subsequently removing the average response from single epochs, calculating a time-frequency representation and baselining it with non-stimulation trials and finally calculating a grand average, an across-group sensor space representation. The second analysis starts from the grand average sensor space representation and after identification of the beta rebound the neural origin is imaged using beamformer source reconstruction. This analysis covers reading in co-registered magnetic resonance images, segmenting the data, creating a volume conductor, creating a forward model, cutting out MEG data of interest in the time and frequency domains, getting Fourier transforms and estimating source activity with a beamformer model where power is expressed relative to MEG data measured during periods of non-stimulation. Finally, morphing the source estimates onto a common template and performing group-level statistics on the data are covered. Functions for saving relevant figures in an automated and structured manner are also included. The protocol presented here can be applied to any research protocol where the emphasis is on source reconstruction of induced responses where the underlying sources are not coherent.
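The sensor-space half of such a pipeline can be written compactly in a general-purpose MEG toolbox. The sketch below uses MNE-Python purely as an illustration; the abstract does not commit to a particular toolbox, and the file name, event codes, ICA settings and beta-band frequencies here are assumptions.

```python
import numpy as np
import mne

# File name and event codes are hypothetical placeholders for MaxFiltered data.
raw = mne.io.read_raw_fif("sub01_tactile_tsss.fif", preload=True)
events = mne.find_events(raw, stim_channel="STI101")
epochs = mne.Epochs(raw, events, event_id={"novel": 1, "repeat": 2},
                    tmin=-1.5, tmax=2.0, baseline=None, preload=True)

# Remove components related to eye blinks, eye movements and heart beats
# (assumes EOG and ECG channels were recorded).
ica = mne.preprocessing.ICA(n_components=40, random_state=97)
ica.fit(epochs)
eog_idx, _ = ica.find_bads_eog(epochs)
ecg_idx, _ = ica.find_bads_ecg(epochs)
ica.exclude = eog_idx + ecg_idx
epochs = ica.apply(epochs)

# Subtract the evoked response from single epochs, then compute induced power
# in an assumed beta-band range; baselining against non-stimulation trials
# and grand averaging across participants would follow.
epochs_induced = epochs["novel"].copy().subtract_evoked()
freqs = np.arange(14, 31)
power = mne.time_frequency.tfr_morlet(epochs_induced, freqs=freqs,
                                      n_cycles=7, return_itc=False)
```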
Zykin, P A
2005-01-01
Comparative data on the structural-metabolic organization of field 4 of the cat brain in normal conditions and after unilateral enucleation of the eye are presented. Cytochrome oxidase was detected histochemically. Data were processed by a computerized method using an original video capture system. Data were obtained demonstrating the uneven distribution of enzyme along sublayer IIIb of field 4 in animals with unilateral enucleation. A hypothesis based on published data is suggested whereby the alternation of high- and low-reactive areas is evidence for the ordering of the retinal representations of the right and left eyes in the sensorimotor cortex.
Coordinated Flexibility: How Initial Gaze Position Modulates Eye-Hand Coordination and Reaching
ERIC Educational Resources Information Center
Adam, Jos J.; Buetti, Simona; Kerzel, Dirk
2012-01-01
Reaching to targets in space requires the coordination of eye and hand movements. In two experiments, we recorded eye and hand kinematics to examine the role of gaze position at target onset on eye-hand coordination and reaching performance. Experiment 1 showed that with eyes and hand aligned on the same peripheral start location, time lags…
Reflection Positive Stochastic Processes Indexed by Lie Groups
NASA Astrophysics Data System (ADS)
Jorgensen, Palle E. T.; Neeb, Karl-Hermann; Ólafsson, Gestur
2016-06-01
Reflection positivity originates from one of the Osterwalder-Schrader axioms for constructive quantum field theory. It serves as a bridge between euclidean and relativistic quantum field theory. In mathematics, more specifically, in representation theory, it is related to the Cartan duality of symmetric Lie groups (Lie groups with an involution) and results in a transformation of a unitary representation of a symmetric Lie group to a unitary representation of its Cartan dual. In this article we continue our investigation of representation theoretic aspects of reflection positivity by discussing reflection positive Markov processes indexed by Lie groups, measures on path spaces, and invariant gaussian measures in spaces of distribution vectors. This provides new constructions of reflection positive unitary representations.
NASA Technical Reports Server (NTRS)
Hess, B. J.; Angelaki, D. E.
1997-01-01
The kinematic constraints of three-dimensional eye positions were investigated in rhesus monkeys during passive head and body rotations relative to gravity. We studied fast and slow phase components of the vestibulo-ocular reflex (VOR) elicited by constant-velocity yaw rotations and sinusoidal oscillations about an earth-horizontal axis. We found that the spatial orientation of both fast and slow phase eye positions could be described locally by a planar surface with torsional variation of <2.0 +/- 0.4 degrees (displacement planes) that systematically rotated and/or shifted relative to Listing's plane. In supine/prone positions, displacement planes pitched forward/backward; in left/right ear-down positions, displacement planes were parallel shifted along the positive/negative torsional axis. Dynamically changing primary eye positions were computed from displacement planes. Torsional and vertical components of primary eye position modulated as a sinusoidal function of head orientation in space. The torsional component was maximal in ear-down positions and approximately zero in supine/prone orientations. The opposite was observed for the vertical component. Modulation of the horizontal component of primary eye position exhibited a more complex dependence. In contrast to the torsional component, which was relatively independent of rotational speed, modulation of the vertical and horizontal components of primary position depended strongly on the speed of head rotation (i.e., on the frequency of oscillation of the gravity vector component): the faster the head rotated relative to gravity, the larger was the modulation. Corresponding results were obtained when a model based on a sinusoidal dependence of instantaneous displacement planes (and primary eye position) on head orientation relative to gravity was fitted to VOR fast phase positions. When VOR fast phase positions were expressed relative to primary eye position estimated from the model fits, they were confined approximately to a single plane with a small torsional standard deviation ( approximately 1.4-2.6 degrees). This reduced torsional variation was in contrast to the large torsional spread (well >10-15 degrees ) of fast phase positions when expressed relative to Listing's plane. We conclude that primary eye position depends dynamically on head orientation relative to space rather than being fixed to the head. It defines a gravity-dependent coordinate system relative to which the torsional variability of eye positions is minimized even when the head is moved passively and vestibulo-ocular reflexes are evoked. In this general sense, Listing's law is preserved with respect to an otolith-controlled reference system that is defined dynamically by gravity.
Temporal Stability and Authenticity of Self-Representations in Adulthood
Diehl, Manfred; Jacobs, Laurie M.; Hastings, Catherine T.
2008-01-01
The temporal stability of role-specific self-representations was examined in a sample of 188 young, middle-aged, and older adults. Considerable stability was observed for all self-representations. Central self-descriptors showed significantly greater temporal stability than peripheral self-descriptors. Temporal stability of self-representations was positively associated with self-concept clarity, self-esteem, and positive affect (PA). Age differences were obtained for three of the five self-representations, with older adults showing significantly lower stabilities for self with family, self with friend, and self with significant other compared to young and middle-aged adults. Assessment of the authenticity of adults’ role-specific self-representations showed that greater authenticity tended to be associated with greater temporal stability. Authenticity and the number of positive daily events were significant positive predictors of the stability of self-representations. PMID:18820732
Arvind, Hemamalini; Klistorner, Alexander; Graham, Stuart L; Grigg, John R
2006-05-01
Multifocal visual evoked potentials (mfVEPs) have demonstrated good diagnostic capabilities in glaucoma and optic neuritis. This study aimed to evaluate the possibility of simultaneously recording mfVEPs for both eyes with dichoptic stimulation using virtual reality goggles, and to determine the stimulus characteristics that yield maximum amplitude. Ten healthy volunteers were recruited, and temporally sparse pattern pulse stimuli were presented dichoptically using virtual reality goggles. Experiment 1 involved recording responses to dichoptically presented checkerboard stimuli and also confirming true topographic representation by switching off specific segments. Experiment 2 involved monocular stimulation and comparison of amplitude with Experiment 1. In Experiment 3, orthogonally oriented gratings were dichoptically presented. Experiment 4 involved dichoptic presentation of checkerboard stimuli at different levels of sparseness (5.0 times/s, 2.5 times/s, 1.66 times/s and 1.25 times/s), where stimulation of corresponding segments of the two eyes was separated by 16.7, 66.7, 116.7 and 166.7 ms, respectively. Experiment 1 demonstrated good traces in all regions and confirmed topographic representation. However, there was suppression of the amplitude of responses to dichoptic stimulation by 17.9+/-5.4% compared to monocular stimulation. Experiment 3 demonstrated similar suppression for orthogonal and checkerboard stimuli (p = 0.08). Experiment 4 demonstrated maximum amplitude and least suppression (4.8%) with stimulation at 1.25 times/s with 166.7 ms separation between eyes. It is possible to record mfVEPs for both eyes during dichoptic stimulation using virtual reality goggles, which present binocular simultaneous patterns driven by independent sequences. Interocular suppression can be almost eliminated by using a temporally sparse stimulus of 1.25 times/s with a separation of 166.7 ms between stimulation of corresponding segments of the two eyes.
Beauchet, Olivier; Launay, Cyrille P; Sekhon, Harmehr; Gautier, Jennifer; Chabot, Julia; Levinoff, Elise J; Allali, Gilles
2018-01-01
Assessment of changes in higher levels of gait control with aging is important to better understand age-related gait instability, with the perspective of improving the screening of individuals at risk for falls. The comparison between the actual Timed Up and Go test (aTUG) and its imagined version (iTUG) is a simple clinical way to assess age-related changes in gait control. The modulation of iTUG performance by body position and motor imagery (MI) strategy with normal aging has not yet been evaluated. This study aims 1) to compare the aTUG time with the iTUG time under different body positions (i.e., sitting, standing or supine) in healthy young and middle-aged adults and in older adults, and 2) to examine the associations of body positions and MI strategies (i.e., egocentric versus allocentric) with the time needed to complete the iTUG and the delta TUG time (i.e., the relative difference between the aTUG and the iTUG), while taking into consideration the clinical characteristics of participants. A total of 60 healthy individuals (30 young and middle-aged participants, 26.6±7.4 years, and 30 older participants, 75.0±4.4 years) were recruited in this cross-sectional study. The iTUG was performed while sitting, standing and in the supine position. The times of the aTUG, the iTUG under the three body positions, the TUG delta time and the MI strategy (i.e., egocentric representation, defined as representing the location of objects in space relative to the body axes of the self, versus allocentric representation, defined as encoding information about body movement with respect to other objects, the location of the body being defined relative to the location of those objects) were used as outcomes. Age, sex, height, weight, number of drugs taken daily, level of physical activity and prevalence of closed eyes while performing the iTUG were recorded. The aTUG time is significantly greater than the iTUG time while sitting and standing (P<0.001), except when older participants are standing. A significant difference is reported between the iTUG while sitting or standing and the iTUG while supine (P≤0.002), with longer times in the supine position. The multiple linear regressions confirm that the supine position is associated with a significantly increased iTUG time (P≤0.04) and a decreased TUG delta time (P≤0.010), regardless of the adjustment. Older participants use allocentric MI while imagining the TUG more frequently than young and middle-aged participants, regardless of body position (P≤0.001). The allocentric MI strategy is associated with a significant decrease in iTUG time (P = 0.037) only when adjusting for age. A significant increase in iTUG time is associated with age (P≤0.026). The supine position while imagining the TUG thus represents the position that most accurately reproduces actual TUG performance. Age has a limited effect on iTUG performance but is associated with a shift in MI from egocentric to allocentric representation that decreases iTUG times and thus increases the discrepancy with the aTUG.
Brooks, Kevin R; Kemp, Richard I
2007-01-01
Previous studies of face recognition and of face matching have shown a general improvement in the processing of internal features as a face becomes more familiar to the participant. In this study, we used a psychophysical two-alternative forced-choice paradigm to investigate thresholds for the detection of a displacement of the eyes, nose, mouth, or ears for familiar and unfamiliar faces. No clear division between internal and external features was observed. Rather, for familiar (compared to unfamiliar) faces, participants were more sensitive to displacements of internal features such as the eyes or the nose; yet, for our third internal feature, the mouth, no such difference was observed. Despite large displacements, many subjects were unable to perform above chance when stimuli involved shifts in the position of the ears. These results are consistent with the proposal that familiarity effects may be mediated by the construction of a robust representation of a face, although the involvement of attention in the encoding of face stimuli cannot be ruled out. Furthermore, these effects are mediated by information from a spatial configuration of features, rather than by purely feature-based information.
Nonretinotopic visual processing in the brain.
Melcher, David; Morrone, Maria Concetta
2015-01-01
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
MR-eyetracker: a new method for eye movement recording in functional magnetic resonance imaging.
Kimmig, H; Greenlee, M W; Huethe, F; Mergner, T
1999-06-01
We present a method for recording saccadic and pursuit eye movements in the magnetic resonance tomograph designed for visual functional magnetic resonance imaging (fMRI) experiments. To reliably classify brain areas as pursuit or saccade related it is important to carefully measure the actual eye movements. For this purpose, infrared light, created outside the scanner by light-emitting diodes (LEDs), is guided via optic fibers into the head coil and onto the eye of the subject. Two additional fiber optical cables pick up the light reflected by the iris. The illuminating and detecting cables are mounted in a plastic eyepiece that is manually lowered to the level of the eye. By means of differential amplification, we obtain a signal that covaries with the horizontal position of the eye. Calibration of eye position within the scanner yields an estimate of eye position with a resolution of 0.2 degrees at a sampling rate of 1000 Hz. Experiments are presented that employ echoplanar imaging with 12 image planes through visual, parietal and frontal cortex while subjects performed saccadic and pursuit eye movements. The distribution of BOLD (blood oxygen level dependent) responses is shown to depend on the type of eye movement performed. Our method yields high temporal and spatial resolution of the horizontal component of eye movements during fMRI scanning. Since the signal is purely optical, there is no interaction between the eye movement signals and the echoplanar images. This reasonably priced eye tracker can be used to control eye position and monitor eye movements during fMRI.
Clavagnier, Simon; Dumoulin, Serge O; Hess, Robert F
2015-11-04
The neural basis of amblyopia is a matter of debate. The following possibilities have been suggested: loss of foveal cells, reduced cortical magnification, loss of spatial resolution of foveal cells, and topographical disarray in the cellular map. To resolve this we undertook a population receptive field (pRF) functional magnetic resonance imaging analysis in the central field in humans with moderate-to-severe amblyopia. We measured the relationship between averaged pRF size and retinal eccentricity in retinotopic visual areas. Results showed that cortical magnification is normal in the foveal field of strabismic amblyopes. However, the pRF sizes are enlarged for the amblyopic eye. We speculate that the pRF enlargement reflects loss of cellular resolution or an increased cellular positional disarray within the representation of the amblyopic eye. The neural basis of amblyopia, a visual deficit affecting 3% of the human population, remains a matter of debate. We undertook the first population receptive field functional magnetic resonance imaging analysis in participants with amblyopia and compared the projections from the amblyopic and fellow normal eye in the visual cortex. The projection from the amblyopic eye was found to have a normal cortical magnification factor, enlarged population receptive field sizes, and topographic disorganization in all early visual areas. This is consistent with an explanation of amblyopia as an immature system with a normal complement of cells whose spatial resolution is reduced and whose topographical map is disordered. This bears upon a number of competing theories for the psychophysical defect and affects future treatment therapies. Copyright © 2015 the authors 0270-6474/15/3514740-16$15.00/0.
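A pRF analysis models each voxel's response as the overlap between the stimulus aperture and a 2D Gaussian receptive field, convolved with a haemodynamic response function, with the Gaussian centre and size fitted per voxel. The toy forward-model sketch below is ours, for orientation only; the published analysis used dedicated pRF fitting software.

```python
import numpy as np

def prf_prediction(stimulus, x0, y0, sigma, hrf, grid_x, grid_y):
    """Predicted fMRI time course for one voxel under a 2D Gaussian pRF model.

    stimulus: (n_timepoints, ny, nx) binary aperture movie
    x0, y0, sigma: pRF centre and size (degrees of visual angle, assumed units)
    grid_x, grid_y: (ny, nx) coordinate grids in the same units
    hrf: 1D haemodynamic response function sampled at the TR
    """
    g = np.exp(-((grid_x - x0) ** 2 + (grid_y - y0) ** 2) / (2 * sigma**2))
    neural = (stimulus * g).sum(axis=(1, 2))         # aperture-pRF overlap per frame
    bold = np.convolve(neural, hrf)[: neural.size]   # convolve with the HRF
    return bold

# In a fit, x0, y0 and sigma are searched per voxel to best match the measured
# time course; a larger fitted sigma for the amblyopic eye corresponds to the
# pRF enlargement reported above.
```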
Littel, Marianne; van den Hout, Marcel A; Engelhard, Iris M
2016-01-01
Eye movement desensitization and reprocessing (EMDR) is an effective treatment for posttraumatic stress disorder. During this treatment, patients recall traumatic memories while making horizontal eye movements (EM). Studies have shown that EM not only desensitize negative memories but also positive memories and imagined events. Substance use behavior and craving are maintained by maladaptive memory associations and visual imagery. Preliminary findings have indicated that these mental images can be desensitized by EMDR techniques. We conducted two proof-of-principle studies to investigate whether EM can reduce the sensory richness of substance-related mental representations and accompanying craving levels. We investigated the effects of EM on (1) vividness of food-related mental imagery and food craving in dieting and non-dieting students and (2) vividness of recent smoking-related memories and cigarette craving in daily smokers. In both experiments, participants recalled the images while making EM or keeping eyes stationary. Image vividness and emotionality, image-specific craving and general craving were measured before and after the intervention. As a behavioral outcome measure, participants in study 1 were offered a snack choice at the end of the experiment. Results of both experiments showed that image vividness and craving increased in the control condition but remained stable or decreased after the EM intervention. EM additionally reduced image emotionality (experiment 2) and affected behavior (experiment 1): participants in the EM condition were more inclined to choose healthy over unhealthy snack options. In conclusion, these data suggest that EM can be used to reduce intensity of substance-related imagery and craving. Although long-term effects are yet to be demonstrated, the current studies suggest that EM might be a useful technique in addiction treatment.
Thurtell, M J; Black, R A; Halmagyi, G M; Curthoys, I S; Aw, S T
1999-05-01
Vertical eye position-dependence of the human vestibuloocular reflex during passive and active yaw head rotations. The effect of vertical eye-in-head position on the compensatory eye rotation response to passive and active high acceleration yaw head rotations was examined in eight normal human subjects. The stimuli consisted of brief, low amplitude (15-25 degrees), high acceleration (4,000-6,000 degrees/s2) yaw head rotations with respect to the trunk (peak velocity was 150-350 degrees/s). Eye and head rotations were recorded in three-dimensional space using the magnetic search coil technique. The input-output kinematics of the three-dimensional vestibuloocular reflex (VOR) were assessed by finding the difference between the inverted eye velocity vector and the head velocity vector (both referenced to a head-fixed coordinate system) as a time series. During passive head impulses, the head and eye velocity axes aligned well with each other for the first 47 ms after the onset of the stimulus, regardless of vertical eye-in-head position. After the initial 47-ms period, the degree of alignment of the eye and head velocity axes was modulated by vertical eye-in-head position. When fixation was on a target 20 degrees up, the eye and head velocity axes remained well aligned with each other. However, when fixation was on targets at 0 and 20 degrees down, the eye velocity axis tilted forward relative to the head velocity axis. During active head impulses, the axis tilt became apparent within 5 ms of the onset of the stimulus. When fixation was on a target at 0 degrees, the velocity axes remained well aligned with each other. When fixation was on a target 20 degrees up, the eye velocity axis tilted backward; when fixation was on a target 20 degrees down, the eye velocity axis tilted forward. The findings show that the VOR compensates very well for head motion in the early part of the response to unpredictable high acceleration stimuli: the eye position-dependence of the VOR does not become apparent until 47 ms after the onset of the stimulus. In contrast, the response to active high acceleration stimuli shows eye position-dependence from within 5 ms of the onset of the stimulus. A model using a VOR-Listing's law compromise strategy did not accurately predict the patterns observed in the data, raising questions about how the eye position-dependence of the VOR is generated. We suggest, in view of recent findings, that the phenomenon could arise due to the effects of fibromuscular pulleys on the functional pulling directions of the rectus muscles.
Raghavan, Ramanujan T; Joshua, Mati
2017-10-01
We investigated the composition of preparatory activity of frontal eye field (FEF) neurons in monkeys performing a pursuit target selection task. In response to the orthogonal motion of a large and a small reward target, monkeys initiated pursuit biased toward the direction of large reward target motion. FEF neurons exhibited robust preparatory activity preceding movement initiation in this task. Preparatory activity consisted of two components: ramping activity that was constant across target selection conditions, and a flat offset in firing rates that signaled the target selection condition. Ramping activity accounted for 50% of the variance in the preparatory activity and was linked most strongly, on a trial-by-trial basis, to pursuit eye movement latency rather than to its direction or gain. The offset in firing rates that discriminated target selection conditions accounted for 25% of the variance in the preparatory activity and was commensurate with a winner-take-all representation, signaling the direction of large reward target motion rather than a representation that matched the parameters of the upcoming movement. These findings offer new insights into the role that the frontal eye fields play in target selection and pursuit control. They show that preparatory activity in the FEF signals more strongly when to move rather than where or how to move and suggest that structures outside the FEF augment its contributions to the target selection process. NEW & NOTEWORTHY We used the smooth eye movement pursuit system to link patterns of preparatory activity in the frontal eye fields to movement during a target selection task. The dominant pattern was a ramping signal that did not discriminate between selection conditions and was linked, on a trial-by-trial basis, to movement latency. A weaker pattern was composed of a constant signal that discriminated between selection conditions but was only weakly linked to the movement parameters. Copyright © 2017 the American Physiological Society.
Böckler, Anne; van der Wel, Robrecht P R D; Welsh, Timothy N
2015-09-01
Direct eye contact and motion onset both constitute powerful cues that capture attention. Recent research suggests that (social) gaze and (non-social) motion onset influence information processing in parallel, even when combined as sudden onset direct gaze cues (i.e., faces suddenly establishing eye contact). The present study investigated the role of eye visibility for attention capture by these sudden onset face cues. To this end, face direction was manipulated (away or towards onlooker) while faces had closed eyes (eliminating visibility of eyes, Experiment 1), wore sunglasses (eliminating visible eyes, but allowing for the expectation of eyes to be open, Experiment 2), and were inverted with visible eyes (disrupting the integration of eyes and faces, Experiment 3). Participants classified targets appearing on one of four faces. Initially, two faces were oriented towards participants and two faces were oriented away from participants. Simultaneous to target presentation, one averted face became directed and one directed face became averted. Attention capture by face direction (i.e., facilitation for faces directed towards participants) was absent when eyes were closed, but present when faces wore sunglasses. Sudden onset direct faces can, hence, induce attentional capture, even when lacking eye cues. Inverted faces, by contrast, did not elicit attentional capture. Thus, when eyes cannot be integrated into a holistic face representation they are not sufficient to capture attention. Overall, the results suggest that visibility of eyes is neither necessary nor sufficient for the sudden direct face effect. Copyright © 2015 Elsevier B.V. All rights reserved.
Ocular surface changes in thyroid eye disease.
Ismailova, Dilyara S; Fedorov, Anatoly A; Grusha, Yaroslav O
2013-04-01
To study the incidence and risk factors of ocular surface damage in thyroid eye disease (TED) and to determine the histological changes underlying positive vital staining in this condition. Forty-six patients (92 eyes) with TED were included in this study. Routine ophthalmologic examination, Schirmer test I, vital staining and corneal sensitivity testing were performed. Fifteen patients with positive vital staining underwent impression cytology and incisional biopsy. Positive vital staining with lissamine green was observed in 56 eyes (60.9%) of 30 patients (65.2%). The average degree of staining was 4.57 ± 0.44 (National Eye Institute Workshop grading system). Severe dry eye syndrome was found in 16%. The following histological changes of the conjunctiva were revealed: significant epithelial dystrophy with cell polymorphism, goblet cell loss, excessive desquamation and epithelial keratinization with local leukocytic infiltration of the substantia propria. According to our results, dry eye syndrome is present in 65.2% of patients (60.9% of eyes) with TED. Significant risk factors for ocular surface damage in TED were exophthalmos, lagophthalmos, palpebral fissure height and lower lid retraction. Positive conjunctival staining results from punctate epithelial erosions and excessive desquamation of superficial cells. The histopathologic changes detected in the conjunctiva are consistent with dry eye and are not specific for TED.
Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry
NASA Astrophysics Data System (ADS)
Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek
2014-09-01
Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too large to accurately represent the dimensions of small features such as the eye. Recently reduced recommended dose limits to the lens of the eye, which is a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method are compared against existing computational phantoms.
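A hypothetical sketch of the multi-resolution idea, not the authors' MCNPX input: coarse voxels inside an eye bounding box delegate to a finer sub-lattice, and lookups fall back to the coarse phantom elsewhere. All array shapes, tissue IDs, and indices below are made up for illustration.

import numpy as np

# Coarse whole-body phantom (tissue IDs, e.g. 5 mm voxels) and a refinement
# factor used only inside the eye region (e.g. 5 mm -> 1 mm).
coarse = np.zeros((60, 30, 30), dtype=np.int16)
refine = 5
eye_box = (slice(50, 54), slice(10, 14), slice(12, 16))   # coarse-voxel bounding box (illustrative)

box_shape = tuple((s.stop - s.start) * refine for s in eye_box)
eye_fine = np.full(box_shape, 200, dtype=np.int16)        # fine lattice, e.g. 200 = vitreous
eye_fine[8:12, 8:12, 8:12] = 201                          # e.g. 201 = lens, placed coarsely here

def tissue_at(z, y, x):
    """Tissue ID at a fine-grid coordinate: fine lattice inside the eye box,
    coarse phantom everywhere else."""
    cz, cy, cx = z // refine, y // refine, x // refine
    zs, ys, xs = eye_box
    if zs.start <= cz < zs.stop and ys.start <= cy < ys.stop and xs.start <= cx < xs.stop:
        return eye_fine[z - zs.start * refine, y - ys.start * refine, x - xs.start * refine]
    return coarse[cz, cy, cx]

print(tissue_at(259, 59, 69))   # falls inside the lens sub-volume -> 201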
Compensating For Movement Of Eye In Laser Surgery
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1991-01-01
Conceptual system for laser surgery of retina includes subsystem that tracks position of retina. Tracking signal used to control galvanometer-driven mirrors keeping laser aimed at desired spot on retina as eye moves. Alternatively or additionally, indication of position used to prevent firing of laser when eye moved too far from proper aiming position.
Expansion of visual space during optokinetic afternystagmus (OKAN).
Kaminiarz, André; Krekelberg, Bart; Bremmer, Frank
2008-05-01
The mechanisms underlying visual perceptual stability are usually investigated using voluntary eye movements. In such studies, errors in perceptual stability during saccades and pursuit are commonly interpreted as mismatches between actual eye position and eye-position signals in the brain. The generality of this interpretation could in principle be tested by investigating spatial localization during reflexive eye movements whose kinematics are very similar to those of voluntary eye movements. Accordingly, in this study, we determined mislocalization of flashed visual targets during optokinetic afternystagmus (OKAN). These eye movements are quite unique in that they occur in complete darkness and are generated by subcortical control mechanisms. We found that during horizontal OKAN slow phases, subjects mislocalize targets away from the fovea in the horizontal direction. This corresponds to a perceived expansion of visual space and is unlike mislocalization found for any other voluntary or reflexive eye movement. Around the OKAN fast phases, we found a bias in the direction of the fast phase prior to its onset and opposite to the fast-phase direction thereafter. Such a biphasic modulation has also been reported in the temporal vicinity of saccades and during optokinetic nystagmus (OKN). A direct comparison, however, showed that the modulation during OKAN was much larger and occurred earlier relative to fast-phase onset than during OKN. A simple mismatch between the current eye position and the eye-position signal in the brain is unlikely to explain such disparate results across similar eye movements. Instead, these data support the view that mislocalization arises from errors in eye-centered position information.
Albouy, Geneviève; Fogel, Stuart; Pottiez, Hugo; Nguyen, Vo An; Ray, Laura; Lungu, Ovidiu; Carrier, Julie; Robertson, Edwin; Doyon, Julien
2013-01-01
Motor sequence learning is known to rely on more than a single process. As the skill develops with practice, two different representations of the sequence are formed: a goal representation built under spatial allocentric coordinates and a movement representation mediated through egocentric motor coordinates. This study aimed to explore the influence of daytime sleep (nap) on consolidation of these two representations. Through the manipulation of an explicit finger sequence learning task and a transfer protocol, we show that both allocentric (spatial) and egocentric (motor) representations of the sequence can be isolated after initial training. Our results also demonstrate that nap favors the emergence of offline gains in performance for the allocentric, but not the egocentric representation, even after accounting for fatigue effects. Furthermore, sleep-dependent gains in performance observed for the allocentric representation are correlated with spindle density during non-rapid eye movement (NREM) sleep of the post-training nap. In contrast, performance on the egocentric representation is only maintained, but not improved, regardless of the sleep/wake condition. These results suggest that motor sequence memory acquisition and consolidation involve distinct mechanisms that rely on sleep (and specifically, spindle) or simple passage of time, depending respectively on whether the sequence is performed under allocentric or egocentric coordinates. PMID:23300993
The representation of information about faces in the temporal and frontal lobes.
Rolls, Edmund T
2007-01-07
Neurophysiological evidence is described showing that some neurons in the macaque inferior temporal visual cortex have responses that are invariant with respect to the position, size and view of faces and objects, and that these neurons show rapid processing and rapid learning. Which face or object is present is encoded using a distributed representation in which each neuron conveys independent information in its firing rate, with little information evident in the relative time of firing of different neurons. This ensemble encoding has the advantages of maximising the information in the representation useful for discrimination between stimuli using a simple weighted sum of the neuronal firing by the receiving neurons, generalisation and graceful degradation. These invariant representations are ideally suited to provide the inputs to brain regions such as the orbitofrontal cortex and amygdala that learn the reinforcement associations of an individual's face, for then the learning, and the appropriate social and emotional responses, generalise to other views of the same face. A theory is described of how such invariant representations may be produced in a hierarchically organised set of visual cortical areas with convergent connectivity. The theory proposes that neurons in these visual areas use a modified Hebb synaptic modification rule with a short-term memory trace to capture whatever can be captured at each stage that is invariant about objects as the objects change in retinal view, position, size and rotation. Another population of neurons in the cortex in the superior temporal sulcus encodes other aspects of faces such as face expression, eye gaze, face view and whether the head is moving. These neurons thus provide important additional inputs to parts of the brain such as the orbitofrontal cortex and amygdala that are involved in social communication and emotional behaviour. Outputs of these systems reach the amygdala, in which face-selective neurons are found, and also the orbitofrontal cortex, in which some neurons are tuned to face identity and others to face expression. In humans, activation of the orbitofrontal cortex is found when a change of face expression acts as a social signal that behaviour should change; and damage to the orbitofrontal cortex can impair face and voice expression identification, and also the reversal of emotional behaviour that normally occurs when reinforcers are reversed.
Photorefractor ocular screening system
NASA Technical Reports Server (NTRS)
Richardson, John R. (Inventor); Kerr, Joseph H. (Inventor)
1987-01-01
A method and apparatus for detecting human eye defects, particularly detection of refractive error, is presented. Eye reflex is recorded on color film when the eyes are exposed to a flash of light. The photographs are compared with predetermined standards to detect eye defects. The base structure of the ocular screening system is a folding interconnect structure comprising hinged sections. Attached to one end of the structure is a head positioning station, which comprises a vertical support, a head positioning bracket having one end attached to the top of the support, and two head positioning lamps to verify precise head positioning. At the opposite end of the interconnect structure is a camera station with camera, electronic flash unit, and blinking fixation lamp, for photographing the eyes of persons being evaluated.
Spatially generalizable representations of facial expressions: Decoding across partial face samples.
Greening, Steven G; Mitchell, Derek G V; Smith, Fraser W
2018-04-01
A network of cortical and sub-cortical regions is known to be important in the processing of facial expression. However, to date no study has investigated whether representations of facial expressions present in this network permit generalization across independent samples of face information (e.g., eye region vs mouth region). We presented participants with partial face samples of five expression categories in a rapid event-related fMRI experiment. We reveal a network of face-sensitive regions that contain information about facial expression categories regardless of which part of the face is presented. We further reveal that the neural information present in a subset of these regions: dorsal prefrontal cortex (dPFC), superior temporal sulcus (STS), lateral occipital and ventral temporal cortex, and even early visual cortex, enables reliable generalization across independent visual inputs (faces depicting the 'eyes only' vs 'eyes removed'). Furthermore, classification performance was correlated with behavioral performance in STS and dPFC. Our results demonstrate that both higher (e.g., STS, dPFC) and lower level cortical regions contain information useful for facial expression decoding that goes beyond the visual information presented, and implicate a key role for contextual mechanisms such as cortical feedback in facial expression perception under challenging conditions of visual occlusion. Copyright © 2017 Elsevier Ltd. All rights reserved.
Role of expected reward in frontal eye field during natural scene search
Lawlor, Patrick N.; Ramkumar, Pavan; Kording, Konrad P.; Segraves, Mark A.
2016-01-01
When a saccade is expected to result in a reward, both neural activity in oculomotor areas and the saccade itself (e.g., its vigor and latency) are altered (compared with when no reward is expected). As such, it is unclear whether the correlations of neural activity with reward indicate a representation of reward beyond a movement representation; the modulated neural activity may simply represent the differences in motor output due to expected reward. Here, to distinguish between these possibilities, we trained monkeys to perform a natural scene search task while we recorded from the frontal eye field (FEF). Indeed, when reward was expected (i.e., saccades to the target), FEF neurons showed enhanced responses. Moreover, when monkeys accidentally made eye movements to the target, firing rates were lower than when they purposively moved to the target. Thus, neurons were modulated by expected reward rather than simply the presence of the target. We then fit a model that simultaneously included components related to expected reward and saccade parameters. While expected reward led to shorter latency and higher velocity saccades, these behavioral changes could not fully explain the increased FEF firing rates. Thus, FEF neurons appear to encode motivational factors such as reward expectation, above and beyond the kinematic and behavioral consequences of imminent reward. PMID:27169506
Okada, Kouhei; Takai, Shinji; Jin, Denan; Ishida, Osamu; Fukmoto, Masanori; Oku, Hidehiro; Miyazaki, Mizuo; Ikeda, Tsunehiko
2009-01-01
Purpose To determine the effects of mitomycin C (MMC) on the expression of chymase and mast cells in the conjunctival scar after trabeculectomy. Methods Ten eyes of five monkeys were used. Three eyes underwent trabeculectomy with MMC (MMC-treated), four eyes had trabeculectomy without MMC (placebo-treated), and three eyes served as control eyes. Intraocular pressure was measured before and three weeks after surgery. The scores of the degree of conjunctival adhesion were evaluated. Immunohistochemistry was used to analyze the densities of proliferative cell nuclear antigen-positive cells, chymase-positive cells, and mast cells. The ratio of collagen fiber areas to conjunctival and scleral lesions was analyzed by Mallory-Azan staining. Results After trabeculectomy, the intraocular pressure reduction of MMC-treated eyes was significantly different from placebo-treated and control eyes (p=0.032, 0.035). The adhesion score of MMC-treated eyes was also significantly lower than that of placebo-treated eyes (p=0.034). Densities of proliferative cell nuclear antigen-positive cells, chymase-positive cells, and areas of collagen fiber in conjunctival and scleral lesions were significantly decreased in MMC-treated eyes, compared with placebo-treated eyes (p=0.034, 0.034, 0.049, respectively). There was a tendency for the density of mast cells to be suppressed in MMC-treated eyes (p=0.157). Conclusions Chymase might be involved in one of the mechanisms by which MMC suppresses scar formation after trabeculectomy. PMID:19844588
Gravity modulates Listing's plane orientation during both pursuit and saccades
NASA Technical Reports Server (NTRS)
Hess, Bernhard J M.; Angelaki, Dora E.
2003-01-01
Previous studies have shown that the spatial organization of all eye orientations during visually guided saccadic eye movements (Listing's plane) varies systematically as a function of static and dynamic head orientation in space. Here we tested if a similar organization also applies to the spatial orientation of eye positions during smooth pursuit eye movements. Specifically, we characterized the three-dimensional distribution of eye positions during horizontal and vertical pursuit (0.1 Hz, ±15° and 0.5 Hz, ±8°) at different eccentricities and elevations while rhesus monkeys were sitting upright or being statically tilted in different roll and pitch positions. We found that the spatial organization of eye positions during smooth pursuit depends on static head orientation in space, similar to what is observed during visually guided saccades and fixations. In support of recent modeling studies, these results are consistent with a role of gravity in defining the parameters of Listing's law.
Novice Interpretations of Visual Representations of Geosciences Data
NASA Astrophysics Data System (ADS)
Burkemper, L. K.; Arthurs, L.
2013-12-01
Past cognition research on individuals' perception and comprehension of bar and line graphs is substantive enough that it has resulted in the generation of graph design principles and graph comprehension theories; however, gaps remain in our understanding of how people process visual representations of data, especially of geologic and atmospheric data. This pilot project serves to build on others' prior research and begin filling the existing gaps. The primary objectives of this pilot project include: (i) design a novel data collection protocol based on a combination of paper-based surveys, think-aloud interviews, and eye-tracking tasks to investigate student data handling skills of simple to complex visual representations of geologic and atmospheric data, (ii) demonstrate that the protocol yields results that shed light on student data handling skills, and (iii) generate preliminary findings upon which tentative but perhaps helpful recommendations on how to more effectively present these data to the non-scientist community and teach essential data handling skills can be based. An effective protocol for the combined use of paper-based surveys, think-aloud interviews, and computer-based eye-tracking tasks for investigating cognitive processes involved in perceiving, comprehending, and interpreting visual representations of geologic and atmospheric data is instrumental to future research in this area. The outcomes of this pilot study provide the foundation upon which future, more in-depth, and scaled-up investigations can build. Furthermore, findings of this pilot project are sufficient for making at least tentative recommendations that can help inform (i) the design of physical attributes of visual representations of data, especially more complex representations, that may aid in improving students' data handling skills and (ii) instructional approaches that have the potential to aid students in more effectively handling visual representations of geologic and atmospheric data that they might encounter in a course, television news, newspapers and magazines, and websites. Such recommendations would also be the potential subject of future investigations and have the potential to impact the design features when data are presented to the public and instructional strategies not only in geoscience courses but also in other science, technology, engineering, and mathematics (STEM) courses.
Brief report: decoding representations: how children with autism understand drawings.
Allen, Melissa L
2009-03-01
Young typically developing children can reason about abstract depictions if they know the intention of the artist. Children with autism spectrum disorder (ASD), who are notably impaired in social, 'intention monitoring' domains, may have great difficulty in decoding vague representations. In Experiment 1, children with ASD are unable to use another person's eye gaze as a cue for figuring out what an abstract picture represents. In contrast, when the participants themselves are the artists (Experiment 2), children with ASD are equally proficient as controls at identifying their own perceptually identical pictures (e.g. lollipop and balloon) after a delay, based upon what they intended them to be. Results are discussed in terms of intention and understanding of visual representation in autism.
Internal representations reveal cultural diversity in expectations of facial expressions of emotion.
Jack, Rachael E; Caldara, Roberto; Schyns, Philippe G
2012-02-01
Facial expressions have long been considered the "universal language of emotion." Yet consistent cultural differences in the recognition of facial expressions contradict such notions (e.g., R. E. Jack, C. Blais, C. Scheepers, P. G. Schyns, & R. Caldara, 2009). Rather, culture--as an intricate system of social concepts and beliefs--could generate different expectations (i.e., internal representations) of facial expression signals. To investigate, we used a powerful psychophysical technique (reverse correlation) to estimate the observer-specific internal representations of the 6 basic facial expressions of emotion (i.e., happy, surprise, fear, disgust, anger, and sad) in two culturally distinct groups (i.e., Western Caucasian [WC] and East Asian [EA]). Using complementary statistical image analyses, cultural specificity was directly revealed in these representations. Specifically, whereas WC internal representations predominantly featured the eyebrows and mouth, EA internal representations showed a preference for expressive information in the eye region. Closer inspection of the EA observer preference revealed a surprising feature: changes of gaze direction, shown primarily among the EA group. For the first time, it is revealed directly that culture can finely shape the internal representations of common facial expressions of emotion, challenging notions of a biologically hardwired "universal language of emotion."
Kim, Ho Soong; Park, Ki Ho; Jeoung, Jin Wook
2013-11-01
To evaluate the amount of intraocular pressure (IOP) change in the eye against the pillow in the lateral decubitus position (LDP). Thirty eyes from 15 healthy volunteers (12 men and three women) aged 29 ± 3 (range 25-37) years participated in this study. Using the rebound tonometer (Icare PRO, Icare Finland Oy, Helsinki, Finland), the IOP of both eyes was checked in the sitting, supine, right and left LDPs. In the LDP, additional IOP measurements were taken with the lower eyeball against the latex pillow. Baseline IOP in the sitting position was 12.7 ± 1.9 mmHg in the right eye and 12.8 ± 2.2 mmHg in the left eye. Ten minutes after shifting from the sitting to the supine position, IOP increased significantly (right eye: +1.4 ± 1.4 mmHg, p = 0.006; left eye: +1.8 ± 1.5 mmHg, p = 0.001). Changing from the supine to the right and left LDP significantly increased the IOP of the dependent eye (right eye: +2.3 ± 1.8 mmHg, p = 0.001; left eye: +1.5 ± 1.8 mmHg, p = 0.011). When the dependent eye was compressed against the pillow in the LDP, the IOP of the dependent eyes increased significantly after 10 min (right eye in the right LDP: +4.1 ± 4.9 mmHg, p = 0.011; left eye in the left LDP: +3.4 ± 3.7 mmHg, p = 0.006). The IOP was significantly elevated when the eyeball was against the pillow in the LDP. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by Blackwell Publishing Ltd.
Lobo, Ann-Marie; Gao, Yan; Rusie, Laura; Houlberg, Magda; Mehta, Supriya D
2018-03-01
In 2015, the Centers for Disease Control and Prevention (CDC) and the American Academy of Ophthalmology (AAO) released clinical advisories on rising cases of ocular syphilis. We examined the association between eye disease and syphilis infection among primary care and sexually transmitted infection (STI) clinic patients attending an urban lesbian, gay, bisexual, transgender (LGBT) health center. We conducted a retrospective medical record review of all patients who underwent syphilis testing at Howard Brown Health between 1 January 2010 and 31 December 2015. Confirmed eye diagnosis was based on International Classification of Diseases, Ninth Revision (ICD-9) diagnosis codes for conjunctivitis, uveitis, keratitis, retinitis, and red eye. Demographic information, syphilis treatment, HIV status, and high-risk behaviors were abstracted. Syphilis diagnosis was defined by available laboratory data (enzyme immunoassay [EIA], rapid plasma reagin [RPR] titer, fluorescent treponemal antibody absorption [FTA-Abs], Treponema pallidum Ab). Multivariable logistic regression with robust variance was used to identify independent associations. During the study period, 71,299 syphilis tests were performed on 30,422 patients. There were 2288 (3.2%) positive syphilis tests. Seventy-seven patients had a confirmed eye diagnosis (0.25%). Patients with eye disease had a higher probability of at least one positive syphilis test (33%) compared to those without eye disease (8%) (p < 0.01). Of patients with eye disease, 77% were men who had sex with men (MSM) and 65% were HIV-positive. Patients with eye disease had 5.97-fold (95% CI: 3.70, 9.63) higher odds of having syphilis compared to patients without eye disease. When adjusted for age, race, gender/sexual orientation, insurance status, and HIV status, this association between a positive syphilis test and eye disease decreased but remained significant (OR 2.00, 95% CI 1.17, 3.41). Patients who present with an eye diagnosis to an STI/primary care clinic have a higher probability of positive syphilis tests even after adjusting for other risk factors for syphilis. High-risk patients with eye symptoms should have routine STI testing and, in keeping with CDC and AAO recommendations, full ophthalmologic examination.
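A minimal sketch of the kind of model named above (multivariable logistic regression with a robust variance estimator), written with statsmodels on simulated data. The column names, effect sizes, and sample below are hypothetical; this is analogous in spirit to the study's analysis, not a reproduction of it.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level data; variable names are illustrative only.
rng = np.random.default_rng(7)
n = 500
eye_disease = rng.binomial(1, 0.05, n)
hiv_pos = rng.binomial(1, 0.3, n)
age = rng.integers(18, 70, n)
# Simulated outcome with a higher syphilis rate when eye disease is present.
p = 1 / (1 + np.exp(-(-2.5 + 1.0 * eye_disease + 0.8 * hiv_pos + 0.01 * age)))
syphilis_pos = rng.binomial(1, p)
df = pd.DataFrame(dict(syphilis_pos=syphilis_pos, eye_disease=eye_disease,
                       hiv_pos=hiv_pos, age=age))

# Multivariable logistic regression with a heteroskedasticity-consistent
# ("robust") covariance estimator.
fit = smf.logit("syphilis_pos ~ eye_disease + hiv_pos + age", data=df).fit(
    cov_type="HC1", disp=False)
print(np.exp(fit.params["eye_disease"]))   # adjusted odds ratio for eye disease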
Davies, Patrick T; Coe, Jesse L; Hentges, Rochelle F; Sturge-Apple, Melissa L; van der Kloet, Erika
2018-03-01
This study examined the transactional interplay among children's negative family representations, visual processing of negative emotions, and externalizing symptoms in a sample of 243 preschool children (M age = 4.60 years). Children participated in three annual measurement occasions. Cross-lagged autoregressive models were conducted with multimethod, multi-informant data to identify mediational pathways. Consistent with schema-based top-down models, children's negative family representations were associated with their attention to negative faces in an eye-tracking task and with their externalizing symptoms. Children's negative representations of family relationships specifically predicted decreases in their attention to negative emotions, which, in turn, were associated with subsequent increases in their externalizing symptoms. Follow-up analyses indicated that the mediational role of diminished attention to negative emotions was particularly pronounced for angry faces. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.
Suzuki, David A; Yamada, Tetsuto; Yee, Robert D
2003-04-01
Neuronal responses that were observed during smooth-pursuit eye movements were recorded from cells in rostral portions of the nucleus reticularis tegmenti pontis (rNRTP). The responses were categorized as smooth-pursuit eye velocity (78%) or eye acceleration (22%). A separate population of rNRTP cells encoded static eye position. The sensitivity to pursuit eye velocity averaged 0.81 spikes/s per deg/s, whereas the average sensitivity to pursuit eye acceleration was 0.20 spikes/s per deg/s². Of the eye-velocity cells with horizontal preferences for pursuit responses, 56% were optimally responsive to contraversive smooth-pursuit eye movements and 44% preferred ipsiversive pursuit. For cells with vertical pursuit preferences, 61% preferred upward pursuit and 39% preferred downward pursuit. The direction selectivity was broad, with 50% of the maximal response amplitude observed for directions of smooth pursuit up to ±85° away from the optimal direction. The activities of some rNRTP cells were linearly related to eye position with an average sensitivity of 2.1 spikes/s per deg. In some cells, the magnitude of the response during smooth-pursuit eye movements was affected by the position of the eyes even though these cells did not encode eye position. On average, pursuit centered to one side of screen center elicited a response that was 73% of the response amplitude obtained with tracking centered at screen center. For pursuit centered on the opposite side, the average response was 127% of the response obtained at screen center. The results provide a neuronal rationale for the slow, pursuit-like eye movements evoked with rNRTP microstimulation and for the deficits in smooth-pursuit eye movements observed with ibotenic acid injection into rNRTP. More globally, the results support the notion of a frontal and supplementary eye field-rNRTP-cerebellum pathway involved with controlling smooth-pursuit eye movements.
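The sensitivity figures above are slopes of rate-versus-velocity relationships. A small illustrative sketch on simulated data (not the study's recordings) shows how such a sensitivity in spikes/s per deg/s can be estimated with a linear fit; all numbers are assumptions.

import numpy as np

# Hypothetical trial data: pursuit eye velocity (deg/s) and firing rate (spikes/s).
rng = np.random.default_rng(3)
eye_velocity = rng.uniform(-30, 30, 100)
firing_rate = 40.0 + 0.8 * eye_velocity + rng.normal(0, 5, 100)

# Least-squares fit of rate = baseline + sensitivity * velocity; the slope is
# the velocity sensitivity in spikes/s per deg/s (about 0.8 here by design).
sensitivity, baseline = np.polyfit(eye_velocity, firing_rate, 1)
print(round(sensitivity, 2), round(baseline, 1))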
Niechwiej-Szwedo, E; González, E; Bega, S; Verrier, M C; Wong, A M; Steinbach, M J
2006-07-01
A proprioceptive hypothesis for the control of eye movements has been recently proposed based on neuroanatomical tracing studies. It has been suggested that the non-twitch motoneurons could be involved in modulating the gain of sensory feedback from the eye muscles analogous to the gamma (γ) motoneurons which control the gain of proprioceptive feedback in skeletal muscles. We conducted behavioral and psychophysical experiments to test the above hypothesis using the Jendrassik Maneuver (JM) to alter the activity of gamma motoneurons. It was hypothesized that the JM would alter the proprioceptive feedback from the eye muscles which would result in misregistration of eye position and mislocalization of targets. In the first experiment, vergence eye movements and pointing responses were examined. Data showed that the JM affected the localization responses but not the actual eye position. Perceptual judgments were tested in the second experiment, and the results showed that targets were perceived as farther when the afferent feedback was altered by the JM. Overall, the results from the two experiments showed that eye position was perceived as more divergent with the JM, but the actual eye movements were not affected. We tested this further in Experiment 3 by examining the effect of JM on the amplitude and velocity of saccadic eye movements. As expected, there were no significant differences in saccadic parameters between the control and experimental conditions. Overall, the present study provides novel insight into the mechanism which may be involved in the use of sensory feedback from the eye muscles. Data from the first two experiments support the hypothesis that the JM alters the registered eye position, as evidenced by the localization errors. We propose that the altered eye position signal is due to the effect of the JM which changes the gain of the sensory feedback from the eye muscles, possibly via the activity of non-twitch motoneurons.
Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola
2016-05-01
Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical cues are critical in orienting infants' visual attention towards a peripheral region of space that is congruent with the number's relative position on a left-to-right oriented representational continuum. This finding provides the first direct evidence that, in humans, the association between numbers and oriented spatial codes occurs before the acquisition of symbols or exposure to formal education, suggesting that the number line is not merely a product of human invention. © 2015 John Wiley & Sons Ltd.
Interaction between Phonological and Semantic Representations: Time Matters
ERIC Educational Resources Information Center
Chen, Qi; Mirman, Daniel
2015-01-01
Computational modeling and eye-tracking were used to investigate how phonological and semantic information interact to influence the time course of spoken word recognition. We extended our recent models (Chen & Mirman, 2012; Mirman, Britt, & Chen, 2013) to account for new evidence that competition among phonological neighbors influences…
Double silicone tube intubation for the management of partial lacrimal system obstruction.
Demirci, Hakan; Elner, Victor M
2008-02-01
To evaluate the effectiveness of double silicone intubation for the management of partial lacrimal drainage system obstruction in adults. Observational retrospective case series. Twenty-four eyes of 18 consecutive adult patients with partial lacrimal system obstruction managed at the University of Michigan. Retrospective review of symptoms and signs, duration of silicone intubation, and complications. Resolution of tearing. Preoperative tearing, negative Jones I testing, positive Jones II testing, and resistance to positive-pressure irrigation were present in all eyes (100%). The first silicone tube was removed after a mean of 11 ± 7 months, and the second tube after 16 ± 6 months. Postoperatively, at a mean of 21 ± 9 months after removal of both tubes, tearing remained resolved in 19 eyes (79%) and remained improved in 2 eyes (8%). In eyes with resolved tearing, Jones I testing became positive, and there was no resistance to positive-pressure irrigation. Persistent tearing in 3 eyes (13%) required treatment with external dacryocystorhinostomy. The only complication was peripunctal pyogenic granulomas in 2 eyes. Double silicone intubation is an effective minimally invasive technique for treatment of partial lacrimal system obstruction in adults.
ERIC Educational Resources Information Center
Lee, Young-joo; Won, Doyeon
2016-01-01
The representative bureaucracy theory posits that the passive representation of women in an organization leads to their active representation in terms of gender equity in policy implementation. The present study examines how women's representation in administration and faculty positions may explain gender equity-oriented policy outcomes, focusing…
Anastasopoulos, D; Mandellos, D; Kostadima, V; Pettorossi, V E
2002-08-01
We studied the amplitude, latency, and probability of occurrence of fast phases (FP) in darkness to unpredictable vestibular and/or cervical yaw stimulation in normal human subjects. The rotational stimuli were smoothed trapezoidal motion transients of 14 degrees amplitude and 1.25 s duration. Eye position before stimulus application (initial eye position, IEP) was introduced as a variable by asking the subjects to fixate a spot appearing either straight ahead or at 7 degrees eccentric positions. The recordings demonstrated that the generation of FP during vestibular stimulation was facilitated when the whole-body rotation was directed opposite the eccentric IEP. Conversely, FP were attenuated if the whole-body rotation was directed toward the eccentric IEP; i.e., the FP attenuated if they were made to further eccentric positions. Cervical stimulation-induced FP were small and variable in direction when IEP was directed straight ahead before stimulus onset. Eccentric IEPs resulted in large FP, the direction of which was essentially independent of the neck-proprioceptive stimulus. They tended to move the eye toward the primary position, both when the trunk motion under the stationary head was directed toward or away from the IEP. FP dependence on IEP was evident also during head-on-trunk rotations. No consistent interaction between vestibularly and cervically induced FP was found. We conclude that extraretinal eye position signals are able to modify vestibularly evoked reflexive FP in darkness, aiming at minimizing excursions of the eyes away from the primary position. However, neck-induced FP do not relate to specific tasks of stabilization or visual search. By keeping the eyes near the primary position, FP may permit flexibility of orienting responses to incoming stimuli. This recentering bias for both vestibularly and cervically generated FP may represent a visuomotor optimizing strategy.
Barmack, N H; Errico, P; Ferraresi, A; Pettorossi, V E
1989-01-01
1. Eye movements in unanaesthetized rabbits were studied during horizontal neck-proprioceptive stimulation (movement of the body with respect to the fixed head), when this stimulation was given alone and when it was given simultaneously with vestibular stimulation (rotation of the head-body). The effect of neck-proprioceptive stimulation on modifying the anticompensatory fast-phase eye movements (AFPs) evoked by vestibular stimulation was studied with a 'conditioning-test' protocol; the 'conditioning' stimulus was a neck-proprioceptive signal evoked by a step-like change in body position with respect to the head and the 'test' stimulus was a vestibular signal evoked by a step rotation of the head-body. 2. The influence of eye position and direction of slow eye movements on the occurrence of compensatory fast-phase eye movements (CFPs) evoked by neck-proprioceptive stimulation was also examined. 3. The anticompensatory fast phase (AFP) evoked by vestibular stimulation was attenuated by a preceding neck-proprioceptive stimulus which when delivered alone evoked compensatory slow-phase eye movements (CSP) in the same direction as the CSP evoked by vestibular stimulation. Conversely, the vestibularly evoked AFP was potentiated by a neck-proprioceptive stimulus which evoked CSPs opposite to that of vestibularly evoked CSPs. 4. Eccentric initial eye positions increased the probability of occurrence of midline-directed compensatory fast-phase eye movements (CFPs) evoked by appropriate neck-proprioceptive stimulation. 5. The gain of the horizontal cervico-ocular reflex (GHCOR) was measured from the combined changes in eye position resulting from AFPs and CSPs. GHCOR was potentiated during simultaneous vestibular stimulation. This enhancement of GHCOR occurred at neck-proprioceptive stimulus frequencies which, in the absence of conjoint vestibular stimulation, do not evoke CSPs. PMID:2795479
San Juan, Valerie; Chambers, Craig G; Berman, Jared; Humphry, Chelsea; Graham, Susan A
2017-10-01
Two experiments examined whether 5-year-olds draw inferences about desire outcomes that constrain their online interpretation of an utterance. Children were informed of a speaker's positive (Experiment 1) or negative (Experiment 2) desire to receive a specific toy as a gift before hearing a referentially ambiguous statement ("That's my present") spoken with either a happy or sad voice. After hearing the speaker express a positive desire, children (N=24) showed an implicit (i.e., eye gaze) and explicit ability to predict reference to the desired object when the speaker sounded happy, but they showed only implicit consideration of the alternate object when the speaker sounded sad. After hearing the speaker express a negative desire, children (N=24) used only happy prosodic cues to predict the intended referent of the statement. Taken together, the findings indicate that the efficiency with which 5-year-olds integrate desire reasoning with language processing depends on the emotional valence of the speaker's voice but not on the type of desire representations (i.e., positive vs. negative) that children must reason about online. Copyright © 2017 Elsevier Inc. All rights reserved.
Migliaccio, Americo A; Cremer, Phillip D; Aw, Swee T; Halmagyi, G Michael; Curthoys, Ian S; Minor, Lloyd B; Todd, Michael J
2003-07-01
The aim of this study was to determine whether vergence-mediated changes in the axis of eye rotation in the human vestibulo-ocular reflex (VOR) would obey Listing's Law (normally associated with saccadic eye movements) independent of the initial eye position. We devised a paradigm for disassociating the saccadic velocity axis from eye position by presenting near and far targets that were centered with respect to one eye. We measured binocular 3-dimensional eye movements using search coils in ten normal subjects and 3-dimensional linear head acceleration using Optotrak in seven normal subjects. The stimuli consisted of passive, unpredictable, pitch head rotations with peak acceleration of approximately 2000°/s² and amplitude of approximately 20°. During the pitch head rotation, each subject fixated straight ahead with one eye, whereas the other eye was adducted 4° during far viewing (94 cm) and 25° during near viewing (15 cm). Our data showed expected compensatory pitch rotations in both eyes, and a vergence-mediated horizontal rotation only in the adducting eye. In addition, during near viewing we observed torsional eye rotations not only in the adducting eye but also in the eye looking straight ahead. In the straight-ahead eye, the change in torsional eye velocity between near and far viewing, which began approximately 40 ms after the start of head rotation, was 10 ± 6°/s (mean ± SD). This change in torsional eye velocity resulted in a 2.4 ± 1.5° axis tilt toward Listing's plane in that eye. In the adducting eye, the change in torsional eye velocity between near and far viewing was 16 ± 6°/s (mean ± SD) and resulted in a 4.1 ± 1.4° axis tilt. The torsional eye velocities were conjugate and both eyes partially obeyed Listing's Law. The axis of eye rotation tilted in the direction of the line of sight by approximately one-third of the angle between the line of sight and a line orthogonal to Listing's plane. This tilt was higher than predicted by the one-quarter rule. The translational acceleration component of the pitch head rotation measured 0.5 g and may have contributed to the increased torsional component observed during near viewing. Our data show that vergence-mediated eye movements obey a VOR/Listing's Law compromise strategy independent of the initial eye position.
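For reference, the axis-tilt relationships mentioned above can be written compactly (a standard formulation, not quoted from the study). With \(\gamma\) the angle between the line of sight and the direction orthogonal to Listing's plane, the velocity-axis tilt is roughly proportional to \(\gamma\):

\[
\theta_{\mathrm{tilt}} \approx k\,\gamma, \qquad k_{\text{quarter rule}} = \tfrac{1}{4}, \qquad k_{\text{observed here}} \approx \tfrac{1}{3}.
\]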
Humor Facilitates Text Comprehension: Evidence from Eye Movements
ERIC Educational Resources Information Center
Ferstl, Evelyn C.; Israel, Laura; Putzar, Lisa
2017-01-01
One crucial property of verbal jokes is that the punchline usually contains an incongruency that has to be resolved by updating the situation model representation. In the standard pragmatic model, these processes are considered to require cognitive effort. However, only few studies compared jokes to texts requiring a situation model revision…
I See It in My Hands' Eye: Representational Gestures Reflect Conceptual Demands
ERIC Educational Resources Information Center
Hostetter, Autumn B.; Alibali, Martha W.; Kita, Sotaro
2007-01-01
The Information Packaging Hypothesis (Kita, 2000) holds that gestures play a role in conceptualising information for speaking. According to this view, speakers will gesture more when describing difficult-to-conceptualise information than when describing easy-to-conceptualise information. In the present study, 24 participants described ambiguous…
3D visualization and stereographic techniques for medical research and education.
Rydmark, M; Kling-Petersen, T; Pascher, R; Philip, F
2001-01-01
While computers have been able to work with true 3D models for a long time, the same does not generally apply to their users. Over the years, a number of 3D visualization techniques have been developed to enable a scientist or a student to see not only a flat representation of an object, but also an approximation of its Z-axis. In addition to the traditional flat image representation of a 3D object, at least four established methodologies exist: Stereo pairs. Using image analysis tools or 3D software, a set of images can be made, each representing the left and the right eye view of an object. Placed next to each other and viewed through a separator, the three-dimensionality of an object can be perceived. While this is usually done on still images, tests at Mednet have shown this to work with interactively animated models as well. However, this technique requires some training and experience. Pseudo3D, such as VRML or QuickTime VR, where the interactive manipulation of a 3D model lets the user achieve a sense of the model's true proportions. While this technique works reasonably well, it is not a "true" stereographic visualization technique. Red/Green separation, i.e. "the traditional 3D image", where a red and a green representation of a model are superimposed at an angle corresponding to the viewing angle of the eyes; by using a similar set of eyeglasses, a person can create a mental 3D image. The end result does produce a sense of 3D but the effect is difficult to maintain. Alternating left/right eye systems. These systems (typified by the StereoGraphics CrystalEyes system) let the computer display a "left eye" image followed by a "right eye" image while simultaneously triggering the eyepiece to alternately make one eye "blind". When run at 60 Hz or higher, the brain will fuse the left/right images together and the user will effectively see a 3D object. Depending on configurations, the alternating systems run at between 50 and 60 Hz, thereby creating a flickering effect, which is strenuous for prolonged use. However, all of the above have one or more drawbacks such as high costs, poor quality and localized use. A fifth system, recently released by Barco Systems, modifies the CrystalEyes system by projecting two superimposed images, using polarized light, with the wave plane of the left image at a right angle to that of the right image. By using polarized glasses, each eye will see the appropriate image and true stereographic vision is achieved. While the system requires very expensive hardware, it solves some of the more important problems mentioned above, such as the capacity to use higher frame rates and the ability to display images to a large audience. Mednet has instigated a research project which uses reconstructed models from the central nervous system (human brain and basal ganglia, cortex, dendrites and dendritic spines) and peripheral nervous system (nodes of Ranvier and axoplasmic areas). The aim is to modify the models to fit the different visualization techniques mentioned above and compare a group of users' perceived degree of 3D for each technique.
Bayesian microsaccade detection
Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji
2017-01-01
Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
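A minimal sketch of the kind of generative model the abstract describes, not the authors' BMD implementation: a hidden state switches between drift and microsaccade, and eye position evolves as a random walk whose velocity statistics depend on that state. All parameter values are illustrative assumptions; full BMD would additionally sample the posterior over the state sequence, which this sketch does not do.

import numpy as np

rng = np.random.default_rng(42)
n, dt = 2000, 0.001                      # samples, sampling interval (s)
p_switch = {0: 0.005, 1: 0.05}           # per-sample switch probabilities (drift=0, microsaccade=1)
vel_sd = {0: 0.05, 1: 1.0}               # velocity noise per state (deg/s, illustrative)
vel_bias = {0: 0.0, 1: 20.0}             # microsaccades carry a directed velocity

state = np.zeros(n, dtype=int)
pos = np.zeros(n)
direction = 1.0
for t in range(1, n):
    if rng.random() < p_switch[state[t - 1]]:
        state[t] = 1 - state[t - 1]
        direction = rng.choice([-1.0, 1.0])     # new movement direction on a switch
    else:
        state[t] = state[t - 1]
    v = direction * vel_bias[state[t]] + rng.normal(0.0, vel_sd[state[t]])
    pos[t] = pos[t - 1] + v * dt

# Inference in BMD would sample the posterior over `state` given `pos`;
# a simple detector, by contrast, just thresholds the (smoothed) velocity.
speed = np.abs(np.gradient(pos, dt))
detected = speed > 10.0                   # naive velocity-threshold baseline
print(detected.mean(), state.mean())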
Bond, R R; Kligfield, P D; Zhu, T; Finlay, D D; Drew, B; Guldenring, D; Breen, C; Clifford, G D; Wagner, G S
2015-01-01
The 12-lead electrocardiogram (ECG) is a complex set of cardiac signals that require a high degree of skill and clinical knowledge to interpret. Therefore, it is imperative to record and understand how expert readers interpret the 12-lead ECG. This short paper showcases how eye tracking technology and audio data can be fused together and visualised to gain insight into the interpretation techniques employed by an eminent ECG champion, namely Dr Rory Childers. Copyright © 2015 Elsevier Inc. All rights reserved.
Premotor neurons encode torsional eye velocity during smooth-pursuit eye movements
NASA Technical Reports Server (NTRS)
Angelaki, Dora E.; Dickman, J. David
2003-01-01
Responses to horizontal and vertical ocular pursuit and head and body rotation in multiple planes were recorded in eye movement-sensitive neurons in the rostral vestibular nuclei (VN) of two rhesus monkeys. When tested during pursuit through primary eye position, the majority of the cells preferred either horizontal or vertical target motion. During pursuit of targets that moved horizontally at different vertical eccentricities or vertically at different horizontal eccentricities, eye angular velocity has been shown to include a torsional component the amplitude of which is proportional to half the gaze angle ("half-angle rule" of Listing's law). Approximately half of the neurons, the majority of which were characterized as "vertical" during pursuit through primary position, exhibited significant changes in their response gain and/or phase as a function of gaze eccentricity during pursuit, as if they were also sensitive to torsional eye velocity. Multiple linear regression analysis revealed a significant contribution of torsional eye movement sensitivity to the responsiveness of the cells. These findings suggest that many VN neurons encode three-dimensional angular velocity, rather than the two-dimensional derivative of eye position, during smooth-pursuit eye movements. Although no clear clustering of pursuit preferred-direction vectors along the semicircular canal axes was observed, the sensitivity of VN neurons to torsional eye movements might reflect a preservation of similar premotor coding of visual and vestibular-driven slow eye movements for both lateral-eyed and foveate species.
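The half-angle rule cited above has a standard compact form (a textbook statement, not taken from this study): during pursuit at gaze eccentricity \(\gamma\) from primary position, the angular velocity axis tilts out of Listing's plane by roughly \(\gamma/2\), so the torsional velocity component grows with gaze angle:

\[
\omega_{\mathrm{tor}} \approx \dot{\theta}\,\tan\!\bigl(\tfrac{\gamma}{2}\bigr),
\]

where \(\dot{\theta}\) is the pursuit eye velocity (e.g., vertical) and \(\gamma\) the orthogonal (e.g., horizontal) gaze eccentricity.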
Reading paths, eye drawings, and word islands: Movement in Un coup de dés.
Loos, Ruth
2012-01-01
In the framework of an artistic-scientific project on eye-movements during reading, my collaborators from the psychology department at the KU Leuven and I had a close look at the poem "Un coup de dés jamais n'abolira le hasard" ("A throw of the dice will never abolish chance") by Stéphane Mallarmé. The poem is an intriguing example of nonlinear writing, of a typographic game with white and space, and of an interweaving of different reading lines. These specific features evoke multiple reading methods. The animation, Movement in Un coup de dés, created during the still-ongoing collaboration interweaves a horizontal and a vertical reading method, two spontaneous ways of reading that point at the poem's intriguing ambiguity. Not only are we interested in different methods of reading; the scientific representations of eye movements themselves are a rich source of images with much artistic potential. We explore eye movements as "eye drawings" in new images characterized both by a scientific and by an artistic perspective.
The neural basis of suppression and amblyopia in strabismus.
Sengpiel, F; Blakemore, C
1996-01-01
The neurophysiological consequences of artificial strabismus in cats and monkeys have been studied for 30 years. However, until very recently no clear picture has emerged of neural deficits that might account for the powerful interocular suppression that strabismic humans experience, nor for the severe amblyopia that is often associated with convergent strabismus. Here we review the effects of squint on the integrative capacities of the primary visual cortex and propose a hypothesis about the relationship between suppression and amblyopia. Most neurons in the visual cortex of normal cats and monkeys can be excited through either eye and show strong facilitation during binocular stimulation with contours of similar orientation in the two eyes. But in strabismic animals, cortical neurons tend to fall into two populations of monocularly excitable cells and exhibit suppressive binocular interactions that share key properties with perceptual suppression in strabismic humans. Such interocular suppression, if prolonged and asymmetric (with input from the squinting eye habitually suppressed by that from the fixating eye), might lead to neural defects in the representation of the deviating eye and hence to amblyopia.
Viewing condition dependence of the gaze-evoked nystagmus in Arnold Chiari type 1 malformation.
Ghasia, Fatema F; Gulati, Deepak; Westbrook, Edward L; Shaikh, Aasef G
2014-04-15
Saccadic eye movements rapidly shift gaze to the target of interest. Once the eyes reach a given target, the brainstem ocular motor integrator utilizes feedback from various sources to assure steady gaze. One such source is the cerebellum, whose lesions can impair neural integration, leading to gaze-evoked nystagmus. Gaze-evoked nystagmus is characterized by drifts moving the eyes away from the target and a null position where the drifts are absent. The extent of impairment in the neural integration for two opposite eccentricities might determine the location of the null position. The eye-in-orbit position might also determine the location of the null. We report this phenomenon in a patient with Arnold Chiari type 1 malformation who had intermittent esotropia and horizontal gaze-evoked nystagmus with a shift in the null position. During binocular viewing, the null was shifted to the right. During monocular viewing, when the eye under cover drifted nasally (secondary to the esotropia), the null of the gaze-evoked nystagmus reorganized toward the center. We speculate that the output of the neural integrator is altered by the conflicting eye-in-orbit position signals from the two eyes, secondary to the strabismus. This could possibly explain the reorganization of the location of the null position. Copyright © 2014 Elsevier B.V. All rights reserved.
The effect of emotionally valenced eye region images on visuocortical processing of surprised faces.
Li, Shuaixia; Li, Ping; Wang, Wei; Zhu, Xiangru; Luo, Wenbo
2018-05-01
In this study, we presented pictorial representations of happy, neutral, and fearful expressions projected in the eye regions to determine whether the eye region alone is sufficient to produce a context effect. Participants were asked to judge the valence of surprised faces that had been preceded by a picture of an eye region. Behavioral results showed that affective ratings of surprised faces were context dependent. Prime-related ERPs with presentation of happy eyes elicited a larger P1 than those for neutral and fearful eyes, likely due to the recognition advantage provided by a happy expression. Target-related ERPs showed that surprised faces in the context of fearful and happy eyes elicited a dramatically larger C1 than those in the neutral context, which reflected the modulation by predictions during the earliest stages of face processing. There were larger N170 amplitudes in the neutral and fearful eye contexts compared to the happy context, suggesting faces were being integrated with contextual threat information. The P3 component exhibited enhanced brain activity in response to faces preceded by happy and fearful eyes compared with neutral eyes, indicating motivated attention processing may be involved at this stage. Altogether, these results indicate for the first time that the influence of isolated eye regions on the perception of surprised faces involves preferential processing at the early stages and elaborate processing at the late stages. Moreover, higher cognitive processes such as predictions and attention can modulate face processing from the earliest stages in a top-down manner. © 2017 Society for Psychophysiological Research.
NASA Astrophysics Data System (ADS)
Bekisz, Marek; Shendye, Ninad; Raciborska, Ida; Wróbel, Andrzej; Waleszczyk, Wioletta J.
2017-08-01
The process of learning induces plastic changes in the neuronal network of the brain. Our earlier studies on mice showed that classical conditioning in which monocular visual stimulation was paired with an electric shock to the tail enhanced GABA immunoreactivity within layer 4 of the monocular part of the primary visual cortex (V1), contralaterally to the stimulated eye. In the present experiment we investigated whether the same classical conditioning paradigm induces changes of neuronal excitability in this cortical area. Two experimental groups were used: mice that underwent 7-day visual classical conditioning and controls. Patch-clamp whole-cell recordings were performed from ex vivo slices of mouse V1. The slices were perfused with a modified artificial cerebrospinal fluid, the composition of which better mimics the brain interstitial fluid in situ and induces spontaneous activity. Neuronal excitability was characterized by measuring the frequency of spontaneous action potentials. We found that layer 4 star pyramidal cells located in the monocular representation of the "trained" eye in V1 had a lower frequency of spontaneous activity in comparison with neurons from the same cortical region of control animals. Weaker spontaneous firing indicates decreased general excitability of star pyramidal neurons within layer 4 of the monocular representation of the "trained" eye in V1. Such an effect could result from enhanced inhibitory processes accompanying learning in this cortical area.
Spatial constancy mechanisms in motor control
Medendorp, W. Pieter
2011-01-01
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye–head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals. PMID:21242137
Ishizuka, K; Kashiwakura, M; Oiji, A
1998-05-01
In order to explore a possible association between psychiatric symptoms and eye movements, 32 patients with schizophrenia were examined using an eye mark recorder in combination with the Positive and Negative Syndrome Scale, and were compared with 32 controls. Four types of figures were presented to the subjects: geometrical figures, drawings, story drawings, and sentences. Mean eye fixation time was significantly longer and mean eye scanning length was significantly shorter for the patients than for controls, not only in response to the geometric figures, but also in response to the story drawings. Eye fixation time and scanning velocity were positively correlated with degrees of thought disturbance. The number of eye fixations, eye fixation time and scanning velocity were negatively correlated with degree of depressive tendency.
[Virtual reality in ophthalmological education].
Wagner, C; Schill, M; Hennen, M; Männer, R; Jendritza, B; Knorz, M C; Bender, H J
2001-04-01
We present a computer-based medical training workstation for the simulation of intraocular eye surgery. The surgeon manipulates two original instruments inside a mechanical model of the eye. The instrument positions are tracked by CCD cameras and monitored by a PC, which renders the scene using a computer-graphic model of the eye and the instruments. The simulator incorporates a model of the operation table, a mechanical eye, three CCD cameras for position tracking, the stereo display, and a computer. The three cameras are mounted under the operation table, from where they can observe the interior of the mechanical eye. Using small markers, the cameras recognize the instruments and the eye, and their position and orientation in space are determined by stereoscopic back projection. The simulation runs at more than 20 frames per second and provides a realistic impression of the surgery. It includes the cold light source, which can be moved inside the eye, and the shadow of the instruments on the retina, which is important for navigation.
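The "stereoscopic back projection" step described in this abstract is, in essence, triangulation from calibrated camera views. Below is a minimal linear (DLT-style) triangulation sketch in Python; the projection matrices and marker coordinates are invented for illustration and are not the simulator's calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices (from calibration)
    x1, x2 : (u, v) image coordinates of the same marker in each view
    Returns the 3-D point in the common world frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise

# Invented projection matrices and marker detections (normalised image coordinates)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # reference camera
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # laterally shifted camera
print(triangulate(P1, P2, (0.02, 0.01), (0.01, 0.01)))         # ~[0.2, 0.1, 10.0]
```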
Looking at My Own Face: Visual Processing Strategies in Self–Other Face Recognition
Chakraborty, Anya; Chakrabarti, Bhismadev
2018-01-01
We live in an age of ‘selfies.’ Yet, how we look at our own faces has seldom been systematically investigated. In this study we test if the visual processing of the highly familiar self-face is different from other faces, using psychophysics and eye-tracking. This paradigm also enabled us to test the association between the psychophysical properties of self-face representation and visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task from a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look longer at the lower part of the face for self-face compared to other-face. Participants with a more distinct self-face representation, as indexed by a steeper slope of the psychometric response curve for self-face recognition, were found to look longer at upper part of the faces identified as ‘self’ vs. those identified as ‘other’. This result indicates that self-face representation can influence where we look when we process our own vs. others’ faces. We also investigated the association of autism-related traits with self-face processing metrics since autism has previously been associated with atypical self-processing. The study did not find any self-face specific association with autistic traits, suggesting that autism-related features may be related to self-processing in a domain specific manner. PMID:29487554
The Effectiveness of Panoramic Maps Design: a Preliminary Study Based on Mobile Eye-Tracking
NASA Astrophysics Data System (ADS)
Balzarini, R.; Murat, M.
2016-06-01
This paper presents preliminary results from ongoing research on visual attention using mobile eye-tracking techniques. The visual-cognitive approach investigates the reading and comprehension of a particular territorial representation: ski trail maps. The general aim of the study is to provide insight into the effectiveness of panoramic ski maps and, more broadly, to suggest innovative and efficient representations of geographic information in mountain areas. According to some mountain operators, the information provided by paper ski maps no longer meets the needs of a large part of their customers; the question now arises of adapting them to new digital practices (iPhone, tablets). From a computerization perspective, this study focuses on the representations, and the inferred information, that genuinely help user-skiers apprehend the territory and make decisions, and that could be effectively replicated in a digital system. The most interesting finding concerns the relevance of the panorama view: the panorama still fascinates but, contrary to conventional wisdom, the information it provides does not seem to be useful to the skier. From a socio-historical perspective, this study shows how an empirical, evidence-based approach can support change: our results inform the discussion on the effectiveness of the message that mountain operators want to convey to tourists and, therefore, on the renewal of (geographical) information in ski resorts.
Multisensory guidance of orienting behavior.
Maier, Joost X; Groh, Jennifer M
2009-12-01
We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual systems encode space exist throughout the auditory pathway. We will review these differences at the neural level and discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.
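As a toy illustration of the head-centered to eye-centered remapping discussed above, a first approximation in the horizontal plane simply subtracts the current eye-in-head position from the head-centered sound azimuth; the numbers below are arbitrary examples.

```python
def head_to_eye_centered(sound_azimuth_deg: float, eye_in_head_deg: float) -> float:
    """First-approximation remapping of a head-centered auditory azimuth into
    eye-centered coordinates by subtracting the current eye-in-head position.
    Positive values mean rightward; all angles in degrees."""
    return sound_azimuth_deg - eye_in_head_deg

# A sound 10 deg right of the head midline, with gaze 20 deg right,
# falls 10 deg to the LEFT of the fovea in eye-centered coordinates.
print(head_to_eye_centered(10.0, 20.0))  # -10.0
```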
Shields, A; Ryan, R M; Cicchetti, D
2001-05-01
This study examined whether maltreated children were more likely than nonmaltreated children to develop poor-quality representations of caregivers and whether these representations predicted children's rejection by peers. A narrative task assessing representations of mothers and fathers was administered to 76 maltreated and 45 nonmaltreated boys and girls (8-12 years old). Maltreated children's representations were more negative/constricted and less positive/coherent than those of nonmaltreated children. Maladaptive representations were associated with emotion dysregulation, aggression, and peer rejection, whereas positive/coherent representations were related to prosocial behavior and peer preference. Representations mediated maltreatment's effects on peer rejection in part by undermining emotion regulation. Findings suggest that representations of caregivers serve an important regulatory function in the peer relationships of at-risk children.
NASA Astrophysics Data System (ADS)
Tippett, Christine D.
2016-03-01
The move from learning science from representations to learning science with representations has many potential and undocumented complexities. This thematic analysis partially explores the trends of representational uses in science instruction, examining 80 research studies on diagram use in science. These studies, published during 2000-2014, were located through searches of journal databases and books. Open coding of the studies identified 13 themes, 6 of which were identified in at least 10% of the studies: eliciting mental models, classroom-based research, multimedia principles, teaching and learning strategies, representational competence, and student agency. A shift in emphasis on learning with rather than learning from representations was evident across the three 5-year intervals considered, mirroring a pedagogical shift from science instruction as transmission of information to constructivist approaches in which learners actively negotiate understanding and construct knowledge. The themes and topics in recent research highlight areas of active interest and reveal gaps that may prove fruitful for further research, including classroom-based studies, the role of prior knowledge, and the use of eye-tracking. The results of the research included in this thematic review of the 2000-2014 literature suggest that both interpreting and constructing representations can lead to better understanding of science concepts.
Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory
Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.
2013-01-01
Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773
Mani, Nivedita; Huettig, Falk
2014-10-01
Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing, potentially attributable to participants' literacy skills. Against this background, the current study examined the role of word reading skill in listeners' anticipation of upcoming spoken language input in children at the cusp of learning to read; if reading skills affect predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-olds on their prediction of upcoming spoken language input in an eye-tracking task. Although children, as in previous studies, were able to anticipate upcoming spoken language input, there was a strong positive correlation between children's word reading skills (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition skills) and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations, which in turn also supports anticipation of upcoming spoken words. Copyright © 2014 Elsevier Inc. All rights reserved.
Verkicharla, Pavan K; Suheimat, Marwan; Mallen, Edward A H; Atchison, David A
2014-01-01
The eye rotation approach for measuring peripheral eye length leads to concern about whether the rotation influences results, such as through pressure exerted by eyelids or extra-ocular muscles. This study investigated whether this approach is valid. Peripheral eye lengths were measured with a Lenstar LS 900 biometer for eye rotation and no-eye rotation conditions (head rotation for the horizontal meridian and instrument rotation for the vertical meridian). Measurements were made for 23 healthy young adults along the horizontal visual field (± 30°) and, for a subset of eight participants, along the vertical visual field (± 25°). To investigate the influence of the duration of eye rotation, for six participants measurements were made at 0, 60, 120, 180 and 210 s after eye rotation to ± 30° along the horizontal and vertical visual fields. Peripheral eye lengths were not significantly different between the conditions along the vertical meridian (F(1,7) = 0.16, p = 0.71). Peripheral eye lengths differed significantly between the conditions along the horizontal meridian (F(1,22) = 4.85, p = 0.04), although not at individual positions (p ≥ 0.10), and the differences were not considered important. There were no apparent differences between the emmetropic and myopic groups. There was no significant change in eye length at any position after maintaining position for 210 s. Eye rotation and no-eye rotation conditions were similar for measuring peripheral eye lengths along horizontal and vertical visual field meridians at ± 30° and ± 25°, respectively. Either condition can be used to estimate retinal shape from peripheral eye lengths. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
High-resolution eye tracking using V1 neuron activity
McFarland, James M.; Bondy, Adrian G.; Cumming, Bruce G.; Butts, Daniel A.
2014-01-01
Studies of high-acuity visual cortical processing have been limited by the inability to track eye position with sufficient accuracy to precisely reconstruct the visual stimulus on the retina. As a result, studies on primary visual cortex (V1) have been performed almost entirely on neurons outside the high-resolution central portion of the visual field (the fovea). Here we describe a procedure for inferring eye position using multi-electrode array recordings from V1 coupled with nonlinear stimulus processing models. We show that this method can be used to infer eye position with one arc-minute accuracy – significantly better than conventional techniques. This allows for analysis of foveal stimulus processing, and provides a means to correct for eye-movement induced biases present even outside the fovea. This method could thus reveal critical insights into the role of eye movements in cortical coding, as well as their contribution to measures of cortical variability. PMID:25197783
Yang, Jee Myung; Park, Sang Woo; Ji, Yong Sok; Kim, Jaeryung; Yoo, Chungkwon; Heo, Hwan
2017-04-20
To investigate postural effects on intraocular pressure (IOP) and ocular perfusion pressure (OPP) in patients with non-arteritic ischemic optic neuropathy (NAION). IOP and blood pressure (BP) were measured in 20 patients with unilateral NAION 10 min after changing to each of the following positions sequentially: sitting, supine, right lateral decubitus position (LDP), supine, left LDP, and supine. IOP was measured using a rebound tonometer and OPP was calculated using formulas based on mean BP. The dependent LDP (DLDP) was defined as the position in which the eye of interest (affected or unaffected eye) was placed on the dependent side in the LDP. IOPs were significantly higher (P = 0.020) and OPPs were significantly lower (P = 0.041) in the affected eye compared with the unaffected eye, with the affected eye in DLDP. Compared with the unaffected eyes, the mean IOP of the affected eyes increased significantly more (+2.9 ± 4.4 versus +0.7 ± 3.1 mmHg, respectively; P = 0.003) and the mean OPP decreased significantly more (-6.7 ± 9.4 versus -4.9 ± 8.0 mmHg, respectively; P = 0.022) after changing positions from supine to DLDP. In addition, changing position from supine to DLDP produced significantly larger absolute changes in IOP (4.13 ± 3.19 mmHg versus 2.51 ± 1.92 mmHg, respectively; P = 0.004) and OPP (9.86 ± 5.69 mmHg versus 7.50 ± 5.49 mmHg, respectively; P = 0.009) in the affected eye compared with the unaffected eye. In the affected eye, there was a significant positive correlation between the absolute changes in IOP and OPP when changing position from supine to DLDP (Rho = 0.512, P = 0.021). A postural change from supine to DLDP caused significant fluctuations in IOP and OPP of the affected eye, and may significantly increase IOP and decrease OPP. Posture-induced IOP changes may be a predisposing factor for NAION development.
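The abstract states that OPP was calculated from formulas based on mean BP but does not give them; a commonly used approximation in the posture literature is OPP ≈ 2/3 × MAP − IOP, with MAP estimated from systolic and diastolic BP. The sketch below uses that assumed formula with hypothetical readings.

```python
def mean_arterial_pressure(sbp_mmhg: float, dbp_mmhg: float) -> float:
    """Mean arterial pressure estimated from systolic/diastolic BP (mmHg)."""
    return dbp_mmhg + (sbp_mmhg - dbp_mmhg) / 3.0

def ocular_perfusion_pressure(map_mmhg: float, iop_mmhg: float) -> float:
    """Assumed formula (not necessarily the study's): OPP = 2/3 * MAP - IOP (mmHg)."""
    return (2.0 / 3.0) * map_mmhg - iop_mmhg

# Hypothetical readings for illustration only
map_val = mean_arterial_pressure(sbp_mmhg=120, dbp_mmhg=80)  # ~93.3 mmHg
print(ocular_perfusion_pressure(map_val, iop_mmhg=16))       # ~46.2 mmHg
```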
Head-body righting reflex from the supine position and preparatory eye movements.
Troiani, Diana; Ferraresi, Aldo; Manni, Ermanno
2005-05-01
Saccular and utricular maculae can provide information on the supine static position, considering that both have pronounced curved structures with hair cells having a variety of polarization vectors that enable them to sense an inverted position and thus direct the righting reflex. The vestibular system is essential for the structuring of motor behaviour, senses linear and angular acceleration and has a strong influence on posture and balance at rest, during locomotion and in head body righting reflexes. Using guinea pigs in the supine position with a symmetrical head and trunk position, the ocular position was analysed to ascertain whether any ocular movement that occurred would adopt a spatial deviation indicative of the subsequent head and body righting. The characteristics of the righting reflex (direction, latency, duration and velocity) were analysed in guinea pigs from position signals obtained from search coils implanted in the eye, head and pelvis. The animals were kept in a supine position for a few seconds or even minutes with the eyes in a stable primary position and the head and body symmetrical and immobile. The righting reflex took place either immediately or after a slow deviation of the eyes. In both cases the righting sequence (eyes, head, body) was stereotyped and consistent. The direction of head and body righting was along the longitudinal axis of the animal and was either clockwise or anticlockwise and the direction of righting was related to the direction of the eye deviation. The ocular deviation and the direction of deviation that initiated and determined the direction of the righting reflex could be explained by possible otolithic activation.
Principi, S; Farah, J; Ferrari, P; Carinou, E; Clairand, I; Ginjaume, M
2016-09-01
This paper aims to provide some practical recommendations to reduce eye lens dose for workers exposed to X-rays in interventional cardiology and radiology and also to propose an eye lens correction factor when lead glasses are used. Monte Carlo simulations are used to study the variation of eye lens exposure with operator position, height and body orientation with respect to the patient and the X-ray tube. The paper also looks into the efficiency of wraparound lead glasses using simulations. Computation results are compared with experimental measurements performed in Spanish hospitals using eye lens dosemeters as well as with data from available literature. Simulations showed that left eye exposure is generally higher than the right eye, when the operator stands on the right side of the patient. Operator height can induce a strong dose decrease by up to a factor of 2 for the left eye for 10-cm-taller operators. Body rotation of the operator away from the tube by 45°-60° reduces eye exposure by a factor of 2. The calculation-based correction factor of 0.3 for wraparound type lead glasses was found to agree reasonably well with experimental data. Simple precautions, such as the positioning of the image screen away from the X-ray source, lead to a significant reduction of the eye lens dose. Measurements and simulations performed in this work also show that a general eye lens correction factor of 0.5 can be used when lead glasses are worn regardless of operator position, height and body orientation. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Guillaume, Alain; Pélisson, Denis
2006-12-15
Shifting gaze requires precise coordination of eye and head movements. It is clear that the superior colliculus (SC) is involved with saccadic gaze shifts. Here we investigate its role in controlling both eye and head movements during gaze shifts. Gaze shifts of the same amplitude can be evoked from different SC sites by controlled electrical microstimulation. To describe how the SC coordinates the eye and the head, we compare the characteristics of these amplitude-matched gaze shifts evoked from different SC sites. We show that matched amplitude gaze shifts elicited from progressively more caudal sites are progressively slower and associated with a greater head contribution. Stimulation at more caudal SC sites decreased the peak velocity of the eye but not of the head, suggesting that the lower peak gaze velocity for the caudal sites is due to the increased contribution of the slower-moving head. Eye-head coordination across the SC motor map is also indicated by the relative latencies of the eye and head movements. For some amplitudes of gaze shift, rostral stimulation evoked eye movement before head movement, whereas this reversed with caudal stimulation, which caused the head to move before the eyes. These results show that gaze shifts of similar amplitude evoked from different SC sites are produced with different kinematics and coordination of eye and head movements. In other words, gaze shifts evoked from different SC sites follow different amplitude-velocity curves, with different eye-head contributions. These findings shed light on mechanisms used by the central nervous system to translate a high-level motor representation (a desired gaze displacement on the SC map) into motor commands appropriate for the involved body segments (the eye and the head).
Anticipatory smooth eye movements with random-dot kinematograms
Santos, Elio M.; Gnang, Edinah K.; Kowler, Eileen
2012-01-01
Anticipatory smooth eye movements were studied in response to expectations of motion of random-dot kinematograms (RDKs). Dot lifetime was limited (52–208 ms) to prevent selection and tracking of the motion of local elements and to disrupt the perception of an object moving across space. Anticipatory smooth eye movements were found in response to cues signaling the future direction of global RDK motion, either prior to the onset of the RDK or prior to a change in its direction of motion. Cues signaling the lifetime of the dots were not effective. These results show that anticipatory smooth eye movements can be produced by expectations of global motion and do not require a sustained representation of an object or set of objects moving across space. At the same time, certain properties of global motion (direction) were more sensitive to cues than others (dot lifetime), suggesting that the rules by which prediction operates to influence pursuit may go beyond simple associations between cues and the upcoming motion of targets. PMID:23027686
Marino, Alexandria C.; Chun, Marvin M.
2011-01-01
During natural vision, eye movements can drastically alter the retinotopic (eye-centered) coordinates of locations and objects, yet the spatiotopic (world-centered) percept remains stable. Maintaining visuospatial attention in spatiotopic coordinates requires updating of attentional representations following each eye movement. However, this updating is not instantaneous; attentional facilitation temporarily lingers at the previous retinotopic location after a saccade, a phenomenon known as the retinotopic attentional trace. At various times after a saccade, we probed attention at an intermediate location between the retinotopic and spatiotopic locations to determine whether a single locus of attentional facilitation slides progressively from the previous retinotopic location to the appropriate spatiotopic location, or whether retinotopic facilitation decays while a new, independent spatiotopic locus concurrently becomes active. Facilitation at the intermediate location was not significant at any time, suggesting that top-down attention can result in enhancement of discrete retinotopic and spatiotopic locations without passing through intermediate locations. PMID:21258903
Training the intelligent eye: understanding illustrations in early modern astronomy texts.
Crowther, Kathleen M; Barker, Peter
2013-09-01
Throughout the early modern period, the most widely read astronomical textbooks were Johannes de Sacrobosco's De sphaera and the Theorica planetarum, ultimately in the new form introduced by Georg Peurbach. This essay argues that the images in these texts were intended to develop an "intelligent eye." Students were trained to transform representations of specific heavenly phenomena into moving mental images of the structure of the cosmos. Only by learning the techniques of mental visualization and manipulation could the student "see" in the mind's eye the structure and motions of the cosmos. While anyone could look up at the heavens, only those who had acquired the intelligent eye could comprehend the divinely created order of the universe. Further, the essay demonstrates that the visual program of the Sphaera and Theorica texts played a significant and hitherto unrecognized role in later scientific work. Copernicus, Galileo, and Kepler all utilized the same types of images in their own texts to explicate their ideas about the cosmos.
Nishiyama, Junpei; Hashimoto, Tsutomu; Sakashita, Yusuke; Fujiyoshi, Hironobu; Hirata, Yutaka
2008-01-01
Eye movements are utilized in many scientific studies as a probe that reflects the neural representation of 3 dimensional extrapersonal space. This study proposes a method to accurately measure the roll component of eye movements under the conditions in which the pupil diameter changes. Generally, the iris pattern matching between a reference and a test iris image is performed to estimate roll angle of the test image. However, iris patterns are subject to change when the pupil size changes, thus resulting in less accurate roll angle estimation if the pupil sizes in the test and reference images are different. We characterized non-uniform iris pattern contraction/expansion caused by pupil dilation/constriction, and developed an algorithm to convert an iris pattern with an arbitrary pupil size into that with the same pupil size as the reference iris pattern. It was demonstrated that the proposed method improved the accuracy of the measurement of roll eye movement by up to 76.9%.
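A minimal sketch of the general idea (not the authors' algorithm): unwrap the iris annulus onto a normalised (radius, angle) grid between the pupil and limbus boundaries, so that patterns acquired at different pupil sizes become comparable, then estimate roll as the angular shift maximising the correlation with the reference pattern. Uniform radial rescaling is assumed here, whereas the paper models non-uniform contraction/expansion.

```python
import numpy as np

def unwrap_iris(img, cx, cy, r_pupil, r_limbus, n_theta=360, n_r=32):
    """Sample the iris annulus on a normalised (radius, angle) grid.
    The radial coordinate runs from the pupil edge (0) to the limbus (1),
    which removes, to first order, the effect of pupil size."""
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rs = np.linspace(0, 1, n_r)
    out = np.empty((n_r, n_theta))
    for i, r in enumerate(rs):
        radius = r_pupil + r * (r_limbus - r_pupil)
        xs = (cx + radius * np.cos(thetas)).astype(int)
        ys = (cy + radius * np.sin(thetas)).astype(int)
        out[i] = img[ys, xs]
    return out

def estimate_roll_deg(ref_pattern, test_pattern):
    """Roll angle = circular angular shift that best aligns the test pattern
    with the reference pattern (column-wise cross-correlation)."""
    n_theta = ref_pattern.shape[1]
    scores = [np.sum(ref_pattern * np.roll(test_pattern, k, axis=1))
              for k in range(n_theta)]
    best = int(np.argmax(scores))
    shift = best if best <= n_theta // 2 else best - n_theta
    return shift * 360.0 / n_theta

# Tiny synthetic example: an angular iris texture imaged at two pupil sizes
yy, xx = np.mgrid[0:200, 0:200]
ang = np.arctan2(yy - 100, xx - 100)
img = np.cos(8 * ang) + 0.5 * np.cos(3 * ang)
ref = unwrap_iris(img, 100, 100, r_pupil=20, r_limbus=80)
test = unwrap_iris(img, 100, 100, r_pupil=35, r_limbus=80)  # dilated pupil
print(estimate_roll_deg(ref, test))                         # ~0 deg despite pupil change
```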
Relational Memory Is Evident in Eye Movement Behavior despite the Use of Subliminal Testing Methods
Nickel, Allison E.; Henke, Katharina; Hannula, Deborah E.
2015-01-01
While it is generally agreed that perception can occur without awareness, there continues to be debate about the type of representational content that is accessible when awareness is minimized or eliminated. Most investigations that have addressed this issue evaluate access to well-learned representations. Far fewer studies have evaluated whether or not associations encountered just once prior to testing might also be accessed and influence behavior. Here, eye movements were used to examine whether or not memory for studied relationships is evident following the presentation of subliminal cues. Participants assigned to experimental or control groups studied scene-face pairs and test trials evaluated implicit and explicit memory for these pairs. Each test trial began with a subliminal scene cue, followed by three visible studied faces. For experimental group participants, one face was the studied associate of the scene (implicit test); for controls none were a match. Subsequently, the display containing a match was presented to both groups, but now it was preceded by a visible scene cue (explicit test). Eye movements were recorded and recognition memory responses were made. Participants in the experimental group looked disproportionately at matching faces on implicit test trials and participants from both groups looked disproportionately at matching faces on explicit test trials, even when that face had not been successfully identified as the associate. Critically, implicit memory-based viewing effects seemed not to depend on residual awareness of subliminal scene cues, as subjective and objective measures indicated that scenes were successfully masked from view. The reported outcomes indicate that memory for studied relationships can be expressed in eye movement behavior without awareness. PMID:26512726
A holographic waveguide based eye tracker
NASA Astrophysics Data System (ADS)
Liu, Changgeng; Pazzucconi, Beatrice; Liu, Juan; Liu, Lei; Yao, Xincheng
2018-02-01
We demonstrated the feasibility of using a holographic waveguide for eye tracking. A custom-built holographic waveguide, a 20 mm x 60 mm x 3 mm flat glass substrate with integrated in- and out-couplers, was used for the prototype development. The in- and out-couplers, photopolymer films with holographic fringes, induced total internal reflection in the glass substrate. Diffractive optical elements were integrated into the in-coupler to serve as an optical collimator. The waveguide captured images of the anterior segment of the eye right in front of it and guided the images to a processing unit distant from the eye. The vector connecting the pupil center (PC) and the corneal reflex (CR) of the eye was used to compute eye position in the socket. An eye model, made of a high quality prosthetic eye, was used for prototype validation. The benchtop prototype demonstrated a linear relationship between the angular eye position and the PC/CR vector over a range of 60 degrees horizontally and 30 degrees vertically, at a resolution of 0.64-0.69 degrees/pixel by simple pixel count. The uncertainties of the measurements at different angular positions were within 1.2 pixels, which indicated that the prototype exhibited a high level of repeatability. These results confirmed that holographic waveguide technology could be a feasible platform for developing a wearable eye tracker. Further development can lead to a compact, see-through eye tracker, which allows continuous monitoring of eye movement during real-life tasks, and thus benefits the diagnosis of oculomotor disorders.
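The reported linear relationship between angular eye position and the PC/CR vector suggests a simple per-axis linear calibration; the sketch below is illustrative only, with invented calibration points chosen to give roughly the 0.64-0.69 deg/pixel resolution quoted above.

```python
import numpy as np

def pc_cr_vector(pupil_center_px, corneal_reflex_px):
    """PC/CR vector in pixels: pupil center minus corneal reflex."""
    return np.asarray(pupil_center_px, float) - np.asarray(corneal_reflex_px, float)

def calibrate_linear(vectors_px, angles_deg):
    """Per-axis least-squares fit of angle = gain * vector_component + offset."""
    gains, offsets = [], []
    for axis in range(2):
        A = np.column_stack([vectors_px[:, axis], np.ones(len(vectors_px))])
        (g, b), *_ = np.linalg.lstsq(A, angles_deg[:, axis], rcond=None)
        gains.append(g)
        offsets.append(b)
    return np.array(gains), np.array(offsets)

def gaze_angle_deg(vector_px, gains, offsets):
    """Map a PC/CR vector (pixels) to horizontal/vertical eye position (degrees)."""
    return gains * vector_px + offsets

# Invented calibration points giving roughly 0.65 deg/pixel, for illustration
vecs = np.array([[-20, 0], [0, 0], [20, 0], [0, 15]], float)
angs = np.array([[-13, 0], [0, 0], [13, 0], [0, 10]], float)
gains, offsets = calibrate_linear(vecs, angs)
print(gaze_angle_deg(pc_cr_vector((330, 240), (320, 240)), gains, offsets))  # ~[6.5, 0.0]
```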
Abnormal Fixational Eye Movements in Amblyopia
Shaikh, Aasef G.; Otero-Millan, Jorge; Kumar, Priyanka; Ghasia, Fatema F.
2016-01-01
Purpose Fixational saccades shift the foveal image to counteract visual fading related to neural adaptation. Drifts are slow eye movements between two adjacent fixational saccades. We quantified fixational saccades and asked whether their changes could be attributed to pathologic drifts seen in amblyopia, one of the most common causes of blindness in childhood. Methods Thirty-six pediatric subjects with varying severity of amblyopia and eleven healthy age-matched controls held their gaze on a visual target. Eye movements were measured with high-resolution video-oculography during fellow eye-viewing and amblyopic eye-viewing conditions. Fixational saccades and drifts were analyzed in the amblyopic and fellow eye and compared with controls. Results We found an increase in the amplitude with decreased frequency of fixational saccades in children with amblyopia. These alterations in fixational eye movements correlated with the severity of their amblyopia. There was also an increase in eye position variance during drifts in amblyopes. There was no correlation between the eye position variance or the eye velocity during ocular drifts and the amplitude of subsequent fixational saccade. Our findings suggest that abnormalities in fixational saccades in amblyopia are independent of the ocular drift. Discussion This investigation of amblyopia in pediatric age group quantitatively characterizes the fixation instability. Impaired properties of fixational saccades could be the consequence of abnormal processing and reorganization of the visual system in amblyopia. Paucity in the visual feedback during amblyopic eye-viewing condition can attribute to the increased eye position variance and drift velocity. PMID:26930079
Utility of Novel Autoantibodies in the Diagnosis of Sjögren's Syndrome Among Patients With Dry Eye.
Karakus, Sezen; Baer, Alan N; Agrawal, Devika; Gurakar, Merve; Massof, Robert W; Akpek, Esen K
2018-04-01
To investigate the value of 3 novel autoantibodies [salivary protein 1 (SP1), carbonic anhydrase 6 (CA6), and parotid secretory protein (PSP)] in differentiating Sjögren's syndrome (SS)-related dry eye from non-SS dry eye. Forty-six dry eye patients with SS (SS dry eye), 14 dry eye patients without SS (non-SS dry eye), and 25 controls were included. The 2012 American College of Rheumatology classification criteria were used for the diagnosis of SS. After a detailed review of systems, the Ocular Surface Disease Index questionnaire, Schirmer test without anesthesia, tear film breakup time, and ocular surface staining were performed to assess dry eye. All participants underwent serological testing using a commercially available finger prick kit. Thirty-seven patients with SS (80.4%) had a positive traditional autoantibody and 28 (60.9%) had a positive novel autoantibody. Traditional autoantibodies were absent in all non-SS dry eye patients and controls. Novel autoantibodies were present in 7/14 (50%) non-SS dry eye patients and 4/25 (16%) controls. Among 3 novel autoantibodies, anti-CA6 was significantly more prevalent in the SS and non-SS dry eye groups than in controls (52.2% vs. 42.9% vs. 8.0%, P = 0.001). Dry eye patients with positive anti-CA6 alone were significantly younger than patients with only traditional autoantibodies. Anti-CA6 was associated with worse dry eye signs and symptoms. Anti-CA6 was the most prevalent novel autoantibody in patients with dry eye, and was associated with younger age and more severe disease. Longitudinal studies are needed to determine whether anti-CA6 is a marker for early SS or perhaps another form of an autoimmune dry eye disease.
ERIC Educational Resources Information Center
Wolff, Charlotte E.; van den Bogert, Niek; Jarodzka, Halszka; Boshuizen, Henny P. A.
2015-01-01
Classroom management represents an important skill and knowledge set for achieving student learning gains, but poses a considerable challenge for beginning teachers. Understanding how teachers' cognition and conceptualizations differ between experts and novices is useful for enhancing beginning teachers' expertise development. We created a coding…
Canon Fodder: Young Adult Literature as a Tool for Critiquing Canonicity
ERIC Educational Resources Information Center
Hateley, Erica
2013-01-01
Young adult literature is a tool of socialisation and acculturation for young readers. This extends to endowing "reading" with particular significance in terms of what literature should be read and why. This paper considers some recent young adult fiction with an eye to its engagement with canonical literature and its representations of…
ERIC Educational Resources Information Center
Brocher, Andreas
2013-01-01
Because many words of a language have more than one meaning, readers regularly need to disambiguate words during sentence comprehension. Using priming, eye-tracking, and event-related brain potentials, this thesis tested whether readers differently disambiguate words with semantically related meanings like "wire" and "cone,"…
Dong, Ying; Huang, Yi-Fei; Liu, Qian; DU, Gai-Ping
2011-05-01
To investigate the clinical and histopathologic features of superficial tissue proliferation (STP) following implantation of the MICOF keratoprosthesis, and to analyze the formation and treatment of STP. Retrospective study. Eighty-five patients (85 eyes) underwent MICOF keratoprosthesis surgery from January 2000 through December 2009 at the General Hospital of the PLA, including 72 males and 13 females. The mean age of the patients was (45 ± 15) years. Preoperative diagnoses were ocular burn (56 eyes), end-stage autoimmune dry eye (14 eyes), severe ocular trauma (10 eyes) and repeated graft failure (5 eyes). Postoperatively, STP of the Kpro was observed and treated. The membranes anterior to the optical cylinder were removed and investigated by histological and immunohistochemical methods, and anterior segment specimens from normal eyes were taken as controls. Twenty-two (26%) patients presented with STP during follow-up, and proliferation occurred from 2 to 63 months after surgery (median, 7 months). The incidence of STP was 34% (19/56 eyes) in burned eyes, 14% (2/14 eyes) in end-stage dry eye, 10% (1/10 eyes) in severe mechanical ocular trauma, and none in repeated graft failure. Differences among the four groups did not reach statistical significance (χ² = 5.93, P = 0.11). Epithelial proliferation was observed in 11 patients and was easily removed; to prevent recurrence, the height of the cylinder was adjusted. Another 4 patients underwent ultra-high-frequency ocular surface plastic surgery and 7 patients received membranectomy. Histologically, the superficial proliferative membrane was composed of proliferative epithelium and fibrovascular tissue incorporating inflammatory cells. Immunohistochemical staining demonstrated increased expression of PCNA in the epithelium compared with control cornea and conjunctiva. Many vimentin-positive fibroblasts and a few α-SMA-positive myofibroblasts were present in the interstitial tissue, and the numbers of CD45RO-positive T cells, CD11c-positive dendritic cells, and CD68-positive macrophages were increased in the proliferative membranes. Tissue proliferation around the optical cylinder results in membrane formation anterior to the Kpro. Excessive inflammation at the prosthesis-corneal junction and an unsuitable height of the optical cylinder might be the main causes of STP.
A kinematic model for 3-D head-free gaze-shifts
Daemi, Mehdi; Crawford, J. Douglas
2015-01-01
Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision. PMID:26113816
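The non-commutativity of 3-D rotations that the model must respect can be illustrated by composing eye-in-head and head-in-space orientations as unit quaternions rather than adding angles; the rotations below are arbitrary examples, not model outputs.

```python
import numpy as np

def quat_from_axis_angle(axis, angle_deg):
    """Unit quaternion (w, x, y, z) for a rotation of angle_deg about axis."""
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    half = np.radians(angle_deg) / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

def quat_multiply(q1, q2):
    """Hamilton product: the rotation q2 applied first, then q1."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# Eye-in-space orientation = head-in-space composed with eye-in-head.
head_in_space = quat_from_axis_angle([0, 0, 1], 30)  # 30 deg horizontal head turn
eye_in_head = quat_from_axis_angle([0, 1, 0], 20)    # 20 deg vertical eye rotation
gaze_a = quat_multiply(head_in_space, eye_in_head)
gaze_b = quat_multiply(eye_in_head, head_in_space)
print(np.allclose(gaze_a, gaze_b))                   # False: 3-D rotations do not commute
```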
Automated nystagmus analysis. [on-line computer technique for eye data processing
NASA Technical Reports Server (NTRS)
Oman, C. M.; Allum, J. H. J.; Tole, J. R.; Young, L. R.
1973-01-01
Several methods have recently been used for on-line analysis of nystagmus: A digital computer program has been developed to accept sampled records of eye position, detect fast phase components, and output cumulative slow phase position, continuous slow phase velocity, instantaneous fast phase frequency, and other parameters. The slow phase velocity is obtained by differentiation of the calculated cumulative position rather than the original eye movement record. Also, a prototype analog device has been devised which calculates the velocity of the slow phase component during caloric testing. Examples of clinical and research eye movement records analyzed with these devices are shown.
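A present-day sketch of the same pipeline (fast-phase detection by a velocity threshold, removal of fast-phase displacements to form cumulative slow-phase position, then differentiation for slow-phase velocity); the sampling rate, threshold, and synthetic trace below are generic choices, not those of the original program.

```python
import numpy as np

def cumulative_slow_phase(eye_pos_deg, fs_hz, fast_thresh_deg_s=100.0):
    """Remove fast phases from a sampled eye-position trace and return the
    cumulative slow-phase position and the slow-phase velocity (deg/s)."""
    velocity = np.gradient(eye_pos_deg) * fs_hz     # deg/s
    is_fast = np.abs(velocity) > fast_thresh_deg_s  # fast-phase samples

    # Re-integrate position while skipping displacement made during fast phases,
    # so the resetting jumps are removed from the cumulative trace.
    increments = np.diff(eye_pos_deg, prepend=eye_pos_deg[0])
    increments[is_fast] = 0.0
    cumulative_pos = np.cumsum(increments)

    slow_phase_velocity = np.gradient(cumulative_pos) * fs_hz
    return cumulative_pos, slow_phase_velocity

# Synthetic nystagmus: 10 deg/s slow drift with a 10 deg resetting fast phase each second
fs = 200.0
t = np.arange(0, 5, 1 / fs)
pos = 10 * (t % 1.0)
cum_pos, spv = cumulative_slow_phase(pos, fs)
print(round(float(np.median(spv)), 1))  # close to the 10 deg/s slow-phase drift
```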
Turner, Daniel C.; Samuels, Brian C.; Huisingh, Carrie; Girkin, Christopher A.
2017-01-01
Purpose To study the effect and time course of body position changes on IOP in nonhuman primates. Methods We recorded continuous bilateral IOP measurements with a wireless telemetry implant in three rhesus macaques in seven different body positions. IOP measurements were acquired in the seated-upright, standing, prone, supine, right and left lateral decubitus positions (LDPs), and head-down inverted positions. Continuous IOP was recorded for 90 seconds in each position before returning to a supine reference position until IOP stabilized; measurements were averaged after IOP stabilized at each position. Results Head-down inversion increased IOP an average of 8.9 mm Hg, compared to the supine reference. In the LDP, IOP decreased an average of 0.5 mm Hg in the nondependent eye (i.e., the higher eye), while the fellow dependent (i.e., lower) eye increased an average of 0.5 mm Hg, compared to supine reference. Standing and seated positions decreased IOP 1.5 and 2.2 mm Hg, respectively, compared with supine reference. IOP changes occurred within 4 to 15 seconds of a body position change, and timing was affected by the speed at which body position was changed. Compared to the IOP in the supine position, the IOP in the inverted, prone, and seated positions was significantly different (P = 0.0313 for all). The IOP in the standing position was not statistically different from the IOP in the supine position (P = 0.094). In addition, the IOP was significantly different between the nondependent eye and the dependent eye in the LDPs compared to the supine position (P = 0.0313). Conclusions Body position has a significant effect on IOP and those changes persist over time. PMID:29228251
Simulated Keratometry Repeatability in Subjects with & without Down Syndrome
Ravikumar, Ayeswarya; Marsack, Jason D.; Benoit, Julia S.; Anderson, Heather A.
2016-01-01
Purpose To assess the repeatability of simulated keratometry measures obtained with Zeiss Atlas topography for subjects with and without Down syndrome (DS). Methods Corneal topography was attempted on 140 subjects with DS and 138 controls (aged 7 to 59 years). Subjects who had at least 3 measures in each eye were included in analysis (DS: n=140 eyes (70 subjects) and controls: n=264 eyes (132 subjects)). For each measurement, the steep corneal power (K), corneal astigmatism, flat K orientation, power vector representation of astigmatism (J0, J45), and astigmatic dioptric difference were determined (collectively termed keratometry values here). For flat K orientation comparisons, only eyes with >0.50 DC of astigmatism were included (DS: n=131 eyes (68 subjects) and control: n=217 eyes (119 subjects)). Repeatability was assessed using 1) group mean variability (average standard deviation (SD) across subjects), 2) coefficient of repeatability (COR), 3) coefficient of variation (COV), and 4) intraclass correlation coefficient (ICC). Results The keratometry values showed good repeatability as evidenced by low group mean variability for DS vs control eyes (≤0.26D vs ≤0.09D for all dioptric values; 4.51° vs 3.16° for flat K orientation); however, the group mean variability was significantly higher in DS eyes than control eyes for all parameters (p≤0.03). On average, group mean variability was 2.5× greater in the DS eyes compared to control eyes across the keratometry values. Other metrics of repeatability also indicated good repeatability for both populations for each keratometry value, although repeatability was always better in the control eyes. Conclusions DS eyes showed more variability (on average: 2.5×) compared to controls for all keratometry values. Although differences were statistically significant, on average 91% of DS eyes had variability ≤0.50D for steep K and astigmatism, and 75% of DS eyes had variability ≤5 degrees for flat K orientation.
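For reference, the power-vector components of astigmatism follow the standard Thibos formulas, and group mean variability is the average within-eye standard deviation; the repeated readings below are invented for illustration, not study data.

```python
import numpy as np

def power_vectors(cyl_d, axis_deg):
    """Thibos power-vector components of astigmatism.
    cyl_d    : cylinder magnitude in dioptres (minus-cylinder convention)
    axis_deg : cylinder axis in degrees
    """
    a = np.radians(axis_deg)
    j0 = -(cyl_d / 2.0) * np.cos(2 * a)
    j45 = -(cyl_d / 2.0) * np.sin(2 * a)
    return j0, j45

def group_mean_variability(repeats):
    """Average within-eye standard deviation across eyes.
    repeats : array of shape (n_eyes, n_repeats) for one keratometry value."""
    return float(np.mean(np.std(repeats, axis=1, ddof=1)))

# Invented repeated steep-K readings (D) for three eyes, three repeats each
steep_k = np.array([[44.10, 44.20, 44.05],
                    [43.50, 43.55, 43.60],
                    [45.00, 44.90, 45.10]])
print(round(group_mean_variability(steep_k), 3))
print(power_vectors(cyl_d=-1.00, axis_deg=180))  # with-the-rule example: (0.5, ~0.0)
```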
ERIC Educational Resources Information Center
Crookes, Kate; Hayward, William G.
2012-01-01
Presenting a face inverted (upside down) disrupts perceptual sensitivity to the spacing between the features. Recently, it has been shown that this disruption is greater for vertical than horizontal changes in eye position. One explanation for this effect proposed that inversion disrupts the processing of long-range (e.g., eye-to-mouth distance)…
Oculomotor control of primary eye position discriminates between translation and tilt
NASA Technical Reports Server (NTRS)
Hess, B. J.; Angelaki, D. E.
1999-01-01
We have previously shown that fast phase axis orientation and primary eye position in rhesus monkeys are dynamically controlled by otolith signals during head rotations that involve a reorientation of the head relative to gravity. Because of the inherent ambiguity associated with primary otolith afferent coding of linear accelerations during head translation and tilts, a similar organization might also underlie the vestibulo-ocular reflex (VOR) during translation. The ability of the oculomotor system to correctly distinguish translational accelerations from gravity in the dynamic control of primary eye position has been investigated here by comparing the eye movements elicited by sinusoidal lateral and fore-aft oscillations (0.5 Hz +/- 40 cm, equivalent to +/- 0.4 g) with those during yaw rotations (180 degrees/s) about a vertically tilted axis (23.6 degrees). We found a significant modulation of primary eye position as a function of linear acceleration (gravity) during rotation but not during lateral and fore-aft translation. This modulation was enhanced during the initial phase of rotation when there was concomitant semicircular canal input. These findings suggest that control of primary eye position and fast phase axis orientation in the VOR are based on central vestibular mechanisms that discriminate between gravity and translational head acceleration.
Modification of Eye Movements and Motion Perception during Off-Vertical Axis Rotation
NASA Technical Reports Server (NTRS)
Wood, S. J.; Reschke, M. F.; Denise, P.; CLement, G.
2006-01-01
Constant velocity Off-Vertical Axis Rotation (OVAR) imposes a continuously varying orientation of the head and body relative to gravity. The ensuing ocular reflexes include modulation of both torsional and horizontal eye movements as a function of the varying linear acceleration along the lateral plane, and modulation of vertical and vergence eye movements as a function of the varying linear acceleration along the sagittal plane. Previous studies have demonstrated that tilt and translation otolith-ocular responses, as well as motion perception, vary as a function of stimulus frequency during OVAR. The purpose of this study is to examine normative OVAR responses in healthy human subjects, and examine adaptive changes in astronauts following short duration space flight at low (0.125 Hz) and high (0.5 Hz) frequencies. Data were obtained on 24 normative subjects (14 M, 10 F) and 14 (13 M, 1 F) astronaut subjects. To date, astronauts have participated in 3 preflight sessions (n=14) and in postflight sessions on R+0/1 (n=7), R+2 (n=13) and R+4 (n=13) days after landing. Subjects were rotated in darkness about their longitudinal axis 20 deg off-vertical at constant rates of 45 and 180 deg/s, corresponding to 0.125 and 0.5 Hz. Binocular responses were obtained with video-oculography. Perceived motion was evaluated using verbal reports and a two-axis joystick (pitch and roll tilt) mounted on top of a two-axis linear stage (anterior-posterior and medial-lateral translation). Eye responses were obtained in ten of the normative subjects with the head and trunk aligned, and then with the head turned relative to the trunk 40 deg to the right or left of center. Sinusoidal curve fits were used to derive amplitude, phase and bias of the responses over several cycles at each stimulus frequency. Eye responses during 0.125 Hz OVAR were dominated by modulation of torsional and vertical eye position, compensatory for tilt relative to gravity. While there is a bias horizontal slow phase velocity (SPV), the modulation of horizontal and vergence SPV is negligible at this lower stimulus frequency. Eye responses during 0.5 Hz OVAR, however, are characterized by modulation of horizontal and vergence SPV, compensatory for translation in the lateral and sagittal planes, respectively. Neither amplitude nor bias velocities were significantly altered by head-on-trunk position. The phases of the ocular reflexes, on the other hand, shifted towards alignment with the head. During the lower frequency OVAR, subjects reported the perception of progressing along the edge of a cone. During higher frequency OVAR, subjects reported the perception of progressing along the edge of an upright cylinder. In contrast to the eye movements, the phase of both perceived tilt and translation motion is not altered by stimulus frequency. Preliminary results from astronaut data suggest that the ocular responses are not substantially altered by short-duration spaceflight. However, compared to preflight averages, astronauts reported greater amplitude of both perceived tilt and translation at low and high frequency, respectively, during early post-flight testing. We conclude that the neural processing to distinguish tilt and translation linear acceleration stimuli differs between eye movements and motion perception.
The results from modifying head-on-trunk position are consistent with the modulation of ocular reflexes during OVAR being primarily mediated by the otoliths in response to the sinusoidally varying linear acceleration along the interaural and naso-occipital head axis. While the tilt and translation ocular reflexes appear to operate in an independent fashion, the timing of perceived tilt and translation influence each other. We conclude that the perceived motion path during linear acceleration in darkness results from a composite representation of tilt and translation inputs from both vestibular and somatosensory systems.
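The sinusoidal curve fits used to derive amplitude, phase and bias at a known stimulus frequency can be implemented as a linear least-squares fit; the simulated response below is an arbitrary illustration, not study data.

```python
import numpy as np

def fit_sinusoid(t, y, freq_hz):
    """Least-squares fit of y(t) = bias + A*sin(2*pi*f*t + phase) at a known frequency.
    Returns (amplitude, phase_rad, bias)."""
    w = 2 * np.pi * freq_hz
    X = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    (a, b, bias), *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), bias

# Simulated slow-phase velocity during 0.125 Hz OVAR (illustrative numbers only)
rng = np.random.default_rng(0)
t = np.arange(0, 40, 0.01)
y = 2.0 + 5.0 * np.sin(2 * np.pi * 0.125 * t + 0.4) + 0.3 * rng.standard_normal(t.size)
print(fit_sinusoid(t, y, 0.125))  # ~ (5.0, 0.4, 2.0)
```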
Detecting eye movements in dynamic environments.
Reimer, Bryan; Sodhi, Manbir
2006-11-01
To take advantage of the increasing number of in-vehicle devices, automobile drivers must divide their attention between primary (driving) and secondary (operating an in-vehicle device) tasks. In dynamic environments such as driving, however, it is not easy to identify and quantify how a driver focuses on the various tasks he/she is simultaneously engaged in, including the distracting tasks. Measures derived from the driver's scan path have been used as correlates of driver attention. This article presents a methodology for analyzing eye positions, which are discrete samples of a subject's scan path, in order to categorize driver eye movements. Previous methods of analyzing eye positions recorded in a dynamic environment have relied completely on the manual identification of the focus of visual attention from a point of regard superimposed on a video of a recorded scene, failing to utilize information about movement structure in the raw recorded eye positions. Although effective, these methods are too time consuming to be practical for the large data sets required to identify subtle differences between drivers, under different road conditions, and with different levels of distraction. The aim of the methods presented in this article is to extend the degree of automation in the processing of eye movement data by proposing a methodology for eye movement analysis that extends automated fixation identification to include smooth and saccadic movements. By identifying eye movements in the recorded eye positions, a method of reducing the analysis of scene video to a finite search space is presented. The implementation of a software tool for the eye movement analysis is described, including an example from an on-road test-driving sample.
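The abstract does not spell out the classification rule; a common way to extend automated fixation identification to smooth pursuit and saccades is a two-threshold velocity criterion, sketched below. The thresholds, sampling rate, and synthetic gaze data are illustrative assumptions, not parameters from the article.

```python
import numpy as np

def classify_eye_movements(x, y, fs, sacc_thresh=100.0, pursuit_thresh=20.0):
    """Label each sample as 'fixation', 'pursuit', or 'saccade' from
    point-to-point angular velocity (deg/s). Thresholds are illustrative."""
    vx = np.gradient(x) * fs
    vy = np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    labels = np.where(speed >= sacc_thresh, "saccade",
             np.where(speed >= pursuit_thresh, "pursuit", "fixation"))
    return labels

# Example with synthetic gaze angles (degrees) sampled at 60 Hz:
# a step (saccade-like jump) superimposed on a slow drift.
fs = 60.0
t = np.arange(0, 2, 1 / fs)
x = np.piecewise(t, [t < 1, t >= 1], [0.0, 8.0]) + 2.0 * t
y = np.zeros_like(t)
print(classify_eye_movements(x, y, fs)[:10])
```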
View From Outside the Viewing Sphere
Koenderink, Jan; van Doorn, Andrea; Pepperell, Robert
2018-01-01
The ‘viewing sphere’, as defined by Euclid and explored by Gibson as the ‘optic array’, is generally thought of as wrapped around the eye. Can an observer step out of it? With currently popular photographic techniques, the spectator is forced to, because the viewing sphere is presented as a pictorial object. Then the question is whether human observers are able to use such pictorial representations in an intuitive manner. Can the spectator ‘mentally step into the interior’ of the pictorial viewing sphere? We explore this issue in a short experiment. Perhaps unsurprisingly, because the eye cannot see itself, the short answer is no. PMID:29854376
DOE Office of Scientific and Technical Information (OSTI.GOV)
Via, Riccardo, E-mail: riccardo.via@polimi.it; Fassi, Aurora; Fattori, Giovanni
Purpose: External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed for capturing the tridimensional (3D) ocular motion and provides an on-line estimation of intraocular lesions position based on a priori knowledge coming from volumetric imaging. Methods: Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Results: Measurements on phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring up to 50 Hz. Conclusions: A noninvasive ETS prototype was designed to perform real-time target localization and eye movement monitoring during ocular radiotherapy treatments. The device aims at improving state-of-the-art invasive procedures based on surgical implantation of radiopaque clips and repeated acquisition of X-ray images, with expected positive effects on treatment quality and patient outcome.
Via, Riccardo; Fassi, Aurora; Fattori, Giovanni; Fontana, Giulia; Pella, Andrea; Tagaste, Barbara; Riboldi, Marco; Ciocca, Mario; Orecchia, Roberto; Baroni, Guido
2015-05-01
External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed for capturing the tridimensional (3D) ocular motion and provides an on-line estimation of intraocular lesions position based on a priori knowledge coming from volumetric imaging. Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Measurements on phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring up to 50 Hz. A noninvasive ETS prototype was designed to perform real-time target localization and eye movement monitoring during ocular radiotherapy treatments. The device aims at improving state-of-the-art invasive procedures based on surgical implantation of radiopaque clips and repeated acquisition of X-ray images, with expected positive effects on treatment quality and patient outcome.
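The exact geometric model is not given in the abstract; as a hedged illustration of the general idea, the sketch below derives a gaze (optical-axis) direction from 3D cornea-center and pupil-center estimates and picks a torsional angle from template-match scores. Coordinates, scores, and function names are made up for the example.

```python
import numpy as np

def gaze_direction(cornea_center, pupil_center):
    """Unit vector along the optical axis, from the corneal curvature
    center through the pupil center (a simplified eye model)."""
    v = np.asarray(pupil_center, float) - np.asarray(cornea_center, float)
    return v / np.linalg.norm(v)

def torsion_from_templates(angle_scores):
    """Pick the torsional angle whose iris template correlates best
    (angle_scores maps candidate angles in degrees to match scores)."""
    return max(angle_scores, key=angle_scores.get)

# Illustrative values (millimetres in a camera-based reference frame).
cornea = [10.0, 5.0, 30.0]
pupil = [10.4, 5.1, 26.5]
print("gaze axis:", gaze_direction(cornea, pupil))
print("torsion:", torsion_from_templates({-2.0: 0.71, 0.0: 0.93, 2.0: 0.64}), "deg")
```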
Raphan, T
1998-05-01
This study evaluates the effects of muscle axis shifts on the performance of a vector velocity-position integrator in the CNS. Earlier models of the oculomotor plant assumed that the muscle axes remained fixed relative to the head as the eye rotated into secondary and tertiary eye positions. Under this assumption, the vector integrator model generates torsional transients as the eye moves from secondary to tertiary positions of fixation. The torsional transient represents an eye movement response to a spatial mismatch between the torque axes that remain fixed in the head and the displacement plane that changes by half the angle of the change in eye orientation. When muscle axis shifts were incorporated into the model, the torque axes were closer to the displacement plane at each eye orientation throughout the trajectory, and torsional transients were reduced dramatically. Their size and dynamics were close to reported data. It was also shown that when the muscle torque axes were rotated by 50% of the eye rotation, there was no torsional transient and Listing's law was perfectly obeyed. When muscle torque axes rotated >50%, torsional transients reversed direction compared with what occurred for muscle axis shifts of <50%. The model indicates that Listing's law is implemented by the oculomotor plant subject to a two-dimensional command signal that is confined to the pitch-yaw plane, having zero torsion. Saccades that bring the eye to orientations outside Listing's plane could easily be corrected by a roll pulse that resets the roll state of the velocity-position integrator to zero. This would be a simple implementation of the corrective controller suggested by Van Opstal and colleagues. The model further indicates that muscle axis shifts together with the torque orientation relationship for tissue surrounding the eye and Newton's laws of motion form a sufficient plant model to explain saccadic trajectories and periods of fixation when driven by a vector command confined to the pitch-yaw plane. This implies that the velocity-position integrator is probably realized as a subtractive feedback vector integrator and not as a quaternion-based integrator that implements kinematic transformations to orient the eye.
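As a numerical illustration of the underlying kinematics (not the plant model itself), composing a horizontal and a vertical rotation about head-fixed axes yields an orientation whose rotation vector has a torsional component, i.e. it falls outside Listing's plane; this is the kind of mismatch that the velocity-position integrator and muscle axis shifts must resolve. The axis conventions below are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Head-fixed axes (assumed convention): x = line of sight (torsion),
# y = interaural (vertical rotations), z = vertical (horizontal rotations).
horizontal = R.from_rotvec(np.radians([0, 0, 20]))   # 20 deg horizontal rotation
vertical = R.from_rotvec(np.radians([0, 20, 0]))     # 20 deg vertical rotation

# Composing the two head-fixed rotations gives a tertiary orientation
# whose rotation vector has a nonzero x (torsional) component, so it
# does not lie in Listing's (y-z) plane.
tertiary = vertical * horizontal
rotvec_deg = np.degrees(tertiary.as_rotvec())
print("rotation vector (deg):", np.round(rotvec_deg, 2))
print("torsional component (deg):", round(rotvec_deg[0], 2))
```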
Kohnen, T; Kühne, C; Cichocki, M; Strenger, A
2007-01-01
Centration of the ablation zone decisively influences the result of wavefront-guided LASIK. Cyclorotation of the eye occurs as the patient changes from the sitting position during aberrometry to the supine position during laser surgery and may lead to induction of lower and higher order aberrations. Twenty patients (40 eyes) underwent wavefront-guided LASIK (B&L 217z 100 excimer laser) with a static eyetracker driven by iris recognition (mean preoperative SE: -4.72 ± 1.45 D; range: -1.63 to -7.00 D). The iris patterns of the patients' eyes were memorized during aberrometry and after flap creation. The mean measured cyclorotation was -1.5 ± 4.2 degrees (range: -11.0 to 6.9 degrees). The mean absolute cyclorotation was 3.5 ± 2.7 degrees (range: 0.1 to 11.0 degrees). In 65% of all eyes, cyclorotation was >2 degrees. A static eyetracker driven by iris recognition demonstrated that cyclorotation of up to 11 degrees may occur in myopic and myopic astigmatic eyes when changing from a sitting to a supine position. Use of static eyetrackers with iris recognition may provide more precise positioning of the ablation profile as they detect and compensate for cyclorotation.
A MATLAB-based eye tracking control system using non-invasive helmet head restraint in the macaque.
De Luna, Paolo; Mohamed Mustafar, Mohamed Faiz Bin; Rainer, Gregor
2014-09-30
Tracking eye position is vital for behavioral and neurophysiological investigations in systems and cognitive neuroscience. Infrared camera systems which are now available can be used for eye tracking without the need to surgically implant magnetic search coils. These systems are generally employed using rigid head fixation in monkeys, which maintains the eye in a constant position and facilitates eye tracking. We investigate the use of non-rigid head fixation using a helmet that constrains only general head orientation and allows some freedom of movement. We present a MATLAB software solution to gather and process eye position data, present visual stimuli, interact with various devices, provide experimenter feedback and store data for offline analysis. Our software solution achieves excellent timing performance due to the use of data streaming, instead of the traditionally employed data storage mode, for processing analog eye position data. We present behavioral data from two monkeys, demonstrating that adequate performance levels can be achieved on a simple fixation paradigm, and show how performance depends on parameters such as fixation window size. Our findings suggest that non-rigid head restraint can be employed for behavioral training and testing on a variety of gaze-dependent visual paradigms, reducing the need for rigid head restraint systems for some applications. While developed for the macaque monkey, our system can, of course, work equally well for applications in human eye tracking where head constraint is undesirable. Copyright © 2014. Published by Elsevier B.V.
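The abstract credits the timing performance to streaming rather than store-then-process handling of the eye signal. Below is a hedged, language-agnostic sketch (in Python rather than the authors' MATLAB) of processing samples as they arrive and evaluating a fixation window on-line; window size, sampling rate, and required duration are illustrative.

```python
import math

def within_window(sample, target, radius_deg):
    """True if a gaze sample (x, y in deg) lies inside the fixation window."""
    return math.hypot(sample[0] - target[0], sample[1] - target[1]) <= radius_deg

def monitor_fixation(sample_stream, target, radius_deg=2.0, required_s=0.5, fs=500.0):
    """Process samples as they stream in; succeed once gaze has stayed
    inside the window for the required duration, fail on the first exit."""
    needed = int(required_s * fs)
    held = 0
    for sample in sample_stream:          # no buffering of the whole record
        if within_window(sample, target, radius_deg):
            held += 1
            if held >= needed:
                return True
        else:
            return False
    return False

# Illustrative stream: 300 on-target samples at 500 Hz (0.6 s of data).
stream = iter([(0.3, -0.2)] * 300)
print(monitor_fixation(stream, target=(0.0, 0.0)))
```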
Binocular lens tilt and decentration measurements in healthy subjects with phakic eyes.
Schaeffel, Frank
2008-05-01
Tilt and decentration of the natural crystalline lens affect optical quality of the foveal image. However, little is known about the distributions of these variables in healthy subjects with phakic eyes and about their correlations in both eyes. A simple, portable, easy-to-use, and partially automated device was developed to study lens tilt and decentration in both eyes of 11 healthy subjects with phakic eyes. The first, third, and fourth Purkinje images (P1, P3, P4) were visualized using a single infrared (IR) light-emitting diode (LED), a planar lens (F = 85 mm; f/number of 1.4), and an infrared sensitive analog video camera. Software was developed to mark pupil edges and positions of P1, P4, and P3 with the cursor of the computer mouse, for three different gaze positions, and an automated regression analysis determined the gaze position that superimposed the third and fourth Purkinje images, the gaze direction for which the lens was oriented perpendicularly to the axis of the IR LED. In this position, lens decentration was determined as the linear distance of the superimposed P3/P4 positions from the pupil center. Contrary to previous approaches, a short initial fixation of a green LED with known angular position calibrated the device as a gaze tracker, and no further positional information was necessary on fixation targets. Horizontal and vertical kappa, horizontal and vertical lens tilt, and vertical lens decentration were highly correlated in both eyes of the subjects, whereas horizontal decentration of the lens was not. There was a large variability of kappa (average horizontal kappa -1.63 degrees +/- 1.77 degrees [left eyes] and +2.07 degrees +/- 2.68 degrees [right eyes]; average vertical kappa +2.52 degrees +/- 1.30 degrees [left eyes] and +2.77 degrees +/- 1.65 degrees [right eyes]). Standard deviation from three repeated measurements ranged from 0.28 degrees to 0.51 degrees for kappa, 0.36 degrees to 0.91 degrees for horizontal lens tilt, and 0.36 degrees to 0.48 degrees for vertical lens tilt. Decentration was measured with standard deviations ranging from 0.02 mm to 0.05 mm. All lenses were found tilted to the temporal side with respect to the fixation axis (on average by 4.6 degrees ). They were also decentered downward with respect to the pupil center by approximately 0.3 mm. Lens tilts and positions could be conveniently measured with the described portable device, a video camera with a large lens. That the lenses were tilted to the temporal side in both eyes, even if corrected for kappa, was unexpected. That they were displaced downward with respect to the pupil center could be related to gravity.
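As a minimal sketch of the automated regression step described above, the P3-P4 separation measured at three gaze positions can be fit with a line whose zero crossing gives the gaze angle at which the two Purkinje images superimpose; decentration is then the distance of the merged image from the pupil center. All numbers below are illustrative, not data from the study.

```python
import numpy as np

# Gaze angles (deg) of the three fixation positions and the measured
# horizontal separation (mm) between the third and fourth Purkinje images.
gaze_deg = np.array([-10.0, 0.0, 10.0])
p3_p4_separation = np.array([0.62, 0.11, -0.40])   # illustrative values

# Linear regression: separation = slope * gaze + intercept.
slope, intercept = np.polyfit(gaze_deg, p3_p4_separation, 1)
gaze_superimposed = -intercept / slope   # angle where P3 and P4 coincide
print(f"lens-perpendicular gaze angle: {gaze_superimposed:.2f} deg")

# At that gaze position, decentration is the distance of the merged
# P3/P4 image from the pupil center (illustrative coordinates, mm).
p34 = np.array([0.05, -0.28])
pupil_center = np.array([0.00, 0.02])
print(f"lens decentration: {np.linalg.norm(p34 - pupil_center):.2f} mm")
```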
Are There Multiple Visual Short-Term Memory Stores?
Sligte, Ilja G.; Scholte, H. Steven; Lamme, Victor A. F.
2008-01-01
Background: Classic work on visual short-term memory (VSTM) suggests that people store a limited number of items for subsequent report. However, when human observers are cued to shift attention to one item in VSTM during retention, it seems as if there is a much larger representation, which keeps additional items in a more fragile VSTM store. Thus far, it is not clear whether the capacity of this fragile VSTM store indeed exceeds the traditional capacity limits of VSTM. The current experiments address this issue and explore the capacity, stability, and duration of fragile VSTM representations. Methodology/Principal Findings: We presented cues in a change-detection task either just after off-set of the memory array (iconic-cue), 1,000 ms after off-set of the memory array (retro-cue) or after on-set of the probe array (post-cue). We observed three stages in visual information processing: 1) iconic memory with unlimited capacity, 2) a fragile VSTM store lasting four seconds with a capacity that is at least a factor of two higher than that of 3) the robust and capacity-limited form of VSTM. Iconic memory seemed to depend on the strength of the positive after-image resulting from the memory display and was virtually absent under conditions of isoluminance or when intervening light masks were presented. This suggests that iconic memory is driven by prolonged retinal activation beyond stimulus duration. Fragile VSTM representations were not affected by light masks, but were completely overwritten by irrelevant pattern masks that spatially overlapped the memory array. Conclusions/Significance: We find that immediately after a stimulus has disappeared from view, subjects can still access information from iconic memory because they can see an after-image of the display. After that period, human observers can still access a substantial, but somewhat more limited amount of information from a high-capacity, but fragile VSTM that is overwritten when new items are presented to the eyes. What is left after that is the traditional VSTM store, with a limit of about four objects. We conclude that human observers store more sustained representations than is evident from standard change detection tasks and that these representations can be accessed at will. PMID:18301775
Are there multiple visual short-term memory stores?
Sligte, Ilja G; Scholte, H Steven; Lamme, Victor A F
2008-02-27
Classic work on visual short-term memory (VSTM) suggests that people store a limited number of items for subsequent report. However, when human observers are cued to shift attention to one item in VSTM during retention, it seems as if there is a much larger representation, which keeps additional items in a more fragile VSTM store. Thus far, it is not clear whether the capacity of this fragile VSTM store indeed exceeds the traditional capacity limits of VSTM. The current experiments address this issue and explore the capacity, stability, and duration of fragile VSTM representations. We presented cues in a change-detection task either just after off-set of the memory array (iconic-cue), 1,000 ms after off-set of the memory array (retro-cue) or after on-set of the probe array (post-cue). We observed three stages in visual information processing: 1) iconic memory with unlimited capacity, 2) a fragile VSTM store lasting four seconds with a capacity that is at least a factor of two higher than that of 3) the robust and capacity-limited form of VSTM. Iconic memory seemed to depend on the strength of the positive after-image resulting from the memory display and was virtually absent under conditions of isoluminance or when intervening light masks were presented. This suggests that iconic memory is driven by prolonged retinal activation beyond stimulus duration. Fragile VSTM representations were not affected by light masks, but were completely overwritten by irrelevant pattern masks that spatially overlapped the memory array. We find that immediately after a stimulus has disappeared from view, subjects can still access information from iconic memory because they can see an after-image of the display. After that period, human observers can still access a substantial, but somewhat more limited amount of information from a high-capacity, but fragile VSTM that is overwritten when new items are presented to the eyes. What is left after that is the traditional VSTM store, with a limit of about four objects. We conclude that human observers store more sustained representations than is evident from standard change detection tasks and that these representations can be accessed at will.
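Capacity in cued change-detection tasks is commonly summarized with Cowan's K; the abstract does not state the exact estimator used, so the sketch below is only a standard formulation for reference, with illustrative numbers.

```python
def cowans_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K capacity estimate for single-probe change detection:
    K = set size * (hit rate - false-alarm rate)."""
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative numbers (not data from the study): an 8-item display with
# 85% hits and 10% false alarms gives an estimated capacity of 6 items.
print(cowans_k(hit_rate=0.85, false_alarm_rate=0.10, set_size=8))
```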
Eying the future: Eye movement in past and future thinking.
El Haj, Mohamad; Lenoble, Quentin
2017-06-07
We investigated eye movement during past and future thinking. Participants were invited to retrieve past events and to imagine future events while their scan path was recorded by an eye-tracker. Past thinking triggered higher fixation counts (p < .05) and saccade counts (p < .05) than future thinking. Past and future thinking triggered a similar duration of fixations and saccades, as well as a similar amplitude of saccades. Interestingly, participants rated past thinking as more vivid than future thinking (p < .01). Therefore, the vividness of past thinking seems to be accompanied by an increased number of fixations and saccades. Fixations and saccades in past thinking can be interpreted as an attempt by the visual system to find (through saccades) and activate (through fixations) stored memory representations. The same interpretation can be applied to future thinking, as this ability requires activation of past experiences. However, future thinking triggers fewer fixations and saccades than past thinking: this may be due to its decreased demand on visual imagery, but could also be related to a potentially deleterious effect of eye movements on spatial imagery required for future thinking. Copyright © 2017 Elsevier Ltd. All rights reserved.
An integrated reweighting theory of perceptual learning
Dosher, Barbara Anne; Jeter, Pamela; Liu, Jiajuan; Lu, Zhong-Lin
2013-01-01
Improvements in performance on visual tasks due to practice are often specific to a retinal position or stimulus feature. Many researchers suggest that specific perceptual learning alters selective retinotopic representations in early visual analysis. However, transfer is almost always practically advantageous, and it does occur. If perceptual learning alters location-specific representations, how does it transfer to new locations? An integrated reweighting theory explains transfer over retinal locations by incorporating higher level location-independent representations into a multilevel learning system. Location transfer is mediated through location-independent representations, whereas stimulus feature transfer is determined by stimulus similarity at both location-specific and location-independent levels. Transfer to new locations/positions differs fundamentally from transfer to new stimuli. After substantial initial training on an orientation discrimination task, switches to a new location or position are compared with switches to new orientations in the same position, or switches of both. Position switches led to the highest degree of transfer, whereas orientation switches led to the highest levels of specificity. A computational model of integrated reweighting is developed and tested that incorporates the details of the stimuli and the experiment. Transfer to an identical orientation task in a new position is mediated via more broadly tuned location-invariant representations, whereas changing orientation in the same position invokes interference or independent learning of the new orientations at both levels, reflecting stimulus dissimilarity. Consistent with single-cell recording studies, perceptual learning alters the weighting of both early and midlevel representations of the visual system. PMID:23898204
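The sketch below is a deliberately minimal caricature of reweighting, not the published model: a decision unit reads out both a location-specific and a location-independent channel, a delta rule adjusts both sets of weights, and transfer to a new location is carried only by the shared, location-independent weights. All names, noise levels, and learning parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w_specific = np.zeros(10)      # weights on location-specific features
w_invariant = np.zeros(10)     # weights on location-independent features
lr = 0.05

def trial(stimulus_sign):
    """Noisy activations; the invariant channel is shared across retinal
    locations, the specific channel is tied to the trained location."""
    a_spec = stimulus_sign * np.ones(10) + rng.normal(0, 1.0, 10)
    a_inv = stimulus_sign * np.ones(10) + rng.normal(0, 2.0, 10)
    return a_spec, a_inv

for _ in range(500):                       # training at one location
    s = rng.choice([-1.0, 1.0])            # two orientations to discriminate
    a_spec, a_inv = trial(s)
    out = np.tanh(w_specific @ a_spec + w_invariant @ a_inv)
    err = s - out                          # delta rule on both layers
    w_specific += lr * err * a_spec
    w_invariant += lr * err * a_inv

# At a new location only the invariant weights carry over, so performance
# transfers partially rather than starting from scratch.
print("invariant-weight norm:", round(np.linalg.norm(w_invariant), 2))
print("specific-weight norm:", round(np.linalg.norm(w_specific), 2))
```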
Processing of Written Irony in Autism Spectrum Disorder: An Eye-Movement Study.
Au-Yeung, Sheena K; Kaakinen, Johanna K; Liversedge, Simon P; Benson, Valerie
2015-12-01
Previous research has suggested that individuals with Autism Spectrum Disorders (ASD) have difficulties understanding others' communicative intent and with using contextual information to correctly interpret irony. We recorded the eye movements of typically developing (TD) adults and ASD adults when they read statements that could either be interpreted as ironic or non-ironic depending on the context of the passage. Participants with ASD performed as well as TD controls in their comprehension accuracy for speakers' statements in both ironic and non-ironic conditions. Eye movement data showed that for both participant groups, total reading times were longer for the critical region containing the speaker's statement and a subsequent sentence restating the context in the ironic condition compared to the non-ironic condition. The results suggest that more effortful processing is required in both ASD and TD participants for ironic compared with literal non-ironic statements, and that individuals with ASD were able to use contextual information to infer a non-literal interpretation of ironic text. Individuals with ASD, however, spent more time overall than TD controls rereading the passages, to a similar degree across both ironic and non-ironic conditions, suggesting that they either take longer to construct a coherent discourse representation of the text, or that they take longer to make the decision that their representation of the text is reasonable based on their knowledge of the world. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
V4 activity predicts the strength of visual short-term memory representations.
Sligte, Ilja G; Scholte, H Steven; Lamme, Victor A F
2009-06-10
Recent studies have shown the existence of a form of visual memory that lies intermediate between iconic memory and visual short-term memory (VSTM), in terms of both capacity (up to 15 items) and the duration of the memory trace (up to 4 s). Because new visual objects readily overwrite this intermediate visual store, we believe that it reflects a weak form of VSTM with high capacity that exists alongside a strong but capacity-limited form of VSTM. In the present study, we isolated brain activity related to weak and strong VSTM representations using functional magnetic resonance imaging. We found that activity in visual cortical area V4 predicted the strength of VSTM representations; activity was low when there was no VSTM, medium when there was a weak VSTM representation regardless of whether this weak representation was available for report or not, and high when there was a strong VSTM representation. Altogether, this study suggests that the high-capacity yet weak VSTM store is represented in visual parts of the brain. Allegedly, only some of these VSTM traces are amplified by parietal and frontal regions and as a consequence reside in traditional or strong VSTM. The additional weak VSTM representations remain available for conscious access and report when attention is redirected to them yet are overwritten as soon as new visual stimuli hit the eyes.
Fonseca, Ana; Nazaré, Bárbara; Canavarro, Maria Cristina
2018-07-01
This study aimed to investigate the effect of one's attachment representations on one's and the partner's caregiving representations. According to attachment theory, individual differences in parenting and caregiving behaviours may be a function of parents' caregiving representations of the self as caregiver and of others as worthy of care, which are rooted in parents' attachment representations. Furthermore, the care-seeking and caregiving interactions that occur within the couple relationship may also shape individuals' caregiving representations. The sample comprised 286 cohabiting couples who were assessed during pregnancy (attachment representations) and one month post-birth (caregiving representations). Path analyses were used to examine effects among variables. Results showed that for mothers and fathers, their own more insecure attachment representations predicted their less positive caregiving representations of the self as caregiver and of others as worthy of help, as well as more self-focused motivations for caregiving. Moreover, fathers' attachment representations were found to predict mothers' caregiving representations of themselves as caregivers. Secure attachment representations of both members of the couple seem to be an inner resource promoting parents' positive representations of caregiving, and should be assessed and fostered during the transition to parenthood in both members of the couple.
More than Meets the Eye: Adult Education for Critical Consciousness in Luis Camnitzer's Art
ERIC Educational Resources Information Center
Zorrilla, Ana Carlina
2012-01-01
The purpose of this study was to explore the connection between art and adult education for critical consciousness through the conceptual art of Luis Camnitzer. The theoretical framework grounding this research was critical public pedagogy, influenced by both critical theory and Stuart Hall's systems of representation (1997). This framework…
Abstract Knowledge of Word Order by 19 Months: An Eye-Tracking Study
ERIC Educational Resources Information Center
Franck, Julie; Millotte, Severine; Posada, Andres; Rizzi, Luigi
2013-01-01
Word order is one of the earliest aspects of grammar that the child acquires, because her early utterances already respect the basic word order of the target language. However, the question of the nature of early syntactic representations is subject to debate. Approaches inspired by formal syntax assume that the head-complement order,…
Visual Representation of Eye Gaze Is Coded by a Nonopponent Multichannel System
ERIC Educational Resources Information Center
Calder, Andrew J.; Jenkins, Rob; Cassel, Anneli; Clifford, Colin W. G.
2008-01-01
To date, there is no functional account of the visual perception of gaze in humans. Previous work has demonstrated that left gaze and right gaze are represented by separate mechanisms. However, these data are consistent with either a multichannel system comprising separate channels for distinct gaze directions (e.g., left, direct, and right) or an…
Using the Dual-Target Cost to Explore the Nature of Search Target Representations
ERIC Educational Resources Information Center
Stroud, Michael J.; Menneer, Tamaryn; Cave, Kyle R.; Donnelly, Nick
2012-01-01
Eye movements were monitored to examine search efficiency and infer how color is mentally represented to guide search for multiple targets. Observers located a single color target very efficiently by fixating colors similar to the target. However, simultaneous search for 2 colors produced a dual-target cost. In addition, as the similarity between…
gPhysics--Using Smart Glasses for Head-Centered, Context-Aware Learning in Physics Experiments
ERIC Educational Resources Information Center
Kuhn, Jochen; Lukowicz, Paul; Hirth, Michael; Poxrucker, Andreas; Weppner, Jens; Younas, Junaid
2016-01-01
Smart Glasses such as Google Glass are mobile computers combining classical Head-Mounted Displays (HMD) with several sensors. Therefore, contact-free, sensor-based experiments can be linked with relating, near-eye presented multiple representations. We will present a first approach on how Smart Glasses can be used as an experimental tool for…
ERIC Educational Resources Information Center
Ruh, Nina; Rahm, Benjamin; Unterrainer, Josef M.; Weiller, Cornelius; Kaller, Christoph P.
2012-01-01
In a companion study, eye-movement analyses in the Tower of London task (TOL) revealed independent indicators of functionally separable cognitive processes during problem solving, with processes of building up an internal representation of the problem preceding actual planning processes. These results imply that processes of internalization and…
Can representational trajectory reveal the nature of an internal model of gravity?
De Sá Teixeira, Nuno; Hecht, Heiko
2014-05-01
The memory for the vanishing location of a horizontally moving target is usually displaced forward in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, this downward displacement has been shown to increase with time (representational trajectory). However, the degree to which different kinematic events change the temporal profile of these displacements remains to be determined. The present article attempts to fill this gap. In the first experiment, we replicate the finding that representational momentum for downward-moving targets is bigger than for upward motions, showing, moreover, that it increases rapidly during the first 300 ms, stabilizing afterward. This temporal profile, but not the increased error for descending targets, is shown to be disrupted when eye movements are not allowed. In the second experiment, we show that the downward drift with time emerges even for static targets. Finally, in the third experiment, we report an increased error for upward-moving targets, as compared with downward movements, when the display is compatible with a downward ego-motion by including vection cues. Thus, the errors in the direction of gravity are compatible with the perceived event and do not merely reflect a retinotopic bias. Overall, these results provide further evidence for an internal model of gravity in the visual representational system.
Text-to-phonemic transcription and parsing into mono-syllables of English text
NASA Astrophysics Data System (ADS)
Jusgir Mullick, Yugal; Agrawal, S. S.; Tayal, Smita; Goswami, Manisha
2004-05-01
The present paper describes a program that converts English text (entered through the normal computer keyboard) into its phonemic representation and then parses it into mono-syllables. For every letter, a set of context-based rules is defined in lexical order. A default rule is also defined separately for each letter. Beginning from the first letter of the word, the rules are checked and the most appropriate rule is applied to the letter to find its actual orthographic representation. If no matching rule is found, then the default rule is applied. The applied rule sets the next position to be analyzed. Proceeding in the same manner, the orthographic representation of each word can be found. For example, "reading" is represented as "rEdiNX" by applying the following rules: r --> r (advance 1 position); ead --> Ed (advance 3 positions); i --> i (advance 1 position); ng --> NX (advance 2 positions, i.e., end of word). The phonemic representations obtained from the above procedure are parsed to get a mono-syllabic representation for various combinations such as CVC, CVCC, CV, CVCVC, etc. For example, the above phonemic representation will be parsed as rEdiNX --> /rE/ /diNX/. This study is a part of developing a TTS for Indian English.
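As a compact sketch of the rule-then-default lookup described above, the snippet below implements a greedy left-to-right conversion using only the example rules for "reading"; the rule table structure and helper names are illustrative, and only the graphemes, phonemes, and advance counts come from the abstract's example.

```python
# Context rules per starting letter: (grapheme, phonemes, letters consumed).
RULES = {
    "r": [("r", "r", 1)],
    "e": [("ead", "Ed", 3)],
    "i": [("i", "i", 1)],
    "n": [("ng", "NX", 2)],
}
DEFAULTS = {"r": "r", "e": "e", "a": "a", "d": "d", "i": "i", "n": "n", "g": "g"}

def to_phonemes(word):
    """Greedy left-to-right conversion: apply the first matching context
    rule for the current letter, otherwise fall back to the default rule;
    each applied rule advances the position by the letters it consumed."""
    out, i = [], 0
    while i < len(word):
        for grapheme, phonemes, step in RULES.get(word[i], []):
            if word.startswith(grapheme, i):
                out.append(phonemes)
                i += step
                break
        else:
            out.append(DEFAULTS.get(word[i], word[i]))
            i += 1
    return "".join(out)

print(to_phonemes("reading"))   # -> rEdiNX, as in the example above
```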
NASA Technical Reports Server (NTRS)
Angelaki, D. E.; Hess, B. J.
1996-01-01
1. The dynamic properties of otolith-ocular reflexes elicited by sinusoidal linear acceleration along the three cardinal head axes were studied during off-vertical axis rotations in rhesus monkeys. As the head rotates in space at constant velocity about an off-vertical axis, otolith-ocular reflexes are elicited in response to the sinusoidally varying linear acceleration (gravity) components along the interaural, nasooccipital, or vertical head axis. Because the frequency of these sinusoidal stimuli is proportional to the velocity of rotation, rotation at low and moderately fast speeds allows the study of the mid- and low-frequency dynamics of these otolith-ocular reflexes. 2. Animals were rotated in complete darkness in the yaw, pitch, and roll planes at velocities ranging between 7.4 and 184 degrees/s. Accordingly, otolith-ocular reflexes (manifested as sinusoidal modulations in eye position and/or slow-phase eye velocity) were quantitatively studied for stimulus frequencies ranging between 0.02 and 0.51 Hz. During yaw and roll rotation, torsional, vertical, and horizontal slow-phase eye velocity was sinusoidally modulated as a function of head position. The amplitudes of these responses were symmetric for rotations in opposite directions. In contrast, mainly vertical slow-phase eye velocity was modulated during pitch rotation. This modulation was asymmetric for rotations in opposite directions. 3. Each of these response components in a given rotation plane could be associated with an otolith-ocular response vector whose sensitivity, temporal phase, and spatial orientation were estimated on the basis of the amplitude and phase of sinusoidal modulations during both directions of rotation. Based on this analysis, which was performed either for slow-phase eye velocity alone or for total eye excursion (including both slow and fast eye movements), two distinct response patterns were observed: 1) response vectors with pronounced dynamics and spatial/temporal properties that could be characterized as the low-frequency range of "translational" otolith-ocular reflexes; and 2) response vectors associated with an eye position modulation in phase with head position ("tilt" otolith-ocular reflexes). 4. The responses associated with two otolith-ocular vectors with pronounced dynamics consisted of horizontal eye movements evoked as a function of gravity along the interaural axis and vertical eye movements elicited as a function of gravity along the vertical head axis. Both responses were characterized by a slow-phase eye velocity sensitivity that increased three- to five-fold and large phase changes of approximately 100-180 degrees between 0.02 and 0.51 Hz. These dynamic properties could suggest nontraditional temporal processing in utriculoocular and sacculoocular pathways, possibly involving spatiotemporal otolith-ocular interactions. 5. The two otolith-ocular vectors associated with eye position responses in phase with head position (tilt otolith-ocular reflexes) consisted of torsional eye movements in response to gravity along the interaural axis, and vertical eye movements in response to gravity along the nasooccipital head axis. These otolith-ocular responses did not result from an otolithic effect on slow eye movements alone. Particularly at high frequencies (i.e., high speed rotations), saccades were responsible for most of the modulation of torsional and vertical eye position, which was relatively large (on average +/- 8-10 degrees/g) and remained independent of frequency.
Such reflex dynamics can be simulated by a direct coupling of primary otolith afferent inputs to the oculomotor plant. (ABSTRACT TRUNCATED).
Deep Gaze Velocity Analysis During Mammographic Reading for Biometric Identification of Radiologists
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Hong-Jun; Alamudun, Folami T.; Hudson, Kathy
Several studies have confirmed that the gaze velocity of the human eye can be utilized as a behavioral biometric or personalized biomarker. In this study, we leverage the local feature representation capacity of convolutional neural networks (CNNs) for eye gaze velocity analysis as the basis for biometric identification of radiologists performing breast cancer screening. Using gaze data collected from 10 radiologists reading 100 mammograms of various diagnoses, we compared the performance of a CNN-based classification algorithm with two deep learning classifiers, deep neural network and deep belief network, and a previously presented hidden Markov model classifier. The study showed that the CNN classifier is superior compared to alternative classification methods based on macro F1-scores derived from 10-fold cross-validation experiments. Our results further support the efficacy of eye gaze velocity as a biometric identifier of medical imaging experts.
Deep Gaze Velocity Analysis During Mammographic Reading for Biometric Identification of Radiologists
Yoon, Hong-Jun; Alamudun, Folami T.; Hudson, Kathy; ...
2018-01-24
Several studies have confirmed that the gaze velocity of the human eye can be utilized as a behavioral biometric or personalized biomarker. In this study, we leverage the local feature representation capacity of convolutional neural networks (CNNs) for eye gaze velocity analysis as the basis for biometric identification of radiologists performing breast cancer screening. Using gaze data collected from 10 radiologists reading 100 mammograms of various diagnoses, we compared the performance of a CNN-based classification algorithm with two deep learning classifiers, deep neural network and deep belief network, and a previously presented hidden Markov model classifier. The study showed that the CNN classifier is superior compared to alternative classification methods based on macro F1-scores derived from 10-fold cross-validation experiments. Our results further support the efficacy of eye gaze velocity as a biometric identifier of medical imaging experts.
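The network architecture and training details are not given in the abstract; the sketch below is only a generic 1-D CNN over gaze-velocity sequences with a macro-F1 evaluation, to make the setup concrete. Layer sizes, sequence length, and the randomly generated data are illustrative assumptions, not the study's configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score

n_readers, seq_len = 10, 500
# Illustrative data: gaze-velocity sequences (deg/s), one reader label each.
x = np.random.randn(400, seq_len, 1).astype("float32")
y = np.random.randint(0, n_readers, size=400)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_readers, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)

pred = model.predict(x, verbose=0).argmax(axis=1)
print("macro F1:", round(f1_score(y, pred, average="macro"), 3))
```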
Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung
2014-05-15
We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we take this as the gaze position with a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors.
Hypothesized eye movements of neurolinguistic programming: a statistical artifact.
Farmer, A; Rooney, R; Cunningham, J R
1985-12-01
Neurolinguistic programming's hypothesized eye-movements were measured independently from videotapes of 30 subjects, aged 15 to 76 yr., who were asked to recall visual pictures, recorded audio sounds, and textural objects. Chi-square tests indicated that subjects' responses were significantly different from those predicted. When the chi-square comparisons were weighted by the number of eye positions assigned to each modality (3 visual, 3 auditory, 1 kinesthetic), subjects' responses did not differ significantly from the expected pattern. These data indicate that the eye-movement hypothesis may represent randomly occurring rather than sensory-modality-related positions.
Evidence for highly selective neuronal tuning to whole words in the "visual word form area".
Glezer, Laurie S; Jiang, Xiong; Riesenhuber, Maximilian
2009-04-30
Theories of reading have posited the existence of a neural representation coding for whole real words (i.e., an orthographic lexicon), but experimental support for such a representation has proved elusive. Using fMRI rapid adaptation techniques, we provide evidence that the human left ventral occipitotemporal cortex (specifically the "visual word form area," VWFA) contains a representation based on neurons highly selective for individual real words, in contrast to current theories that posit a sublexical representation in the VWFA.
A sensorimotor account of vision and visual consciousness.
O'Regan, J K; Noë, A
2001-10-01
Many current neurophysiological, psychophysical, and psychological approaches to vision rest on the idea that when we see, the brain produces an internal representation of the world. The activation of this internal representation is assumed to give rise to the experience of seeing. The problem with this kind of approach is that it leaves unexplained how the existence of such a detailed internal representation might produce visual consciousness. An alternative proposal is made here. We propose that seeing is a way of acting. It is a particular way of exploring the environment. Activity in internal representations does not generate the experience of seeing. The outside world serves as its own, external, representation. The experience of seeing occurs when the organism masters what we call the governing laws of sensorimotor contingency. The advantage of this approach is that it provides a natural and principled way of accounting for visual consciousness, and for the differences in the perceived quality of sensory experience in the different sensory modalities. Several lines of empirical evidence are brought forward in support of the theory, in particular: evidence from experiments in sensorimotor adaptation, visual "filling in," visual stability despite eye movements, change blindness, sensory substitution, and color perception.
Eye Size and Set in Small-Bodied Fossil Primates: A Three-Dimensional Method.
Rosenberger, Alfred L; Smith, Tim D; DeLeon, Valerie B; Burrows, Anne M; Schenck, Robert; Halenar, Lauren B
2016-12-01
We introduce a new method to geometrically reconstruct eye volume and placement in small-bodied primates based on the three-dimensional contour of the intraorbital surface. We validate it using seven species of living primates, with dry skulls and wet dissections, and test its application on seven species of Paleogene fossils of interest. The method performs well even when the orbit is damaged and incomplete, lacking the postorbital bar and represented only by the orbital floor. Eye volume is an important quantity for anatomic and metabolic reasons, which due to differences in eye set, or position within (or outside) the bony orbit, can be underestimated in living and fossil forms when calculated from aperture diameter. Our Ectopic Index quantifies how much the globe's volume protrudes anteriorly from the aperture. Lemur, Notharctus and Rooneyia resemble anthropoids, with deeply recessed eyes protruding 11%-13%. Galago and Tarsius are the other extreme, at 47%-56%. We argue that a laterally oriented aperture has little to do with line-of-sight in euprimates, as large ectopic eyes can position the cornea to enable a directly forward viewing axis, and soft tissue positions the eyes facing forward in megachiropteran bats, which have unenclosed, open eye sockets. The size and set of virtual eyes reconstructed from 3D cranial models confirm that eyes were large to hypertrophic in Hemiacodon, Necrolemur, Microchoerus, Pseudoloris and Shoshonius, but eye size in Rooneyia may have been underestimated by measuring the aperture, as in Aotus. Anat Rec, 299:1671-1689, 2016. © 2016 Wiley Periodicals, Inc.
The Effect of the Crystalline Lens on Central Vault After Implantable Collamer Lens Implantation.
Qi, Meng-Ying; Chen, Qian; Zeng, Qing-Yan
2017-08-01
To identify associations between crystalline lens-related factors and central vault after Implantable Collamer Lens (ICL) (Staar Surgical, Monrovia, CA) implantation. This retrospective clinical study included 320 eyes from 186 patients who underwent ICL implantation surgery. At 1 year after surgery, the central vault was measured using anterior segment optical coherence tomography. Preoperative anterior chamber depth, lens thickness, lens position (lens position = anterior chamber depth + 1/2 lens thickness), and vault were analyzed to investigate the effects of lens-related factors on postoperative vault. The mean vault was 513 ± 215 µm at 1 year after surgery. Vault was positively correlated with preoperative anterior chamber depth (r = 0.495, P < .001) and lens position (r = 0.371, P < .001), but negatively correlated with lens thickness (r = -0.262, P < .001). Eyes with vaults of less than 250 µm had shallower anterior chambers, thicker lenses, and smaller lens position than eyes in the other two vault groups (which had vaults ≥ 250 µm) (P < .001). Eyes with both anterior chamber depth less than 3.1 mm and lens position less than 5.1 mm had greatly reduced vaults (P < .001). The crystalline lens could have an important influence on postoperative vault. Eyes with a shallower anterior chamber and a forward lens position will have lower vaults. [J Refract Surg. 2017;33(8):519-523.]. Copyright 2017, SLACK Incorporated.
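As a trivial worked example of the lens-position formula and the low-vault thresholds reported above, the snippet below plugs in illustrative biometry values; the function names and the sample numbers are not from the study.

```python
def lens_position(acd_mm, lens_thickness_mm):
    """Lens position = anterior chamber depth + half the lens thickness."""
    return acd_mm + 0.5 * lens_thickness_mm

def low_vault_risk(acd_mm, lens_thickness_mm):
    """Flag the combination reported to have greatly reduced vaults:
    ACD < 3.1 mm together with lens position < 5.1 mm."""
    return acd_mm < 3.1 and lens_position(acd_mm, lens_thickness_mm) < 5.1

acd, lt = 2.95, 3.9                      # illustrative biometry (mm)
print("lens position:", lens_position(acd, lt), "mm")   # 4.9 mm
print("low-vault risk:", low_vault_risk(acd, lt))        # True
```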
Mian, Shahzad I; Li, Amy Y; Dutta, Satavisha; Musch, David C; Shtein, Roni M
2009-12-01
To determine whether corneal sensation and dry-eye signs and symptoms after myopic laser in situ keratomileusis (LASIK) surgery with a femtosecond laser are affected by varying hinge position, hinge angle, or flap thickness. University-based academic practice, Ann Arbor, Michigan, USA. This prospective randomized contralateral-eye study evaluated eyes after bilateral myopic LASIK with a femtosecond laser (IntraLase). Superior and temporal hinge positions, 45-degree and 90-degree hinge angles, and 100 microm and 130 microm corneal flap thicknesses were compared. Postoperative follow-up at 1 week and 1, 3, 6, and 12 months included central Cochet-Bonnet esthesiometry, the Ocular Surface Disease Index questionnaire, a Schirmer test with anesthesia, tear breakup time (TBUT), corneal fluorescein staining, and conjunctival lissamine green staining. The study evaluated 190 consecutive eyes (95 patients). Corneal sensation was reduced at all postoperative visits, with improvement over 12 months (P<.001). There was no difference in corneal sensation between the different hinge positions, angles, or flap thicknesses at any time point. The overall ocular surface disease index score was increased at 1 week, 1 month, and 3 months (P<.0001, P<.0001, and P = .046, respectively). The percentage of patients with a TBUT longer than 10 seconds was significantly lower at 1 week and 1 month (P<.0001). Dry-eye syndrome after myopic LASIK with a femtosecond laser was mild and improved after 3 months. Corneal flap hinge position, hinge angle, and thickness had no effect on corneal sensation or dry-eye syndrome.
Peripheral Refraction, Peripheral Eye Length, and Retinal Shape in Myopia.
Verkicharla, Pavan K; Suheimat, Marwan; Schmid, Katrina L; Atchison, David A
2016-09-01
To investigate how peripheral refraction and peripheral eye length are related to retinal shape. Relative peripheral refraction (RPR) and relative peripheral eye length (RPEL) were determined in 36 young adults (M +0.75D to -5.25D) along horizontal and vertical visual field meridians out to ±35° and ±30°, respectively. Retinal shape was determined in terms of vertex radius of curvature Rv, asphericity Q, and equivalent radius of curvature REq using a partial coherence interferometry method involving peripheral eye lengths and model eye raytracing. Second-order polynomial fits were applied to RPR and RPEL as functions of visual field position. Linear regressions were determined for the fits' second order coefficients and for retinal shape estimates as functions of central spherical refraction. Linear regressions investigated relationships of RPR and RPEL with retinal shape estimates. Peripheral refraction, peripheral eye lengths, and retinal shapes were significantly affected by meridian and refraction. More positive (hyperopic) relative peripheral refraction, more negative RPELs, and steeper retinas were found along the horizontal than along the vertical meridian and in myopes than in emmetropes. RPR and RPEL, as represented by their second-order fit coefficients, correlated significantly with retinal shape represented by REq. Effects of meridian and refraction on RPR and RPEL patterns are consistent with effects on retinal shape. Patterns derived from one of these predict the others: more positive (hyperopic) RPR predicts more negative RPEL and steeper retinas, more negative RPEL predicts more positive relative peripheral refraction and steeper retinas, and steeper retinas derived from peripheral eye lengths predict more positive RPR.
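As a hedged sketch of the curve-fitting step described above (not the study's analysis code), a second-order polynomial can be fit to relative peripheral refraction across the visual field, and its quadratic coefficient then related to central refraction or retinal shape across eyes. The field angles and refraction values below are synthetic.

```python
import numpy as np

# Illustrative relative peripheral refraction (D) across the horizontal
# field (deg); values are synthetic, not data from the study.
field_deg = np.array([-35, -20, -10, 0, 10, 20, 35], dtype=float)
rpr_d = np.array([1.10, 0.42, 0.12, 0.00, 0.09, 0.38, 1.05])

# Second-order polynomial fit: RPR ~ c2 * angle^2 + c1 * angle + c0.
c2, c1, c0 = np.polyfit(field_deg, rpr_d, 2)
print(f"second-order coefficient c2 = {c2:.5f} D/deg^2")

# Across eyes, the per-eye c2 coefficients can then be regressed against
# central refraction or against equivalent retinal radius, e.g. with
# np.polyfit(central_refraction, c2_values, 1).
```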
Frames of reference for gaze saccades evoked during stimulation of lateral intraparietal cortex.
Constantin, A G; Wang, H; Martinez-Trujillo, J C; Crawford, J D
2007-08-01
Previous studies suggest that stimulation of lateral intraparietal cortex (LIP) evokes saccadic eye movements toward eye- or head-fixed goals, whereas most single-unit studies suggest that LIP uses an eye-fixed frame with eye-position modulations. The goal of our study was to determine the reference frame for gaze shifts evoked during LIP stimulation in head-unrestrained monkeys. Two macaques (M1 and M2) were implanted with recording chambers over the right intraparietal sulcus and with search coils for recording three-dimensional eye and head movements. The LIP region was microstimulated using pulse trains of 300 Hz, 100-150 microA, and 200 ms. Eighty-five putative LIP sites in M1 and 194 putative sites in M2 were used in our quantitative analysis throughout this study. Average amplitude of the stimulation-evoked gaze shifts was 8.67 degrees for M1 and 7.97 degrees for M2 with very small head movements. When these gaze-shift trajectories were rotated into three coordinate frames (eye, head, and body), gaze endpoint distribution for all sites was most convergent to a common point when plotted in eye coordinates. Across all sites, the eye-centered model provided a significantly better fit compared with the head, body, or fixed-vector models (where the latter model signifies no modulation of the gaze trajectory as a function of initial gaze position). Moreover, the probability of evoking a gaze shift from any one particular position was modulated by the current gaze direction (independent of saccade direction). These results provide causal evidence that the motor commands from LIP encode gaze command in eye-fixed coordinates but are also subtly modulated by initial gaze position.
Nishimura, Mayu; Maurer, Daphne; Gao, Xiaoqing
2009-07-01
We explored differences in the mental representation of facial identity between 8-year-olds and adults. The 8-year-olds and adults made similarity judgments of a homogeneous set of faces (individual hair cues removed) using an "odd-man-out" paradigm. Multidimensional scaling (MDS) analyses were performed to represent perceived similarity of faces in a multidimensional space. Five dimensions accounted optimally for the judgments of both children and adults, with similar local clustering of faces. However, the fit of the MDS solutions was better for adults, in part because children's responses were more variable. More children relied predominantly on a single dimension, namely eye color, whereas adults appeared to use multiple dimensions for each judgment. The pattern of findings suggests that children's mental representation of faces has a structure similar to that of adults but that children's judgments are influenced less consistently by that overall structure.
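The MDS analysis itself is not specified beyond its use on similarity judgments; the sketch below is a generic multidimensional scaling of a precomputed dissimilarity matrix into a five-dimensional space, with made-up data and an illustrative matrix size.

```python
import numpy as np
from sklearn.manifold import MDS

# Illustrative dissimilarity matrix for 6 faces, e.g. derived from how
# often each pair was judged most different in the odd-man-out task.
rng = np.random.default_rng(1)
d = rng.random((6, 6))
dissimilarity = (d + d.T) / 2
np.fill_diagonal(dissimilarity, 0.0)

# Embed the faces in a 5-dimensional space, as in the reported solutions.
mds = MDS(n_components=5, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print("stress:", round(mds.stress_, 3))
print("face coordinates shape:", coords.shape)
```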
Carvalho, Luis Alberto
2005-02-01
Our main goal in this work was to develop an artificial neural network (NN) that could classify specific types of corneal shapes using Zernike coefficients as input. Other authors have implemented successful NN systems in the past and have demonstrated their efficiency using different parameters. Our claim is that, given the increasing popularity of Zernike polynomials among the eye care community, this may be an interesting choice to add complementary value and precision to existing methods. By using a simple and well-documented corneal surface representation scheme, which relies on corneal elevation information, one can generate simple NN input parameters that are independent of curvature definition and that are also efficient. We have used the Matlab Neural Network Toolbox (MathWorks, Natick, MA) to implement a three-layer feed-forward NN with 15 inputs and 5 outputs. A database from an EyeSys System 2000 (EyeSys Vision, Houston, TX) videokeratograph installed at the Escola Paulista de Medicina-Sao Paulo was used. This database contained an unknown number of corneal types. From this database, two specialists selected 80 corneas that could be clearly classified into five distinct categories: (1) normal, (2) with-the-rule astigmatism, (3) against-the-rule astigmatism, (4) keratoconus, and (5) post-laser-assisted in situ keratomileusis. The corneal height (SAG) information of the 80 data files was fit with the first 15 Vision Science and its Applications (VSIA) standard Zernike coefficients, which were individually used to feed the 15 neurons of the input layer. The five output neurons were associated with the five typical corneal shapes. A group of 40 cases was randomly selected from the larger group of 80 corneas and used as the training set. The NN responses were statistically analyzed in terms of sensitivity [true positive/(true positive + false negative)], specificity [true negative/(true negative + false positive)], and precision [(true positive + true negative)/total number of cases]. The mean values for these parameters were, respectively, 78.75%, 97.81%, and 94%. Although we have used a relatively small training and testing set, the results presented here should be considered promising. They are certainly an indication of the potential of Zernike polynomials as reliable parameters, at least in the cases presented here, as input data for artificial intelligence automation of the diagnosis process of videokeratography examinations. This technique should facilitate the implementation and add value to the classification methods already available. We also discuss briefly certain special properties of Zernike polynomials that are what we think make them suitable as NN inputs for this type of application.
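The abstract states the formulas used to score the network; the sketch below simply restates them on illustrative counts (not the study's confusion matrix) for reference.

```python
def sensitivity(tp, fn):
    return tp / (tp + fn)       # true positive / (true positive + false negative)

def specificity(tn, fp):
    return tn / (tn + fp)       # true negative / (true negative + false positive)

def precision(tp, tn, total):
    return (tp + tn) / total    # (true positive + true negative) / all cases

# Illustrative counts for one corneal class in a 40-eye test set.
tp, fn, tn, fp = 7, 1, 30, 2
total = tp + fn + tn + fp
print(round(sensitivity(tp, fn), 3),
      round(specificity(tn, fp), 3),
      round(precision(tp, tn, total), 3))
```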
Principi, Sara; Ginjaume, Mercè; Duch, Maria Amor; Sánchez, Roberto M; Fernández, Jose M; Vano, Eliseo
2015-04-01
The equivalent dose limit for the eye lens for occupational exposure recommended by the ICRP has been reduced to 20 mSv y(-1) averaged over defined periods of 5 y, with no single year exceeding 50 mSv. Compliance with this new requirement may not be easy in some workplaces, such as interventional radiology and cardiology. The aim of this study is to evaluate different possible approaches in order to have a good estimate of the eye lens dose during interventional procedures. Measurements were performed with an X-ray system Philips Allura FD-10, using a PMMA phantom to simulate the patient scattered radiation and a Rando phantom to simulate the cardiologist. Thermoluminescence (TL) whole-body and TL eye lens dosemeters together with Philips DoseAware active dosemeters were located on different positions of the Rando phantom to estimate the eye lens dose in typical cardiology procedures. The results show that, for the studied conditions, all of the analysed dosemeter positions are suitable for eye lens dose assessment. However, the centre of the thyroid collar and the left ear position provide a better estimate. Furthermore, in practice, improper use of the ceiling-suspended screen can produce partial protection of some parts of the body, and thus large differences between the measured doses and the actual exposure of the eye could arise if the dosemeter is not situated close to the eye. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Tabernero, Juan; Artal, Pablo
2012-02-01
To determine the optimum position to center a small-aperture corneal inlay and the effect of residual defocus in the surgical eye to maximize depth of focus. Laboratorio de Óptica, Universidad de Murcia, Murcia, Spain. Cohort study. Personalized eye models were built using actual data (corneal topography, eye length, ocular aberrations, and eye alignment). A small aperture 1.6 mm in diameter was placed at the corneal plane in each model. The monochromatic and polychromatic Strehl ratios were calculated as a function of the pinhole position. Different residual defocus values were also incorporated into the models, and the through-focus Strehl ratios were calculated. Sixteen eye models were built. For most subjects, the optimum location of the aperture for distance vision was close to the corneal reflex position. For a given optimized centration of the aperture, the best compromise of depth of focus was obtained when the eyes had some residual myopic defocus (range -0.75 to -1.00 diopter [D]). Strehl ratio values were over 0.1 for far distance, which led to visual acuities better than 20/20. The depth of focus was 2.50 D with a mean near visual acuity of Jaeger 1 or better. In eyes with little astigmatism and aberrations, the optimum centration of the small aperture was near the corneal reflex position. To improve optical outcomes with the inlay, some small residual myopia and correction of corneal astigmatism might be required. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
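As a rough illustration of how a through-focus Strehl ratio for a small, decentred aperture might be computed, the following toy Fourier-optics sketch uses a plain circular aperture and pure paraxial defocus. The grid, wavelength, decentration, and defocus values are assumptions; this is not the personalized eye-model pipeline used in the study.

```python
# Toy Fourier-optics sketch of a through-focus Strehl ratio for a small,
# decentred aperture. Grid, wavelength, decentration and defocus values are
# illustrative; this is not the personalized eye-model pipeline of the study.
import numpy as np

N = 256                      # samples across the pupil grid
pupil_width = 6.0            # width of the sampled pupil plane (mm)
aperture_diam = 1.6          # small-aperture diameter (mm), as in the abstract
wavelength = 0.555e-3        # wavelength in mm (555 nm)

x = np.linspace(-pupil_width / 2, pupil_width / 2, N)
X, Y = np.meshgrid(x, x)

def strehl(defocus_D, centre=(0.0, 0.0)):
    """Peak of the defocused PSF divided by the peak of the defocus-free PSF."""
    aperture = np.hypot(X - centre[0], Y - centre[1]) <= aperture_diam / 2
    # Paraxial defocus wavefront error: W[mm] = D * r[mm]^2 / 2000.
    W = defocus_D * (X**2 + Y**2) / 2000.0
    field = aperture * np.exp(2j * np.pi * W / wavelength)
    psf = np.abs(np.fft.fft2(field))**2
    psf_ref = np.abs(np.fft.fft2(aperture.astype(float)))**2
    return psf.max() / psf_ref.max()

# Through-focus curve for an aperture decentred 0.5 mm from the pupil axis.
for D in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"defocus {D:+.2f} D -> Strehl {strehl(D, centre=(0.5, 0.0)):.3f}")
```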
Mosaic: a position-effect variegation eye-color mutant in the mosquito Anopheles gambiae.
Benedict, M Q; McNitt, L M; Cornel, A J; Collins, F H
2000-01-01
The Mosaic (Mos) mutation, isolated in the F1 of 60Co-irradiated mosquitoes, confers variegated eye color to third and fourth instar larvae, pupae, and adults of the mosquito Anopheles gambiae. Mos is recessive in wild pink eye (p+) individuals, but is dominant and confers areas of wild-type pigment in mutant pink eye backgrounds. Mos is located 14.4 cM from pink eye on the X chromosome and is associated with a duplication of division 2B euchromatin that has been inserted into division 6 heterochromatin. Various combinations of Mos, pink eye alleles, and the autosomal mutation red eye were produced. In all cases, the darker pigmented regions of the eye in Mos individuals show the phenotypic interactions expected if the phenotype of those regions is due to expression of a p+ allele. Expression of Mos is suppressed by rearing larvae at 32 degrees C relative to 22 degrees C. All of these characteristics are consistent with Mos being a duplicated wild copy of the pink eye gene undergoing position-effect variegation.
Beyond Natural Numbers: Negative Number Representation in Parietal Cortex
Blair, Kristen P.; Rosenberg-Lee, Miriam; Tsang, Jessica M.; Schwartz, Daniel L.; Menon, Vinod
2012-01-01
Unlike natural numbers, negative numbers do not have natural physical referents. How does the brain represent such abstract mathematical concepts? Two competing hypotheses regarding representational systems for negative numbers are a rule-based model, in which symbolic rules are applied to negative numbers to translate them into positive numbers when assessing magnitudes, and an expanded magnitude model, in which negative numbers have a distinct magnitude representation. Using an event-related functional magnetic resonance imaging design, we examined brain responses in 22 adults while they performed magnitude comparisons of negative and positive numbers that were quantitatively near (difference <4) or far apart (difference >6). Reaction times (RTs) for negative numbers were slower than for positive numbers, and both showed a distance effect whereby near pairs took longer to compare. A network of parietal, frontal, and occipital regions was differentially engaged by negative numbers. Specifically, compared to positive numbers, negative number processing resulted in greater activation bilaterally in intraparietal sulcus (IPS), middle frontal gyrus, and inferior lateral occipital cortex. Representational similarity analysis revealed that neural responses in the IPS were more differentiated among positive numbers than among negative numbers, and greater differentiation among negative numbers was associated with faster RTs. Our findings indicate that despite negative numbers engaging the IPS more strongly, the underlying neural representations are less distinct than those of positive numbers. We discuss our findings in the context of the two theoretical models of negative number processing and demonstrate how multivariate approaches can provide novel insights into abstract number representation. PMID:22363276
Beyond natural numbers: negative number representation in parietal cortex.
Blair, Kristen P; Rosenberg-Lee, Miriam; Tsang, Jessica M; Schwartz, Daniel L; Menon, Vinod
2012-01-01
Unlike natural numbers, negative numbers do not have natural physical referents. How does the brain represent such abstract mathematical concepts? Two competing hypotheses regarding representational systems for negative numbers are a rule-based model, in which symbolic rules are applied to negative numbers to translate them into positive numbers when assessing magnitudes, and an expanded magnitude model, in which negative numbers have a distinct magnitude representation. Using an event-related functional magnetic resonance imaging design, we examined brain responses in 22 adults while they performed magnitude comparisons of negative and positive numbers that were quantitatively near (difference <4) or far apart (difference >6). Reaction times (RTs) for negative numbers were slower than for positive numbers, and both showed a distance effect whereby near pairs took longer to compare. A network of parietal, frontal, and occipital regions was differentially engaged by negative numbers. Specifically, compared to positive numbers, negative number processing resulted in greater activation bilaterally in intraparietal sulcus (IPS), middle frontal gyrus, and inferior lateral occipital cortex. Representational similarity analysis revealed that neural responses in the IPS were more differentiated among positive numbers than among negative numbers, and greater differentiation among negative numbers was associated with faster RTs. Our findings indicate that despite negative numbers engaging the IPS more strongly, the underlying neural representations are less distinct than those of positive numbers. We discuss our findings in the context of the two theoretical models of negative number processing and demonstrate how multivariate approaches can provide novel insights into abstract number representation.
Azulay, Haim; Striem, Ella; Amedi, Amir
2009-05-01
People tend to close their eyes when trying to retrieve an event or a visual image from memory. However, the brain mechanisms behind this phenomenon remain poorly understood. Recently, we showed that during visual mental imagery, auditory areas show a much more robust deactivation than during visual perception. Here we ask whether this is a special case of a more general phenomenon involving retrieval of intrinsic, internally stored information, which would result in crossmodal deactivations in other sensory cortices which are irrelevant to the task at hand. To test this hypothesis, a group of 9 sighted individuals were scanned while performing a memory retrieval task for highly abstract words (i.e., with low imaginability scores). We also scanned a group of 10 congenitally blind individuals, who by definition do not have any visual imagery per se. In sighted subjects, both auditory and visual areas were robustly deactivated during memory retrieval, whereas in the blind the auditory cortex was deactivated while visual areas, shown previously to be relevant for this task, presented a positive BOLD signal. These results suggest that deactivation may be most prominent in task-irrelevant sensory cortices whenever there is a need for retrieval or manipulation of internally stored representations. Thus, there is a task-dependent balance of activation and deactivation that might allow maximization of resources and filtering out of non-relevant information to enable allocation of attention to the required task. Furthermore, these results suggest that the balance between positive and negative BOLD might be crucial to our understanding of a large variety of intrinsic and extrinsic tasks including high-level cognitive functions, sensory processing and multisensory integration.
Spatial updating in human parietal cortex
NASA Technical Reports Server (NTRS)
Merriam, Elisha P.; Genovese, Christopher R.; Colby, Carol L.
2003-01-01
Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.
Insights into numerical cognition: considering eye-fixations in number processing and arithmetic.
Mock, J; Huber, S; Klein, E; Moeller, K
2016-05-01
Considering eye-fixation behavior is standard in reading research to investigate underlying cognitive processes. However, in numerical cognition research eye-tracking is used less often and less systematically. Nevertheless, we identified over 40 studies on this topic from the last 40 years with an increase in eye-tracking studies on numerical cognition during the last decade. Here, we review and discuss these empirical studies to evaluate the added value of eye-tracking for the investigation of number processing. Our literature review revealed that the way eye-fixation behavior is considered in numerical cognition research ranges from investigating basic perceptual aspects of processing non-symbolic and symbolic numbers, through assessing the common representational space of numbers and space, to evaluating the influence of characteristics of the base-10 place-value structure of Arabic numbers and executive control on number processing. Apart from basic results such as reading times of numbers increasing with their magnitude, studies revealed that number processing can influence domain-general processes such as attention shifting, but also the other way round. Domain-general processes such as cognitive control were found to affect number processing. In summary, eye-fixation behavior allows for new insights into both domain-specific and domain-general processes involved in number processing. Based thereon, a processing model of the temporal dynamics of numerical cognition is postulated, which distinguishes an early stage of stimulus-driven bottom-up processing from later more top-down controlled stages. Furthermore, perspectives for eye-tracking research in numerical cognition are discussed to emphasize the potential of this methodology for advancing our understanding of numerical cognition.
ERIC Educational Resources Information Center
Sato, Atsushi; Itakura, Shoji
2013-01-01
In everyday social life, we predict others' actions in response to our own actions. Subsequently, on the basis of these predictions, we control our actions to attain desired social outcomes and/or adjust our actions to accommodate the anticipated actions of the others. Representation of the bidirectional association between our and others'…
Anticipating Intentional Actions: The Effect of Eye Gaze Direction on the Judgment of Head Rotation
ERIC Educational Resources Information Center
Hudson, Matthew; Liu, Chang Hong; Jellema, Tjeerd
2009-01-01
Using a representational momentum paradigm, this study investigated the hypothesis that judgments of how far another agent's head has rotated are influenced by the perceived gaze direction of the head. Participants observed a video-clip of a face rotating 60° towards them starting from the left or right profile view. The gaze direction of…
ERIC Educational Resources Information Center
Dias, Maria do Rosário; Duque, Alexandra Freches
2016-01-01
Childhood overweight and obesity have been increasing over recent years and, more than ever, we are being called to act, whether as clinicians, parents or educators. The aim of this empirical research is to assess children's cognitive and emotionally internalized mental representation of a "Preferred" and "Healthy Meal," using…
ERIC Educational Resources Information Center
Hollingworth, Andrew; Richard, Ashleigh M.; Luck, Steven J.
2008-01-01
Visual short-term memory (VSTM) has received intensive study over the past decade, with research focused on VSTM capacity and representational format. Yet, the function of VSTM in human cognition is not well understood. Here, the authors demonstrate that VSTM plays an important role in the control of saccadic eye movements. Intelligent human…
Portable dynamic fundus instrument
NASA Technical Reports Server (NTRS)
Taylor, Gerald R. (Inventor); Meehan, Richard T. (Inventor); Hunter, Norwood R. (Inventor); Caputo, Michael P. (Inventor); Gibson, C. Robert (Inventor)
1992-01-01
A portable diagnostic image analysis instrument is disclosed for retinal funduscopy in which an eye fundus image is optically processed by a lens system to a charge coupled device (CCD) which produces recordable and viewable output data and is simultaneously viewable on an electronic view finder. The fundus image is processed to develop a representation of the vessel or vessels from the output data.
Predictions about Bisymmetry and Cross-Modal Matches from Global Theories of Subjective Intensities
ERIC Educational Resources Information Center
Luce, R. Duncan
2012-01-01
The article first summarizes the assumptions of Luce (2004, 2008) for inherently binary (2-D) stimuli (e.g., the ears and eyes) that lead to a "p-additive," order-preserving psychophysical representation. Next, a somewhat parallel theory for unary (1-D) signals is developed for intensity attributes such as linear extent, vibration to finger, and…
ERIC Educational Resources Information Center
Harshman, Jordan; Bretz, Stacey Lowery; Yezierski, Ellen
2013-01-01
Adequately accommodating students who are blind or low-vision (BLV) in the sciences has been a focus of recent inquiry, but much of the research to date has addressed broad accommodations rather than devising and testing specific teaching strategies that respond to the unique challenges of BLV students learning chemistry. This case study seeks to…
In Their Own Eyes and Voices: The Value of an Executive MBA Program According to Participants
ERIC Educational Resources Information Center
Han, Jian; Liang, Neng
2015-01-01
The purpose of this study was to more effectively understand the learning experiences of Executive Master of Business Administration (EMBA) students. We asked 330 EMBA students to draw a graphic representation of their life and reflect on their EMBA experiences. We then applied the Zaltman metaphor elicitation technique to conduct in-depth…
Students' Playful Tactics: Teaching at the Intersection of New Media and the Official Curriculum
ERIC Educational Resources Information Center
Rust, Julie
2015-01-01
By examining the ways in which high school students in two different English classes take up virtual self-representation tactics in school-based social networking sites, this article explores how young people carefully juggle the digital identities they adopt for the eyes of both peers and teachers. The data reveals that the students'…
Hollingworth, Andrew; Richard, Ashleigh M; Luck, Steven J
2008-02-01
Visual short-term memory (VSTM) has received intensive study over the past decade, with research focused on VSTM capacity and representational format. Yet, the function of VSTM in human cognition is not well understood. Here, the authors demonstrate that VSTM plays an important role in the control of saccadic eye movements. Intelligent human behavior depends on directing the eyes to goal-relevant objects in the world, yet saccades are very often inaccurate and require correction. The authors hypothesized that VSTM is used to remember the features of the current saccade target so that it can be rapidly reacquired after an errant saccade, a task faced by the visual system thousands of times each day. In 4 experiments, memory-based gaze correction was accurate, fast, automatic, and largely unconscious. In addition, a concurrent VSTM load interfered with memory-based gaze correction, but a verbal short-term memory load did not. These findings demonstrate that VSTM plays a direct role in a fundamentally important aspect of visually guided behavior, and they suggest the existence of previously unknown links between VSTM representations and the oculomotor system. PsycINFO Database Record (c) 2008 APA, all rights reserved.
Distinct eye movement patterns enhance dynamic visual acuity.
Palidis, Dimitrios J; Wyder-Hodge, Pearson A; Fooken, Jolande; Spering, Miriam
2017-01-01
Dynamic visual acuity (DVA) is the ability to resolve fine spatial detail in dynamic objects during head fixation, or in static objects during head or body rotation. This ability is important for many activities such as ball sports, and a close relation has been shown between DVA and sports expertise. DVA tasks involve eye movements, yet, it is unclear which aspects of eye movements contribute to successful performance. Here we examined the relation between DVA and the kinematics of smooth pursuit and saccadic eye movements in a cohort of 23 varsity baseball players. In a computerized dynamic-object DVA test, observers reported the location of the gap in a small Landolt-C ring moving at various speeds while eye movements were recorded. Smooth pursuit kinematics (eye latency, acceleration, velocity gain, position error) and the direction and amplitude of saccadic eye movements were linked to perceptual performance. Results reveal that distinct eye movement patterns (minimizing eye position error, tracking smoothly, and inhibiting reverse saccades) were related to dynamic visual acuity. The close link between eye movement quality and DVA performance has important implications for the development of perceptual training programs to improve DVA.
Distinct eye movement patterns enhance dynamic visual acuity
Palidis, Dimitrios J.; Wyder-Hodge, Pearson A.; Fooken, Jolande; Spering, Miriam
2017-01-01
Dynamic visual acuity (DVA) is the ability to resolve fine spatial detail in dynamic objects during head fixation, or in static objects during head or body rotation. This ability is important for many activities such as ball sports, and a close relation has been shown between DVA and sports expertise. DVA tasks involve eye movements, yet, it is unclear which aspects of eye movements contribute to successful performance. Here we examined the relation between DVA and the kinematics of smooth pursuit and saccadic eye movements in a cohort of 23 varsity baseball players. In a computerized dynamic-object DVA test, observers reported the location of the gap in a small Landolt-C ring moving at various speeds while eye movements were recorded. Smooth pursuit kinematics—eye latency, acceleration, velocity gain, position error—and the direction and amplitude of saccadic eye movements were linked to perceptual performance. Results reveal that distinct eye movement patterns—minimizing eye position error, tracking smoothly, and inhibiting reverse saccades—were related to dynamic visual acuity. The close link between eye movement quality and DVA performance has important implications for the development of perceptual training programs to improve DVA. PMID:28187157
NASA Technical Reports Server (NTRS)
Huebner, W. P.; Paloski, W. H.; Reschke, M. F.; Bloomberg, J. J.
1995-01-01
Neglecting the eccentric position of the eyes in the head can lead to erroneous interpretation of ocular motor data, particularly for near targets. We discuss the geometric effects that eye eccentricity has on the processing of target-directed eye and head movement data, and we highlight two approaches to processing and interpreting such data. The first approach involves determining the true position of the target with respect to the location of the eyes in space for evaluating the efficacy of gaze, and it allows calculation of retinal error directly from measured eye, head, and target data. The second approach effectively eliminates eye eccentricity effects by adjusting measured eye movement data to yield equivalent responses relative to a specified reference location (such as the center of head rotation). This latter technique can be used to standardize measured eye movement signals, enabling waveforms collected under different experimental conditions to be directly compared, both with the measured target signals and with each other. Mathematical relationships describing these approaches are presented for horizontal and vertical rotations, for both tangential and circumferential display screens, and efforts are made to describe the sensitivity of parameter variations on the calculated results.
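A minimal two-dimensional sketch of the eccentricity effect discussed above: the direction of a near target computed from the actual eye position differs from the direction computed from the centre of head rotation. The distances below are illustrative, not values from the paper.

```python
# Toy 2-D (horizontal-plane) sketch of the eye-eccentricity effect: the
# direction of a near target differs when computed from the eye's actual
# position versus from the centre of head rotation. Distances are illustrative.
import numpy as np

eye_offset = np.array([0.03, 0.09])    # eye re: head-rotation centre (m): 3 cm lateral, 9 cm forward
target = np.array([0.10, 0.40])        # target 10 cm right, 40 cm ahead of the head centre (m)

def azimuth_deg(vec):
    """Horizontal direction of a vector, in degrees (0 = straight ahead)."""
    return np.degrees(np.arctan2(vec[0], vec[1]))

angle_from_head_centre = azimuth_deg(target)
angle_from_eye = azimuth_deg(target - eye_offset)

print(f"target azimuth from head centre: {angle_from_head_centre:5.2f} deg")
print(f"target azimuth from the eye:     {angle_from_eye:5.2f} deg")
print(f"difference (grows as the target gets nearer): "
      f"{angle_from_eye - angle_from_head_centre:+.2f} deg")
```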
Vollmann, Manja; Scharloo, Margreet; Langguth, Berthold; Kalkouskaya, Natallia; Salewski, Christel
2013-01-01
Both dispositional optimism and illness representations are related to psychological health in chronic patients. In a group of chronic tinnitus sufferers, the interplay between these two variables was examined. Specifically, it was tested to what extent the relationship between dispositional optimism and depression is mediated by more positive illness representations. The study had a cross-sectional design. One hundred and eighteen patients diagnosed with chronic tinnitus completed questionnaires assessing optimism (Life Orientation Test-Revised [LOT-R]), illness representations (Illness Perceptions Questionnaire-Revised [IPQ-R]) and depression (Hospital Anxiety and Depression Scale [HADS]). Correlation analysis showed that optimism was associated with more positive illness representations and lower levels of depression. Simple mediation analyses revealed that the relationship between optimism and depression was partially mediated by the illness representation dimensions consequences, treatment control, coherence, emotional representations and internal causes. A multiple mediation analysis indicated that the total mediation effect of illness representations is particularly due to the dimension consequences. Optimism influences depression in tinnitus patients both directly and indirectly. The indirect effect indicates that optimism is associated with more positive tinnitus-specific illness representations which, in turn, are related to less depression. These findings contribute to a better understanding of the interplay between generalised expectancies, illness-specific perceptions and psychological adjustment to medical conditions.
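A minimal sketch of the product-of-coefficients logic behind a simple mediation analysis of the kind reported here (optimism -> illness representation -> depression), run on synthetic data; the coefficients, variable names, and estimation approach are illustrative assumptions, not the study's data or software.

```python
# Minimal product-of-coefficients sketch of a simple mediation on synthetic data.
# Variable names and generated values are illustrative, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
n = 118                                          # sample size matching the abstract
optimism = rng.normal(size=n)
consequences = -0.5 * optimism + rng.normal(scale=0.8, size=n)             # mediator
depression = 0.6 * consequences - 0.2 * optimism + rng.normal(scale=0.8, size=n)

def ols_slopes(y, *predictors):
    """Slopes from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols_slopes(consequences, optimism)[0]                     # path a: X -> M
b, c_prime = ols_slopes(depression, consequences, optimism)   # paths b and c'
indirect = a * b                                              # mediated (indirect) effect

print(f"a = {a:+.2f}, b = {b:+.2f}, c' = {c_prime:+.2f}")
print(f"indirect effect a*b = {indirect:+.2f}")
```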
Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K
2008-01-01
A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
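A minimal sketch of the gaze-centred camera control idea described above: pan/tilt commands proportional to the gaze error pull the gaze-point region back to the image centre. The gains, dead zone, and interface are assumptions, not the actual Aesop control code.

```python
# Minimal sketch of gaze-centred camera control: pan/tilt commands move the
# camera so the user's gaze point drifts back to the image centre.
# Gains, dead zone and sign conventions are assumptions for illustration.
def camera_step(gaze_px, image_size=(640, 480), gain=0.005, dead_zone_px=40):
    """Return a (pan, tilt) velocity command from the current gaze point (pixels)."""
    cx, cy = image_size[0] / 2, image_size[1] / 2
    ex, ey = gaze_px[0] - cx, gaze_px[1] - cy        # gaze error from image centre
    if ex**2 + ey**2 < dead_zone_px**2:              # ignore small fixational jitter
        return 0.0, 0.0
    return -gain * ex, -gain * ey                    # proportional pan/tilt command

# Example: gaze rests in the upper-right quadrant, so the camera is commanded to
# re-centre the region of interest (sign convention assumed).
print(camera_step((520, 150)))
```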
NASA Astrophysics Data System (ADS)
Söderberg, Per G.; Sandberg-Melin, Camilla
2018-02-01
The present study aimed to elucidate the angular distribution of the Pigment epithelium central limit-Inner limit of the retina Minimal Distance measured over 2π radians in the frontal plane (PIMD-2π) in young healthy eyes. Both healthy eyes of 16 subjects aged [20;30[ years were included. In each eye, a volume of the optic nerve head (ONH) was captured three times with a TOPCON DRI OCT Triton (Japan). Each volume renders a representation of the ONH 2.8 mm along the sagittal axis resolved in 993 steps, 6 mm along the frontal axis resolved in 512 steps and 6 mm along the longitudinal axis resolved in 256 steps. The captured volumes were transferred to custom-made software for semiautomatic segmentation of PIMD around the circumference of the ONH. The phases of iterated volumes were calibrated with cross-correlation. It was found that PIMD-2π expresses a double hump with a small maximum superiorly, a larger maximum inferiorly, and minima in between. The measurements indicated that there is no difference in PIMD-2π between genders or between the dominant and non-dominant eye within subjects. The variation between eyes within subjects is of the same order as the variation among subjects. The variation among volumes within an eye is substantially lower.
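The phase calibration by cross-correlation mentioned above can be sketched as follows for repeated angular profiles; the double-hump test profile and the FFT-based circular correlation are illustrative assumptions, not the custom software used in the study.

```python
# Toy sketch of phase calibration by circular cross-correlation for repeated
# angular profiles (e.g. a distance measured around the ONH circumference).
# The profile shape and noise level are illustrative assumptions.
import numpy as np

n = 360                                   # one sample per degree around the ONH
theta = np.deg2rad(np.arange(n))
profile = 1.0 + 0.2 * np.cos(2 * theta)   # a double-hump-like angular profile

true_shift = 17                           # unknown phase offset of a repeat scan
repeat = np.roll(profile, true_shift) + np.random.default_rng(2).normal(0, 0.02, n)

# Circular cross-correlation via the FFT; its argmax estimates the phase offset.
xcorr = np.fft.ifft(np.fft.fft(repeat) * np.conj(np.fft.fft(profile))).real
estimated_shift = int(np.argmax(xcorr))

aligned = np.roll(repeat, -estimated_shift)   # bring the repeat back into phase
print("estimated phase offset (samples):", estimated_shift)
```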
Eye Movements of Patients with Tunnel Vision while Walking
Vargas-Martín, Fernando; Peli, Eli
2006-01-01
Purpose To determine how severe peripheral field loss (PFL) affects the dispersion of eye movements relative to the head, while walking in real environments. This information should help to better define the visual field and clearance requirements for head-mounted mobility visual aids. Methods Eye positions relative to the head were recorded in five retinitis pigmentosa patients with less than 15° of visual field and three normally-sighted people, each walking in varied environments for more than 30 minutes. The eye position recorder was made portable by modifying a head-mounted ISCAN system. Custom data processing was implemented to reject unreliable data. Sample standard deviations of eye position (dispersion) were compared across subject groups and environments. Results PFL patients exhibited narrower horizontal eye position dispersions than normally-sighted subjects (9.4° vs. 14.2°, p < 0.0001) and PFL patients’ vertical dispersions were smaller when walking indoors than outdoors (8.2° vs. 10.3°, p = 0.048). Conclusions When walking, the PFL patients did not increase their scanning eye movements to compensate for missing peripheral vision information. Their horizontal scanning was actually reduced, possibly because saccadic amplitude is limited by a lack of peripheral stimulation. The results suggest that a field-of-view as wide as 40° may be needed for closed (immersive) head-mounted mobility aids, while a much narrower display, perhaps as narrow as 20°, might be sufficient with an open design. PMID:17122116
Eye movements of patients with tunnel vision while walking.
Vargas-Martín, Fernando; Peli, Eli
2006-12-01
To determine how severe peripheral field loss (PFL) affects the dispersion of eye movements relative to the head in patients walking in real environments. This information should help to define the visual field and clearance requirements for head-mounted mobility visual aids. Eye positions relative to the head were recorded in five patients with retinitis pigmentosa who had less than 15° of visual field and in three normally sighted people, each walking in varied environments for more than 30 minutes. The eye-position recorder was made portable by modifying a head-mounted system (ISCAN, Burlington, MA). Custom data processing was implemented to reject unreliable data. Sample standard deviations of eye position (dispersion) were compared across subject groups and environments. The patients with PFL exhibited narrower horizontal eye-position dispersions than did the normally sighted subjects (9.4° vs. 14.2°, P < 0.0001), and the vertical dispersions of patients with PFL were smaller when they were walking indoors than when walking outdoors (8.2° vs. 10.3°; P = 0.048). When walking, the patients with PFL did not increase their scanning eye movements to compensate for missing peripheral vision information. Their horizontal scanning was actually reduced, possibly because of lack of peripheral stimulation. The results suggest that a field of view as wide as 40° may be needed for closed (immersive) head-mounted mobility aids, whereas a much narrower display, perhaps as narrow as 20°, may be sufficient with an open design.
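A minimal sketch of the dispersion measure used here, the sample standard deviation of eye-in-head position over a walking record; the data below are simulated placeholders, not recordings from the study.

```python
# Minimal sketch of the dispersion measure: the sample standard deviation of
# horizontal eye-in-head position over a walking record (simulated placeholder data).
import numpy as np

rng = np.random.default_rng(3)
horizontal_eye_deg = rng.normal(loc=0.0, scale=9.4, size=5000)   # simulated samples

dispersion = np.std(horizontal_eye_deg, ddof=1)   # sample (n-1) standard deviation
print(f"horizontal dispersion: {dispersion:.1f} deg")
```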
Fan, X; Wu, L L; Xiao, G G; Ma, Z Z; Liu, F
2018-03-11
Objective: To analyze the potential of frequency-doubling technology perimetry (FDP) for diagnosing open-angle glaucoma (OAG) in perimetrically normal eyes of OAG patients diagnosed with standard automated perimetry (SAP) and to identify factors relating abnormalities on FDP to visual field loss on SAP. Methods: A prospective cohort study. Sixty-eight eyes of 68 OAG patients visiting the ophthalmic clinic of Peking University Third Hospital between November 2003 and October 2007 [32 primary open-angle glaucoma patients and 36 normal tension glaucoma patients, 32 males and 36 females, with an average age of (59±13) years] with unilateral field loss detected by SAP (Octopus101 tG2 program) were examined with the FDP N-30 threshold program (Humphrey Instruments) at baseline. The patients were divided into FDP-positive and FDP-negative groups based on the FDP results, and the perimetrically normal eyes were followed with a series of SAP examinations over 8 years. During the follow-up, the difference in the rate of conversion to abnormal SAP results between the two groups was analyzed. Differences between "converters" and "non-converters" of SAP tests in the FDP-positive group, such as the cup-to-disk ratio and glaucomatous optic neuropathy rate, were also compared with the independent-sample t test or Wilcoxon two-sample test for continuous variable data and the χ² test or Fisher exact test for categorical data and rates. Results: Forty-eight perimetrically normal eyes of 48 participants had complete data and a qualifying follow-up. Baseline FDP results were positive in 33 eyes and negative in 15 eyes. Of the eyes with positive FDP results, 22 eyes developed abnormal SAP results after 4.0 to 90.0 months (median 14.5 months), whereas none of the eyes with negative FDP results developed abnormal SAP results. For perimetrically normal eyes in the FDP-positive group, "converters" showed a greater cup-to-disk ratio (0.73±0.09 vs. 0.63±0.14, Wilcoxon two-sample test, P = 0.011) and more eyes with glaucomatous optic neuropathy (19/22 vs. 4/11, Fisher exact test, P = 0.006). Conclusions: In perimetrically normal eyes of OAG patients, FDP could detect visual field loss and, to some extent, predict future visual field loss on SAP. Severity of glaucomatous optic neuropathy at baseline is related to converting from abnormalities on FDP to visual field loss on SAP. (Chin J Ophthalmol, 2018, 54: 177-183).
Rauen, Matthew P; Goins, Kenneth M; Sutphin, John E; Kitzmann, Anna S; Schmidt, Gregory A; Wagoner, Michael D
2012-04-01
To determine if the lamellar cut of donor tissue for endothelial keratoplasty (EK) by an eye bank facility is associated with a change in the prevalence of positive bacterial or fungal donor rim cultures after corneal transplantation. A retrospective review was conducted of bacterial and fungal cultures of donor rims used for corneal transplantation at a tertiary eye care center from January 1, 2003, to December 31, 2008, with tissue provided by a single eye bank. The cases were divided into 2 groups. Group 1 ("no-cut") included keratoplasty procedures in which a lamellar cut was not performed. Group 2 ("precut") included EK procedures in which a 4-hour period of prewarming of tissue followed by a lamellar cut was performed in the eye bank before tissue delivery to the operating surgeon. There were 351 donor rim cultures in group 1 and 278 in group 2. Bacterial cultures were positive in 30 donor rims (8.5%) in group 1 and 13 (4.7%) in group 2 (P = 0.058). Positive bacterial cultures were not associated with any postoperative infections. Fungal cultures were positive in 8 donor rims (2.3%) in group 1 and 7 (2.5%) in group 2 (P = 1.0). Positive fungal cultures were associated with 2 cases (13.3%) of postoperative fungal infections. Corneal donor tissue can be precut for EK by trained eye bank personnel without an increased risk of bacterial or fungal contamination.
Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung
2014-01-01
We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we take this position as the gaze position with a higher probability of being correct. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with or without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors. PMID:24834910
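The gaze-refinement step described above (choosing the point of maximal edge strength within a circle set by the gaze-estimation error) can be sketched as follows; the image, the radius, and the Sobel-based edge measure are assumptions for illustration, not the authors' implementation.

```python
# Sketch of gaze refinement: within a circular region around the estimated gaze
# point (radius set by the gaze-estimation error), pick the pixel with maximal
# edge strength as the refined gaze position. Edge measure and radius are assumed.
import numpy as np
from scipy import ndimage

def refine_gaze(image, gaze_xy, radius_px):
    """Return the (x, y) point of maximal edge strength near the estimated gaze."""
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    edge_strength = np.hypot(gx, gy)

    ys, xs = np.indices(image.shape)
    inside = (xs - gaze_xy[0])**2 + (ys - gaze_xy[1])**2 <= radius_px**2
    edge_strength[~inside] = -np.inf               # ignore pixels outside the circle

    y_best, x_best = np.unravel_index(np.argmax(edge_strength), image.shape)
    return x_best, y_best

# Example on a synthetic frame with a single vertical edge near the gaze estimate.
frame = np.zeros((480, 640)); frame[:, 330:] = 1.0
print(refine_gaze(frame, gaze_xy=(320, 240), radius_px=25))
```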
Omura, Y; Losco, B M; Takeshige, C
1993-01-01
Using the Bi-Digital O-Ring Test electromagnetic resonance phenomena between 2 identical substances, first the pineal gland representational (rep.) areas were localized on the 5 different locations on the surface of the head using microscope slides of the pineal gland or Melatonin (while the eyes are closed) as a reference control substance. The 3 pineal rep. areas along the mid-line of the head always showed two lobes connected as a "Dumbbell" shape, with one round or oval area at each side of the mid-line. From each side of the head, anterior and superior to the ear, it appeared in a shape resembling the side view of a pineal gland. When both eyes were open, Melatonin, Norepinephrine (NE), and Acetylcholine (ACh) markedly decreased, while Serotonin, Dopamine, and GABA increased significantly in the outer part of the pineal gland rep. areas. When both eyes were closed, Melatonin, NE and ACh increased markedly, with marked decrease in Serotonin, Dopamine and GABA in the outer part of the pineal gland rep. areas. However, in the inner core of the pineal gland rep. area, an opposite response was found. Thus, the pineal gland has 2 main lobes, and functionally each lobe seems to have two concentric areas with an inverse relationship, i.e., a "Functional Cortex" area and a "Functional Core" area. The biochemical changes between the cortex and the core are in an inverse relationship. Melatonin was also found in the S-A node & right side of normal heart when the eyes were closed. When the eyes were open, Melatonin was found in the left side of the heart, as well as the salivary glands, stomach, colon, etc. While both eyes were closed, when a weak light beam was exposed at different parts of the body, such as any part of the upper and lower extremities, Melatonin, NE, and ACh decreased, with an increase in Serotonin, GABA and Dopamine only in the functional cortices of the pineal gland lobes on the same side of the body. Even when both eyes were open, if a very weak narrow beam of light was exposed on any part of the body, Melatonin, NE and ACh decreased, while Serotonin, Dopamine and GABA increased compared with pre-exposure level in only the functional cortices of the pineal gland lobes in the same side of the light exposure, and the opposite effect was also observed in the functional core of the light exposed side only.(ABSTRACT TRUNCATED AT 400 WORDS)
The perception of heading during eye movements
NASA Technical Reports Server (NTRS)
Royden, Constance S.; Banks, Martin S.; Crowell, James A.
1992-01-01
Warren and Hannon (1988, 1990), while studying the perception of heading during eye movements, concluded that people do not require extraretinal information to judge heading with eye/head movements present. Here, heading judgments are examined at higher, more typical eye movement velocities than the extremely slow tracking eye movements used by Warren and Hannon. It is found that people require extraretinal information about eye position to perceive heading accurately under many viewing conditions.
Sexual orientation and education politics: gay and lesbian representation in American schools.
Wald, Kenneth D; Rienzo, Barbara A; Button, James W
2002-01-01
In what has sometimes provoked a "culture war" over America's schools, gays and lesbians have sought an expanded voice in the making of education policy. This paper explores the factors that promote gay representation on school boards, how this variable in turn influences gay representation in both administrative and teaching positions, and how all three forms of gay representation relate to school board policies regarding sexual orientation education. Three of the four models drawn from the social movement literature help to explain gay school board representation. In a manner similar to other minority groups, gay representation on school boards directly or indirectly promotes the appointment of gays to administrative and teaching positions and the adoption of policies that address the problems faced by gay and lesbian students in the public schools.
Effect of gravity on vertical eye position.
Pierrot-Deseilligny, C
2009-05-01
There is growing evidence that gravity markedly influences vertical eye position and movements. A new model for the organization of brainstem upgaze pathways is presented in this review. The crossing ventral tegmental tract (CVTT) could be the efferent tract of an "antigravitational" pathway terminating at the elevator muscle motoneurons in the third nerve nuclei and comprising, upstream, the superior vestibular nucleus and y-group, the flocculus, and the otoliths. This pathway functions in parallel to the medial longitudinal fasciculus pathways, which control vertical eye movements made to compensate for all vertical head movements and may also comprise the "gravitational" vestibular pathways, involved in the central reflection of the gravity effect. The CVTT could provide the upgaze system with the supplement of tonic activity required to counteract the gravity effect expressed in the gravitational pathway, being permanently modulated according to the static positions of the head (i.e., the instantaneous gravity vector) between a maximal activity in the upright position and a minimal activity in horizontal positions. Different types of arguments support this new model. The permanent influence of gravity on vertical eye position is strongly suggested by the vertical slow phases and nystagmus observed after rapid changes in hypo- or hypergravity. The chin-beating nystagmus, existing in normal subjects with their head in the upside-down position, suggests that gravity is not compensated for in the downgaze system. Upbeat nystagmus due to brainstem lesions, most likely affecting the CVTT circuitry, is improved when the head is in the horizontal position, suggesting that this circuitry is involved in the counteraction of gravity between the upright and horizontal positions of the head. In downbeat nystagmus due to floccular damage, in which a permanent hyperexcitation of the CVTT could exist, a marked influence of static positions of the head is also observed. Finally, the strongest argument supporting a marked role of gravity in vertical eye position is that the eye movement alterations observed in the main, typical physiological and pathological conditions are precisely those that would be expected from a direct effect of gravity on the eyeballs, with, moreover, no single alternative interpretation existing so far that could account for all these different types of findings.
Are Eyes a Mirror of the Soul? What Eye Wrinkles Reveal about a Horse’s Emotional State
Hintze, Sara; Smith, Samantha; Patt, Antonia; Bachmann, Iris; Würbel, Hanno
2016-01-01
Finding valid indicators of emotional states is one of the biggest challenges in animal welfare science. Here, we investigated in horses whether variation in the expression of eye wrinkles caused by contraction of the inner eyebrow raiser reflects emotional valence. By confronting horses with positive and negative conditions, we aimed to induce positive and negative emotional states, hypothesising that positive emotions would reduce whereas negative emotions would increase eye wrinkle expression. Sixteen horses were individually exposed in a balanced order to two positive (grooming, food anticipation) and two negative conditions (food competition, waving a plastic bag). Each condition lasted for 60 seconds and was preceded by a 60 second control phase. Throughout both phases, pictures of the eyes were taken, and for each horse four pictures per condition and phase were randomly selected. Pictures were scored in random order and by two experimenters blind to condition and phase for six outcome measures: qualitative impression, eyelid shape, markedness of the wrinkles, presence of eye white, number of wrinkles, and the angle between the line through the eyeball and the highest wrinkle. The angle decreased during grooming and increased during food competition compared to control phases, whereas the two phases did not differ during food anticipation and the plastic bag condition. No effects on the other outcome measures were detected. Taken together, we have defined a set of measures to assess eye wrinkle expression reliably, of which one measure was affected by the conditions the horses were exposed to. Variation in eye wrinkle expression might provide valuable information on horse welfare but further validation of specific measures across different conditions is needed. PMID:27732647
NASA Astrophysics Data System (ADS)
Jian, Yu-Cin; Wu, Chao-Jung
2015-02-01
We investigated strategies used by readers when reading a science article with a diagram and assessed whether semantic and spatial representations were constructed while reading the diagram. Seventy-one undergraduate participants read a scientific article while their eye movements were tracked and then completed a reading comprehension test. Our results showed that the text-diagram referencing strategy was commonly used. However, some readers adopted other reading strategies, such as reading the diagram or text first. We found that all readers who had referred to the diagram spent roughly the same amount of time reading and performed equally well. However, some participants who ignored the diagram performed more poorly on questions that tested understanding of basic facts. This result indicates that dual coding theory may explain the phenomenon. Eye movement patterns indicated that at least some readers had extracted semantic information about the scientific terms when first looking at the diagram. Readers who read the scientific terms on the diagram first tended to spend less time looking at the same terms in the text, which they read afterwards. Moreover, clear diagrams can help readers process both semantic and spatial information, thereby facilitating an overall understanding of the article. In addition, although text-first and diagram-first readers spent similar total reading time on the text and diagram parts of the article, respectively, text-first readers made significantly fewer saccades on the text and the diagram than diagram-first readers. This result might be explained by text-directed reading.
Evolution and Optimality of Similar Neural Mechanisms for Perception and Action during Search
Zhang, Sheng; Eckstein, Miguel P.
2010-01-01
A prevailing theory proposes that the brain's two visual pathways, the ventral and dorsal, lead to differing visual processing and world representations for conscious perception than those for action. Others have claimed that perception and action share much of their visual processing. But which of these two neural architectures is favored by evolution? Successful visual search is life-critical and here we investigate the evolution and optimality of neural mechanisms mediating perception and eye movement actions for visual search in natural images. We implement an approximation to the ideal Bayesian searcher with two separate processing streams, one controlling the eye movements and the other stream determining the perceptual search decisions. We virtually evolved the neural mechanisms of the searchers' two separate pathways built from linear combinations of primary visual cortex receptive fields (V1) by making the simulated individuals' probability of survival depend on the perceptual accuracy finding targets in cluttered backgrounds. We find that for a variety of targets, backgrounds, and dependence of target detectability on retinal eccentricity, the mechanisms of the searchers' two processing streams converge to similar representations showing that mismatches in the mechanisms for perception and eye movements lead to suboptimal search. Three exceptions which resulted in partial or no convergence were a case of an organism for which the targets are equally detectable across the retina, an organism with sufficient time to foveate all possible target locations, and a strict two-pathway model with no interconnections and differential pre-filtering based on parvocellular and magnocellular lateral geniculate cell properties. Thus, similar neural mechanisms for perception and eye movement actions during search are optimal and should be expected from the effects of natural selection on an organism with limited time to search for food that is not equi-detectable across its retina and interconnected perception and action neural pathways. PMID:20838589
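As a toy illustration of the perceptual-decision side of an ideal Bayesian searcher, the sketch below accumulates noisy template responses, with detectability falling off with retinal eccentricity, into a posterior over candidate target locations. All values and the fall-off function are assumptions; this is not the model implemented in the study.

```python
# Toy ideal-observer sketch: combine noisy template responses across fixations,
# with detectability (d') falling off with eccentricity, into a posterior over
# candidate target locations. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_locations = 8
true_target = 3
locations = np.linspace(-10, 10, n_locations)          # eccentricities (deg) of candidates

def detectability(fixation_deg):
    """Assumed fall-off of target detectability (d') with distance from fixation."""
    return 3.0 * np.exp(-np.abs(locations - fixation_deg) / 6.0)

log_posterior = np.zeros(n_locations)                   # flat prior over locations
for fixation in (0.0, -5.0, 5.0):                       # a short sequence of fixations
    d = detectability(fixation)
    # Template responses: mean d' at the target location, 0 elsewhere, unit noise.
    responses = rng.normal(loc=np.where(np.arange(n_locations) == true_target, d, 0.0))
    # Log-likelihood of the responses if the target were at each candidate location.
    for candidate in range(n_locations):
        mean = np.where(np.arange(n_locations) == candidate, d, 0.0)
        log_posterior[candidate] += -0.5 * np.sum((responses - mean) ** 2)

posterior = np.exp(log_posterior - log_posterior.max())
posterior /= posterior.sum()
print("most probable target location:", int(np.argmax(posterior)), "(true:", true_target, ")")
```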
Students’ mathematical representations on secondary school in solving trigonometric problems
NASA Astrophysics Data System (ADS)
Istadi; Kusmayadi, T. A.; Sujadi, I.
2017-06-01
This research aimed to analyse secondary school students’ mathematical representations in solving trigonometric problems. This research used a qualitative method. The participants were 4 students with high knowledge competence, selected from 20 12th-grade natural-science students at SMAN-1 Kota Besi, Central Kalimantan. Data validation was carried out using time triangulation. Data analysis used Huberman and Miles stages. The results showed that their answers were not only based on the given figure, but also used the definition of the trigonometric ratio in their verbal representations. On the other hand, they were able to determine the object positions to be observed. However, they failed to determine the position of the angle of depression in the sketches they made as visual representations. Failure to determine the position of the angle of depression caused errors in using the mathematical equation. Finally, they were unsuccessful in using the mathematical equation properly in their symbolic representations. From this research, we could recommend the importance of translations between mathematical problems and mathematical representations as well as translations among mathematical representations (verbal, visual, and symbolic) in learning mathematics in the classroom.
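As a generic illustration of the kind of relation involved (not an item from the study), a correctly placed angle of depression sits between the horizontal line of sight at the observer's eye and the line to the object, so the horizontal distance follows from the tangent ratio:

```latex
% Generic angle-of-depression example (illustrative, not taken from the study).
% An observer at height h = 30 m sees an object at a depression angle
% \alpha = 25^{\circ}; the horizontal distance d then satisfies
\[
  \tan\alpha = \frac{h}{d}
  \quad\Longrightarrow\quad
  d = \frac{h}{\tan\alpha} = \frac{30}{\tan 25^{\circ}} \approx 64.3\ \text{m}.
\]
```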
Rihm, Julia S.; Sollberger, Silja B.; Soravia, Leila M.; Rasch, Björn
2016-01-01
Exposure therapy induces extinction learning and is an effective treatment for specific phobias. Sleep after learning promotes extinction memory and benefits therapy success. As sleep-dependent memory-enhancing effects are based on memory reactivations during sleep, here we aimed at applying the beneficial effect of sleep on therapy success by cueing memories of subjective therapy success during non-rapid eye movement sleep after in vivo exposure-based group therapy for spider phobia. In addition, oscillatory correlates of re-presentation during sleep (i.e., sleep spindles and slow oscillations) were investigated. After exposure therapy, spider-phobic patients verbalized their subjectively experienced therapy success under presence of a contextual odor. Then, patients napped for 90 min recorded by polysomnography. Half of the sleep group received the odor during sleep while the other half was presented an odorless vehicle as control. A third group served as a wake control group without odor presentation. While exposure therapy significantly reduced spider-phobic symptoms in all subjects, these symptoms could not be further reduced by re-presenting the odor associated with therapy success, probably due to a ceiling effect of the highly effective exposure therapy. However, odor re-exposure during sleep increased left-lateralized frontal slow spindle (11.0–13.0 Hz) and right-lateralized parietal fast spindle (13.0–15.0 Hz) activity, suggesting the possibility of a successful re-presentation of therapy-related memories during sleep. Future studies need to further examine the possibility to enhance therapy success by targeted memory reactivation (TMR) during sleep. PMID:27445775
Jirsova, Katerina; Brejchova, Kristyna; Krabcova, Ivana; Filipec, Martin; Al Fakih, Aref; Palos, Michalis; Vesela, Viera
2014-01-01
To assess the impact of autologous serum (AS) eye drops on the ocular surface of patients with bilateral severe dry eye and to draw a comparison between the clinical and laboratory examinations and the degree of subjective symptoms before and after serum treatment. A three-month prospective study was conducted on 17 patients with severe dry eye. AS eye drops were applied a maximum of 12 times a day together with regular therapy. Dry eye status was evaluated by clinical examination (visual acuity, Schirmer test, tear film breakup time, vital staining, tear film debris and meniscus), conjunctival impression cytology (epithelial and goblet cell density, snake-like chromatin, HLA-DR-positive and apoptotic cells) and subjectively by the patients. The application of AS eye drops led to a significant improvement in the Schirmer test (p < 0.01) and tear film debris (p < 0.05). The densities of goblet (p < 0.0001) and epithelial cells (p < 0.05) were significantly increased, indicating a decrease of squamous metaplasia after AS treatment. A significant decrease (p < 0.05) was found in the number of apoptotic, HLA-DR-positive and snake-like chromatin cells on the ocular surface. A significant improvement was found in all evaluated subjective symptoms. Altogether, the clinical results were improved in 77%, the laboratory results in 75% and the subjective feelings in 63% of the eyes. We found that three-month AS treatment led especially to the improvement of ocular surface dryness and damage of the epithelium. The improvement of dry eye after AS treatment correlated well with the clinical, laboratory and subjective findings. From the patients' subjective point of view, the positive effect of AS decreased with time, but still persisted up to three months after the end of therapy.
Methods and Devices for Space Optical Communications Using Laser Beams
NASA Technical Reports Server (NTRS)
Goorjian, Peter M. (Inventor)
2018-01-01
Light is used to communicate between objects separated by a large distance. Light beams are received in a telescopic lens assembly positioned in front of a cat's-eye lens. The light can thereby be received at various angles to be output by the cat's-eye lens to a focal plane of the cat's-eye lens, the position of the light beams upon the focal plane corresponding to the angle of the beam received. Lasers and photodetectors are distributed along this focal plane. A processor receives signals from the photodetectors, and selectively signal lasers positioned proximate the photodetectors detecting light, in order to transmit light encoding data through the cat's-eye lens and also through a telescopic lens back in the direction of the received light beams, which direction corresponds to a location upon the focal plane of the transmitting lasers.
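Under a thin-lens, collimated-beam assumption, the geometric idea described in this abstract can be sketched as a mapping from arrival angle to focal-plane position and back to the nearest transmit laser; the focal length, element spacing, and interfaces below are illustrative assumptions, not the patented design's parameters.

```python
# Minimal geometric sketch (thin-lens assumption): a beam arriving at angle
# theta focuses at focal-plane position x ~ f * tan(theta); the laser nearest
# the illuminated photodetector is keyed to reply along the same direction.
# Focal length and element spacing are illustrative assumptions.
import numpy as np

focal_length_mm = 100.0
laser_positions_mm = np.linspace(-5.0, 5.0, 21)     # lasers/detectors along the focal plane

def focal_plane_position(angle_deg):
    """Focal-plane position (mm) of a collimated beam arriving at angle_deg."""
    return focal_length_mm * np.tan(np.radians(angle_deg))

def laser_to_key(angle_deg):
    """Index of the laser closest to the spot formed by the incoming beam."""
    x = focal_plane_position(angle_deg)
    return int(np.argmin(np.abs(laser_positions_mm - x)))

# A beam from 2 degrees off-axis lands about 3.5 mm off-centre; reply with that laser.
incoming = 2.0
print(f"spot at {focal_plane_position(incoming):.2f} mm -> laser index {laser_to_key(incoming)}")
```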
Promoting organ donation through an entertainment-education TV program in Korea: Open Your Eyes.
Byoung Kwan Lee; Hyun Soon Park; Choi, Myung-Il; Cheon Soo Kim
2010-01-01
The purpose of this study is to investigate the effects of the characteristics of the program, Open Your Eyes, an entertainment-education TV program in Korea, on parasocial interaction and behavioral intention for organ donation. The results indicated that affective evaluation positively affected parasocial interaction with the program but cognitive evaluation negatively affected involvement with beneficiaries in the program. Also, it was found that cognitive evaluation of Open Your Eyes had a significant positive effect on behavioral intention. In addition, a significant positive effect of program engagement on the behavioral intention was found. Thus, the results indicate that individuals who feel program engagement of Open Your Eyes will be more likely to proceed with organ donation. However, no direct effect of involvement with the beneficiary and program hosts was found.
Individual Differences in Affect.
ERIC Educational Resources Information Center
Haviland, Jeannette
This paper argues that infants' affect patterns are innate and are meaningful indicators of individual differences in internal state. Videotapes of seven infants' faces were coded using an ethogram; the movement of the eyebrow, eye direction, eye openness, mouth shape, mouth position, lip position, and tongue protrusion were assessed…
Accommodation and the Visual Regulation of Refractive State in Marmosets
Troilo, David; Totonelly, Kristen; Harb, Elise
2009-01-01
Purpose To determine the effects of imposed anisometropic retinal defocus on accommodation, ocular growth, and refractive state changes in marmosets. Methods Marmosets were raised with extended-wear soft contact lenses for an average duration of 10 wks beginning at an average age of 76 d. Experimental animals wore either a positive or negative contact lens over one eye and a plano lens or no lens over the other. Another group wore binocular lenses of equal magnitude but opposite sign. Untreated marmosets served as controls and three wore plano lenses monocularly. Cycloplegic refractive state, corneal curvature, and vitreous chamber depth were measured before, during, and after the period of lens wear. To investigate the accommodative response, the effective refractive state was measured through each anisometropic condition at varying accommodative stimuli positions using an infrared refractometer. Results Eye growth and refractive state are significantly correlated with the sign and power of the contact lens worn. The eyes of marmosets reared with monocular negative power lenses had longer vitreous chambers and were myopic relative to contralateral control eyes (p<0.01). Monocular positive power lenses produced a significant reduction in vitreous chamber depth and hyperopia relative to the contralateral control eyes (p<0.05). In marmosets reared binocularly with lenses of opposite sign, we found larger interocular differences in vitreous chamber depths and refractive state (p<0.001). Accommodation influences the defocus experienced through the lenses, however, the mean effective refractive state was still hyperopia in the negative-lens-treated eyes and myopia in the positive-lens-treated eyes. Conclusions Imposed anisometropia effectively alters marmoset eye growth and refractive state to compensate for the imposed defocus. The response to imposed hyperopia is larger and faster than the response to imposed myopia. The pattern of accommodation under imposed anisometropia produces effective refractive states that are consistent with the changes in eye growth and refractive state observed. PMID:19104464
Opposite Effects of Glucagon and Insulin on Compensation for Spectacle Lenses in Chicks
Zhu, Xiaoying; Wallman, Josh
2009-01-01
Purpose Chick eyes compensate for defocus imposed by positive or negative spectacle lenses. Glucagon may signal the sign of defocus. Do insulin (or IGF-1) and glucagon act oppositely in controlling eye growth, as they do in metabolic pathways and in control of retinal neurogenesis? Methods Chicks, wearing either lenses or diffusers or neither over both eyes, were injected with glucagon, a glucagon antagonist, insulin, or IGF-1 in one eye (saline in other eye). Alternatively, chicks without lenses received insulin plus glucagon in one eye, and either glucagon or insulin in the fellow eye. Ocular dimensions, refractive errors and glycosaminoglycan synthesis were measured over 2-4 days. Results Glucagon attenuated the myopic response to negative lenses or diffusers by slowing ocular elongation and thickening the choroid; in contrast, with positive lenses, it increased ocular elongation to normal levels and reduced choroidal thickening, as did a glucagon antagonist. Insulin prevented the hyperopic response to positive lenses by speeding ocular elongation and thinning the choroid. In eyes without lenses, both insulin and IGF-1 speeded, and glucagon slowed, ocular elongation, but either glucagon or insulin increased the rate of thickening of the crystalline lens. When injected together, insulin blocked choroidal thickening by glucagon, at a dose that did not, by itself, thin the choroid. Conclusions Glucagon and insulin (or IGF-1) cause generally opposite modulations of eye-growth, with glucagon mostly increasing choroidal thickness and insulin mostly increasing ocular elongation. These effects are mutually inhibitory and depend on the visual input. PMID:18791176
ELECTRICAL STUDIES ON THE COMPOUND EYE OF LIGIA OCCIDENTALIS DANA (CRUSTACEA: ISOPODA)
Ruck, Philip; Jahn, Theodore L.
1954-01-01
The ERG of the compound eye in freshly collected Ligia occidentalis, in response to high intensity light flashes of ⅛ second or longer duration, begins with a negative on-effect quickly followed by an early positive deflection, rapidly returns to the baseline during illumination, and ends with a positive off-effect. As the stimulus intensity is decreased the early positivity progressively decreases and the rapid return to the baseline is replaced by a slowing decline of the negative on-effect. Responses were recorded with one active electrode subcorneally situated in the illuminated eye, the reference electrode in the dark eye. The dark-adapted eye shows a facilitation of the amplitude and rates of rise and fall of the on-effect to a brief, high intensity light stimulus. This facilitation may persist for more than 2 minutes. Following light adaptation under conditions in which the human eye loses sensitivity by a factor of almost 40,000 the Ligia eye loses sensitivity by a factor of only 3. The flicker fusion frequency of the ERG may be as high as 120/second with a corneal illumination of 15,000 foot-candles. Bleeding an otherwise intact animal very rapidly results in a decline of amplitude, change of wave form, and loss of facilitation in the ERG. When the eye is deganglionated without bleeding the animal the isolated retina responds in the same manner as the intact eye. Histological examination of the Ligia receptor layer showed that each ommatidium contains three different retinula cell types, each of which may be responsible for a different aspect of the ERG. PMID:13174786
2011-01-01
Background Coleoid cephalopods (squids and octopuses) have evolved a camera eye, the structure of which is very similar to that found in vertebrates and which is considered a classic example of convergent evolution. Other molluscs, however, possess mirror, pin-hole, or compound eyes, all of which differ from the camera eye in the degree of complexity of the eye structures and neurons participating in the visual circuit. Therefore, genes expressed in the cephalopod eye after divergence from the common molluscan ancestor could be involved in eye evolution through association with the acquisition of new structural components. To clarify the genetic mechanisms that contributed to the evolution of the cephalopod camera eye, we applied comprehensive transcriptomic analysis and conducted developmental validation of candidate genes involved in coleoid cephalopod eye evolution. Results We compared gene expression in the eyes of 6 molluscan (3 cephalopod and 3 non-cephalopod) species and selected 5,707 genes as cephalopod camera eye-specific candidate genes on the basis of homology searches against 3 molluscan species without camera eyes. First, we confirmed the expression of these 5,707 genes in the cephalopod camera eye formation processes by developmental array analysis. Second, using molecular evolutionary (dN/dS) analysis to detect positive selection in the cephalopod lineage, we identified 156 of these genes whose functions appeared to have changed after the divergence of cephalopods from the molluscan ancestor and which contributed to structural and functional diversification. Third, we selected 1,571 genes, expressed in the camera eyes of both cephalopods and vertebrates, which could have independently acquired a function related to eye development at the expression level. Finally, as experimental validation, we identified three functionally novel cephalopod camera eye genes related to optic lobe formation in cephalopods by in situ hybridization analysis of embryonic pygmy squid. Conclusion We identified 156 genes positively selected in the cephalopod lineage and 1,571 genes commonly found in the cephalopod and vertebrate camera eyes from the analysis of cephalopod camera eye specificity at the expression level. Experimental validation showed that the cephalopod camera eye-specific candidate genes include those expressed in the outer part of the optic lobes, which are unique to coleoid cephalopods. The results of this study suggest that changes in gene expression and in the primary structure of proteins (through positive selection) from those in the common molluscan ancestor could have contributed, at least in part, to cephalopod camera eye acquisition. PMID:21702923
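For readers unfamiliar with the dN/dS screen mentioned above, the sketch below shows, in schematic form, how genes might be flagged as candidates for positive selection once per-gene dN and dS estimates are available. The gene names, values, and thresholds are hypothetical; an actual analysis would rely on codon-substitution models (e.g. PAML/codeml) rather than this simplified ratio test.

```python
# Minimal sketch (not the study's pipeline): flag candidate positively selected
# genes from precomputed per-gene dN and dS estimates. Values are hypothetical.

def positively_selected(genes, min_ds=0.01, omega_threshold=1.0):
    """Return (gene, omega) pairs whose dN/dS ratio exceeds the threshold."""
    selected = []
    for name, dn, ds in genes:
        if ds < min_ds:           # avoid unstable ratios when dS is near zero
            continue
        omega = dn / ds
        if omega > omega_threshold:
            selected.append((name, omega))
    return selected

example_genes = [("geneA", 0.12, 0.05), ("geneB", 0.03, 0.20), ("geneC", 0.40, 0.10)]
print(positively_selected(example_genes))   # geneA and geneC exceed omega = 1
```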
Describing a Robot's Workspace Using a Sequence of Views from a Moving Camera.
Hong, T H; Shneier, M O
1985-06-01
This correspondence describes a method of building and maintaining a spatial representation for the workspace of a robot, using a sensor that moves about in the world. From the known camera position at which an image is obtained, and two-dimensional silhouettes of the image, a series of cones is projected to describe the possible positions of the objects in the space. When an object is seen from several viewpoints, the intersections of the cones constrain the position and size of the object. After several views have been processed, the representation of the object begins to resemble its true shape. At all times, the spatial representation contains the best guess at the true situation in the world with uncertainties in position and shape explicitly represented. An octree is used as the data structure for the representation. It not only provides a relatively compact representation, but also allows fast access to information and enables large parts of the workspace to be ignored. The purpose of constructing this representation is not so much to recognize objects as to describe the volumes in the workspace that are occupied and those that are empty. This enables trajectory planning to be carried out, and also provides a means of spatially indexing objects without needing to represent the objects at an extremely fine resolution. The spatial representation is one part of a more complex representation of the workspace used by the sensory system of a robot manipulator in understanding its environment.
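A minimal sketch of the kind of octree occupancy structure described above follows. It illustrates the general idea (cubes labeled empty, occupied, or unknown, subdivided where a silhouette cone only partially covers them) and is not the authors' implementation; the cone test, depth limit, and labels are assumptions.

```python
# Minimal octree sketch for occupied/empty/unknown workspace volumes.
# The "cone" here is a crude half-space stand-in used only for illustration.

class OctreeNode:
    def __init__(self, center, size, state="unknown"):
        self.center = center          # (x, y, z) of the cube's center
        self.size = size              # edge length of the cube
        self.state = state            # "empty", "occupied", or "unknown"
        self.children = None          # eight sub-cubes once subdivided

    def subdivide(self):
        """Split the cube into eight octants that inherit the parent's state."""
        q = self.size / 4.0
        cx, cy, cz = self.center
        self.children = [OctreeNode((cx + dx * q, cy + dy * q, cz + dz * q),
                                    self.size / 2.0, self.state)
                         for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]

    def update(self, classify, depth=0, max_depth=4):
        """classify(center, size) says whether a cube lies 'inside', 'outside',
        or 'partial' with respect to the current silhouette cone."""
        label = classify(self.center, self.size)
        if label == "outside":
            self.state = "empty"       # the view saw past this volume
        elif label == "inside":
            self.state = "occupied"    # still consistent with the object
        elif depth < max_depth:        # partially covered: refine further
            if self.children is None:
                self.subdivide()
            for child in self.children:
                child.update(classify, depth + 1, max_depth)

def half_space_cone(center, size):
    """Stand-in 'cone': everything with x > 0.5 counts as possible object volume."""
    if center[0] - size / 2.0 > 0.5:
        return "inside"
    if center[0] + size / 2.0 < 0.5:
        return "outside"
    return "partial"

root = OctreeNode((0.0, 0.0, 0.0), 2.0)
root.update(half_space_cone)
```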
ERIC Educational Resources Information Center
Ferber, Marianne; Loeb, Jane
This report presents information on the employment status of women at the Urbana-Champaign campus of the University of Illinois. Discussed are: (1) the representation, rank, and pay of females on the faculty; (2) representation of women in administrative positions; (3) representation of women on the faculty versus representation in the labor…
Considerations on the mechanisms of alternating skew deviation in patients with cerebellar lesions.
Zee, D S
1996-01-01
Alternating skew deviation, in which the side of the higher eye changes depending upon whether gaze is directed to the left or the right, is a frequent sign in patients with posterior fossa lesions, including those restricted to the cerebellum. Here we propose a mechanism for alternating skews related to the otolith-ocular responses to fore and aft pitch of the head in lateral-eyed animals. In lateral-eyed animals the expected response to a static head pitch is cyclorotation of the eyes. But if the eyes are rotated horizontally in the orbit, away from the primary position, a compensatory skew deviation should also appear. The direction of the skew would depend upon whether the eyes were directed to the right (left eye forward, right eye backward) or to the left (left eye backward, right eye forward). In contrast, for frontal-eyed animals, skew deviations are counterproductive because they create diplopia and interfere with binocular vision. We attribute the emergence of skew deviations in frontal-eyed animals in pathological conditions to 1) an imbalance in otolith-ocular pathways and 2) a loss of the component of ocular motor innervation that normally corrects for the differences in pulling directions and strengths of the various ocular muscles as the eyes change position in the orbit. Such a compensatory mechanism is necessary to ensure optimal binocular visual function during and after head motion. This compensatory mechanism may depend upon the cerebellum.
ERIC Educational Resources Information Center
Richardson, Daniel; Matlock, Teenie
2007-01-01
Do we view the world differently if it is described to us in figurative rather than literal terms? An answer to this question would reveal something about both the conceptual representation of figurative language and the scope of top-down influences on scene perception. Previous work has shown that participants will look longer at a path region of…
Eye Movements and Visual Memory for Scenes
2005-01-01
Scene memory research has demonstrated that the memory representation of a semantically inconsistent object in a scene is more detailed and/or complete... memory during scene viewing, then changes to semantically inconsistent objects (which should be represented more com- pletely) should be detected more... semantic description. Due to the surprise nature of the visual memory test, any learning that occurred during the search portion of the experiment was
Simulating Civilians for Military Training: A Canadian Perspective
2010-10-01
either refer to research in anthropology, sociology, psychology, geography, computer science or a combination of these disciplines. One of the most...conveyed through subtle, non-verbal cues: stern looks or avoidance of eye contact, absence of response to friendly waving. Explicitly hostile... colours to signify changing emotions, believable 3D representations of facial expressions is still in its infancy and is an active area of research
Kimmel, Daniel L.; Mammo, Dagem; Newsome, William T.
2012-01-01
From human perception to primate neurophysiology, monitoring eye position is critical to the study of vision, attention, oculomotor control, and behavior. Two principal techniques for the precise measurement of eye position—the long-standing sclera-embedded search coil and more recent optical tracking techniques—are in use in various laboratories, but no published study compares the performance of the two methods simultaneously in the same primates. Here we compare two popular systems—a sclera-embedded search coil from C-N-C Engineering and the EyeLink 1000 optical system from SR Research—by recording simultaneously from the same eye in the macaque monkey while the animal performed a simple oculomotor task. We found broad agreement between the two systems, particularly in positional accuracy during fixation, measurement of saccade amplitude, detection of fixational saccades, and sensitivity to subtle changes in eye position from trial to trial. Nonetheless, certain discrepancies persist, particularly elevated saccade peak velocities, post-saccadic ringing, influence of luminance change on reported position, and greater sample-to-sample variation in the optical system. Our study shows that optical performance now rivals that of the search coil, rendering optical systems appropriate for many if not most applications. This finding is consequential, especially for animal subjects, because the optical systems do not require invasive surgery for implantation and repair of search coils around the eye. Our data also allow laboratories using the optical system in human subjects to assess the strengths and limitations of the technique for their own applications. PMID:22912608
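As a rough illustration of how two simultaneously recorded position traces can be compared, the sketch below detects saccades with a simple velocity threshold and returns their amplitudes for each trace. The sampling rate, threshold, smoothing, and synthetic data are assumptions and do not reproduce the authors' analysis pipeline.

```python
# Hedged sketch: compare saccade amplitudes detected in two simultaneously
# recorded eye-position traces (e.g. a coil-like and an optical-like signal).
import numpy as np

def saccade_amplitudes(position_deg, fs_hz=1000.0, vel_threshold=50.0):
    """Return the position change (deg) over each run of supra-threshold velocity."""
    velocity = np.gradient(position_deg) * fs_hz                    # deg/s
    velocity = np.convolve(velocity, np.ones(5) / 5, mode="same")   # light smoothing
    fast = np.abs(velocity) > vel_threshold
    amplitudes, start = [], None
    for i, flag in enumerate(fast):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            amplitudes.append(position_deg[i] - position_deg[start])
            start = None
    return np.array(amplitudes)

rng = np.random.default_rng(0)
t = np.arange(0, 0.5, 0.001)
saccade = 10.0 / (1.0 + np.exp(-(t - 0.25) / 0.005))   # synthetic 10-deg saccade
trace_a = saccade + rng.normal(0, 0.01, t.size)        # lower-noise trace
trace_b = saccade + rng.normal(0, 0.02, t.size)        # noisier trace
print(saccade_amplitudes(trace_a), saccade_amplitudes(trace_b))
```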
Action and perception in literacy: A common-code for spelling and reading.
Houghton, George
2018-01-01
There is strong evidence that reading and spelling in alphabetical scripts depend on a shared representation (common-coding). However, computational models usually treat the two skills separately, producing a wide variety of proposals as to how the identity and position of letters is represented. This article treats reading and spelling in terms of the common-coding hypothesis for perception-action coupling. Empirical evidence for common representations in spelling-reading is reviewed. A novel version of the Start-End Competitive Queuing (SE-CQ) spelling model is introduced, and tested against the distribution of positional errors in Letter Position Dysgraphia, data from intralist intrusion errors in spelling to dictation, and dysgraphia because of nonperipheral neglect. It is argued that no other current model is equally capable of explaining this range of data. To pursue the common-coding hypothesis, the representation used in SE-CQ is applied, without modification, to the coding of letter identity and position for reading and lexical access, and a lexical matching rule for the representation is proposed (Start End Position Code model, SE-PC). Simulations show the model's compatibility with benchmark findings from form priming, its ability to account for positional effects in letter identification priming and the positional distribution of perseverative intrusion errors. The model supports the view that spelling and reading use a common orthographic description, providing a well-defined account of the major features of this representation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
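The sketch below illustrates the competitive-queuing principle that underlies models of this family: letters receive a graded activation gradient and are produced by repeatedly selecting, then suppressing, the most active item. It is a toy version only; Houghton's SE-CQ additionally uses an end-of-word anchor and learned weights, which are omitted here, and the decay and noise parameters are arbitrary.

```python
# Toy competitive-queuing sketch (not the full SE-CQ model): a primacy gradient
# plus select-and-suppress output; response noise can produce letter errors.
import numpy as np

def spell(word, gradient_decay=0.8, noise_sd=0.0, seed=0):
    rng = np.random.default_rng(seed)
    letters = list(word)
    # Primacy gradient: activation falls off with distance from the word start.
    activation = gradient_decay ** np.arange(len(letters), dtype=float)
    produced = []
    for _ in range(len(letters)):
        noisy = activation + rng.normal(0, noise_sd, len(letters))
        winner = int(np.argmax(noisy))
        produced.append(letters[winner])
        activation[winner] = -np.inf        # suppress the produced letter
    return "".join(produced)

print(spell("table"))                  # noiseless: correct serial order
print(spell("table", noise_sd=0.15))   # noise can yield transposition errors
```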
Temporal dynamics of ocular position dependence of the initial human vestibulo-ocular reflex.
Crane, Benjamin T; Tian, Junru; Demer, Joseph L
2006-04-01
While an ideal vestibulo-ocular reflex (VOR) generates ocular rotations compensatory for head motion, during visually guided movements, Listing's Law (LL) constrains the eye to rotational axes lying in Listing's Plane (LP). The present study was conducted to explore the recent proposal that the VOR's rotational axis is not collinear with the head's, but rather follows a time-dependent strategy intermediate between LL and an ideal VOR. Binocular LPs were defined during visual fixation in eight normal humans. The VOR was evoked by a highly repeatable transient whole-body yaw rotation in darkness at a peak acceleration of 2800 deg/s2. Immediately before rotation, subjects regarded targets 15 or 500 cm distant located at eye level, 20 degrees up, or 20 degrees down. Eye and head responses were compared with LL predictions in the position and velocity domains. LP orientation varied both among subjects and between individual subject's eyes, and rotated temporally with convergence by 5 +/- 5 degrees (+/-SEM). In the position domain, the eye compensated for head displacement even when the head rotated out of LP. Even within the first 20 ms from onset of head rotation, the ocular velocity axis tilted relative to the head axis by 30% +/- 8% of vertical gaze position. Saccades increased this tilt. Regardless of vertical gaze position, the ocular rotation axis tilted backward 4 degrees farther in abduction than in adduction. There was also a binocular vertical eye velocity transient and lateral tilt of the ocular axis. These disconjugate, short-latency axis perturbations appear intrinsic to the VOR and may have neural or mechanical origins.
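A small sketch of the axis-tilt measure discussed above follows: the velocity-axis tilt is expressed as a percentage of vertical gaze eccentricity, so it can be compared with the 0% prediction of an ideal compensatory VOR and the 50% half-angle prediction of a pure Listing's-law strategy. The numerical values are illustrative, not the study's data.

```python
# Hedged sketch: express eye-velocity-axis tilt as a fraction of vertical gaze
# eccentricity; 0% corresponds to an ideal VOR, 50% to a Listing half-angle rule.

def tilt_fraction(axis_tilt_deg, vertical_gaze_deg):
    """Axis tilt expressed as a percentage of gaze eccentricity."""
    return 100.0 * axis_tilt_deg / vertical_gaze_deg

gaze_up = 20.0          # deg, as in the eccentric fixation conditions
measured_tilt = 6.0     # deg, hypothetical early-VOR axis tilt
print(tilt_fraction(measured_tilt, gaze_up))   # 30%: between 0% and 50%
```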
1989-08-01
paths for integration with the off-aperture and dual-mirror VPD designs. PREFACE The goal of this work was to explore integration of an eye line-of-gaze ...Relationship in one plane between point-of-gaze on a flat scene and relative eye, detector, and scene positions...and eye line-of-gaze measurement. As a first step towards the design of an appropriate eye tracking system for interface with the virtual cockpit
Context-specific adaptation of pursuit initiation in humans
NASA Technical Reports Server (NTRS)
Takagi, M.; Abe, H.; Hasegawa, S.; Usui, T.; Hasebe, H.; Miki, A.; Zee, D. S.; Shelhauser, M. (Principal Investigator)
2000-01-01
PURPOSE: To determine if multiple states for the initiation of pursuit, as assessed by acceleration in the "open-loop" period, can be learned and gated by context. METHODS: Four normal subjects were studied. A modified step-ramp paradigm for horizontal pursuit was used to induce adaptation. In an increasing paradigm, target velocity doubled 230 msec after onset; in a decreasing paradigm, it was halved. In the first experiment, vertical eye position (+/-5 degrees ) was used as the context cue, and the training paradigm (increasing or decreasing) changed with vertical eye position. In the second experiment, with vertical position constant, when the target was red, training was decreasing, and when green, increasing. The average eye acceleration in the first 100 msec of tracking was the index of open-loop pursuit performance. RESULTS: With vertical position as the cue, pursuit adaptation differed between up and down gaze. In some cases, the direction of adaptation was in exact accord with the training stimuli. In others, acceleration increased or decreased for both up and down gaze but always in correct relative proportion to the training stimuli. In contrast, multiple adaptive states were not induced with color as the cue. CONCLUSIONS: Multiple values for the relationship between the average eye acceleration during the initiation of pursuit and target velocity could be learned and gated by context. Vertical position was an effective contextual cue but not target color, implying that useful contextual cues must be similar to those occurring naturally, for example, orbital position with eye muscle weakness.
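The open-loop index used here, the average eye acceleration over the first 100 ms of tracking, can be computed from sampled eye position as in the sketch below; the sampling rate and the synthetic pursuit trace are assumptions made for illustration.

```python
# Hedged sketch of the open-loop pursuit index: average eye acceleration over
# the first 100 ms of tracking, estimated from sampled eye position.
import numpy as np

def open_loop_acceleration(position_deg, onset_idx, fs_hz=500.0, window_s=0.1):
    """Mean eye acceleration (deg/s^2) over the first `window_s` of pursuit."""
    n = int(window_s * fs_hz)
    segment = position_deg[onset_idx:onset_idx + n + 1]
    velocity = np.gradient(segment) * fs_hz        # deg/s
    # Average acceleration = net velocity change divided by the window length.
    return (velocity[-1] - velocity[0]) / window_s

t = np.arange(0, 0.4, 1 / 500.0)
eye = np.where(t < 0.1, 0.0, 0.5 * 40.0 * (t - 0.1) ** 2)   # 40 deg/s^2 ramp-up
print(open_loop_acceleration(eye, onset_idx=50))             # close to 40 deg/s^2
```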
Saccades to remembered targets: the effects of smooth pursuit and illusory stimulus motion
NASA Technical Reports Server (NTRS)
Zivotofsky, A. Z.; Rottach, K. G.; Averbuch-Heller, L.; Kori, A. A.; Thomas, C. W.; Dell'Osso, L. F.; Leigh, R. J.
1996-01-01
1. Measurements were made in four normal human subjects of the accuracy of saccades to remembered locations of targets that were flashed on a 20 x 30 deg random dot display that was either stationary or moving horizontally and sinusoidally at +/-9 deg at 0.3 Hz. During the interval between the target flash and the memory-guided saccade, the "memory period" (1.4 s), subjects either fixated a stationary spot or pursued a spot moving vertically sinusoidally at +/-9 deg at 0.3 Hz. 2. When saccades were made toward the location of targets previously flashed on a stationary background as subjects fixated the stationary spot, median saccadic error was 0.93 deg horizontally and 1.1 deg vertically. These errors were greater than for saccades to visible targets, which had median values of 0.59 deg horizontally and 0.60 deg vertically. 3. When targets were flashed as subjects smoothly pursued a spot that moved vertically across the stationary background, median saccadic error was 1.1 deg horizontally and 1.2 deg vertically, thus being of similar accuracy to when targets were flashed during fixation. In addition, the vertical component of the memory-guided saccade was much more closely correlated with the "spatial error" than with the "retinal error"; this indicated that, when programming the saccade, the brain had taken into account eye movements that occurred during the memory period. 4. When saccades were made to targets flashed during attempted fixation of a stationary spot on a horizontally moving background, a condition that produces a weak Duncker-type illusion of horizontal movement of the primary target, median saccadic error increased horizontally to 3.2 deg but was 1.1 deg vertically. 5. When targets were flashed as subjects smoothly pursued a spot that moved vertically on the horizontally moving background, a condition that induces a strong illusion of diagonal target motion, median saccadic error was 4.0 deg horizontally and 1.5 deg vertically; thus the horizontal error was greater than under any other experimental condition. 6. In most trials, the initial saccade to the remembered target was followed by additional saccades while the subject was still in darkness. These secondary saccades, which were executed in the absence of visual feedback, brought the eye closer to the target location. During paradigms involving horizontal background movement, these corrections were more prominent horizontally than vertically. 7. Further measurements were made in two subjects to determine whether inaccuracy of memory-guided saccades, in the horizontal plane, was due to mislocalization at the time that the target flashed, misrepresentation of the trajectory of the pursuit eye movement during the memory period, or both. 8. The magnitude of the saccadic error, both with and without corrections made in darkness, was mislocalized by approximately 30% of the displacement of the background at the time that the target flashed. The magnitude of the saccadic error also was influenced by net movement of the background during the memory period, corresponding to approximately 25% of net background movement for the initial saccade and approximately 13% for the final eye position achieved in darkness. 9. We formulated simple linear models to test specific hypotheses about which combinations of signals best describe the observed saccadic amplitudes. 
We tested the possibilities that the brain made an accurate memory of target location and a reliable representation of the eye movement during the memory period, or that one or both of these was corrupted by the illusory visual stimulus. Our data were best accounted for by a model in which both the working memory of target location and the internal representation of the horizontal eye movements were corrupted by the illusory visual stimulus. We conclude that extraretinal signals played only a minor role, in comparison with visual estimates of the direction of gaze, in planning eye movements to remembered targets.
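The sketch below shows the general form of such a linear model: horizontal saccadic error expressed as a weighted sum of the background displacement at the time of the flash and the net background motion during the memory period, fitted by least squares. The data are synthetic and generated with weights close to the proportions reported (about 0.30 and 0.25); it is not the authors' model code.

```python
# Hedged sketch: least-squares fit of saccadic error as a weighted combination
# of background offset at flash time and net background motion during the delay.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200
bg_at_flash = rng.uniform(-9, 9, n_trials)     # deg, background offset at the flash
bg_net_motion = rng.uniform(-9, 9, n_trials)   # deg, net motion during the delay
error = 0.30 * bg_at_flash + 0.25 * bg_net_motion + rng.normal(0, 0.5, n_trials)

# Fit error = w1*bg_at_flash + w2*bg_net_motion + intercept.
X = np.column_stack([bg_at_flash, bg_net_motion, np.ones(n_trials)])
weights, *_ = np.linalg.lstsq(X, error, rcond=None)
print(weights)   # recovers roughly [0.30, 0.25, 0]
```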
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-12
... Fewer Animals to Identify Chemical Eye Hazards: Revised Criteria Necessary to Maintain Equivalent Hazard... criteria using results from 3-animal tests that would provide eye hazard classification equivalent to... least 1 positive animal in a 3-animal test to identify eye hazards will provide the same or greater...
Eye movements during the Rorschach test in schizophrenia.
Hori, Yasuko; Fukuzako, Hiroshi; Sugimoto, Yoko; Takigawa, Morikuni
2002-08-01
In order to understand relationships between scanning behaviors, characteristics of visual stimuli and the clinical symptoms in schizophrenia, eye movements of 37 schizophrenic patients and 36 controls were recorded using an eye-mark recorder during a free-response period in a Rorschach test. Four cards (I, II, V and VIII) were used. Data were analyzed during 15 s from the presentation of each card. For all cards, the number of eye fixations and the number of eye fixation areas were fewer, and total scanning length and mean scanning length were shorter for schizophrenic patients than for controls. For card II, in the non-popular response group, eye fixation frequency upon area 5 + 6 (red) was higher for schizophrenic patients. For card VIII, in the popular response group, eye fixation frequency upon area 5 + 6 (pink) was lower for schizophrenic patients. For cards II and VIII, the number of eye fixations was inversely correlated with negative symptoms. For card II, total scanning length tended to be inversely correlated with negative symptoms, and mean eye fixation time was correlated with negative symptoms. The number of eye fixation areas was inversely correlated with positive symptoms. For card VIII, eye fixation frequency in a stimulative area tended to be correlated with positive symptoms. Scanning behaviors in schizophrenic patients are affected by characteristics of visual stimuli, and partially by clinical symptoms.
Measure and Analysis of a Gaze Position Using Infrared Light Technique
2001-10-25
Z. Ramdane-Cherif, A. Naït-Ali, J. F. Motsch, M. O. Krebs, INSERM E 01-17...also proposes a method to correct head movements. Keywords: eye movement, gaze tracking, visual scan path, spatial mapping. INTRODUCTION The eye gaze ...tracking has been used for clinical purposes to detect illnesses, such as nystagmus, unusual eye movements and many others [1][2][3]. It is also used
Anticipatory Smooth Eye Movements in Autism Spectrum Disorder
Aitkin, Cordelia D.; Santos, Elio M.; Kowler, Eileen
2013-01-01
Smooth pursuit eye movements are important for vision because they maintain the line of sight on targets that move smoothly within the visual field. Smooth pursuit is driven by neural representations of motion, including a surprisingly strong influence of high-level signals representing expected motion. We studied anticipatory smooth eye movements (defined as smooth eye movements in the direction of expected future motion) produced by salient visual cues in a group of high-functioning observers with Autism Spectrum Disorder (ASD), a condition that has been associated with difficulties in either generating predictions, or translating predictions into effective motor commands. Eye movements were recorded while participants pursued the motion of a disc that moved within an outline drawing of an inverted Y-shaped tube. The cue to the motion path was a visual barrier that blocked the untraveled branch (right or left) of the tube. ASD participants showed strong anticipatory smooth eye movements whose velocity was the same as that of a group of neurotypical participants. Anticipatory smooth eye movements appeared on the very first cued trial, indicating that trial-by-trial learning was not responsible for the responses. These results are significant because they show that anticipatory capacities are intact in high-functioning ASD in cases where the cue to the motion path is highly salient and unambiguous. Once the ability to generate anticipatory pursuit is demonstrated, the study of the anticipatory responses with a variety of types of cues provides a window into the perceptual or cognitive processes that underlie the interpretation of events in natural environments or social situations. PMID:24376667
Anticipatory smooth eye movements in autism spectrum disorder.
Aitkin, Cordelia D; Santos, Elio M; Kowler, Eileen
2013-01-01
Smooth pursuit eye movements are important for vision because they maintain the line of sight on targets that move smoothly within the visual field. Smooth pursuit is driven by neural representations of motion, including a surprisingly strong influence of high-level signals representing expected motion. We studied anticipatory smooth eye movements (defined as smooth eye movements in the direction of expected future motion) produced by salient visual cues in a group of high-functioning observers with Autism Spectrum Disorder (ASD), a condition that has been associated with difficulties in either generating predictions, or translating predictions into effective motor commands. Eye movements were recorded while participants pursued the motion of a disc that moved within an outline drawing of an inverted Y-shaped tube. The cue to the motion path was a visual barrier that blocked the untraveled branch (right or left) of the tube. ASD participants showed strong anticipatory smooth eye movements whose velocity was the same as that of a group of neurotypical participants. Anticipatory smooth eye movements appeared on the very first cued trial, indicating that trial-by-trial learning was not responsible for the responses. These results are significant because they show that anticipatory capacities are intact in high-functioning ASD in cases where the cue to the motion path is highly salient and unambiguous. Once the ability to generate anticipatory pursuit is demonstrated, the study of the anticipatory responses with a variety of types of cues provides a window into the perceptual or cognitive processes that underlie the interpretation of events in natural environments or social situations.
Roberts, Michael D.; Grau, Vicente; Grimm, Jonathan; Reynaud, Juan; Bellezza, Anthony J.; Burgoyne, Claude F.; Downs, J. Crawford
2009-01-01
Purpose To characterize the trabeculated connective tissue microarchitecture of the lamina cribrosa (LC) in terms of total connective tissue volume (CTV), connective tissue volume fraction (CTVF), predominant beam orientation, and material anisotropy in monkeys with early experimental glaucoma (EG). Methods The optic nerve heads from three monkeys with unilateral EG and four bilaterally normal monkeys were three dimensionally reconstructed from tissues perfusion fixed at an intraocular pressure of 10 mm Hg. A three-dimensional segmentation algorithm was used to extract a binary, voxel-based representation of the porous LC connective tissue microstructure that was regionalized into 45 subvolumes, and the following quantities were calculated: total CTV within the LC, mean and regional CTVF, regional predominant beam orientation, and mean and regional material anisotropy. Results Regional variation within the laminar microstructure was considerable within the normal eyes of all monkeys. The laminar connective tissue was generally most dense in the central and superior regions for the paired normal eyes, and laminar beams were radially oriented at the periphery for all eyes considered. CTV increased substantially in EG eyes compared with contralateral normal eyes (82%, 44%, 45% increases; P < 0.05), but average CTVF changed little (−7%, 1%, and −2% in the EG eyes). There were more laminar beams through the thickness of the LC in the EG eyes than in the normal controls (46%, 18%, 17% increases). Conclusions The substantial increase in laminar CTV with little change in CTVF suggests that significant alterations in connective and nonconnective tissue components in the laminar region occur in the early stages of glaucomatous damage. PMID:18806292
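For concreteness, the sketch below computes the two volume measures reported, CTV and CTVF, from a binary voxel segmentation, both overall and within labeled subvolumes. The array size, voxel dimension, and region labels are placeholders for illustration, not the reconstruction data.

```python
# Hedged sketch: connective tissue volume (CTV) and volume fraction (CTVF)
# from a binary voxel segmentation, overall and per labeled subvolume.
import numpy as np

rng = np.random.default_rng(0)
segmentation = rng.random((40, 40, 20)) < 0.3     # True = connective tissue voxel
voxel_volume_um3 = 1.5 ** 3                       # e.g. 1.5-micron isotropic voxels
regions = np.repeat(np.arange(4), 10)[None, :, None] * np.ones((40, 1, 20), int)

ctv = segmentation.sum() * voxel_volume_um3
ctvf = segmentation.mean()
print(f"total CTV = {ctv:.0f} um^3, mean CTVF = {ctvf:.2%}")

for r in range(4):                                # regional CTVF per subvolume
    mask = regions == r
    print(f"region {r}: CTVF = {segmentation[mask].mean():.2%}")
```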
Representation of Letter Position in Spelling: Evidence from Acquired Dysgraphia
Fischer-Baum, Simon; McCloskey, Michael; Rapp, Brenda
2010-01-01
The graphemic representations that underlie spelling performance must encode not only the identities of the letters in a word, but also the positions of the letters. This study investigates how letter position information is represented. We present evidence from two dysgraphic individuals, CM and LSS, who perseverate letters when spelling: that is, letters from previous spelling responses intrude into subsequent responses. The perseverated letters appear more often than expected by chance in the same position in the previous and subsequent responses. We used these errors to address the question of how letter position is represented in spelling. In a series of analyses we determined how often the perseveration errors produced maintain position as defined by a number of alternative theories of letter position encoding proposed in the literature. The analyses provide strong evidence that the grapheme representations used in spelling encode letter position such that position is represented in a graded manner based on distance from both edges of the word. PMID:20378104
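One way to express the graded, both-edges position scheme supported by these data is sketched below: each letter slot gets one exponentially graded component anchored at the start of the word and one anchored at the end, and positional similarity between slots is a graded distance between these codes. The decay parameter and distance metric are illustrative assumptions rather than the fitted model.

```python
# Hedged sketch of a "both-edges" graded letter-position code: smaller distances
# mean more similar positions, graded rather than all-or-none.
import numpy as np

def position_code(index, length, decay=0.6):
    """Graded code: one component anchored at each edge of the word."""
    return np.array([decay ** index, decay ** (length - 1 - index)])

def position_distance(idx_a, len_a, idx_b, len_b):
    """Smaller values mean more similar positions under the both-edges code."""
    return float(np.linalg.norm(position_code(idx_a, len_a) - position_code(idx_b, len_b)))

# Second letter of a 5-letter word compared with slots of a 7-letter word:
print(position_distance(1, 5, 1, 7))   # same distance from the start -> closest
print(position_distance(1, 5, 3, 7))   # same distance from the end -> farther
print(position_distance(1, 5, 6, 7))   # last letter -> most dissimilar
```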
Langeslag-Smith, Miriam A; Vandal, Alain C; Briane, Vincent; Thompson, Benjamin; Anstice, Nicola S
2015-01-01
Objectives To assess the accuracy of preschool vision screening in a large, ethnically diverse, urban population in South Auckland, New Zealand. Design Retrospective longitudinal study. Methods B4 School Check vision screening records (n=5572) were compared with hospital eye department data for children referred from screening due to impaired acuity in one or both eyes who attended a referral appointment (n=556). False positive screens were identified by comparing screening data from the eyes that failed screening with hospital data. Estimation of false negative screening rates relied on data from eyes that passed screening. Data were analysed using logistic regression modelling accounting for the high correlation between results for the two eyes of each child. Primary outcome measure Positive predictive value of the preschool vision screening programme. Results Screening produced high numbers of false positive referrals, resulting in poor positive predictive value (PPV=31%, 95% CI 26% to 38%). High estimated negative predictive value (NPV=92%, 95% CI 88% to 95%) suggested most children with a vision disorder were identified at screening. Relaxing the referral criteria for acuity from worse than 6/9 to worse than 6/12 improved PPV without adversely affecting NPV. Conclusions The B4 School Check generated numerous false positive referrals and consequently had a low PPV. There is scope for reducing costs by altering the visual acuity criterion for referral. PMID:26614622
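The two screening metrics reported can be computed directly from referral outcome counts, as in the sketch below; the counts are invented so that the formulas land near the reported 31% and 92%, and they are not the study's data.

```python
# Hedged sketch: positive and negative predictive value from screening counts.

def ppv_npv(true_pos, false_pos, true_neg, false_neg):
    ppv = true_pos / (true_pos + false_pos)   # referred children truly impaired
    npv = true_neg / (true_neg + false_neg)   # passed children truly unimpaired
    return ppv, npv

ppv, npv = ppv_npv(true_pos=170, false_pos=380, true_neg=4600, false_neg=420)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")    # roughly 31% and 92% with these counts
```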
Women otolaryngologist representation in specialty society membership and leadership positions.
Choi, Sukgi S; Miller, Robert H
2012-11-01
To determine the proportion of female otolaryngologists in leadership positions relative to their number in the specialty, their membership in various otolaryngology organizations, and age. Cross-sectional analyses of otolaryngology organization membership with a subgroup analysis on female membership and leadership proportion comparing 5-year male/female cohort groups. Information on the number of members and leaders was obtained from various specialty societies by direct communication and from their Web sites between June and December 2010. The number of female and male otolaryngologists and their age distribution in 5-year age groups was obtained from the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS). Statistical analyses were used to determine whether women had proportional membership and leadership representation in various specialty societies. Additionally, female representation in other leadership roles was analyzed using the male/female ratio within the 5-year cohort groups. Female otolaryngologists were found to constitute approximately 11% of practicing otolaryngologists. The American Society of Pediatric Otolaryngology had a higher proportion of female members (22%) compared to five other societies. When the gender composition within each organization was taken into account, female representation in specialty society leadership positions was proportionate to their membership across all societies. When gender and age were considered, women have achieved proportionate representation in each of the specialty societies' leadership positions. There was also proportionate representation of females as program directors, American Board of Otolaryngology directors, Residency Review Committee members, and journal editors/editorial board members. Finally, fewer female chairs or chiefs of departments/divisions were seen, but when age was taken into consideration, this difference was no longer significant. Women have achieved parity in otolaryngology leadership positions. As the number and seniority of women increase, the specialty should continue to monitor representation of women in leadership positions. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.
[Knowledge about the relationship through protagonist-director interactions in psychodrama groups].
Erdélyi, Ildikó
2005-01-01
This report follows emotional behavior in two psychodrama groups from the "present moment" until "moment of contact" using the Consensus Rorschach method. In the analysis of verbal and nonverbal material of protagonist-director dyads the following patterns were distinguished: a) early relationship patterns; b) affective attunement; c) fit of knowledge about the relationship. The author describes the relationship between the concept of "present moment" in therapy and the role of eye contact. Eye contact produces emotional tension in the context of the "present moment". Moments of contact, however, require implicit and explicit knowledge about the relationship to be constructed simultaneously as well as development of affective interactions. Emotional impulses are stored in implicit memory, which has no immediate availability. However, therapy--including psychodrama--attaches words to behaviors that are beyond the verbal levels as well, and therefore it extends the domain of memory. This is the way in which non-symbolized emotional behavior (including eye contact) and the play's verbal level with symbolic representations of memories are interconnected.
When Art Moves the Eyes: A Behavioral and Eye-Tracking Study
Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella
2012-01-01
The aim of this study was to investigate, using eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception. PMID:22624007
Zheng, Lei; Nikolaev, Anton; Wardill, Trevor J; O'Kane, Cahir J; de Polavieja, Gonzalo G; Juusola, Mikko
2009-01-01
Because of the limited processing capacity of eyes, retinal networks must adapt constantly to best present the ever changing visual world to the brain. However, we still know little about how adaptation in retinal networks shapes neural encoding of changing information. To study this question, we recorded voltage responses from photoreceptors (R1-R6) and their output neurons (LMCs) in the Drosophila eye to repeated patterns of contrast values, collected from natural scenes. By analyzing the continuous photoreceptor-to-LMC transformations of these graded-potential neurons, we show that the efficiency of coding is dynamically improved by adaptation. In particular, adaptation enhances both the frequency and amplitude distribution of LMC output by improving sensitivity to under-represented signals within seconds. Moreover, the signal-to-noise ratio of LMC output increases in the same time scale. We suggest that these coding properties can be used to study network adaptation using the genetic tools in Drosophila, as shown in a companion paper (Part II).
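As an aside, the kind of signal-to-noise estimate mentioned here is often obtained by treating the mean response across repeated presentations as the signal and each trial's deviation from that mean as noise; the sketch below illustrates this with synthetic data and is not the recording analysis used in the study.

```python
# Hedged sketch: signal-to-noise ratio from responses to repeated presentations
# of the same stimulus pattern (signal = across-trial mean, noise = deviations).
import numpy as np

rng = np.random.default_rng(2)
n_repeats, n_samples = 20, 1000
signal = np.sin(2 * np.pi * 5 * np.linspace(0, 1, n_samples))    # stimulus-driven part
responses = signal + rng.normal(0, 0.5, (n_repeats, n_samples))  # repeated noisy trials

mean_response = responses.mean(axis=0)     # estimate of the signal
noise = responses - mean_response          # trial-by-trial deviations
snr = mean_response.var() / noise.var()
print(f"estimated SNR = {snr:.2f}")        # close to 0.5 / 0.25 = 2 here
```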
When art moves the eyes: a behavioral and eye-tracking study.
Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella
2012-01-01
The aim of this study was to investigate, using eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception.
Wardill, Trevor J.; O'Kane, Cahir J.; de Polavieja, Gonzalo G.; Juusola, Mikko
2009-01-01
Because of the limited processing capacity of eyes, retinal networks must adapt constantly to best present the ever changing visual world to the brain. However, we still know little about how adaptation in retinal networks shapes neural encoding of changing information. To study this question, we recorded voltage responses from photoreceptors (R1–R6) and their output neurons (LMCs) in the Drosophila eye to repeated patterns of contrast values, collected from natural scenes. By analyzing the continuous photoreceptor-to-LMC transformations of these graded-potential neurons, we show that the efficiency of coding is dynamically improved by adaptation. In particular, adaptation enhances both the frequency and amplitude distribution of LMC output by improving sensitivity to under-represented signals within seconds. Moreover, the signal-to-noise ratio of LMC output increases in the same time scale. We suggest that these coding properties can be used to study network adaptation using the genetic tools in Drosophila, as shown in a companion paper (Part II). PMID:19180196
Snyder, Lawrence H.
2018-01-01
We often orient to where we are about to reach. Spatial and temporal correlations in eye and arm movements may depend on the posterior parietal cortex (PPC). Spatial representations of saccade and reach goals preferentially activate cells in the lateral intraparietal area (LIP) and the parietal reach region (PRR), respectively. With unimanual reaches, eye and arm movement patterns are highly stereotyped. This makes it difficult to study the neural circuits involved in coordination. Here, we employ bimanual reaching to two different targets. Animals naturally make a saccade first to one target and then the other, resulting in different patterns of limb–gaze coordination on different trials. Remarkably, neither LIP nor PRR cells code which target the eyes will move to first. These results suggest that the parietal cortex plays at best only a permissive role in some aspects of eye–hand coordination and makes the role of LIP in saccade generation unclear. PMID:29610356
Design as dream and self-representation: Philip Johnson and the Glass House of Atreus.
Tutter, Adele
2011-06-01
Philip Johnson's masterpiece--the Glass House--is compared to a dream and conceptualized as containing encrypted and embedded representations of the self. Freud's masterpiece--The Interpretation of Dreams--is the theoretical and methodological model for this approach to design-as-dream. Drawing on Johnson's words and forms set in biographical, historical, and cultural context, interpretive paths are traced from manifest design elements of the Glass House to overdetermined latent meanings, yielding new and surprising insights into the Glass House, its elusive architect, and the process of its design. A mirror that reflects an image, a lens that focuses it, and a prism that reveals its components, the Glass House turns a lucid eye onto its maker.
Feng, Guangxue; Yuan, Youyong; Fang, Hu; Zhang, Ruoyu; Xing, Bengang; Zhang, Guanxin; Zhang, Deqing; Liu, Bin
2015-08-11
We report the design and synthesis of a red fluorescent AIE light-up probe for selective recognition, naked-eye detection, and image-guided photodynamic killing of Gram-positive bacteria, including vancomycin-resistant Enterococcus strains.
Do Monkeys Think in Metaphors? Representations of Space and Time in Monkeys and Humans
ERIC Educational Resources Information Center
Merritt, Dustin J.; Casasanto, Daniel; Brannon, Elizabeth M.
2010-01-01
Research on the relationship between the representation of space and time has produced two contrasting proposals. ATOM posits that space and time are represented via a common magnitude system, suggesting a symmetrical relationship between space and time. According to metaphor theory, however, representations of time depend on representations of…
Reasoning strategies with rational numbers revealed by eye tracking.
Plummer, Patrick; DeWolf, Melissa; Bassok, Miriam; Gordon, Peter C; Holyoak, Keith J
2017-07-01
Recent research has begun to investigate the impact of different formats for rational numbers on the processes by which people make relational judgments about quantitative relations. DeWolf, Bassok, and Holyoak (Journal of Experimental Psychology: General, 144(1), 127-150, 2015) found that accuracy on a relation identification task was highest when fractions were presented with countable sets, whereas accuracy was relatively low for all conditions where decimals were presented. However, it is unclear what processing strategies underlie these disparities in accuracy. We report an experiment that used eye-tracking methods to externalize the strategies that are evoked by different types of rational numbers for different types of quantities (discrete vs. continuous). Results showed that eye-movement behavior during the task was jointly determined by image and number format. Discrete images elicited a counting strategy for both fractions and decimals, but this strategy led to higher accuracy only for fractions. Continuous images encouraged magnitude estimation and comparison, but to a greater degree for decimals than fractions. This strategy led to decreased accuracy for both number formats. By analyzing participants' eye movements when they viewed a relational context and made decisions, we were able to obtain an externalized representation of the strategic choices evoked by different ontological types of entities and different types of rational numbers. Our findings using eye-tracking measures enable us to go beyond previous studies based on accuracy data alone, demonstrating that quantitative properties of images and the different formats for rational numbers jointly influence strategies that generate eye-movement behavior.
Perception of direct vs. averted gaze in portrait paintings: An fMRI and eye-tracking study.
Kesner, Ladislav; Grygarová, Dominika; Fajnerová, Iveta; Lukavský, Jiří; Nekovářová, Tereza; Tintěra, Jaroslav; Zaytseva, Yuliya; Horáček, Jiří
2018-06-15
In this study, we use separate eye-tracking measurements and functional magnetic resonance imaging to investigate the neuronal and behavioral response to painted portraits with direct versus averted gaze. We further explored modulatory effects of several painting characteristics (premodern vs modern period, influence of style and pictorial context). In the fMRI experiment, we show that the direct versus averted gaze elicited increased activation in lingual and inferior occipital and the fusiform face area, as well as in several areas involved in attentional and social cognitive processes, especially the theory of mind: angular gyrus/temporo-parietal junction, inferior frontal gyrus and dorsolateral prefrontal cortex. The additional eye-tracking experiment showed that participants spent more time viewing the portrait's eyes and mouth when the portrait's gaze was directed towards the observer. These results suggest that static and, in some cases, highly stylized depictions of human beings in artistic portraits elicit brain activation commensurate with the experience of being observed by a watchful intelligent being. They thus involve observers in implicit inferences of the painted subject's mental states and emotions. We further confirm the substantial influence of representational medium on brain activity. Copyright © 2018 Elsevier Inc. All rights reserved.
Eye movements during listening reveal spontaneous grammatical processing.
Huette, Stephanie; Winter, Bodo; Matlock, Teenie; Ardell, David H; Spivey, Michael
2014-01-01
Recent research using eye-tracking typically relies on constrained, goal-oriented visual contexts in which participants view a small array of objects on a computer screen and perform some overt decision or identification. Eye-tracking paradigms that use pictures as a measure of word or sentence comprehension are sometimes criticized as ecologically invalid because pictures and explicit tasks are not always present during language comprehension. This study compared the comprehension of sentences with two different grammatical forms: the past progressive (e.g., was walking), which emphasizes the ongoing nature of actions, and the simple past (e.g., walked), which emphasizes the end-state of an action. The results showed that the distribution and timing of eye movements mirror the underlying conceptual structure of this linguistic difference in the absence of any visual stimuli or task constraint: fixations were shorter and saccades were more dispersed across the screen, as if participants were thinking about more dynamic events when listening to the past progressive stories. Thus, the eye movement data suggest that visual inputs or an explicit task are unnecessary to elicit analog representations of features such as movement, which could be a key perceptual component of grammatical comprehension.
Anticipatory eye movements and long-term memory in early infancy.
Wong-Kee-You, Audrey M B; Adler, Scott A
2016-11-01
Advances in our understanding of long-term memory in early infancy have been made possible by studies that have used Rovee-Collier's mobile conjugate reinforcement paradigm and its variants. One function that has been attributed to long-term memory is the formation of expectations (Rovee-Collier & Hayne, 1987); consequently, a long-term memory representation should be established during expectation formation. To examine this prediction and potentially open the door on a new paradigm for exploring infants' long-term memory, using the Visual Expectation Paradigm (Haith, Hazan, & Goodman, 1988), 3-month-old infants were trained to form an expectation for predictable color and spatial information of picture events and emit anticipatory eye movements to those events. One day later, infants' anticipatory eye movements decreased in number relative to the end of training when the predictable colors were changed but not when the spatial location of the predictable color events was changed. These findings confirm that information encoded during expectation formation is stored in long-term memory, as hypothesized by Rovee-Collier and colleagues. Further, this research suggests that eye movements are potentially viable measures of long-term memory in infancy, providing confirmatory evidence for early mnemonic processes. © 2016 Wiley Periodicals, Inc.
Simulation of Thin Film Equations on an Eye-Shaped Domain with Moving Boundary
NASA Astrophysics Data System (ADS)
Brosch, Joseph; Driscoll, Tobin; Braun, Richard
During a normal eye blink, the upper lid moves, and during the upstroke the lid paints a thin tear film over the exposed corneal and conjunctival surfaces. This thin tear film may be modeled by a nonlinear fourth-order PDE derived from lubrication theory. A major stumbling block in the numerical simulation of this model is to include both the geometry of the eye and the movement of the eyelid. Using a pair of orthogonal and conformal maps, we transform a computational box into a rough representation of a human eye, where we proceed to simulate the thin tear film equations. Although we give up some realism, we gain spectrally accurate numerical methods on the computational box. We have applied this method to the heat equation on the blinking domain with both Dirichlet and no-flux boundary conditions, in each case demonstrating at least 10 digits of accuracy. We are able to perform these simulations very quickly (generally in under a minute) using a desktop version of MATLAB. This project was supported by Grant 1022706 (R.J.B., T.A.D., J.K.B.) from the NSF.
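The grid-mapping idea can be illustrated with a much simpler analytic map than the ones used in this work: the sketch below pushes a rectangular computational grid through w = sin(z), which is conformal (angle-preserving), and reports the local grid stretching given by |dw/dz|. The map, grid extents, and resolution are stand-ins for illustration only, not the eye-shaped maps from the study.

```python
# Hedged sketch: carry an orthogonal grid on a computational rectangle to an
# orthogonal grid on a curved domain via the analytic map w = sin(z).
import numpy as np

x = np.linspace(-1.4, 1.4, 41)          # computational box coordinates
y = np.linspace(-0.6, 0.6, 21)
X, Y = np.meshgrid(x, y)
Z = X + 1j * Y                           # complex grid points z = x + iy

W = np.sin(Z)                            # image grid on the curved physical domain
stretch = np.abs(np.cos(Z))              # |dw/dz|: local grid-stretching factor

print("a box corner maps to", W[0, 0])
print(f"local stretching ranges from {stretch.min():.2f} to {stretch.max():.2f}")
```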
NASA Astrophysics Data System (ADS)
Volpe, Peter A.
This thesis presents analytical models, finite element models and experimental data to investigate the response of the human eye to loads that can be experienced when in a non-supine sleeping position. The hypothesis being investigated is that non-supine sleeping positions can lead to stress, strain and deformation of the eye as well as changes in intraocular pressure (IOP) that may exacerbate vision loss in individuals who have glaucoma. To investigate the quasi-static changes in stress and internal pressure, a Fluid-Structure Interaction simulation was performed on an axisymmetrical model of an eye. Common Aerospace Engineering methods for analyzing pressure vessels and hyperelastic structural walls are applied to developing a suitable model. The quasi-static pressure increase was used in an iterative code to analyze changes in IOP over time.
Clinical Verification of Image Warping as a Potential Aid for the Visually Handicapped
NASA Technical Reports Server (NTRS)
Loshin, David
1996-01-01
The bulk of this research was designed to determine the potential of the Programmable Remapper (PR) as a device to enhance vision for the visually handicapped. This research indicated that remapping would have potential as a low vision device if the eye position could be monitored with feedback to specify the proper location of the remapped image. This must be accomplished at a high rate so that there is no lag of the image behind the eye position. Since, at this time, there is no portable eye monitoring device (at a reasonable cost) that will operate under the required conditions, it would not be feasible to continue with remapping experiments for patients with central field defects. However, since patients with peripheral field defects do not have the same eye positioning requirements, they may indeed benefit from this technology. Further investigations must be performed to determine the plausibility of this application of remapping.
Pincus, A L; Ruiz, M A
1997-04-01
Research on the relations between parental representations, personality traits, and psychopathology was discussed with reference to their integration for clinical personality assessment. Empirical results linking parental representations assessed by the Structural Analysis of Social Behavior and the Five-Factor Model of personality traits in a young adult population supported the position that parental representations significantly relate to adult personality. Individuals whose parental representations were generally affiliative described themselves as less prone to emotional distress (lower neuroticism); more interpersonally oriented and experiencing of positive emotions (higher extraversion); more peaceable and trustworthy (higher agreeableness); and more dutiful, resourceful, and dependable (higher conscientiousness). Parental representations colored by autonomy granting and autonomy taking were related to higher levels of openness to experience but lower levels of conscientiousness and extraversion in self-descriptions. Assessment implications and an integrative assessment strategy were presented along with a clinical case example.
Do Adaptive Representations of the Item-Position Effect in APM Improve Model Fit? A Simulation Study
ERIC Educational Resources Information Center
Zeller, Florian; Krampen, Dorothea; Reiß, Siegbert; Schweizer, Karl
2017-01-01
The item-position effect describes how an item's position within a test, that is, the number of previous completed items, affects the response to this item. Previously, this effect was represented by constraints reflecting simple courses, for example, a linear increase. Due to the inflexibility of these representations our aim was to examine…
Emerging Object Representations in the Visual System Predict Reaction Times for Categorization
Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.
2015-01-01
Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently, it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time-varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
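As a hedged illustration of the distance-hypothesis analysis described above (not the authors' actual pipeline), the sketch below trains a linear classifier on synthetic stand-in activation patterns, takes each exemplar's decision-function value as a distance-like score from the category boundary, and correlates that score with hypothetical reaction times; all data, names, and parameters are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic stand-in for MEG activation patterns at the time of peak decodability:
# 100 exemplars x 50 sensors, two categories separated along all sensors.
X = rng.normal(size=(100, 50))
y = np.repeat([0, 1], 50)
X[y == 1] += 0.5

clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
dist = np.abs(clf.decision_function(X))      # distance-like score from the boundary

# Hypothetical reaction times: faster responses for exemplars far from the boundary.
rt = 600.0 - 40.0 * dist + rng.normal(scale=20.0, size=dist.size)

rho, p = spearmanr(dist, rt)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")  # constructed to yield a negative correlation
```

The synthetic reaction times are built to show the negative distance-RT relation the abstract reports; with real MEG data the correlation would of course be an empirical question.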
A Balanced Comparison of Object Invariances in Monkey IT Neurons.
Ratan Murty, N Apurva; Arun, Sripati P
2017-01-01
Our ability to recognize objects across variations in size, position, or rotation is based on invariant object representations in higher visual cortex. However, we know little about how these invariances are related. Are some invariances harder than others? Do some invariances arise faster than others? These comparisons can be made only upon equating image changes across transformations. Here, we targeted invariant neural representations in the monkey inferotemporal (IT) cortex using object images with balanced changes in size, position, and rotation. Across the recorded population, IT neurons generalized across size and position both more strongly and more quickly than across rotations in the image plane or in depth. We obtained a similar ordering of invariances in deep neural networks but not in low-level visual representations. Thus, invariant neural representations dynamically evolve in a temporal order reflective of their underlying computational complexity.
Gravity Influences the Visual Representation of Object Tilt in Parietal Cortex
Angelaki, Dora E.
2014-01-01
Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an “earth-vertical” direction. PMID:25339732
Romano, Paul E
2006-01-01
The HR (prism diopters [PD] per mm of corneal light reflection test [CLRT] asymmetry for strabometry) varies in humans from 14 to 24 PD/mm, but is totally unpredictable. Photo(grammetric) HR calibration in (of) each case facilitates acceptable strabometry precision and accuracy. Take 3 flash photos of the patient: first with the preferred eye and then with the deviating eye fixating straight ahead, and then again with the deviating eye fixating at (+/-5-10 PD) the strabismic angle on a metric rule (stick) one meter away from the camera lens (where 1 cm = 1 PD). On these 3 photos, make four precise measurements of the position of the CLR with reference to the limbus: in the deviating eye fixating straight ahead and fixating at the angle of deviation. Divide the mm difference in location into the change in the angle of fixation to determine the HR for this patient at this angle. Then determine the CLR position in both the deviating eye and the fixing eye in the straight-ahead primary position picture. Apply the calculated calibrated HR to the asymmetry of the CLRs in primary position to determine the true strabismic deviation. This imaging method ensures accurate Hirschberg CLRT strabometry in each case, determining the deviation in "free space", under conditions of normal binocular viewing, uncontaminated by the artifacts or inaccuracies of other conventional strabometric methods or devices. So performed, the Hirschberg CLRT is the gold standard of strabometry.
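A small worked example of the calibration arithmetic described above may help; the numbers are purely illustrative, not measurements from the paper.

```python
# Worked example of the photogrammetric Hirschberg-ratio calibration described
# above; all numbers are illustrative, not measurements from the paper.

fixation_shift_pd = 20.0   # known change in fixation angle on the metre rule (PD)
clr_shift_mm = 1.1         # measured shift of the corneal light reflex (mm)

hr = fixation_shift_pd / clr_shift_mm   # patient-specific Hirschberg ratio (PD/mm)

# Apply the calibrated HR to the CLR asymmetry measured in primary position.
clr_asymmetry_mm = 0.8                  # deviating vs. fixing eye, primary position
deviation_pd = hr * clr_asymmetry_mm

print(f"HR = {hr:.1f} PD/mm, deviation = {deviation_pd:.1f} PD")
```

With these illustrative values the calibrated HR is about 18 PD/mm, within the 14-24 PD/mm range the abstract quotes, and the estimated deviation is about 15 PD.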
Lev-Ari, Tidhar; Lustig, Avichai; Ketter-Katz, Hadas; Baydach, Yossi; Katzir, Gadi
2016-08-01
A chameleon (Chamaeleo chamaeleon) on a perch responds to a nearby threat by moving to the side of the perch opposite the threat, while bilaterally compressing its abdomen, thus minimizing its exposure to the threat. If the threat moves, the chameleon pivots around the perch to maintain its hidden position. How precise is the body rotation and what are the patterns of eye movement during avoidance? Just-hatched chameleons, placed on a vertical perch, on the side roughly opposite to a visual threat, adjusted their position to precisely opposite the threat. If the threat were moved on a horizontal arc at angular velocities of up to 85°/s, the chameleons co-rotated smoothly so that (1) the angle of the sagittal plane of the head relative to the threat and (2) the direction of monocular gaze, were positively and significantly correlated with threat angular position. Eye movements were role-dependent: the eye toward which the threat moved maintained a stable gaze on it, while the contralateral eye scanned the surroundings. This is the first description, to our knowledge, of such a response in a non-flying terrestrial vertebrate, and it is discussed in terms of possible underlying control systems.
Gerling, J; de Paz, H; Schroth, V; Bach, M; Kommerell, G
2000-06-01
The theory of the "Measuring and Correction Methods of H.-J. Haase" (MCH) states that a small misalignment of one eye, called fixation disparity, indicates a difficulty in overcoming a "vergence position of rest" that is different from the ortho position. According to the theory, this difficulty can cause asthenopic complaints, such as headaches, and these complaints can be relieved by prisms. The theory further claims that fixation disparity can be ascertained by a series of tests which depend on the subject's perception. The tests most decisive for the diagnosis of a so-called fixation disparity type 2 consist of stereo displays. The magnitude of the prism that allows the subject to see the test configurations in symmetry is thought to be the one that corrects the "vergence position of rest". Nine subjects with healthy eyes in whom a "fixation disparity type 2" had been diagnosed were selected for the study. Misalignment of the eyes was determined according to the principle of the unilateral cover test. Targets identical for both eyes were presented on the screen of the Polatest E. Then, the target was deleted for one eye and the ensuing position change of the other eye was measured using the search coil technique. This test was performed both with and without the MCH prism. In all 9 subjects the misalignment was less than 10 minutes of arc, i.e. in the range of normal fixation instability. Averaging across the 9 subjects, the deviation of the eye (misaligned according to MCH) was 0.79 +/- 3.45 minutes of arc in the direction opposed to that predicted by the MCH, a value not significantly different from zero. The MCH prism elicited a fusional vergence movement whose magnitude corresponded to the magnitude of the MCH prism. Ascertaining fixation disparity with the MCH is therefore unreliable. Accordingly, it appears dubious to correct a "vergence position of rest" on the basis of the MCH.
Benavente-Perez, Alexandra; Nour, Ann; Troilo, David
2012-09-21
We evaluated the effect of imposing negative and positive defocus simultaneously on the eye growth and refractive state of the common marmoset, a New World primate that compensates for both negative and positive defocus when they are imposed individually. Ten marmosets were reared with multizone contact lenses of alternating powers (-5 diopters [D]/+5 D), 50:50 ratio for an average pupil of 2.80 mm, over the right eye (experimental) and plano over the fellow eye (control) from 10 to 12 weeks. The effects on refraction (mean spherical equivalent [MSE]) and vitreous chamber depth (VC) were measured and compared to untreated marmosets and to -5 D and +5 D single vision contact lens-reared marmosets. Over the course of the treatment, pupil diameters ranged from 2.26 to 2.76 mm, leading to 1.5 times greater exposure to negative than positive power zones. Despite this, at different intervals during treatment, treated eyes were on average relatively more hyperopic and smaller than controls (experimental-control [exp-con] mean MSE ± SE +1.44 ± 0.45 D, mean VC ± SE -0.05 ± 0.02 mm) and the effects were similar to those in marmosets raised with +5 D single vision contact lenses (exp-con mean MSE ± SE +1.62 ± 0.44 D, mean VC ± SE -0.06 ± 0.03 mm). Six weeks into treatment, the interocular growth rates in multizone animals were already lower than in -5 D-treated animals (multizone -1.0 ± 0.1 μm/day, -5 D +2.1 ± 0.9 μm/day) and did not change significantly throughout treatment. Imposing hyperopic and myopic defocus simultaneously using concentric contact lenses resulted in relatively smaller and less myopic eyes, despite treated eyes being exposed to a greater percentage of negative defocus. Exposing the retina to combined dioptric powers with multifocal lenses that include positive defocus might be an effective treatment to control myopia development or progression.
Boccia, M; Piccardi, L; Palermo, L; Nemmi, F; Sulpizio, V; Galati, G; Guariglia, C
2014-09-05
Visual mental imagery is a process that draws on different cognitive abilities and is affected by the contents of mental images. Several studies have demonstrated that different brain areas subtend the mental imagery of navigational and non-navigational contents. Here, we set out to determine whether there are distinct representations for navigational and geographical images. Specifically, we used a Spatial Compatibility Task (SCT) to assess the mental representation of a familiar navigational space (the campus), a familiar geographical space (the map of Italy) and familiar objects (the clock). Twenty-one participants judged whether the vertical or the horizontal arrangement of items was correct. We found that distinct representational strategies were preferred to solve different categories on the SCT, namely, the horizontal perspective for the campus and the vertical perspective for the clock and the map of Italy. Furthermore, we found significant effects due to individual differences in the vividness of mental images and in preferences for verbal versus visual strategies, which selectively affect the contents of mental images. Our results suggest that imagining a familiar navigational space is somewhat different from imagining a familiar geographical space. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
The nature of face representations in subcortical regions.
Gabay, Shai; Burlingham, Charles; Behrmann, Marlene
2014-07-01
Studies examining the neural correlates of face perception in humans have focused almost exclusively on the distributed cortical network of face-selective regions. Recently, however, investigations have also identified subcortical correlates of face perception, and the question addressed here concerns the nature of these subcortical face representations. To explore this issue, we presented pairs of images to participants sequentially, to the same or to different eyes. Superior performance in the former over the latter condition implicates monocular, prestriate portions of the visual system. Over a series of five experiments, we manipulated both lower-level (size, location) and higher-level (identity) similarity across the pair of faces. A monocular advantage was observed even when the faces in a pair differed in location and in size, implicating some subcortical invariance across lower-level image properties. A monocular advantage was also observed when the faces in a pair were two different images of the same individual, indicating the engagement of subcortical representations in more abstract, higher-level aspects of face processing. We conclude that subcortical structures of the visual system are involved, perhaps interactively, in multiple aspects of face perception, and not simply in deriving initial coarse representations. Copyright © 2014 Elsevier Ltd. All rights reserved.
Tactile Toe Agnosia and Percept of a "Missing Toe" in Healthy Humans.
Cicmil, Nela; Meyer, Achim P; Stein, John F
2016-03-01
A disturbance of body representation is central to many neurological and psychiatric conditions, but the mechanisms by which body representations are constructed by the brain are not fully understood. We demonstrate a directional disturbance in tactile identification of the toes in healthy humans. Nineteen young adult participants underwent tactile stimulation of the digits with the eyes closed and verbally reported the identity of the stimulated digit. In the majority of individuals, responses to the second and third toes were significantly biased toward the laterally neighboring digit. The directional bias was greater for the nondominant foot and was affected by the identity of the immediately preceding stimulated toe. Unexpectedly, 9/19 participants reported the subjective experience of a "missing toe" or "missing space" during the protocol. These findings challenge current models of somatosensory localization, as they cannot be explained simply by a lack of distinct representations for toes compared with fingers, or by overt toe-finger correspondences. We present a novel theory of equal spatial representations of digit width combined with a "preceding neighbor" effect to explain the observed phenomena. The diagnostic implications for neurological disorders that involve "digit agnosia" are discussed. © The Author(s) 2015.
Do we have an internal model of the outside world?
Land, Michael F.
2014-01-01
Our phenomenal world remains stationary in spite of movements of the eyes, head and body. In addition, we can point or turn to objects in the surroundings whether or not they are in the field of view. In this review, I argue that these two features of experience and behaviour are related. The ability to interact with objects we cannot see implies an internal memory model of the surroundings, available to the motor system. And, because we maintain this ability when we move around, the model must be updated, so that the locations of object memories change continuously to provide accurate directional information. The model thus contains an internal representation of both the surroundings and the motions of the head and body: in other words, a stable representation of space. Recent functional MRI studies have provided strong evidence that this egocentric representation has a location in the precuneus, on the medial surface of the superior parietal cortex. This is a region previously identified with ‘self-centred mental imagery’, so it seems likely that the stable egocentric representation, required by the motor system, is also the source of our conscious percept of a stable world. PMID:24395972
Fesharaki, Maryam; Karagiannis, Peter; Tweed, Douglas; Sharpe, James A.; Wong, Agnes M. F.
2016-01-01
Purpose: Skew deviation is a vertical strabismus caused by damage to the otolithic–ocular reflex pathway and is associated with abnormal ocular torsion. This study was conducted to determine whether patients with skew deviation show the normal pattern of three-dimensional eye control called Listing’s law, which specifies the eye’s torsional angle as a function of its horizontal and vertical position. Methods: Ten patients with skew deviation caused by brain stem or cerebellar lesions and nine normal control subjects were studied. Patients with diplopia and neurologic symptoms less than 1 month in duration were designated as acute (n = 4) and those with longer duration were classified as chronic (n = 10). Serial recordings were made in the four patients with acute skew deviation. With the head immobile, subjects made saccades to a target that moved between straight ahead and eight eccentric positions, while wearing search coils. At each target position, fixation was maintained for 3 seconds before the next saccade. From the eye position data, the plane of best fit, referred to as Listing’s plane, was fitted. Violations of Listing’s law were quantified by computing the “thickness” of this plane, defined as the SD of the distances to the plane from the data points. Results: Both the hypertropic and hypotropic eyes in patients with acute skew deviation violated Listing’s and Donders’ laws—that is, the eyes did not show one consistent angle of torsion in any given gaze direction, but rather an abnormally wide range of torsional angles. In contrast, each eye in patients with chronic skew deviation obeyed the laws. However, in chronic skew deviation, Listing’s planes in both eyes had abnormal orientations. Conclusions: Patients with acute skew deviation violated Listing’s law, whereas those with chronic skew deviation obeyed it, indicating that despite brain lesions, neural adaptation can restore Listing’s law so that the neural linkage between horizontal, vertical, and torsional eye position remains intact. Violation of Listing’s and Donders’ laws during fixation arises primarily from torsional drifts, indicating that patients with acute skew deviation have unstable torsional gaze holding that is independent of their horizontal–vertical eye positions. PMID:18172094
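A minimal sketch of how a best-fit Listing's plane and its "thickness" can be computed from 3D eye-position (rotation-vector) samples, assuming a standard SVD plane fit; the function name and the synthetic data are illustrative, not the authors' code.

```python
import numpy as np

def listing_plane_thickness(rotation_vectors):
    """Fit the best plane through the centroid of 3D eye rotation vectors and
    return the plane normal together with the 'thickness', i.e. the standard
    deviation of the signed distances of the data points from that plane."""
    pts = np.asarray(rotation_vectors, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    distances = centered @ normal       # signed point-to-plane distances
    return normal, distances.std(ddof=1)

# Illustrative data: torsion near zero plus noise, as for an eye obeying Listing's law.
rng = np.random.default_rng(1)
torsion = rng.normal(scale=0.2, size=500)            # deg
vertical = rng.uniform(-20.0, 20.0, size=500)        # deg
horizontal = rng.uniform(-20.0, 20.0, size=500)      # deg
samples = np.column_stack([torsion, vertical, horizontal])
normal, thickness = listing_plane_thickness(samples)
print(f"plane thickness ~ {thickness:.2f} deg")
```

On data like the abstract describes, a wide scatter of torsional angles at fixed gaze directions would show up directly as a larger thickness value.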
Pavilack, M A; Brod, R D
2001-02-01
To determine the site of focal illumination on the retina of phakic human cadaver eyes from an operating microscope positioned for temporal approach eye surgery. Experimental study. A Zeiss OPMI-6SFR operating microscope (Zeiss Humphrey Systems, Dublin, CA) was positioned over two phakic human cadaver eyes to measure the site of the focal illumination on the retina by directly observing the illumination on the posterior scleral surface of the globe. External localization of the foveola was made by direct observation using scleral indentation and indirect ophthalmoscopy. Various combinations of microscope angulation and field of view were analyzed. Distance of focal illumination from the operating room microscope relative to the foveola was measured. The diameter of the "hot spot" of focal illumination on the retina was 4.0 mm. With the eye positioned straight ahead and the level operating room microscope positioned for temporal approach eye surgery, the center of retinal illumination was 0.9 and 1.4 mm nasal relative to the foveola when the microscope field of view was centered over the cornea and temporal limbus, respectively. With the microscope angled 5, 10, 15, and 20 degrees temporally (oculars tilted toward surgeon), the center of the illumination was displaced nasal to the foveola by 1.1, 1.5, 3.8, and 5.1 mm, respectively, when the field of view was centered over the cornea and 1.5, 2.6, 4.7, and 6.0 mm, respectively, nasal to the foveola when centered over the temporal limbus. Retinal illumination from an operating microscope positioned for temporal approach eye surgery has the potential for light-induced injury to the fovea. Angulation of the operating microscope by up to 10 degrees temporally when the microscope field of view was centered over the cornea and up to 5 degrees temporally when centered over the temporal limbus was not adequate to displace the focal illumination off the foveola when the eye was in the straight-ahead position. Tilting the operating microscope 15 degrees or more temporally when centered on the pupil and 10 degrees or more when centered over the temporal limbus should safely displace the retinal light exposure away from the fovea during temporal approach surgery. Suggestions for reducing the risk of iatrogenic phototoxicity are reviewed.
ERIC Educational Resources Information Center
Griffin, Robert E., Ed.; And Others
This document contains 47 selected papers from the 1995 International Visual Literacy Association (IVLA) conference. Topics include: the cultural significance of tombstone iconography; the predicted impact of multimedia on education and entertainment; the effects of digital imaging on the art of photography; visual representation of the structure…
Temporal Dynamics of Ocular Position Dependence of the Initial Human Vestibulo-ocular Reflex
Crane, Benjamin T.; Tian, Junru; Demer, Joseph L.
2007-01-01
Purpose: While an ideal vestibulo-ocular reflex (VOR) generates ocular rotations compensatory for head motion, during visually guided movements, Listing’s Law (LL) constrains the eye to rotational axes lying in Listing’s Plane (LP). The present study was conducted to explore the recent proposal that the VOR’s rotational axis is not collinear with the head’s, but rather follows a time-dependent strategy intermediate between LL and an ideal VOR. Methods: Binocular LPs were defined during visual fixation in eight normal humans. The VOR was evoked by a highly repeatable transient whole-body yaw rotation in darkness at a peak acceleration of 2800 deg/s2. Immediately before rotation, subjects regarded targets 15 or 500 cm distant located at eye level, 20° up, or 20° down. Eye and head responses were compared with LL predictions in the position and velocity domains. Results: LP orientation varied both among subjects and between individual subject’s eyes, and rotated temporally with convergence by 5 ± 5° (±SEM). In the position domain, the eye compensated for head displacement even when the head rotated out of LP. Even within the first 20 ms from onset of head rotation, the ocular velocity axis tilted relative to the head axis by 30% ± 8% of vertical gaze position. Saccades increased this tilt. Regardless of vertical gaze position, the ocular rotation axis tilted backward 4° farther in abduction than in adduction. There was also a binocular vertical eye velocity transient and lateral tilt of the ocular axis. Conclusions: These disconjugate, short-latency axis perturbations appear intrinsic to the VOR and may have neural or mechanical origins. PMID:16565376
Hyperfocusing in Schizophrenia: Evidence from Interactions Between Working Memory and Eye Movements
Luck, Steven J.; McClenon, Clara; Beck, Valerie M.; Hollingworth, Andrew; Leonard, Carly J.; Hahn, Britta; Robinson, Benjamin M.; Gold, James M.
2014-01-01
Recent research suggests that processing resources are focused more narrowly but more intensely in people with schizophrenia (PSZ) than in healthy control subjects (HCS), possibly reflecting local cortical circuit abnormalities. This hyperfocusing hypothesis leads to the counterintuitive prediction that, although PSZ cannot store as much information in working memory as HCS, the working memory representations that are present in PSZ may be more intense than those in HCS. To test this hypothesis, we used a task in which participants make a saccadic eye movement to a peripheral target and avoid a parafoveal nontarget while they are holding a color in working memory. Previous research with this task has shown that the parafoveal nontarget is more distracting when it matches the color being held in working memory. This effect should be enhanced in PSZ if their working memory representations are more intense. Consistent with this prediction, we found that the effect of a match between the distractor color and the memory color was larger in PSZ than in HCS. We also observed evidence that PSZ hyperfocused spatially on the region surrounding the fixation point. These results provide further evidence that some aspects of cognitive dysfunction in schizophrenia may be a result of a narrower and more intense focusing of processing resources. PMID:25089655
Facial expressions of emotion are not culturally universal.
Jack, Rachael E; Garrod, Oliver G B; Yu, Hui; Caldara, Roberto; Schyns, Philippe G
2012-05-08
Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.
Facial expressions of emotion are not culturally universal
Jack, Rachael E.; Garrod, Oliver G. B.; Yu, Hui; Caldara, Roberto; Schyns, Philippe G.
2012-01-01
Since Darwin’s seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843–850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind’s eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature–nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars. PMID:22509011
Danckwardt, Joachim F
2007-06-01
Using different psychoanalytic points of view, in this comparative study of "Traumnovelle" by Schnitzler and "Eyes Wide Shut" by Kubrick the author analyses the cultural changes between the first and last thirds of the 20th century. This change consists in the way "facts of life" are dealt with. It is a change from identity through insight and understanding to an identity through excited self-objectification. This change proceeds along the lines of "I think therefore I am" to "I feel therefore I am" arriving at "I am excited, therefore I am noticed and thus I am". In the description and illustration of 48 hours in the life of a married couple, this transformation from thinking to feeling and sensing is made tangible. After 9 years of being married, the couple faces the end of their passionate love. They struggle with the primordial anxiety in love life: the traumatic loss of faith in one's capacity to love. This transformation is accompanied by a change in media that symbolizes the couple's experience: from the language of dreaming, reading and listening in Schnitzler to the representation in audiovisual media, i.e. visual art, theatre, movies and public events in Kubrick. It marks a change in the representation of psychic life in space and time.
Bui Quoc, Emmanuel; Ribot, Jérôme; Quenech’Du, Nicole; Doutremer, Suzette; Lebas, Nicolas; Grantyn, Alexej; Aushana, Yonane; Milleret, Chantal
2011-01-01
In the mammalian primary visual cortex, the corpus callosum contributes to the unification of the visual hemifields that project to the two hemispheres. Its development depends on visual experience. When this is abnormal, callosal connections must undergo dramatic anatomical and physiological changes. However, data concerning these changes are sparse and incomplete. Thus, little is known about the impact of abnormal postnatal visual experience on the development of callosal connections and their role in unifying representation of the two hemifields. Here, the effects of early unilateral convergent strabismus (a model of abnormal visual experience) were fully characterized with respect to the development of the callosal connections in cat visual cortex, an experimental model for humans. Electrophysiological responses and 3D reconstruction of single callosal axons show that abnormally asymmetrical callosal connections develop after unilateral convergent strabismus, resulting from an extension of axonal branches of specific orders in the hemisphere ipsilateral to the deviated eye and a decreased number of nodes and terminals in the other (ipsilateral to the non-deviated eye). Furthermore this asymmetrical organization prevents the establishment of a unifying representation of the two visual hemifields. As a general rule, we suggest that crossed and uncrossed retino-geniculo-cortical pathways contribute successively to the development of the callosal maps in visual cortex. PMID:22275883
Eyes On the Ground: Path Forward Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brost, Randolph; Little, Charles Q.; peter-stein, natacha
A previous report assesses our progress to date on the Eyes On the Ground project, and reviews lessons learned [1]. In this report, we address the implications of those lessons in defining the most productive path forward for the remainder of the project. We propose two main concepts: Interactive Diagnosis and Model-Driven Assistance. Among these, the Model-Driven Assistance concept appears the most promising. The Model-Driven Assistance concept is based on an approximate but useful model of a facility, which provides a unified representation for storing, viewing, and analyzing data that is known about the facility. This representation provides value to both inspectors and IAEA headquarters, and facilitates communication between the two. The concept further includes a lightweight, portable field tool to aid the inspector in executing a variety of inspection tasks, including capture of images and 3-d scan data. We develop a detailed description of this concept, including its system components, functionality, and example use cases. The envisioned tool would provide value by reducing inspector cognitive load, streamlining inspection tasks, and facilitating communication between the inspector and teams at IAEA headquarters. We conclude by enumerating the top implementation priorities to pursue in the remaining limited time of the project. Approved for public release; further dissemination unlimited.
O'Neil, Edward B; Watson, Hilary C; Dhillon, Sonya; Lobaugh, Nancy J; Lee, Andy C H
2015-09-01
Recent work has demonstrated that the perirhinal cortex (PRC) supports conjunctive object representations that aid object recognition memory following visual object interference. It is unclear, however, how these representations interact with other brain regions implicated in mnemonic retrieval and how congruent and incongruent interference influences the processing of targets and foils during object recognition. To address this, multivariate partial least squares was applied to fMRI data acquired during an interference match-to-sample task, in which participants made object or scene recognition judgments after object or scene interference. This revealed a pattern of activity sensitive to object recognition following congruent (i.e., object) interference that included PRC, prefrontal, and parietal regions. Moreover, functional connectivity analysis revealed a common pattern of PRC connectivity across interference and recognition conditions. Examination of eye movements during the same task in a separate study revealed that participants gazed more at targets than foils during correct object recognition decisions, regardless of interference congruency. By contrast, participants viewed foils more than targets for incorrect object memory judgments, but only after congruent interference. Our findings suggest that congruent interference makes object foils appear familiar and that a network of regions, including PRC, is recruited to overcome the effects of interference.
Hyperfocusing in schizophrenia: Evidence from interactions between working memory and eye movements.
Luck, Steven J; McClenon, Clara; Beck, Valerie M; Hollingworth, Andrew; Leonard, Carly J; Hahn, Britta; Robinson, Benjamin M; Gold, James M
2014-11-01
Recent research suggests that processing resources are focused more narrowly but more intensely in people with schizophrenia (PSZ) than in healthy control subjects (HCS), possibly reflecting local cortical circuit abnormalities. This hyperfocusing hypothesis leads to the counterintuitive prediction that, although PSZ cannot store as much information in working memory as HCS, the working memory representations that are present in PSZ may be more intense than those in HCS. To test this hypothesis, we used a task in which participants make a saccadic eye movement to a peripheral target and avoid a parafoveal nontarget while they are holding a color in working memory. Previous research with this task has shown that the parafoveal nontarget is more distracting when it matches the color being held in working memory. This effect should be enhanced in PSZ if their working memory representations are more intense. Consistent with this prediction, we found that the effect of a match between the distractor color and the memory color was larger in PSZ than in HCS. We also observed evidence that PSZ hyperfocused spatially on the region surrounding the fixation point. These results provide further evidence that some aspects of cognitive dysfunction in schizophrenia may be a result of a narrower and more intense focusing of processing resources.
Relationship between relative lens position and appositional closure in eyes with narrow angles.
Otori, Yasumasa; Tomita, Yuki; Hamamoto, Ayumi; Fukui, Kanae; Usui, Shinichi; Tatebayashi, Misako
2011-03-01
To investigate the relationship between relative lens position (RLP) and appositional closure in eyes with narrow angles. Ultrasound biomicroscopy (UBM) was used to measure anterior chamber depth (ACD) and lens thickness (LT), and the IOLMaster to measure axial length (AL). The number of quadrants with appositional closure was assessed by UBM under dark conditions. The RLP was calculated as RLP = 10 × (ACD + 0.5 × LT) / AL. This study comprised 30 consecutive patients (30 eyes) with narrow angles, defined as Shaffer grade 2 or lower and without peripheral anterior synechiae (24 women, 6 men; mean age ± SD, 67.3 ± 10.4 years; range, 42-87 years). Under dark conditions, 66.7% of the eyes with narrow angles showed appositional closure in at least one quadrant. Of the various ocular biometric parameters, only the RLP significantly decreased with appositional closure in at least one quadrant (P = 0.005). A decrease in the RLP can be predictive of appositional closure for narrow-angle eyes under dark conditions.
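The RLP formula quoted above translates directly into code; the biometry values below are illustrative, not study data.

```python
def relative_lens_position(acd_mm, lt_mm, al_mm):
    """Relative lens position as defined in the abstract:
    RLP = 10 * (ACD + 0.5 * LT) / AL."""
    return 10.0 * (acd_mm + 0.5 * lt_mm) / al_mm

# Illustrative biometry for a short eye with a shallow anterior chamber (not study data).
print(relative_lens_position(acd_mm=2.4, lt_mm=4.9, al_mm=22.3))   # ~2.17
```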
Crystalline lens dislocation secondary to bacterial endogenous endophthalmitis.
Sangave, Amit; Komati, Rahul; Weinmann, Allison; Samuel, Linoj; Desai, Uday
2017-09-01
To present an unusual case of endogenous endophthalmitis secondary to Group A streptococcus (GAS) that resulted in dislocation of the crystalline lens. An immunocompetent 51-year-old man presented to the emergency room (ER) with upper respiratory infection (URI) symptoms and a painful right eye. He was diagnosed with URI and viral conjunctivitis and discharged on oral azithromycin and polytrim eyedrops. He returned to the ER 30 h later with sepsis and findings consistent with endophthalmitis, including light perception only vision. Ophthalmology was consulted at this time and an emergent vitreous tap and injection was performed. Both blood and vitreous cultures grew an atypical non-hemolytic variant of GAS (Streptococcus pyogenes). The primary source of infection was presumed to be secondary to pharyngitis or cutaneous dissemination. Final vision in the affected eye was no light perception, likely from a combination of anterior segment scarring, posterior segment damage, and hypotony. Interestingly, head computed tomography (CT) at the initial ER presentation showed normal lens position, but repeat CT at re-presentation revealed posterior dislocation of the lens. Endophthalmitis secondary to GAS has been sparsely reported in the literature, and this case highlights a unique clinical presentation. We suspect that this atypical non-hemolytic strain may have evaded detection on initial pharyngeal cultures. Additionally, we hypothesize that GAS-mediated protease release resulted in breakdown of the zonular fibers and subsequent lens dislocation. Ophthalmologists should be aware of GAS and its devastating intraocular manifestations.
Non-Orthogonal Corneal Astigmatism among Normal and Keratoconic Brazilian and Chinese populations.
Abass, Ahmed; Clamp, John; Bao, FangJun; Ambrósio, Renato; Elsheikh, Ahmed
2018-06-01
To investigate the prevalence of non-orthogonal astigmatism among normal and keratoconic Brazilian and Chinese populations. Topography data were obtained using the Pentacam High Resolution (HR) system® from 458 Brazilian (aged 35.6 ± 15.8 years) and 505 Chinese (aged 31.6 ± 10.8 years) eyes with no history of keratoconus or refractive surgery, and 314 Brazilian (aged 24.2 ± 5.7 years) and 74 Chinese (aged 22.0 ± 5.5 years) keratoconic eyes. Orthogonal values of optical flat and steep powers were determined by finding the angular positions of two perpendicular meridians that gave the maximum difference in power. Additionally, the angular positions of the meridians with the minimum and maximum optical powers were located while being unrestricted by the usual orthogonality assumption. Eyes were determined to have non-orthogonal astigmatism if the angle between the two meridians with maximum and minimum optical power deviated by more than 5° from 90°. Evidence of non-orthogonal astigmatism was found in 39% of the Brazilian keratoconic eyes, 26% of the Chinese keratoconic eyes, 29% of the Brazilian normal eyes and 20% of the Chinese normal eyes. The large percentage of participants with non-orthogonal astigmatism in both normal and keratoconic eyes illustrates the need for the common orthogonality assumption to be reviewed when correcting for astigmatism. The prevalence of non-orthogonality should be considered by expanding the prescription system to consider the two power meridians and their independent positions.
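The orthogonality check described above can be sketched as follows, assuming corneal power has already been sampled as a function of meridian; the helper name, sampling grid, and power profile are illustrative, not the Pentacam analysis used in the study.

```python
import numpy as np

def is_non_orthogonal(meridian_deg, power_d, tolerance_deg=5.0):
    """Locate the steepest and flattest corneal meridians independently and flag
    the eye as non-orthogonal if the angle between them deviates from 90 deg by
    more than the tolerance (5 deg in the abstract)."""
    meridian_deg = np.asarray(meridian_deg, dtype=float)
    power_d = np.asarray(power_d, dtype=float)
    steep = meridian_deg[np.argmax(power_d)]
    flat = meridian_deg[np.argmin(power_d)]
    angle = abs(steep - flat)
    angle = min(angle, 180.0 - angle)       # meridians wrap around at 180 deg
    return abs(angle - 90.0) > tolerance_deg

# Illustrative power profile sampled every 5 deg (not Pentacam data); the extra
# higher-frequency term skews the flattest meridian away from 90 deg off the steepest.
meridians = np.arange(0, 180, 5)
powers = (43.0
          + 1.2 * np.cos(np.deg2rad(2 * (meridians - 95)))
          + 0.6 * np.cos(np.deg2rad(4 * (meridians - 20))))
print(is_non_orthogonal(meridians, powers))   # expected: True for this skewed profile
```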
Air Bubble-Induced High Intraocular Pressure After Descemet Membrane Endothelial Keratoplasty.
Röck, Daniel; Bartz-Schmidt, Karl Ulrich; Röck, Tobias; Yoeruek, Efdal
2016-08-01
To investigate the incidence and risk factors of pupillary block caused by an air bubble in the anterior chamber in the early postoperative period after Descemet membrane endothelial keratoplasty (DMEK). A retrospective review was conducted in 306 eyes that underwent DMEK from September 2009 through October 2014 at the Tübingen Eye Hospital. Intraocular pressure (IOP) elevation was defined as a spike above 30 mm Hg. In the first 190 eyes, an intraoperative peripheral iridectomy was performed at the 12-o'clock position and in the other 116 eyes at the 6-o'clock position. If possible, reasons for IOP elevation were identified. For all eyes, preoperative and postoperative slit-lamp examinations and IOP measurements were performed. Overall, 30 eyes (9.8%) showed a postoperative IOP elevation within the first postoperative day. The incidence of IOP elevation was 13.9% (5/36) in the triple DMEK group, and 2 of 5 phakic eyes (40%) developed an air bubble-induced IOP elevation. All eyes presented with a de novo IOP elevation, associated in 25 patients with pupillary block from air anterior to iris and in 5 patients with angle closure from air migration posterior to the iris. All of them had an iridectomy at the 12-o'clock position. A postoperative pupillary block with IOP elevation caused by the residual intraoperative air bubble may be an important complication that could be avoided by close and frequent observations, especially in the first postoperative hours and by an inferior peripheral iridectomy and an air bubble with a volume of ≤80% of the anterior chamber.
Samadani, Uzma; Ritlop, Robert; Reyes, Marleen; Nehrbass, Elena; Li, Meng; Lamm, Elizabeth; Schneider, Julia; Shimunov, David; Sava, Maria; Kolecki, Radek; Burris, Paige; Altomare, Lindsey; Mehmood, Talha; Smith, Theodore; Huang, Jason H; McStay, Christopher; Todd, S Rob; Qian, Meng; Kondziolka, Douglas; Wall, Stephen; Huang, Paul
2015-04-15
Disconjugate eye movements have been associated with traumatic brain injury since ancient times. Ocular motility dysfunction may be present in up to 90% of patients with concussion or blast injury. We developed an algorithm for eye tracking in which the Cartesian coordinates of the right and left pupils are tracked over 200 sec and compared to each other as a subject watches a short film clip moving inside an aperture on a computer screen. We prospectively eye tracked 64 normal healthy noninjured control subjects and compared findings to 75 trauma subjects with either a positive head computed tomography (CT) scan (n=13), negative head CT (n=39), or nonhead injury (n=23) to determine whether eye tracking would reveal the disconjugate gaze associated with both structural brain injury and concussion. Tracking metrics were then correlated to the clinical concussion measure Sport Concussion Assessment Tool 3 (SCAT3) in trauma patients. Five out of five measures of horizontal disconjugacy were increased in positive and negative head CT patients relative to noninjured control subjects. Only one of five vertical disconjugacy measures was significantly increased in brain-injured patients relative to controls. Linear regression analysis of all 75 trauma patients demonstrated that three metrics for horizontal disconjugacy negatively correlated with SCAT3 symptom severity score and positively correlated with total Standardized Assessment of Concussion score. Abnormal eye-tracking metrics improved over time toward baseline in brain-injured subjects observed in follow-up. Eye tracking may help quantify the severity of ocular motility disruption associated with concussion and structural brain injury.
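The abstract names five horizontal and five vertical disconjugacy measures without defining them, so the sketch below shows only one plausible metric of that kind (the variance of the left-right difference in horizontal pupil position over the recording), computed on synthetic tracking data; the function and data are assumptions, not the authors' algorithm.

```python
import numpy as np

def horizontal_disconjugacy(left_x, right_x):
    """One illustrative disconjugacy metric: the variance of the difference between
    the horizontal positions of the two pupils over the recording. For perfectly
    conjugate eyes this difference stays nearly constant, so the variance is small."""
    diff = np.asarray(left_x, dtype=float) - np.asarray(right_x, dtype=float)
    return np.var(diff, ddof=1)

# Synthetic ~200 s recording at 30 Hz: one conjugate and one disconjugate example.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 200.0, 6000)
stimulus = 5.0 * np.sin(2 * np.pi * t / 10.0)          # target motion, arbitrary units
left = stimulus + rng.normal(scale=0.1, size=t.size)
right_conjugate = stimulus + rng.normal(scale=0.1, size=t.size)
right_disconjugate = 0.7 * stimulus + rng.normal(scale=0.1, size=t.size)  # one eye lags

print(horizontal_disconjugacy(left, right_conjugate))     # small
print(horizontal_disconjugacy(left, right_disconjugate))  # larger
```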
Ritlop, Robert; Reyes, Marleen; Nehrbass, Elena; Li, Meng; Lamm, Elizabeth; Schneider, Julia; Shimunov, David; Sava, Maria; Kolecki, Radek; Burris, Paige; Altomare, Lindsey; Mehmood, Talha; Smith, Theodore; Huang, Jason H.; McStay, Christopher; Todd, S. Rob; Qian, Meng; Kondziolka, Douglas; Wall, Stephen; Huang, Paul
2015-01-01
Disconjugate eye movements have been associated with traumatic brain injury since ancient times. Ocular motility dysfunction may be present in up to 90% of patients with concussion or blast injury. We developed an algorithm for eye tracking in which the Cartesian coordinates of the right and left pupils are tracked over 200 sec and compared to each other as a subject watches a short film clip moving inside an aperture on a computer screen. We prospectively eye tracked 64 normal healthy noninjured control subjects and compared findings to 75 trauma subjects with either a positive head computed tomography (CT) scan (n=13), negative head CT (n=39), or nonhead injury (n=23) to determine whether eye tracking would reveal the disconjugate gaze associated with both structural brain injury and concussion. Tracking metrics were then correlated to the clinical concussion measure Sport Concussion Assessment Tool 3 (SCAT3) in trauma patients. Five out of five measures of horizontal disconjugacy were increased in positive and negative head CT patients relative to noninjured control subjects. Only one of five vertical disconjugacy measures was significantly increased in brain-injured patients relative to controls. Linear regression analysis of all 75 trauma patients demonstrated that three metrics for horizontal disconjugacy negatively correlated with SCAT3 symptom severity score and positively correlated with total Standardized Assessment of Concussion score. Abnormal eye-tracking metrics improved over time toward baseline in brain-injured subjects observed in follow-up. Eye tracking may help quantify the severity of ocular motility disruption associated with concussion and structural brain injury. PMID:25582436
Separate visual representations for perception and for visually guided behavior
NASA Technical Reports Server (NTRS)
Bridgeman, Bruce
1989-01-01
Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This experiment also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory, and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.
Helo, Andrea; van Ommen, Sandrien; Pannasch, Sebastian; Danteny-Dordoigne, Lucile; Rämä, Pia
2017-11-01
Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while exploring everyday scenes which either contained an inconsistent (e.g., soap on a breakfast table) or consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance and clutter of the scene affected looking times in the toddler group during the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, while toddlers with higher vocabulary skills looked equally long at both consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers. Copyright © 2017 Elsevier Inc. All rights reserved.
Cataract, ocular surgery, aphakia, and the chromatic expression of the painter Jovan Bijelić.
Nikolić, Ljubiša; Jovanović, Vesna
2016-11-01
Approaching art from the standpoint of optics and the artist’s eye pathology can sometimes explain the shift of the spectral colors in the work of some artists with cataract and aphakia. This may not be obvious in the paintings of other artists with the same eye pathology. The aim of this study was to create a timeline from the recently obtained details of the cataract surgery, his best corrected aphakic visual acuity, and the last paintings of the artist Jovan Bijelić. The research included primary and secondary source material: Bijelić’s paintings from all stages of his career, interviews with Bijelić and his eye surgeon, art criticism, sources with the description of Bijelić’s symptoms, hospital archives, discussion with art historians, comparison of his palette from different periods. Jovan Bijelić was nearly blind from cataract in 1957. He underwent an unsuccessful cataract surgery in 1956, followed by enucleation of the operated eye. In 1958, 20/25–20/20 vision was regained, after the extracapsular cataract extraction and sector iridectomy in his right eye, with the posterior lens capsule discision afterwards. Xanthopsia and cyanopsia are not present in his art, which is not a representation of visualized objects. The response of Jovan Bijelić to cataract and aphakia was predominantly a change of his style.
Omaki, Akira; Lau, Ellen F.; Davidson White, Imogen; Dakan, Myles L.; Apple, Aaron; Phillips, Colin
2015-01-01
Much work has demonstrated that speakers of verb-final languages are able to construct rich syntactic representations in advance of verb information. This may reflect general architectural properties of the language processor, or it may only reflect a language-specific adaptation to the demands of verb-finality. The present study addresses this issue by examining whether speakers of a verb-medial language (English) wait to consult verb transitivity information before constructing filler-gap dependencies, where internal arguments are fronted and hence precede the verb. This configuration makes it possible to investigate whether the parser actively makes representational commitments on the gap position before verb transitivity information becomes available. A key prediction of the view that rich pre-verbal structure building is a general architectural property is that speakers of verb-medial languages should predictively construct dependencies in advance of verb transitivity information, and therefore that disruption should be observed when the verb has intransitive subcategorization frames that are incompatible with the predicted structure. In three reading experiments (self-paced and eye-tracking) that manipulated verb transitivity, we found evidence for reading disruption when the verb was intransitive, although no such reading difficulty was observed when the critical verb was embedded inside a syntactic island structure, which blocks filler-gap dependency completion. These results are consistent with the hypothesis that in English, as in verb-final languages, information from preverbal noun phrases is sufficient to trigger active dependency completion without having access to verb transitivity information. PMID:25914658
Omaki, Akira; Lau, Ellen F; Davidson White, Imogen; Dakan, Myles L; Apple, Aaron; Phillips, Colin
2015-01-01
Much work has demonstrated that speakers of verb-final languages are able to construct rich syntactic representations in advance of verb information. This may reflect general architectural properties of the language processor, or it may only reflect a language-specific adaptation to the demands of verb-finality. The present study addresses this issue by examining whether speakers of a verb-medial language (English) wait to consult verb transitivity information before constructing filler-gap dependencies, where internal arguments are fronted and hence precede the verb. This configuration makes it possible to investigate whether the parser actively makes representational commitments on the gap position before verb transitivity information becomes available. A key prediction of the view that rich pre-verbal structure building is a general architectural property is that speakers of verb-medial languages should predictively construct dependencies in advance of verb transitivity information, and therefore that disruption should be observed when the verb has intransitive subcategorization frames that are incompatible with the predicted structure. In three reading experiments (self-paced and eye-tracking) that manipulated verb transitivity, we found evidence for reading disruption when the verb was intransitive, although no such reading difficulty was observed when the critical verb was embedded inside a syntactic island structure, which blocks filler-gap dependency completion. These results are consistent with the hypothesis that in English, as in verb-final languages, information from preverbal noun phrases is sufficient to trigger active dependency completion without having access to verb transitivity information.
Migliaccio, Americo A; Della Santina, Charles C; Carey, John P; Minor, Lloyd B; Zee, David S
2006-08-01
We examined how the gain of the torsional vestibulo-ocular reflex (VOR) (defined as the instantaneous eye velocity divided by inverted head velocity) in normal humans is affected by eye position, target distance, and the plane of head rotation. In six normal subjects we measured three-dimensional (3D) eye and head rotation axes using scleral search coils, and 6D head position using a magnetic angular and linear position measurement device, during low-amplitude (approximately 20 degrees), high-velocity (approximately 200 degrees/s), high-acceleration (approximately 4000 degrees/s2) rapid head rotations or 'impulses.' Head impulses were imposed manually and delivered in five planes: yaw (horizontal canal plane), pitch, roll, left anterior-right posterior canal plane (LARP), and right anterior-left posterior canal plane (RALP). Subjects were instructed to fix on one of six targets at eye level. Targets were either straight-ahead, 20 degrees left or 20 degrees right from midline, at distance 15 or 124 cm from the subject. Two subjects also looked at more eccentric targets, 30 degrees left or 30 degrees right from midline. We found that the vertical and horizontal VOR gains increased with the proximity of the target to the subject. Previous studies suggest that the torsional VOR gain should decrease with target proximity. We found, however, that the torsional VOR gain did not change for all planes of head rotation and for both target distances. We also found a dynamic misalignment of the vertical positions of the eyes during the torsional VOR, which was greatest during near viewing with symmetric convergence. This dynamic vertical skew during the torsional VOR arises, in part, because when the eyes are converged, the optical axes are not parallel to the naso-occipital axes around which the eyes are rotating. In five of six subjects, the average skew ranged 0.9-2.9 degrees and was reduced to <0.4 degrees by a 'torsional' quick-phase (around the naso-occipital axis) occurring <110 ms after the onset of the impulse. We propose that the torsional quick-phase mechanism during the torsional VOR could serve at least three functions: (1) resetting the retinal meridians closer to their usual orientation in the head, (2) correcting for the 'skew' deviation created by misalignment between the axes around which the eyes are rotating and the line of sight, and (3) taking the eyes back toward Listing's plane.
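As a rough numerical illustration of the gain definition used above (instantaneous eye velocity divided by inverted head velocity), the sketch below computes an instantaneous gain from synthetic velocity traces; the sampling rate, impulse profile, and velocity threshold are illustrative assumptions, not the study's recordings.

```python
# Sketch: instantaneous VOR gain from synthetic eye/head velocity traces.
# Illustrative only; the 'impulse' profile and thresholds are assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                            # sampling rate (Hz), assumed
t = np.arange(0, 0.3, 1 / fs)          # 300 ms window around one head impulse
head_vel = 200.0 * np.exp(-((t - 0.15) / 0.04) ** 2)          # ~200 deg/s peak impulse
eye_vel = -0.95 * head_vel + rng.normal(0, 2, t.size)         # compensatory eye movement

# Gain = instantaneous eye velocity divided by inverted head velocity.
mask = np.abs(head_vel) > 20.0         # avoid dividing by near-zero head velocity
gain = eye_vel[mask] / (-head_vel[mask])
print(f"median instantaneous VOR gain: {np.median(gain):.2f}")
```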
NASA Technical Reports Server (NTRS)
Angelaki, Dora E.
2003-01-01
Previous studies have reported that the translational vestibuloocular reflex (TVOR) follows a three-dimensional (3D) kinematic behavior that is more similar to that of visually guided eye movements, like pursuit, than to the rotational VOR (RVOR). Accordingly, TVOR rotation axes tilted with eye position toward an eye-fixed reference frame rather than staying relatively fixed in the head as in the RVOR. This difference arises because, contrary to the RVOR where peripheral image stability is functionally important, the TVOR, like pursuit and saccades, acts to stabilize images on the fovea. During most natural head and body movements, both VORs are simultaneously activated. In the present study, we have investigated in rhesus monkeys the 3D kinematics of the combined VOR during yaw rotation about eccentric axes. The experiments were motivated by and quantitatively compared with the predictions of two distinct hypotheses. According to the first (fixed-rule) hypothesis, an eye-position-dependent torsion is computed downstream of a site for RVOR/TVOR convergence, and the combined VOR axis would tilt through an angle that is proportional to gaze angle and independent of the relative RVOR/TVOR contributions to the total eye movement. This hypothesis would be consistent with the recently postulated mechanical constraints imposed by extraocular muscle pulleys. According to the second (image-stabilization) hypothesis, an eye-position-dependent torsion is computed separately for the RVOR and the TVOR components, implying a processing that takes place upstream of a site for RVOR/TVOR convergence. The latter hypothesis is based on the functional requirement that the 3D kinematics of the combined VOR should be governed by the need to keep images stable on the fovea, with slip on the peripheral retina being dependent on the different functional goals of the two VORs. In contrast to the fixed-rule hypothesis, the data demonstrated a variable eye-position-dependent torsion for the combined VOR that was different for synergistic versus antagonistic RVOR/TVOR interactions. Furthermore, not only were the eye-velocity tilt slopes of the combined VOR as much as 10 times larger than what would be expected based on extraocular muscle pulley location, but also eye velocity during antagonistic RVOR/TVOR combinations often tilted opposite to gaze. These results are qualitatively and quantitatively consistent with the image-stabilization hypothesis, suggesting that the eye-position-dependent torsion is computed separately for the RVOR and the TVOR and that the 3D kinematics of the combined VOR are dependent on functional rather than mechanical constraints.
How Do Students Misunderstand Number Representations?
ERIC Educational Resources Information Center
Herman, Geoffrey L.; Zilles, Craig; Loui, Michael C.
2011-01-01
We used both student interviews and diagnostic testing to reveal students' misconceptions about number representations in computing systems. This article reveals that students who have passed an undergraduate level computer organization course still possess surprising misconceptions about positional notations, two's complement representation, and…
Dynamic Circuitry for Updating Spatial Representations: III. From Neurons to Behavior
Berman, Rebecca A.; Heiser, Laura M.; Dunn, Catherine A.; Saunders, Richard C.; Colby, Carol L.
2008-01-01
Each time the eyes move, the visual system must adjust internal representations to account for the accompanying shift in the retinal image. In the lateral intraparietal cortex (LIP), neurons update the spatial representations of salient stimuli when the eyes move. In previous experiments, we found that split-brain monkeys were impaired on double-step saccade sequences that required updating across visual hemifields, as compared to within hemifield (Berman et al. 2005; Heiser et al. 2005). Here we describe a subsequent experiment to characterize the relationship between behavioral performance and neural activity in LIP in the split-brain monkey. We recorded from single LIP neurons while split-brain and intact monkeys performed two conditions of the double-step saccade task: one required across-hemifield updating and the other within-hemifield updating. We found that, despite extensive experience with the task, the split-brain monkeys were significantly more accurate for within-hemifield as compared to across-hemifield sequences. In parallel, we found that population activity in LIP of the split-brain monkeys was significantly stronger for within-hemifield as compared to across-hemifield conditions of the double-step task. In contrast, in the normal monkey, both the average behavioral performance and population activity showed no bias toward the within-hemifield condition. Finally, we found that the difference between within-hemifield and across-hemifield performance in the split-brain monkeys was reflected at the level of single neuron activity in LIP. These findings indicate that remapping activity in area LIP is present in the split-brain monkey for the double-step task and co-varies with spatial behavior on within-hemifield compared to across-hemifield sequences. PMID:17493922
Tokarz-Sawińska, Ewa
2012-01-01
In Part I, the problems associated with refraction, accommodation and convergence, and their role in proper eye position and visual alignment, as well as in convergent, divergent and vertical alignment of the eyes, were described.
Langeslag-Smith, Miriam A; Vandal, Alain C; Briane, Vincent; Thompson, Benjamin; Anstice, Nicola S
2015-11-27
To assess the accuracy of preschool vision screening in a large, ethnically diverse, urban population in South Auckland, New Zealand. Retrospective longitudinal study. B4 School Check vision screening records (n=5572) were compared with hospital eye department data for children referred from screening due to impaired acuity in one or both eyes who attended a referral appointment (n=556). False positive screens were identified by comparing screening data from the eyes that failed screening with hospital data. Estimation of false negative screening rates relied on data from eyes that passed screening. Data were analysed using logistic regression modelling accounting for the high correlation between results for the two eyes of each child. Positive predictive value of the preschool vision screening programme. Screening produced high numbers of false positive referrals, resulting in poor positive predictive value (PPV=31%, 95% CI 26% to 38%). High estimated negative predictive value (NPV=92%, 95% CI 88% to 95%) suggested most children with a vision disorder were identified at screening. Relaxing the referral criteria for acuity from worse than 6/9 to worse than 6/12 improved PPV without adversely affecting NPV. The B4 School Check generated numerous false positive referrals and consequently had a low PPV. There is scope for reducing costs by altering the visual acuity criterion for referral. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Holloway, Edith E; Sturrock, Bonnie A; Lamoureux, Ecosse L; Keeffe, Jill E; Rees, Gwyneth
2015-12-01
To investigate characteristics associated with screening positive for depressive symptoms among older adults accessing low-vision rehabilitation and eye-care services and to determine client acceptability of depression screening using the Patient Health Questionnaire-2 (PHQ-2) in these settings. One-hundred and twenty-four older adults (mean = 77.02 years, SD = 9.12) attending low-vision rehabilitation and eye-care services across Australia were screened for depression and invited to complete a telephone-administered questionnaire to determine characteristics associated with depressive symptoms and client acceptability of screening in these settings. Thirty-seven per cent (n = 46/124) of participants screened positive for depressive symptoms, and the majority considered the new depression screening method to be a 'good idea' in vision services (85%). Severe vision loss (<6/60 in the better eye) was associated with an increased odds of screening positive for depressive symptoms (odds ratio 2.37; 95% confidence interval 1.08-6.70) even after adjusting for potential confounders. Participants who screened positive had a preference for 'talking' therapy or a combination of medication and 'talking therapy' delivered within their own home (73%) or via telephone (67%). The PHQ-2 appears to be an acceptable method for depression screening in eye-care settings among older adults. Targeted interventions that incorporate home-based or telephone delivered therapy sessions may improve outcomes for depression in this group. © 2014 ACOTA.
A Balanced Comparison of Object Invariances in Monkey IT Neurons
2017-01-01
Abstract Our ability to recognize objects across variations in size, position, or rotation is based on invariant object representations in higher visual cortex. However, we know little about how these invariances are related. Are some invariances harder than others? Do some invariances arise faster than others? These comparisons can be made only upon equating image changes across transformations. Here, we targeted invariant neural representations in the monkey inferotemporal (IT) cortex using object images with balanced changes in size, position, and rotation. Across the recorded population, IT neurons generalized across size and position both stronger and faster than to rotations in the image plane as well as in depth. We obtained a similar ordering of invariances in deep neural networks but not in low-level visual representations. Thus, invariant neural representations dynamically evolve in a temporal order reflective of their underlying computational complexity. PMID:28413827
Single neural code for blur in subjects with different interocular optical blur orientation
Radhakrishnan, Aiswaryah; Sawides, Lucie; Dorronsoro, Carlos; Peli, Eli; Marcos, Susana
2015-01-01
The ability of the visual system to compensate for differences in blur orientation between eyes is not well understood. We measured the orientation of the internal blur code in both eyes of the same subject monocularly by presenting pairs of images blurred with real ocular point spread functions (PSFs) of similar blur magnitude but varying in orientations. Subjects assigned a level of confidence to their selection of the best perceived image in each pair. Using a classification-images–inspired paradigm and applying a reverse correlation technique, a classification map was obtained from the weighted averages of the PSFs, representing the internal blur code. Positive and negative neural PSFs were obtained from the classification map, representing the neural blur for best and worse perceived blur, respectively. The neural PSF was found to be highly correlated in both eyes, even for eyes with different ocular PSF orientations (rPos = 0.95; rNeg = 0.99; p < 0.001). We found that in subjects with similar and with different ocular PSF orientations between eyes, the orientation of the positive neural PSF was closer to the orientation of the ocular PSF of the eye with the better optical quality (average difference was ∼10°), while the orientation of the positive and negative neural PSFs tended to be orthogonal. These results suggest a single internal code for blur with orientation driven by the orientation of the optical blur of the eye with better optical quality. PMID:26114678
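A minimal sketch of the reverse-correlation step described above, assuming a signed confidence weighting of each trial's PSF; the PSFs, confidence scale, and choice data here are random stand-ins, not the study's stimuli or responses.

```python
# Sketch of a reverse-correlation style classification map: each trial's PSF is
# weighted by a signed confidence score (+ for "best perceived", - for "worse")
# and averaged. PSFs here are random images; in the study they were real ocular PSFs.
import numpy as np

rng = np.random.default_rng(0)
n_trials, size = 500, 32
psfs = rng.random((n_trials, size, size))          # stand-in ocular PSFs
confidence = rng.integers(1, 5, n_trials)          # 1-4 confidence, assumed scale
chose_best = rng.random(n_trials) < 0.5            # which member of the pair was picked
signed_w = np.where(chose_best, confidence, -confidence).astype(float)

# Positive map ~ neural PSF for best-perceived blur, negative map for worst.
positive_map = (psfs[signed_w > 0] * signed_w[signed_w > 0, None, None]).mean(axis=0)
negative_map = (psfs[signed_w < 0] * (-signed_w[signed_w < 0, None, None])).mean(axis=0)
classification_map = positive_map - negative_map
print(classification_map.shape)
```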
Clinical presentation of familial exudative vitreoretinopathy.
Ranchod, Tushar M; Ho, Lawrence Y; Drenser, Kimberly A; Capone, Antonio; Trese, Michael T
2011-10-01
To describe the clinical characteristics, staging and presentation of patients with familial exudative vitreoretinopathy (FEVR) in our clinical practice over the last 25 years. Case series, retrospective review. We included 273 eyes of 145 patients. Data collected from charts included gender, gestational age at birth, birthweight, age at presentation, referring diagnosis, family history, prior ocular surgery, and clinical presentation in each eye. Eyes with invasive posterior segment procedures before initial presentation were excluded. Demographics on presentation and clinical staging. Patients were slightly male predominant (57%) with a mean birthweight of 2.80 kg (range, 740 g-4.76 kg), mean gestational age of 37.8 weeks (range, 25-42), and mean age at presentation of almost 6 years (range, <1 month-49 years). A positive family history of FEVR was obtained in 18% of patients. A positive family history for ocular disease consistent with but not diagnosed as FEVR was obtained in an additional 19%. Stage 1 FEVR was identified in 45 eyes, stage 2 in 33 eyes, stage 3 in 42 eyes, stage 4 in 89 eyes, and stage 5 in 44 eyes. Radial retinal folds were seen in 77 eyes, 64 of which were temporal or inferotemporal in location. The FEVR patient population is remarkable for the wide range of age at presentation, gestational age, and birthweight. Although a positive family history on presentation may support the diagnosis of FEVR, a negative family history is of little help. The majority of retinal folds extended radially in the temporal quadrants, but radial folds were seen in almost all quadrants. Fellow eyes demonstrated a wide variation in symmetry. The presentation of FEVR may mimic the presentation of other pediatric and adult vitreoretinal disorders, and careful examination is often crucial in making the diagnosis of FEVR. The authors have no proprietary or commercial interest in any of the materials discussed in this article. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Suzuki, Akira; Matsubara, Kosuke; Sasa, Yuko
2018-04-01
The present study aimed to determine doses delivered to the eye lenses of surgeons while using the inverted-C-arm technique and the protective effect of leaded spectacles during orthopedic surgery. The kerma in air was measured at five positions on leaded glasses positioned near the eye lens and on the neck using small optically stimulated luminescence (OSL) dosemeters. The lens equivalent dose was also measured at the neck using an OSL dosemeter. The maximum equivalent dose to the eye lens and the maximum kerma were 0.8 mSv/month and 0.66 mGy/month, respectively. The leaded glasses reduced the exposure by ~60%. Even if the surgeons are exposed to the maximum dose of X-ray radiation for 5 years, the equivalent doses to the eye lens will not exceed the present limit recommended by the ICRP.
Pettorossi, V E; Petrosini, L
1984-12-17
In intact guinea pigs a passive horizontal rotation of the body about the fixed head induces compensatory ocular movements (cervico-ocular reflex). When the static neck deviation is maintained, a significant ocular displacement is observed. In acutely hemilabyrinthectomized animals, static body deviation towards the lesion side tonically alters eye nystagmus. It affects slow phase eye velocity and quick phase amplitude and frequency causing the eye to reach a less eccentric orbital position. Apart from such immediate influences, a plastic effect on eye nystagmus abatement is induced. In the animals restrained with no body-on-head deviation, abatement of nystagmus is delayed with respect to the animals restrained with 35 degrees body deviation towards the lesion side. Thus the head position signal is not only a contributing factor for the correction of postural deficits but also influences the time course of the ocular balancing process following unilateral vestibular damage.
Payne, Hannah L
2017-01-01
Eye movements provide insights about a wide range of brain functions, from sensorimotor integration to cognition; hence, the measurement of eye movements is an important tool in neuroscience research. We describe a method, based on magnetic sensing, for measuring eye movements in head-fixed and freely moving mice. A small magnet was surgically implanted on the eye, and changes in the magnet angle as the eye rotated were detected by a magnetic field sensor. Systematic testing demonstrated high resolution measurements of eye position of <0.1°. Magnetic eye tracking offers several advantages over the well-established eye coil and video-oculography methods. Most notably, it provides the first method for reliable, high-resolution measurement of eye movements in freely moving mice, revealing increased eye movements and altered binocular coordination compared to head-fixed mice. Overall, magnetic eye tracking provides a lightweight, inexpensive, easily implemented, and high-resolution method suitable for a wide range of applications. PMID:28872455
Does conspicuity enhance distraction? Saliency and eye landing position when searching for objects.
Foulsham, Tom; Underwood, Geoffrey
2009-06-01
While visual saliency may sometimes capture attention, the guidance of eye movements in search is often dominated by knowledge of the target. How is the search for an object influenced by the saliency of an adjacent distractor? Participants searched for a target amongst an array of objects, with distractor saliency having an effect on response time and on the speed at which targets were found. Saliency did not predict the order in which objects in target-absent trials were fixated. The within-target landing position was distributed around a modal position close to the centre of the object. Saliency did not affect this position, the latency of the initial saccade, or the likelihood of the distractor being fixated, suggesting that saliency affects the allocation of covert attention and not just eye movements.
Gorlick, Marissa A; Maddox, W Todd
2013-01-01
Arousal Biased Competition theory suggests that arousal enhances competitive attentional processes, but makes no strong claims about valence effects. Research suggests that the scope of enhanced attention depends on valence with negative arousal narrowing and positive arousal broadening attention. Attentional scope likely affects declarative-memory-mediated and perceptual-representation-mediated learning systems differently, with declarative-memory-mediated learning depending on narrow attention to develop targeted verbalizable rules, and perceptual-representation-mediated learning depending on broad attention to develop a perceptual representation. We hypothesize that negative arousal accentuates declarative-memory-mediated learning and attenuates perceptual-representation-mediated learning, while positive arousal reverses this pattern. Prototype learning provides an ideal test bed as dissociable declarative-memory and perceptual-representation systems mediate two-prototype (AB) and one-prototype (AN) prototype learning, respectively, and computational models are available that provide powerful insights on cognitive processing. As predicted, we found that negative arousal narrows attentional focus facilitating AB learning and impairing AN learning, while positive arousal broadens attentional focus facilitating AN learning and impairing AB learning.
Gorlick, Marissa A.; Maddox, W. Todd
2013-01-01
Arousal Biased Competition theory suggests that arousal enhances competitive attentional processes, but makes no strong claims about valence effects. Research suggests that the scope of enhanced attention depends on valence with negative arousal narrowing and positive arousal broadening attention. Attentional scope likely affects declarative-memory-mediated and perceptual-representation-mediated learning systems differently, with declarative-memory-mediated learning depending on narrow attention to develop targeted verbalizable rules, and perceptual-representation-mediated learning depending on broad attention to develop a perceptual representation. We hypothesize that negative arousal accentuates declarative-memory-mediated learning and attenuates perceptual-representation-mediated learning, while positive arousal reverses this pattern. Prototype learning provides an ideal test bed as dissociable declarative-memory and perceptual-representation systems mediate two-prototype (AB) and one-prototype (AN) prototype learning, respectively, and computational models are available that provide powerful insights on cognitive processing. As predicted, we found that negative arousal narrows attentional focus facilitating AB learning and impairing AN learning, while positive arousal broadens attentional focus facilitating AN learning and impairing AB learning. PMID:23646101
Internal structure changes of eyelash induced by eye makeup.
Fukami, Ken-Ichi; Inoue, Takafumi; Kawai, Tomomitsu; Takeuchi, Akihisa; Uesugi, Kentaro; Suzuki, Yoshio
2014-01-01
To investigate how eye makeup affects eyelash structure, the internal structure of eyelashes was observed with a scanning X-ray microscopic tomography system using a synchrotron radiation light source. Eyelash samples were obtained from 36 Japanese women aged 20-70 years whose use of eye makeup differed. Reconstructed cross-sectional images showed that the structure of the eyelash closely resembled that of scalp hair. The eyelash structure is changed by the use of eye makeup. There was a positive correlation between the frequency of mascara use and the degree of cracking in the cuticle. A positive correlation was also found between the frequency of mascara use and the porosity of the cortex. By contrast, the use of an eyelash curler did not affect the eyelash structure with statistical significance.
Speeded Probed Recall Is Affected by Grouping.
Morra, Sergio; Epidendio, Valentina
2015-01-01
Most of the evidence from previous studies on speeded probed recall supported primacy-gradient models of serial order representation. Two experiments investigated the effect of grouping on speeded probed recall. Six-word lists, followed by a number between 1 and 6, were presented for speeded recall of the word in the position indicated by the number. Grouping was manipulated through interstimulus intervals. In both experiments, a significant Position × Grouping interaction was found in RT. It is concluded that the results are not consistent with models of order representation only based on a primacy gradient. Possible alternative representations of serial order are also discussed; a case is made for a holistic order representation.
Lin, Zhicheng; He, Sheng
2012-10-25
Object identities ("what") and their spatial locations ("where") are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) and space-based effects, and (b) manipulated the target's relative location within its frame to probe the frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously being updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects ("files") within the reference frame ("cabinet") are orderly coded relative to the frame.
Baghaie, Ahmadreza; Yu, Zeyun; D'Souza, Roshan M
2017-04-01
In this paper, we review state-of-the-art techniques to correct eye motion artifacts in Optical Coherence Tomography (OCT) imaging. The methods for eye motion artifact reduction can be categorized into two major classes: (1) hardware-based techniques and (2) software-based techniques. In the first class, additional hardware is mounted onto the OCT scanner to gather information about the eye motion patterns during OCT data acquisition. This information is later processed and applied to the OCT data for creating an anatomically correct representation of the retina, either in an offline or online manner. In software-based techniques, the motion patterns are approximated either by comparing the acquired data to a reference image, or by considering some prior assumptions about the nature of the eye motion. Careful investigation of the most common methods in the field provides invaluable insight regarding future directions of research in this area. The challenge in hardware-based techniques lies in the implementation aspects of particular devices. However, the results of these techniques are superior to those obtained from software-based techniques because they are capable of capturing secondary data related to eye motion during OCT acquisition. Software-based techniques, on the other hand, achieve moderate success and their performance is highly dependent on the quality of the OCT data in terms of the amount of motion artifacts contained in them. However, they are still relevant to the field since they are the sole class of techniques with the ability to be applied to legacy data acquired using systems that do not have extra hardware to track eye motion. Copyright © 2017 Elsevier B.V. All rights reserved.
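As a toy example of the software-based class mentioned above (comparing acquired data to a reference image), the sketch below estimates a rigid axial shift by maximizing correlation with a reference; it is illustrative only and not any specific published OCT registration pipeline.

```python
# Minimal sketch of the software-based idea: estimate a rigid axial shift of an
# image strip by correlating it with a motion-free reference.
# Purely illustrative; the search window and data are assumptions.
import numpy as np

def axial_shift(strip, reference):
    """Return the integer shift (in pixels) that best aligns strip to reference."""
    best_shift, best_score = 0, -np.inf
    for s in range(-20, 21):                      # search window, assumed +/-20 px
        score = np.sum(np.roll(strip, s, axis=0) * reference)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

rng = np.random.default_rng(1)
reference = rng.random((64, 16))
moved = np.roll(reference, 5, axis=0)             # simulate a 5-pixel eye-motion shift
print(axial_shift(moved, reference))              # recovers -5: shift back by 5 px to align
```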
Dias Neto, David; Figueiras, Maria João; Campos, Sónia; Tavares, Patrícia
2017-12-01
Mass media plays a fundamental role in how communities understand mental health and its treatment. However, the effect of major events such as economic crises on the depiction of mental health is still unclear. This study aimed at analyzing representations of mental health and its treatment and the impact of the 2008 economic crisis. In total, 1,000 articles were randomly selected from two newspapers from a period before and after the economic crisis. These articles were analyzed with a closed coding system that classified the news as good or bad news according to the presence of themes associated with positive or stigmatizing representations. The results show a positive representation of mental health and a negative representation of treatment. Furthermore, the economic crisis had a negative impact on the representation of mental health, but not on treatment. These findings suggest that the representation of mental health is multifaceted and may be affected differently in its dimensions. There is a need for stigma-reducing interventions that both account for this complexity and are sensitive to context and period.
Social place-cells in the bat hippocampus.
Omer, David B; Maimon, Shir R; Las, Liora; Ulanovsky, Nachum
2018-01-12
Social animals have to know the spatial positions of conspecifics. However, it is unknown how the position of others is represented in the brain. We designed a spatial observational-learning task, in which an observer bat mimicked a demonstrator bat while we recorded hippocampal dorsal-CA1 neurons from the observer bat. A neuronal subpopulation represented the position of the other bat, in allocentric coordinates. About half of these "social place-cells" represented also the observer's own position-that is, were place cells. The representation of the demonstrator bat did not reflect self-movement or trajectory planning by the observer. Some neurons represented also the position of inanimate moving objects; however, their representation differed from the representation of the demonstrator bat. This suggests a role for hippocampal CA1 neurons in social-spatial cognition. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Hiramoto, Keiichi; Kobayashi, Hiromi; Yamate, Yurika; Ishii, Masamitsu; Sato, Takao; Inoue, Masayasu
2013-02-01
Irradiation by ultraviolet (UV) B is known to increase the number of Dopa-positive melanocytes in the skin. This study examines the effectiveness of a contact lens as a defense against UVB eye-irradiation-induced pigmentation. A 2.5 kJ/m(2) dose of UVB radiation was delivered by a sunlamp to the eye of C57BL/6j male mice, and changes in the expression of Dopa-positive melanocytes in the epidermis and in the plasma level of alpha-melanocyte-stimulating hormone (α-MSH) were analyzed. The degree of change in Dopa-positive melanocyte expression was reduced by a UVB-blocking contact lens in mice given UVB irradiation to the eye. The plasma level of α-MSH increased in the C57BL/6j mice after irradiation to the eye, but there was no increase in mice wearing a UVB-blocking contact lens during UVB irradiation to the eye. Both the increase in the expression of Dopa-positive melanocytes and the plasma level of α-MSH were strongly suppressed by a UVB-blocking contact lens, whether fitted in alignment with the eye or only slightly suspended above it. In addition, these changes were successfully inhibited by a UVB-blocking contact lens but not by a non-UVB-blocking contact lens with a similar absorbance. These observations suggest that the UVB-blocking contact lens inhibits the pigmentation of the epidermis in mice by suppressing α-MSH. Copyright © 2012 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
Jaspal, Rusi; Nerlich, Brigitte
2017-09-01
Pre-exposure prophylaxis is a novel biomedical HIV prevention option for individuals at high risk of HIV acquisition. Although pre-exposure prophylaxis has yielded encouraging results in various clinical trials, opponents argue that pre-exposure prophylaxis poses a number of risks to human health and to sexually transmitted infection prevention efforts. Using qualitative thematic analysis and social representation theory, this article explores coverage of pre-exposure prophylaxis in the UK print media between 2008 and 2015 in order to chart the emerging social representations of this novel HIV prevention strategy. The analysis revealed two competing social representations of pre-exposure prophylaxis: (1) as a positive development in the 'battle' against HIV (the hope representation) and (2) as a medical, social and psychological setback in this battle, particularly for gay/bisexual men (the risk representation). These social representations map onto the themes of pre-exposure prophylaxis as a superlatively positive development; pre-exposure prophylaxis as a weapon in the battle against HIV/AIDS; and risk, uncertainty and fear in relation to pre-exposure prophylaxis. The hope representation focuses on taking (individual and collective) responsibility, while the risk representation focuses on attributing (individual and collective) blame. The implications for policy and practice are discussed.
Population Coding of Visual Space: Modeling
Lehky, Sidney R.; Sereno, Anne B.
2011-01-01
We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation. PMID:21344012
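A minimal sketch of the modeling approach described above, assuming overlapping Gaussian receptive fields and recovery of the spatial layout from unlabeled firing rates with multidimensional scaling; parameter values (RF size, dispersion, counts) are illustrative, not those of the paper.

```python
# Gaussian-RF population responses to 2D stimulus positions, followed by MDS on the
# unlabeled firing-rate vectors to recover the relative spatial layout (intrinsic coding).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
stimuli = rng.uniform(-10, 10, size=(40, 2))    # stimulus positions (deg)
centers = rng.uniform(-15, 15, size=(200, 2))   # RF centers; their spread is the "dispersion"
sigma = 6.0                                     # RF size parameter (deg)

# Population response: one Gaussian tuning curve per model neuron.
dists = np.linalg.norm(stimuli[:, None, :] - centers[None, :, :], axis=2)
rates = np.exp(-dists**2 / (2 * sigma**2))

# Relative stimulus positions recovered from firing rates alone (no RF labels used).
embedding = MDS(n_components=2, random_state=0).fit_transform(rates)
print(embedding.shape)                          # (40, 2)
```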
The representation of order information in auditory-verbal short-term memory.
Kalm, Kristjan; Norris, Dennis
2014-05-14
Here we investigate how order information is represented in auditory-verbal short-term memory (STM). We used fMRI and a serial recall task to dissociate neural activity patterns representing the phonological properties of the items stored in STM from the patterns representing their order. For this purpose, we analyzed fMRI activity patterns elicited by different item sets and different orderings of those items. These fMRI activity patterns were compared with the predictions made by positional and chaining models of serial order. The positional models encode associations between items and their positions in a sequence, whereas the chaining models encode associations between successive items and retain no position information. We show that a set of brain areas in the postero-dorsal stream of auditory processing store associations between items and order as predicted by a positional model. The chaining model of order representation generates a different pattern similarity prediction, which was shown to be inconsistent with the fMRI data. Our results thus favor a neural model of order representation that stores item codes, position codes, and the mapping between them. This study provides the first fMRI evidence for a specific model of order representation in the human brain. Copyright © 2014 the authors 0270-6474/14/346879-08$15.00/0.
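To make the contrast concrete, here is a toy sketch of how a positional code (item-position associations) and a chaining code (item-successor associations) yield different similarity predictions for two orderings of the same items; the vector scheme is an illustration, not the encoding model tested in the paper.

```python
# Toy contrast between a positional code and a chaining code for serial order.
import numpy as np

ITEMS = {ch: np.eye(4)[i] for i, ch in enumerate("ABCD")}   # one-hot item codes

def positional_code(seq):
    # Sum of item-by-position outer products: items bound to position codes.
    return sum(np.outer(ITEMS[ch], np.eye(len(seq))[pos])
               for pos, ch in enumerate(seq)).ravel()

def chaining_code(seq):
    # Sum of item-by-next-item outer products: no position information kept.
    return sum(np.outer(ITEMS[a], ITEMS[b]) for a, b in zip(seq, seq[1:])).ravel()

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

seq_a, seq_b = "ABCD", "ACBD"        # same items, different order
print("positional similarity:", round(cosine(positional_code(seq_a), positional_code(seq_b)), 2))
print("chaining similarity:  ", round(cosine(chaining_code(seq_a), chaining_code(seq_b)), 2))
```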
NASA Technical Reports Server (NTRS)
Dickman, J. D.; Angelaki, D. E.
1999-01-01
During linear accelerations, compensatory reflexes should continually occur in order to maintain objects of visual interest as stable images on the retina. In the present study, the three-dimensional organization of the vestibulo-ocular reflex in pigeons was quantitatively examined during linear accelerations produced by constant velocity off-vertical axis yaw rotations and translational motion in darkness. With off-vertical axis rotations, sinusoidally modulated eye-position and velocity responses were observed in all three components, with the vertical and torsional eye movements predominating the response. Peak torsional and vertical eye positions occurred when the head was oriented with the lateral visual axis of the right eye directed orthogonal to or aligned with the gravity vector, respectively. No steady-state horizontal nystagmus was obtained with any of the rotational velocities (8-58 degrees/s) tested. During translational motion, delivered along or perpendicular to the lateral visual axis, vertical and torsional eye movements were elicited. No significant horizontal eye movements were observed during lateral translation at frequencies up to 3 Hz. These responses suggest that, in pigeons, all linear accelerations generate eye movements that are compensatory to the direction of actual or perceived tilt of the head relative to gravity. In contrast, no translational horizontal eye movements, which are known to be compensatory to lateral translational motion in primates, were observed under the present experimental conditions.
A generalised porous medium approach to study thermo-fluid dynamics in human eyes.
Mauro, Alessandro; Massarotti, Nicola; Salahudeen, Mohamed; Romano, Mario R; Romano, Vito; Nithiarasu, Perumal
2018-03-22
The present work describes the application of the generalised porous medium model to study heat and fluid flow in healthy and glaucomatous eyes of different subject specimens, considering the presence of ocular cavities and porous tissues. The 2D computational model, implemented into the open-source software OpenFOAM, has been verified against benchmark data for mixed convection in domains partially filled with a porous medium. The verified model has been employed to simulate the thermo-fluid dynamic phenomena occurring in the anterior section of four patient-specific human eyes, considering the presence of anterior chamber (AC), trabecular meshwork (TM), Schlemm's canal (SC), and collector channels (CC). The computational domains of the eye are extracted from tomographic images. The dependence of TM porosity and permeability on intraocular pressure (IOP) has been analysed in detail, and the differences between healthy and glaucomatous eye conditions have been highlighted, proving that the different physiological conditions of patients have a significant influence on the thermo-fluid dynamic phenomena. The influence of different eye positions (supine and standing) on thermo-fluid dynamic variables has also been investigated: results are presented in terms of velocity, pressure, temperature, friction coefficient and local Nusselt number. The results clearly indicate that porosity and permeability of TM are two important parameters that affect eye pressure distribution. Graphical abstract: Velocity contours and vectors for healthy eyes (top) and glaucomatous eyes (bottom) in the standing position.
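The central role of TM permeability can be illustrated with a back-of-the-envelope Darcy's-law estimate of aqueous flow through a porous tissue; every number below is a placeholder chosen only to show the scaling, not a value from the paper's patient-specific simulations.

```python
# Toy Darcy's-law estimate of flow through a porous trabecular meshwork:
# Q = k * A * dP / (mu * L). All parameter values are placeholder assumptions.
k = 2.0e-15        # TM permeability (m^2), assumed
A = 1.0e-6         # effective TM cross-section (m^2), assumed
L = 1.0e-4         # TM thickness (m), assumed
mu = 7.0e-4        # aqueous humour dynamic viscosity (Pa*s), approximate
dP = 2.0 * 133.3   # 2 mmHg pressure drop across the TM, in Pa

Q = k * A * dP / (mu * L)                          # volumetric flow rate (m^3/s)
print(f"flow ~ {Q * 6e10:.2f} microlitres/min")    # convert m^3/s -> uL/min
```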
Transposed-Letter Effects in Reading: Evidence from Eye Movements and Parafoveal Preview
ERIC Educational Resources Information Center
Johnson, Rebecca L.; Perea, Manuel; Rayner, Keith
2007-01-01
Three eye movement experiments were conducted to examine the role of letter identity and letter position during reading. Before fixating on a target word within each sentence, readers were provided with a parafoveal preview that differed in the amount of useful letter identity and letter position information it provided. In Experiments 1 and 2,…
An Active System for Visually-Guided Reaching in 3D across Binocular Fixations
2014-01-01
Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. Thus, the motor information is coded in egocentric coordinates obtained from the allocentric representation of space (in terms of disparity), which is in turn generated from the egocentric representation of the visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295
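A one-dimensional sketch of the phase-difference idea behind the disparity module described above, assuming a single complex Gabor filter rather than the full 2D filter bank; the signal, filter parameters, and shift are made up for illustration.

```python
# 1D phase-based disparity estimation: filter left and right signals with a complex
# Gabor and convert the local phase difference into a disparity (disparity ~ dphase/omega).
import numpy as np

def gabor_phase(signal, omega, sigma=8.0):
    x = np.arange(-4 * sigma, 4 * sigma + 1)
    gabor = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * omega * x)
    return np.angle(np.convolve(signal, gabor, mode="same"))

rng = np.random.default_rng(0)
left = rng.standard_normal(256)
true_disparity = 3
right = np.roll(left, true_disparity)              # right signal shifted by the disparity

omega = 2 * np.pi / 16                             # filter peak frequency (rad/pixel)
dphase = np.angle(np.exp(1j * (gabor_phase(left, omega) - gabor_phase(right, omega))))
estimate = np.median(dphase[64:192]) / omega       # central samples, away from borders
print(f"estimated disparity ~ {estimate:.1f} px (true {true_disparity})")
```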
Hirata, Akimasa; Watanabe, Soichi; Taki, Masao; Fujiwara, Osamu; Kojima, Masami; Sasaki, Kazuyuki
2008-02-01
This study calculated the temperature elevation in the rabbit eye caused by 2.45-GHz near-field exposure systems. First, we calculated specific absorption rate (SAR) distributions in the eye for different antennas and then compared them with those observed in previous studies. Next, we re-examined the temperature elevation in the rabbit eye due to a horizontally polarized dipole antenna with a C-shaped director, which was used in a previous study. From our computational results, we found that the decisive factors for the SAR distribution in the rabbit eye were the polarization of the electromagnetic wave and the antenna aperture. Next, we quantified the eye-averaged specific absorption rate as 67 W kg(-1) for the dipole antenna with an input power density at the eye surface of 150 mW cm(-2), which was specified in the previous work as the minimum cataractogenic power density. The effect of administering anesthesia on the temperature elevation was approximately 30% in this case. Additionally, the position where the maximum temperature appears in the lens is discussed for different 2.45-GHz microwave systems. That position was found to lie around the posterior of the lens regardless of the exposure condition, which indicates that the original temperature distribution in the eye was the dominant factor.
Image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo
NASA Astrophysics Data System (ADS)
Kohigashi, Satoru; Nakamae, Koji; Fujioka, Hiromu
2005-04-01
We develop an image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo (BPPV) that consists of a balance control system simulator, a 3D eye movement simulator, and a method for extracting the nystagmus response directly from an eye movement image sequence. In the system, the causes and conditions of BPPV are estimated by searching a database for the record matching the nystagmus response extracted from the observed eye image sequence of the patient with BPPV. The database includes the nystagmus responses for simulated eye movement sequences. The eye movement velocity is obtained using the balance control system simulator, which allows us to simulate BPPV under various conditions such as canalithiasis, cupulolithiasis, number of otoconia, otoconium size, and so on. Then the eye movement image sequence is displayed on a CRT by the 3D eye movement simulator. The nystagmus responses are extracted from the image sequence by the proposed method and are stored in the database. To enhance diagnostic accuracy, the nystagmus response for a newly simulated sequence is matched with that for the observed sequence. From the matched simulation conditions, the causes and conditions of BPPV are estimated. We apply our image-based computer-assisted diagnosis system to two real eye movement image sequences from patients with BPPV to show its validity.
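A minimal sketch of the record-matching step described above: compare an observed nystagmus response against a small database of simulated responses and report the best-matching simulation conditions. The traces and condition labels are invented for illustration.

```python
# Nearest-record matching of an observed nystagmus trace against simulated traces.
# Traces and condition labels are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 200)
database = {
    ("canalithiasis", "small otoconia"): np.exp(-t / 6) * np.sin(2 * np.pi * 0.4 * t),
    ("canalithiasis", "large otoconia"): np.exp(-t / 3) * np.sin(2 * np.pi * 0.4 * t),
    ("cupulolithiasis", "-"):            (1 - np.exp(-t / 4)) * 0.5,
}
observed = np.exp(-t / 3) * np.sin(2 * np.pi * 0.4 * t) + rng.normal(0, 0.05, t.size)

best = min(database, key=lambda cond: np.sum((database[cond] - observed) ** 2))
print("best-matching simulated condition:", best)
```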
Optimal Eye-Gaze Fixation Position for Face-Related Neural Responses
Zerouali, Younes; Lina, Jean-Marc; Jemel, Boutheina
2013-01-01
It is generally agreed that some features of a face, namely the eyes, are more salient than others as indexed by behavioral diagnosticity, gaze-fixation patterns and evoked-neural responses. However, because previous studies used unnatural stimuli, there is no evidence so far that the early encoding of a whole face in the human brain is based on the eyes or other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze-fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170 indexing the earliest face-sensitive response in the human brain was the largest when the fixation position is located around the nasion. Interestingly, for inverted faces, this optimal fixation position was more variable, but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (upper-visual field advantage) coupled with the alignment of a face stimulus to a stored face template. PMID:23762224
Optimal eye-gaze fixation position for face-related neural responses.
Zerouali, Younes; Lina, Jean-Marc; Jemel, Boutheina
2013-01-01
It is generally agreed that some features of a face, namely the eyes, are more salient than others as indexed by behavioral diagnosticity, gaze-fixation patterns and evoked-neural responses. However, because previous studies used unnatural stimuli, there is no evidence so far that the early encoding of a whole face in the human brain is based on the eyes or other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze-fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170 indexing the earliest face-sensitive response in the human brain was the largest when the fixation position is located around the nasion. Interestingly, for inverted faces, this optimal fixation position was more variable, but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (upper-visual field advantage) coupled with the alignment of a face stimulus to a stored face template.
Eye-hand coupling during closed-loop drawing: evidence of shared motor planning?
Reina, G Anthony; Schwartz, Andrew B
2003-04-01
Previous paradigms have used reaching movements to study coupling of eye-hand kinematics. In the present study, we investigated eye-hand kinematics as curved trajectories were drawn at normal speeds. Eye and hand movements were tracked as a monkey traced ellipses and circles with the hand in free space while viewing the hand's position on a computer monitor. The results demonstrate that the movement of the hand was smooth and obeyed the 2/3 power law. Eye position, however, was restricted to 2-3 clusters along the hand's trajectory and fixed approximately 80% of the time in one of these clusters. The eye remained stationary as the hand moved away from the fixation for up to 200 ms and saccaded ahead of the hand position to the next fixation along the trajectory. The movement from one fixation cluster to another consistently occurred just after the tangential hand velocity had reached a local minimum, but before the next segment of the hand's trajectory began. The next fixation point was close to an area of high curvature along the hand's trajectory even though the hand had not reached that point along the path. A visuo-motor illusion of hand movement demonstrated that the eye movement was influenced by hand movement and not simply by visual input. During the task, neural activity of pre-motor cortex (area F4) was recorded using extracellular electrodes and used to construct a population vector of the hand's trajectory. The results suggest that the saccade onset is correlated in time with maximum curvature in the population vector trajectory for the hand movement. We hypothesize that eye and arm movements may have common, or shared, information in forming their motor plans.
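The two-thirds power law mentioned above relates tangential velocity to path curvature (velocity proportional to curvature^(-1/3)); the sketch below checks this numerically for a harmonically traced ellipse, which satisfies the law exactly, so the fitted exponent should come out near -1/3. Purely illustrative, not the study's data.

```python
# Numerical check of the two-thirds power law on a harmonically traced ellipse.
import numpy as np

t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
a, b = 3.0, 1.5                                   # ellipse radii (arbitrary units)
x, y = a * np.cos(t), b * np.sin(t)

dx, dy = np.gradient(x, t), np.gradient(y, t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
v = np.hypot(dx, dy)                              # tangential velocity
curvature = np.abs(dx * ddy - dy * ddx) / v**3

slope = np.polyfit(np.log(curvature), np.log(v), 1)[0]
print(f"fitted exponent: {slope:.3f} (two-thirds law predicts -1/3 ~ -0.333)")
```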
Imaging Modalities Relevant to Intracranial Pressure Assessment in Astronauts
NASA Technical Reports Server (NTRS)
Sargsyan, Ashot E.; Kramer, Larry A.; Hamilton, Douglas R.; Fogarty, Jennifer; Polk, J. D.
2011-01-01
Learning Objectives of this slide presentation are: 1: To review the morphological changes in orbit structures caused by elevated Intracranial Pressure (ICP), and their imaging representation. 2: To learn about the similarities and differences between MRI and sonographic imaging of the eye and orbit. 3: To learn about the role of MRI and sonography in the noninvasive assessment of intracranial pressure in aerospace medicine, and the added benefits from their combined interpretation.
NASA Technical Reports Server (NTRS)
Fymat, A. L.
1971-01-01
Our method of matrix synthesis of optical components and instruments is applied to the derivation of Jones's matrices appropriate for Fourier interferometers (spectrometers and spectropolarimeters). These matrices are obtained for both the source beam and the detector beam. In the course of synthesis, Jones's matrices of the various reflectors (plane mirrors; retroreflectors: roofed mirror, trihedral and prism cube corner, cat's eye) used by these interferometers are also obtained.
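For readers unfamiliar with the formalism, the sketch below composes generic textbook Jones matrices for a simple train (a linear polarizer followed by an ideal plane mirror) under an assumed sign convention; these are not the interferometer-specific matrices derived in the paper.

```python
# Generic Jones-matrix composition: linear polarizer followed by an ideal mirror.
import numpy as np

def linear_polarizer(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s],
                     [c * s, s * s]])

ideal_mirror = np.array([[1, 0],
                         [0, -1]])                   # assumed sign convention on reflection

jones_in = np.array([1.0, 0.0])                      # horizontally polarized input field
system = ideal_mirror @ linear_polarizer(np.pi / 4)  # matrices compose right-to-left
jones_out = system @ jones_in
print(jones_out, "intensity:", float(np.abs(jones_out) @ np.abs(jones_out)))
```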
ERIC Educational Resources Information Center
Plaut, David C.; McClelland, James L.
2010-01-01
According to Bowers, the finding that there are neurons with highly selective responses to familiar stimuli supports theories positing localist representations over approaches positing the type of distributed representations typically found in parallel distributed processing (PDP) models. However, his conclusions derive from an overly narrow view…
Prognostic value of gonioscopy after deep sclerectomy.
Moreno-Montañés, J; Rebolleda, G; Muñoz-Negrete, F J
2007-01-01
To ascertain gonioscopic characteristics and identify prognostic indicators related to intraocular pressure (IOP) after deep sclerectomy (DS). A transversal, prospective, and nonselected study was performed in 106 eyes (95 patients) after DS. Three surgeons performed all the surgeries and the gonioscopic examination, using the same protocol including 13 gonioscopic data. These data were evaluated for an association with postoperative IOP and time after surgery. A subscleral space was found in 91 eyes (85.8%), with visualization of the scleral flap line in 48 eyes (45.3%). The trabeculo-Descemet membrane (TDM) was transparent in 46 eyes (43.4%), opaque in 4 cases, and pigmented in 18 eyes. This TDM was broken using Nd:YAG laser goniopuncture in 38 eyes (35.8%). Thin vessels around the TDM were found in 58 eyes (54.7%), and blood remained in 25 eyes (23.5%). Gonioscopic variables significantly positively related with postoperative IOP were as follows: presence of a subscleral space, visibility of the scleral flap line, and a depressed Schwalbe line. A narrow anterior chamber angle and iris synechia in the TDM had a statistically significant negative effect on postoperative IOP control. Similarly, eyes requiring Nd:YAG goniopuncture had worse IOP control. The frequency of eyes with a visible subscleral space and transparent TDM decreases with time after surgery (p=0.001). A visible subscleral space was a gonioscopic sign positively related to IOP control after surgery, although it decreased with follow-up. Eyes with goniopuncture, postoperative narrow angle, and iris synechia had worse postoperative IOP control. Although new vessels in the TDM were a common finding after DS, the authors did not find any association with postoperative IOP.
Binocular coordination in response to stereoscopic stimuli
NASA Astrophysics Data System (ADS)
Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.
2009-02-01
Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment, participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.
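The depth-appropriate vergence finding can be related to simple viewing geometry: the vergence angle needed to binocularly fixate a target at distance d is approximately 2*atan(IPD/(2*d)). The interpupillary distance and distances below are assumed illustration values, not the study's stimuli.

```python
# Vergence angle required to fixate a target at distance d, for an assumed IPD.
import numpy as np

ipd_cm = 6.3                                       # interpupillary distance, assumed
for d_cm in (25, 50, 100, 500):
    vergence_deg = np.degrees(2 * np.arctan(ipd_cm / (2 * d_cm)))
    print(f"target at {d_cm:>3} cm -> vergence ~ {vergence_deg:4.1f} deg")
```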
Eye tracking a self-moved target with complex hand-target dynamics
Landelle, Caroline; Montagnini, Anna; Madelain, Laurent
2016-01-01
Previous work has shown that the ability to track with the eye a moving target is substantially improved when the target is self-moved by the subject's hand compared with when being externally moved. Here, we explored a situation in which the mapping between hand movement and target motion was perturbed by simulating an elastic relationship between the hand and target. Our objective was to determine whether the predictive mechanisms driving eye-hand coordination could be updated to accommodate this complex hand-target dynamics. To fully appreciate the behavioral effects of this perturbation, we compared eye tracking performance when self-moving a target with a rigid mapping (simple) and a spring mapping as well as when the subject tracked target trajectories that he/she had previously generated when using the rigid or spring mapping. Concerning the rigid mapping, our results confirmed that smooth pursuit was more accurate when the target was self-moved than externally moved. In contrast, with the spring mapping, eye tracking had initially similar low spatial accuracy (though shorter temporal lag) in the self versus externally moved conditions. However, within ∼5 min of practice, smooth pursuit improved in the self-moved spring condition, up to a level similar to the self-moved rigid condition. Subsequently, when the mapping unexpectedly switched from spring to rigid, the eye initially followed the expected target trajectory and not the real one, thereby suggesting that subjects used an internal representation of the new hand-target dynamics. Overall, these results emphasize the stunning adaptability of smooth pursuit when self-maneuvering objects with complex dynamics. PMID:27466129
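A minimal simulation of the kind of elastic hand-target mapping described above, assuming the target behaves like a damped mass on a spring anchored to the hand; stiffness, damping, and the hand trajectory are illustrative choices, not the experiment's actual parameters.

```python
# Damped spring coupling between hand position and target position (toy mapping).
import numpy as np

dt, T = 0.001, 3.0                                 # 1 kHz simulation for 3 s
n = int(T / dt)
k, c, m = 40.0, 2.0, 1.0                           # spring stiffness, damping, target "mass"

t = np.arange(n) * dt
hand = np.sin(2 * np.pi * 0.5 * t)                 # hand moves sinusoidally at 0.5 Hz
target = np.zeros(n)
vel = 0.0
for i in range(1, n):
    acc = (k * (hand[i - 1] - target[i - 1]) - c * vel) / m
    vel += acc * dt
    target[i] = target[i - 1] + vel * dt

steady = slice(n // 3, None)                       # discard the initial transient
rms = np.sqrt(np.mean((hand[steady] - target[steady]) ** 2))
print(f"RMS hand-target distance under the spring mapping: {rms:.3f}")
```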
Does mood influence text processing and comprehension? Evidence from an eye-movement study.
Scrimin, Sara; Mason, Lucia
2015-09-01
Previous research has indicated that mood influences cognitive processes. However, data are scarce regarding the link between everyday emotional states and readers' text processing and comprehension. We aim to extend current research on the effects of mood induction on science text processing and comprehension, using eye-tracking methodology. We investigated whether a positive, negative, or neutral induced mood influences online processing, as revealed by indices of visual behaviour during reading, and offline text comprehension, as revealed by post-test questions. We were also interested in the link between text processing and comprehension. Seventy-eight undergraduate students were randomly assigned to three mood-induction conditions. Students were mood-induced by watching a video clip. They were then asked to read a scientific text while eye movements were registered. Pre- and post-reading knowledge was assessed through open-ended questions. Experimentally induced moods led readers to process an expository text differently. Overall, students in a positive mood spent significantly longer on text processing than students in the negative and neutral moods. Eye-movement patterns indicated more effective processing, reflected in a greater proportion of look-back fixation time, in positive-induced compared with negative-induced readers. Students in a positive mood also comprehended the text better, learning more factual knowledge, than students in the negative group. Only for the positive-induced readers did the more purposeful second-pass reading positively predict text comprehension. New insights are given into the effects of normal mood variations on students' text processing and comprehension through the use of eye-tracking methodology. Important implications for the role of emotional states in educational settings are highlighted. © 2015 The British Psychological Society.
Gravity influences the visual representation of object tilt in parietal cortex.
Rosenberg, Ari; Angelaki, Dora E
2014-10-22
Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, in intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction. Copyright © 2014 the authors.
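The reference-frame comparison at the heart of this analysis can be illustrated with a toy example. The sketch below is only a schematic of the logic, with a simplified sign convention and hypothetical angles; the actual analysis in the paper compares full tuning curves and gain changes rather than single preferred values.

```python
import numpy as np

def predicted_preferred_tilt(pref_upright_deg, body_roll_deg, frame):
    """Predicted preferred object tilt (in observer coordinates) after a body roll.

    In an egocentric frame the preference moves with the observer, so it is
    unchanged in observer coordinates; in a gravity-centered frame it stays
    fixed in the world, so in observer coordinates it shifts against the roll.
    Angles and sign convention are illustrative only.
    """
    if frame == "egocentric":
        return pref_upright_deg
    if frame == "gravity":
        return (pref_upright_deg - body_roll_deg) % 360
    raise ValueError("frame must be 'egocentric' or 'gravity'")

# A unit whose measured preference shifts by about -20 deg after a 20 deg roll
# is better described by the gravity-centered prediction.
measured_after_roll = 115.0   # hypothetical preferred tilt measured after a 20 deg roll
for frame in ("egocentric", "gravity"):
    prediction = predicted_preferred_tilt(135.0, 20.0, frame)
    print(frame, abs(prediction - measured_after_roll))
```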
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, Hui-Yu; Liao, Ying-Lan; Chang Gung University / Chang Gung Memorial Hospital, Taoyun, Taiwan
Purpose: The purpose of this study is to assess the eye-lens dose for patients who underwent brain CT examinations using two dose-reduction methods: organ-based tube current modulation (OBTCM) and in-plane bismuth shielding. Methods: This study received institutional review board approval; written informed consent to participate was obtained from all patients. Ninety patients who underwent routine brain CT examination were randomly assigned to three groups, i.e., routine, OBTCM, and bismuth shield. The OBTCM technique reduced the tube current when the X-ray tube rotated in front of the patients' eye-lens region. The patients in the bismuth shield group were covered with a one-ply bismuth shield over the eye region. Eye-lens doses were measured using TLD-100H chips, and the total effective doses were calculated using CT-Expo according to the CT scanning parameters. The surface doses for patients at off-center positions were assessed to evaluate the off-centering effect. Results: Phantom measurements indicated that the OBTCM technique reduced the surface dose to the eye lens by 26% to 28%, and increased the surface dose by 25% at the opposed incident direction (180°). Patients' eye-lens doses were reduced by 16.9% with the bismuth shield scan and by 30.5% with the OBTCM scan, compared to the routine scan. The eye-lens doses increased appreciably when the table position was lower than the isocenter. Conclusion: Reducing the dose to radiosensitive organs, such as the eye lens, during routine brain CT examinations could lower radiation risks. Both the OBTCM technique and in-plane bismuth shielding can be used to reduce the eye-lens dose. The eye-lens dose can be effectively reduced using the OBTCM scan without compromising diagnostic image quality. Patient position relative to the CT gantry also affects the dose level of the eye lens. This study was supported by grants from the Ministry of Science and Technology of Taiwan (MOST103-2314-B-182-009-MY2) and Chang Gung Memorial Hospital (CMRPD1C0682)
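The OBTCM principle, lowering the tube current while the X-ray tube passes in front of the eyes, can be sketched as a simple current profile over gantry angle. All numbers below are hypothetical; vendor implementations differ and may raise the current elsewhere in the rotation to keep image noise constant.

```python
import numpy as np

def obtcm_current(gantry_angle_deg, base_ma=300.0, reduction=0.75, anterior_halfwidth=60.0):
    """Illustrative organ-based tube-current modulation profile.

    The tube current is lowered while the X-ray tube is within an anterior arc
    in front of the eyes (angle 0 = directly anterior). Values are hypothetical.
    """
    angle = (gantry_angle_deg + 180) % 360 - 180   # wrap to [-180, 180)
    if abs(angle) <= anterior_halfwidth:
        return base_ma * reduction
    return base_ma

# Current (mA) sampled every 30 degrees of gantry rotation.
angles = np.arange(0, 360, 30)
print([obtcm_current(a) for a in angles])
```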
Adaptation of catch-up saccades during the initiation of smooth pursuit eye movements.
Schütz, Alexander C; Souto, David
2011-04-01
Reduction of retinal speed and alignment of the line of sight are believed to be the respective primary functions of smooth pursuit and saccadic eye movements. As eye muscle strength can change in the short term, continuous adjustments of motor signals are required to achieve constant accuracy. While adaptation of saccade amplitude to systematic position errors has been extensively studied, we know less about the adaptive response to position errors during smooth pursuit initiation, when target motion has to be taken into account to program saccades, and when position errors at the saccade endpoint could also be corrected by increasing pursuit velocity. To study short-term adaptation (250 adaptation trials) of tracking eye movements, we introduced a position error during the first catch-up saccade made during the initiation of smooth pursuit, in a ramp-step-ramp paradigm. The target position was either shifted in the direction of the horizontally moving target (forward step), against it (backward step) or orthogonally to it (vertical step). Results indicate adaptation of catch-up saccade amplitude to backward and forward steps. With vertical steps, saccades became oblique, by an inflexion of the early or late saccade trajectory. With a similar time course, post-saccadic pursuit velocity was increased in the step direction, adding further evidence that under some conditions pursuit and saccades can act synergistically to reduce position errors.
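A minimal sketch of the ramp-step-ramp target trajectory used in such adaptation paradigms is given below; the ramp speed, step size and step timing are illustrative values, not the ones reported in the study, and only the forward/backward (horizontal) step case is shown.

```python
import numpy as np

def ramp_step_ramp(t, ramp_speed=10.0, step_deg=2.0, step_time=0.15):
    """Target position (deg) for a simplified ramp-step-ramp trial.

    The target moves at a constant speed and, at step_time (meant to coincide
    with the first catch-up saccade), jumps by step_deg before continuing the
    ramp. A negative step_deg gives a backward step.
    """
    pos = ramp_speed * t
    pos = pos + np.where(t >= step_time, step_deg, 0.0)
    return pos

t = np.arange(0.0, 0.6, 0.01)          # time in seconds
print(ramp_step_ramp(t)[:20])
```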
Blood culture bottles are superior to conventional media for vitreous culture.
Thariya, Patsuda; Yospaiboon, Yosanan; Sinawat, Suthasinee; Sanguansak, Thuss; Bhoomibunchoo, Chavakij; Laovirojjanakul, Wipada
2016-08-01
To compare blood culture bottles and conventional media for vitreous culture in patients with clinically suspected infectious endophthalmitis. Retrospective comparative study at KKU Eye Center, Khon Kaen University. A total of 342 patients with clinically suspected infectious endophthalmitis participated in the study. The vitreous specimens were inoculated in both blood culture bottles and on conventional culture media (blood agar, MacConkey agar, chocolate agar, Sabouraud dextrose agar and thioglycolate broth). The main outcome measure was the number of positive culture yields in blood culture bottles and in conventional media. Positive culture yields by either method were found in 151 eyes (49.5%). There were 136 of 151 eyes (90.1%) with positive cultures in blood culture bottles, whereas 99 of 151 eyes (65.6%) yielded positive cultures in conventional media. This difference was statistically significant (P < 0.00001), with an odds ratio of 3.47 (95% confidence interval 1.92, 6.63). A combination of blood culture bottles and conventional media improved the yield. Blood culture bottles are superior to conventional media for vitreous culture in clinically suspected infectious endophthalmitis. Vitreous culture using blood culture bottles should be recommended as the primary method for microbiological diagnosis. A combination of both methods further improves the positive culture yield. © 2016 Royal Australian and New Zealand College of Ophthalmologists.
Fetal eye movements on magnetic resonance imaging.
Woitek, Ramona; Kasprian, Gregor; Lindner, Christian; Stuhr, Fritz; Weber, Michael; Schöpf, Veronika; Brugger, Peter C; Asenbaum, Ulrika; Furtner, Julia; Bettelheim, Dieter; Seidl, Rainer; Prayer, Daniela
2013-01-01
Eye movements are the physical expression of upper fetal brainstem function. Our aim was to identify and differentiate specific types of fetal eye movement patterns using dynamic MRI sequences. Their occurrence as well as the presence of conjugated eyeball motion and consistently parallel eyeball position was systematically analyzed. Dynamic SSFP sequences were acquired in 72 singleton fetuses (17-40 GW, three age groups [17-23 GW, 24-32 GW, 33-40 GW]). Fetal eye movements were evaluated according to a modified classification originally published by Birnholz (1981): Type 0: no eye movements; Type I: single transient deviations; Type Ia: fast deviation, slower reposition; Type Ib: fast deviation, fast reposition; Type II: single prolonged eye movements; Type III: complex sequences; and Type IV: nystagmoid. In 95.8% of fetuses, the evaluation of eye movements was possible using MRI, with a mean acquisition time of 70 seconds. Due to head motion, 4.2% of the fetuses and 20.1% of all dynamic SSFP sequences were excluded. Eye movements were observed in 45 fetuses (65.2%). Significant differences between the age groups were found for Type I (p = 0.03), Type Ia (p = 0.031), and Type IV eye movements (p = 0.033). Consistently parallel bulbs were found in 27.3-45%. In human fetuses, different eye movement patterns can be identified and described by MRI in utero. In addition to the originally classified eye movement patterns, a novel subtype has been observed, which apparently characterizes an important step in fetal brainstem development. We evaluated, for the first time, eyeball position in fetuses. Ultimately, the assessment of fetal eye movements by MRI yields the potential to identify early signs of brainstem dysfunction, as encountered in brain malformations such as Chiari II or molar tooth malformations.
Word Length and Lexical Activation: Longer Is Better
ERIC Educational Resources Information Center
Pitt, Mark A.; Samuel, Arthur G.
2006-01-01
Many models of spoken word recognition posit the existence of lexical and sublexical representations, with excitatory and inhibitory mechanisms used to affect the activation levels of such representations. Bottom-up evidence provides excitatory input, and inhibition from phonetically similar representations leads to lexical competition. In such a…
Body-Specific Representations of Spatial Location
ERIC Educational Resources Information Center
Brunye, Tad T.; Gardony, Aaron; Mahoney, Caroline R.; Taylor, Holly A.
2012-01-01
The body specificity hypothesis (Casasanto, 2009) posits that the way in which people interact with the world affects their mental representation of information. For instance, right- versus left-handedness affects the mental representation of affective valence, with right-handers categorically associating good with rightward areas and bad with…
Kasten, Erich; Bunzenthal, Ulrike; Sabel, Bernhard A
2006-11-25
It has been argued that patients with visual field defects compensate for their deficit by making more frequent eye movements toward the hemianopic field and that visual field enlargements found after vision restoration therapy (VRT) may be an artefact of such eye movements. In order to determine if this was correct, we recorded eye movements in hemianopic subjects before and after VRT. Visual fields were measured in subjects with homonymous visual field defects (n=15) caused by trauma, cerebral ischemia or haemorrhage (lesion age >6 months). Visual field charts were plotted using both high-resolution perimetry (HRP) and conventional perimetry before and after a 3-month period of VRT, with eye movements being recorded with a 2D-eye tracker. This permitted quantification of eye positions and measurements of deviation from fixation. VRT led to significant visual field enlargements as indicated by an increase of stimulus detection of 3.8% when tested using HRP and about 2.2% (OD) and 3.5% (OS) fewer misses with conventional perimetry. Eye movements were expressed as the standard deviations (S.D.) of the eye position recordings from fixation. Before VRT, the S.D. was +/-0.82 degrees horizontally and +/-1.16 degrees vertically; after VRT, it was +/-0.68 degrees and +/-1.39 degrees, respectively. A cluster analysis of the horizontal eye movements before VRT showed three types of subjects with (i) small (n=7), (ii) medium (n=7) or (iii) large fixation instability (n=1). Saccades were directed equally to the right or the left side; i.e., with no preference toward the blind hemifield. After VRT, many subjects showed a smaller variability of horizontal eye movements. Before VRT, 81.6% of the recorded eye positions were found within a range of 1 degree horizontally from fixation, whereas after VRT, 88.3% were within that range. In the 2 degrees range, we found 94.8% before and 98.9% after VRT. Subjects moved their eyes 5 degrees or more 0.3% of the time before VRT versus 0.1% after VRT. Thus, in this study, subjects with homonymous visual field defects who were attempting to fixate a central target while their fields were being plotted typically showed brief horizontal shifts, with no preference toward or away from the blind hemifield. These eye movements were usually less than 1 degree from fixation. Large saccades toward the blind field after VRT were very rare. VRT has no effect on either the direction or the amplitude of horizontal eye movements during visual field testing. These results argue against the theory that the visual field enlargements are artefacts induced by eye movements.
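The fixation-stability summary used here (standard deviations of eye position and the fraction of samples within 1 and 2 degrees of fixation) can be computed as sketched below; the thresholds follow the text, while the data and the helper name are the caller's.

```python
import numpy as np

def fixation_stability(horizontal_deg, vertical_deg):
    """Summarise fixation stability from eye-position samples (degrees from fixation).

    Returns horizontal/vertical standard deviations and the fraction of samples
    within 1 and 2 degrees horizontally, the kind of summary reported for the
    perimetry recordings described above.
    """
    h = np.asarray(horizontal_deg, dtype=float)
    v = np.asarray(vertical_deg, dtype=float)
    return {
        "sd_horizontal": float(h.std(ddof=1)),
        "sd_vertical": float(v.std(ddof=1)),
        "within_1deg": float(np.mean(np.abs(h) <= 1.0)),
        "within_2deg": float(np.mean(np.abs(h) <= 2.0)),
    }

# Synthetic recording with roughly the spread reported before VRT.
rng = np.random.default_rng(0)
print(fixation_stability(rng.normal(0, 0.8, 5000), rng.normal(0, 1.2, 5000)))
```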
Salter, Phia S.; Adams, Glenn
2016-01-01
A cultural-psychological analysis emphasizes the intentionality of everyday worlds: the idea that material products not only bear psychological traces of culturally constituted beliefs and desires, but also subsequently afford and promote culturally consistent understandings and actions. We applied this conceptual framework of mutual constitution in a research project using quantitative and qualitative approaches to understand the dynamic resonance between sociocultural variance in Black History Month (BHM) representations and the reproduction of racial inequality in the U.S. In studies 1 and 2, we considered whether mainstream BHM artifacts reflect the preferences and understandings of White Americans (i.e., psychological constitution of cultural worlds). Consistent with the psychological constitution hypothesis, White American participants reported more positive affect, better recognition, and greater liking for BHM representations from the schools where White Americans were the majority than BHM representations from the schools where Black students and other students of color were the majority. Moreover, as an indication of the identity relevance of BHM representations, White identification was more positively associated with judgments of positive affect and preference in response to BHM representations from White schools than BHM representations from the schools where Black students were in the majority. In studies 3 and 4, we considered whether BHM representations from different settings differentially afford support or opposition to anti-racism policies (i.e., cultural constitution of psychological experience). In support of the cultural constitution hypothesis, BHM representations typical of schools where Black students were in the majority were more effective at promoting support for anti-racism policies compared to BHM representations typical of predominately White schools and a control condition. This effect was mediated by the effect of (different) BHM representations on perception of racism. Together, these studies suggest that representations of Black History constitute cultural affordances that, depending on their source, can promote (or impede) perception of racism and anti-racism efforts. This research contributes to an emerging body of work examining the bidirectional, psychological importance of cultural products. We discuss implications for theorizing collective manifestations of mind. PMID:27621712
Lehrer, Roni; Schumacher, Gijs
2018-01-01
The policy positions parties choose are central to both attracting voters and forming coalition governments. How then should parties choose positions to best represent voters? Laver and Sergenti show that in an agent-based model with boundedly rational actors a decision rule (Aggregator) that takes the mean policy position of its supporters is the best rule to achieve high congruence between voter preferences and party positions. But this result only pertains to representation by the legislature, not representation by the government. To evaluate this we add a coalition formation procedure with boundedly rational parties to the Laver and Sergenti model of party competition. We also add two new decision rules that are sensitive to government formation outcomes rather than voter positions. We develop two simulations: a single-rule one in which parties with the same rule compete and an evolutionary simulation in which parties with different rules compete. In these simulations we analyze party behavior under a large number of different parameters that describe real-world variance in political parties' motives and party system characteristics. Our most important conclusion is that Aggregators also produce the best match between government policy and voter preferences. Moreover, even though citizens often frown upon politicians' interest in the prestige and rents that come with winning political office (office pay-offs), we find that citizens actually receive better representation by the government if politicians are motivated by these office pay-offs in contrast to politicians with ideological motivations (policy pay-offs). Finally, we show that while more parties are linked to better political representation, how parties choose policy positions affects political representation as well. Overall, we conclude that to understand variation in the quality of political representation scholars should look beyond electoral systems and take into account variation in party behavior as well.
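The Aggregator rule described above (move to the mean position of current supporters) can be sketched in a few lines. The example below uses a single policy dimension and hypothetical party positions purely for illustration; the Laver and Sergenti model is richer, with multiple dimensions, competing rules, and repeated elections.

```python
import numpy as np

def aggregator_position(voter_positions, party_positions, party_index):
    """One update step of an Aggregator-style decision rule (sketch).

    Each voter supports the closest party; the Aggregator party then moves to
    the mean position of its current supporters.
    """
    voters = np.asarray(voter_positions, dtype=float)
    parties = np.asarray(party_positions, dtype=float)
    nearest = np.argmin(np.abs(voters[:, None] - parties[None, :]), axis=1)
    supporters = voters[nearest == party_index]
    # If a party has no supporters, it stays where it is.
    return float(supporters.mean()) if supporters.size else float(parties[party_index])

# Hypothetical electorate and three parties; update the middle party once.
voters = np.random.default_rng(1).normal(0.0, 1.0, 1000)
print(aggregator_position(voters, [-1.5, 0.2, 1.5], party_index=1))
```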
Optimizations and Applications in Head-Mounted Video-Based Eye Tracking
ERIC Educational Resources Information Center
Li, Feng
2011-01-01
Video-based eye tracking techniques have become increasingly attractive in many research fields, such as visual perception and human-computer interface design. The technique primarily relies on the positional difference between the center of the eye's pupil and the first-surface reflection at the cornea, the corneal reflection (CR). This…
Visual Short-Term Memory During Smooth Pursuit Eye Movements
ERIC Educational Resources Information Center
Kerzel, Dirk; Ziegler, Nathalie E.
2005-01-01
Visual short-term memory (VSTM) was probed while observers performed smooth pursuit eye movements. Smooth pursuit keeps a moving object stabilized in the fovea. VSTM capacity for position was reduced during smooth pursuit compared with a condition with eye fixation. There was no difference between a condition in which the items were approximately…
Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations
Schwarzkopf, D. Samuel; Alvarez, Ivan; Lawson, Rebecca P.; Henriksson, Linda; Kriegeskorte, Nikolaus; Rees, Geraint
2016-01-01
Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role for face inversion effects and for the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia. SIGNIFICANCE STATEMENT Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features, like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision. When facial features were shown at typical (rather than reversed) visual field locations, they were discriminated better by humans and could be decoded with higher accuracy from brain activity patterns in the right occipital face area. This suggests that brain representations of face features do not cover the visual field uniformly. It may help us understand the well-known face-inversion effect and conditions affecting gaze behavior toward faces, such as prosopagnosia and autism spectrum disorders. PMID:27605606
Multimodal representation of limb endpoint position in the posterior parietal cortex.
Shi, Ying; Apker, Gregory; Buneo, Christopher A
2013-04-01
Understanding the neural representation of limb position is important for comprehending the control of limb movements and the maintenance of body schema, as well as for the development of neuroprosthetic systems designed to replace lost limb function. Multiple subcortical and cortical areas contribute to this representation, but its multimodal basis has largely been ignored. Regarding the parietal cortex, previous results suggest that visual information about arm position is not strongly represented in area 5, although these results were obtained under conditions in which animals were not using their arms to interact with objects in their environment, which could have affected the relative weighting of relevant sensory signals. Here we examined the multimodal basis of limb position in the superior parietal lobule (SPL) as monkeys reached to and actively maintained their arm position at multiple locations in a frontal plane. On half of the trials both visual and nonvisual feedback of the endpoint of the arm were available, while on the other trials visual feedback was withheld. Many neurons were tuned to arm position, while a smaller number were modulated by the presence/absence of visual feedback. Visual modulation generally took the form of a decrease in both firing rate and variability with limb vision and was associated with more accurate decoding of position at the population level under these conditions. These findings support a multimodal representation of limb endpoint position in the SPL but suggest that visual signals are relatively weakly represented in this area, and only at the population level.
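The population-level decoding referred to above can be illustrated with a simple linear readout. The sketch below uses synthetic data and an ordinary least-squares decoder as a stand-in; the study's actual decoding method may differ.

```python
import numpy as np

def decode_endpoint(rates_train, pos_train, rates_test):
    """Linear population decoder of limb endpoint position (sketch).

    rates_* : trials x neurons firing-rate matrices; pos_train : trials x 2
    endpoint coordinates. A least-squares linear readout is fit on training
    trials and applied to held-out trials.
    """
    X = np.hstack([rates_train, np.ones((rates_train.shape[0], 1))])  # add intercept
    W, *_ = np.linalg.lstsq(X, pos_train, rcond=None)
    Xt = np.hstack([rates_test, np.ones((rates_test.shape[0], 1))])
    return Xt @ W

# Toy data: 40 neurons linearly tuned to a 2-D endpoint, plus noise.
rng = np.random.default_rng(2)
pos = rng.uniform(-10, 10, size=(200, 2))
tuning = rng.normal(size=(2, 40))
rates = pos @ tuning + rng.normal(0, 1.0, size=(200, 40))
pred = decode_endpoint(rates[:150], pos[:150], rates[150:])
print(round(float(np.mean(np.abs(pred - pos[150:]))), 2))   # mean absolute error
```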
Frost, Shawn B; Iliakova, Maria; Dunham, Caleb; Barbay, Scott; Arnold, Paul; Nudo, Randolph J
2013-08-01
The purpose of the present study was to determine the feasibility of using a common laboratory rat strain for reliably locating cortical motor representations of the hindlimb. Intracortical microstimulation techniques were used to derive detailed maps of the hindlimb motor representations in 6 adult Fischer-344 rats. The organization of the hindlimb movement representation, while variable across individual rats in topographic detail, displayed several commonalities. The hindlimb representation was positioned posterior to the forelimb motor representation and posterolateral to the motor trunk representation. The areal extent of the hindlimb representation across the cortical surface averaged 2.00 ± 0.50 mm². Superimposing individual maps revealed an overlapping area measuring 0.35 mm², indicating that the location of the hindlimb representation can be predicted reliably based on stereotactic coordinates. Across the sample of rats, the hindlimb representation was found 1.25-3.75 mm posterior to the bregma, with an average center location approximately 2.6 mm posterior to the bregma. Likewise, the hindlimb representation was found 1-3.25 mm lateral to the midline, with an average center location approximately 2 mm lateral to the midline. The cortical hindlimb motor representation in Fischer-344 rats can be reliably located based on its stereotactic position posterior to the bregma and lateral to the longitudinal skull suture at midline. The ability to accurately predict the cortical localization of functional hindlimb territories in a rodent model is important, as such animal models are being increasingly used in the development of brain-computer interfaces for restoration of function after spinal cord injury.
Reliability in the Location of Hindlimb Motor Representations in Fischer-344 Rats
Frost, Shawn B.; Iliakova, Maria; Dunham, Caleb; Barbay, Scott; Arnold, Paul; Nudo, Randolph J.
2014-01-01
Object The purpose of the present study was to determine the feasibility of using a common laboratory rat strain for locating cortical motor representations of the hindlimb reliably. Methods Intracortical microstimulation (ICMS) techniques were used to derive detailed maps of the hindlimb motor representations in six adult Fischer-344 rats. Results The organization of the hindlimb movement representation, while variable across individuals in topographic detail, displayed several commonalities. The hindlimb representation was positioned posterior to the forelimb motor representation and postero-lateral to the motor trunk representation. The areal extent of the hindlimb representation across the cortical surface averaged 2.00 ± 0.50 mm². Superimposing individual maps revealed an overlapping area measuring 0.35 mm², indicating that the location of the hindlimb representation can be predicted reliably based on stereotactic coordinates. Across the sample of rats, the hindlimb representation was found 1.25–3.75 mm posterior to Bregma, with an average center location ~2.6 mm posterior to Bregma. Likewise, the hindlimb representation was found 1–3.25 mm lateral to the midline, with an average center location ~2 mm lateral to midline. Conclusions The cortical hindlimb motor representation in Fischer-344 rats can be reliably located based on its stereotactic position posterior to Bregma and lateral to the longitudinal skull suture at midline. The ability to accurately predict the cortical localization of functional hindlimb territories in a rodent model is important, as such animal models are being used increasingly in the development of brain-computer interfaces for restoration of function after spinal cord injury. PMID:23725395
Six-Position, Frontal View Photography in Blepharoplasty: A Simple Method.
Zhang, Cheng; Guo, Xiaoshuang; Han, Xuefeng; Tian, Yi; Jin, Xiaolei
2018-02-26
Photography plays a pivotal role in patient education, photo-documentation, preoperative planning and postsurgical evaluation in plastic surgery. It has long served as a bridge facilitating communication not only between patients and doctors, but also among plastic surgeons from different countries. Although several basic principles and photographic methods have been proposed, there is no internationally accepted photographic protocol that provides both static and dynamic information in blepharoplasty. In this article, we introduce a novel six-position, frontal view photographic method for thorough assessment in blepharoplasty. From October 2013 to January 2017, 1068 patients who underwent blepharoplasty were enrolled in our clinical research. All patients received six-position, frontal view photography. Pictures were taken of the patients looking up, looking down, squeezing, smiling, looking ahead and with closed eyes. Conventional frontal view photography contained only the last two positions. Both the novel six-position photographs and conventional two-position photographs were then used to appraise postsurgical outcomes. Compared to conventional two-position, frontal view photography, six-position, frontal view photography can provide more detailed, thorough information about the eyes. It is of clinical significance in indicating underlying adhesion of skin/muscle/fat according to an individual's features and in assessing preoperative and postoperative dynamic changes and aesthetic outcomes. Six-position, frontal view photography is technically uncomplicated while exhibiting static, dynamic and detailed information about the eyes. This innovative method is favorable in eye assessment, especially for revision blepharoplasty. We suggest using six-position, frontal view photography to obtain comprehensive photographs.
Physiological responses to the Coriolis illusion: effects of head position and vision.
Westmoreland, David; Krell, Robert W; Self, Brian P
2007-10-01
Changes in sympathetic outflow during Type II spatial disorientation are well documented. In this study we investigated the influences of head position and eye state (open or closed) on sympathetic activation. Eleven naive subjects (6 men, 5 women) were tested in a General Aviation Trainer that accelerated at a subthreshold rate for 60 s until a constant angular velocity of 90 degrees/s was reached. Approximately 40 s later, subjects were instructed to tilt their heads along either the pitch or roll axis, inducing a Coriolis illusion. Subjects reported the perceived intensity and duration of disorientation. Heart rate, heart rate variability, and electrodermal responses were recorded before, during, and after the period of disorientation. Each subject completed four trials, which were crossed combinations of head position and eye state. There were significant increases in heart rate and the electrodermal response during disorientation, but no significant change in heart rate variability. Head position had no significant effect on any physiological parameters or on the perceived intensity of disorientation; subjects reported a shorter duration of disorientation when the head was tilted into the roll versus the pitch axis. Eye state had no effect on heart rate, heart rate variability, or the intensity of disorientation, but the electrodermal response was somewhat greater, and the duration of disorientation shorter, when eyes were open. The results suggest that head position and eye state (open or closed) do not need to be included as factors when investigating sympathetic outflow during a mild Coriolis illusion.
NASA Technical Reports Server (NTRS)
Thurtell, M. J.; Kunin, M.; Raphan, T.; Wall, C. C. (Principal Investigator)
2000-01-01
It is well established that the head and eye velocity axes do not always align during compensatory vestibular slow phases. It has been shown that the eye velocity axis systematically tilts away from the head velocity axis in a manner that is dependent on eye-in-head position. The mechanisms responsible for producing these axis tilts are unclear. In this model-based study, we aimed to determine whether muscle pulleys could be involved in bringing about these phenomena. The model presented incorporates semicircular canals, central vestibular pathways, and an ocular motor plant with pulleys. The pulleys were modeled so that they brought about a rotation of the torque axes of the extraocular muscles that was a fraction of the angle of eye deviation from primary position. The degree to which the pulleys rotated the torque axes was altered by means of a pulley coefficient. Model input was head velocity and initial eye position data from passive and active yaw head impulses with fixation at 0 degrees, 20 degrees up and 20 degrees down, obtained from a previous experiment. The optimal pulley coefficient required to fit the data was determined by calculating the mean square error between data and model predictions of torsional eye velocity. For active head impulses, the optimal pulley coefficient varied considerably between subjects. The median optimal pulley coefficient was found to be 0.5, the pulley coefficient required for producing saccades that perfectly obey Listing's law when using a two-dimensional saccadic pulse signal. The model predicted the direction of the axis tilts observed in response to passive head impulses from 50 ms after onset. During passive head impulses, the median optimal pulley coefficient was found to be 0.21, when roll gain was fixed at 0.7. The model did not accurately predict the alignment of the eye and head velocity axes that was observed early in the response to passive head impulses. We found that this alignment could be well predicted if the roll gain of the angular vestibuloocular reflex was modified during the initial period of the response, while pulley coefficient was maintained at 0.5. Hence a roll gain modification allows stabilization of the retinal image without requiring a change in the pulley effect. Our results therefore indicate that the eye position-dependent velocity axis tilts could arise due to the effects of the pulleys and that a roll gain modification in the central vestibular structures may be responsible for countering the pulley effect.
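The pulley model's key relation (a rotation of the muscle torque axes by a fixed fraction of the eye's deviation from primary position) and the fitting of an optimal pulley coefficient by minimising mean squared error can be sketched as below. This simplified version fits the axis-tilt relation directly rather than simulating torsional eye velocity through the full plant model, and the synthetic data are illustrative only.

```python
import numpy as np

def torque_axis_tilt(eye_position_deg, pulley_coefficient):
    """Tilt of the muscle torque axis produced by the pulleys (simplified)."""
    return pulley_coefficient * eye_position_deg

def fit_pulley_coefficient(eye_positions, observed_axis_tilts, grid=np.linspace(0, 1, 101)):
    """Grid-search the pulley coefficient that minimises the mean squared error."""
    errors = [np.mean((torque_axis_tilt(eye_positions, c) - observed_axis_tilts) ** 2)
              for c in grid]
    return float(grid[int(np.argmin(errors))])

# Synthetic data generated with a true coefficient of 0.5 plus measurement noise.
rng = np.random.default_rng(3)
eye = rng.uniform(-20, 20, 200)                 # eye-in-head position, degrees
tilts = 0.5 * eye + rng.normal(0, 1.0, 200)
print(fit_pulley_coefficient(eye, tilts))
```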
Acute air pollution-related symptoms among residents in Chiang Mai, Thailand.
Wiwatanadate, Phongtape
2014-01-01
Open burnings (forest fires, agricultural, and garbage burnings) are the major sources of air pollution in Chiang Mai, Thailand. A time series prospective study was conducted in which 3025 participants were interviewed for 19 acute symptoms with the daily records of ambient air pollutants: particulate matter less than 10 microm in size (PM10), carbon monoxide (CO), nitrogen dioxide (NO2), sulfur dioxide (SO2), and ozone (O3). PM10 was positively associated with blurred vision with an adjusted odds ratio (OR) of 1.009. CO was positively associated with lower lung and heart symptoms with adjusted ORs of 1.137 and 1.117. NO2 was positively associated with nosebleed, larynx symptoms, dry cough, lower lung symptoms, heart symptoms, and eye irritation with the range of adjusted ORs (ROAORs) of 1.024 to 1.229. SO2 was positively associated with swelling feet, skin symptoms, eye irritation, red eyes, and blurred vision with ROAORs of 1.205 to 2.948. Conversely, O3 was negatively related to running nose, burning nose, dry cough, body rash, red eyes, and blurred vision with ROAORs of 0.891 to 0.979.
Common determinants of body size and eye size in chickens from an advanced intercross line.
Prashar, Ankush; Hocking, Paul M; Erichsen, Jonathan T; Fan, Qiao; Saw, Seang Mei; Guggenheim, Jeremy A
2009-06-15
Myopia development is characterised by an increased axial eye length. Therefore, identifying factors that influence eye size may provide new insights into the aetiology of myopia. In humans, axial length is positively correlated to height and weight, and in mice, eye weight is positively correlated with body weight. The purpose of this study was to examine the relationship between eye size and body size in chickens from a genetic cross in which alleles with major effects on eye and body size were segregating. Chickens from a cross between a layer line (small body size and eye size) and a broiler line (large body and eye size) were interbred for 10 generations so that alleles for eye and body size would have the chance to segregate independently. At 3 weeks of age, 510 chicks were assessed using in vivo high resolution A-scan ultrasonography and keratometry. Equatorial eye diameter and eye weight were measured after enucleation. The variations in eye size parameters that could be explained by body weight (BW), body length (BL), head width (HW) and sex were examined using multiple linear regression. It was found that BW, BL and HW and sex together predicted 51-56% of the variation in eye weight, axial length, corneal radius, and equatorial eye diameter. By contrast, the same variables predicted only 22% of the variation in lens thickness. After adjusting for sex, the three body size parameters predicted 45-49% of the variation in eye weight, axial length, corneal radius, and eye diameter, but only 0.4% of the variation in lens thickness. In conclusion, about half of the variation in eye size in the chickens of this broiler-layer advanced intercross line is likely to be determined by pleiotropic genes that also influence body size. Thus, mapping the quantitative trait loci (QTL) that determine body size may be useful in understanding the genetic determination of eye size (a logical inference of this result is that the 20 or more genetic variants that have recently been shown to influence human height may also be found to influence axial eye length). Furthermore, adjusting for body size will be essential in mapping pure eye size QTL in this chicken population, and may also have value in mapping eye size QTL in humans.
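The variance-explained figures quoted above come from multiple linear regression of eye-size measures on body-size measures; a minimal sketch of such an analysis, on synthetic data with hypothetical coefficients, is shown below.

```python
import numpy as np

def variance_explained(predictors, response):
    """R^2 of an ordinary least-squares fit (sketch of the regression summarised above).

    predictors : n x p matrix (e.g. body weight, body length, head width, sex);
    response   : n-vector (e.g. axial length).
    """
    X = np.hstack([predictors, np.ones((predictors.shape[0], 1))])  # add intercept
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    fitted = X @ beta
    ss_res = np.sum((response - fitted) ** 2)
    ss_tot = np.sum((response - response.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy data for 510 birds: four standardized body-size predictors and one eye measure.
rng = np.random.default_rng(4)
body = rng.normal(size=(510, 4))
axial = body @ np.array([0.5, 0.3, 0.2, 0.1]) + rng.normal(0, 0.7, 510)
print(round(float(variance_explained(body, axial)), 2))
```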
Identities, Social Representations and Critical Thinking
ERIC Educational Resources Information Center
Lopez-Facal, Ramon; Jimenez-Aleixandre, Maria Pilar
2009-01-01
This comment on L. Simonneaux and J. Simonneaux paper focuses on the role of "identities" in dealing with socio-scientific issues. We argue that there are two types of identities (social representations) influencing the students' positions: On the one hand their social representations of the bears' and wolves' identities as belonging to…
Wade, Nicholas J
2008-01-01
The art of visual communication is not restricted to the fine arts. Scientists also apply art in communicating their ideas graphically. Diagrams of anatomical structures, like the eye and visual pathways, and figures displaying specific visual phenomena have assisted in the communication of visual ideas for centuries. It is often the case that the development of a discipline can be traced through graphical representations and this is explored here in the context of concepts of visual science. As with any science, vision can be subdivided in a variety of ways. The classification adopted is in terms of optics, anatomy, and visual phenomena; each of these can in turn be further subdivided. Optics can be considered in terms of the nature of light and its transmission through the eye. Understanding of the gross anatomy of the eye and visual pathways was initially dependent upon the skills of the anatomist whereas microanatomy relied to a large extent on the instruments that could resolve cellular detail, allied to the observational skills of the microscopist. Visual phenomena could often be displayed on the printed page, although novel instruments expanded the scope of seeing, particularly in the nineteenth century.
Dissociated effects of distractors on saccades and manual aiming.
McIntosh, Robert D; Buonocore, Antimo
2012-08-01
The remote distractor effect (RDE) is a robust phenomenon whereby target-directed saccades are delayed by the appearance of a distractor. This effect persists even when the target location is perfectly predictable. The RDE has been studied extensively in the oculomotor domain but it is unknown whether it generalises to other spatially oriented responses. In three experiments, we tested whether the RDE generalises to manual aiming. Experiment 1 required participants to move their hand or eyes to predictable targets presented alone or accompanied by a distractor in the opposite hemifield. The RDE was observed for the eyes but not for the hand. Experiment 2 replicated this dissociation in a more naturalistic task in which eye movements were not constrained during manual aiming. Experiment 3 confirmed the lack of manual RDE across a wider range of distractor delays (0, 50, 100, and 150 ms). Our data imply that the RDE is specific to the oculomotor system, at least for non-foveal distractors. We suggest that the oculomotor RDE reflects competitive interactions between target and distractor representations in the superior colliculus, which are not necessarily shared by manual aiming.
Richmond, Jenny L; Power, Jessica
2014-09-01
Relational memory, or the ability to bind components of an event into a network of linked representations, is a primary function of the hippocampus. Here we extend eye-tracking research showing that infants are capable of forming memories for the relation between arbitrarily paired scenes and faces, by looking at age-related changes in relational memory over the first year of life. Six- and 12-month-old infants were familiarized with pairs of faces and scenes before being tested with arrays of three familiar faces that were presented on a familiar scene. Preferential looking at the face that matches the scene is typically taken as evidence of relational memory. The results showed that while 6-month-olds showed preferential looking very early when face/scene pairs were tested immediately, 12-month-olds did not exhibit evidence of relational memory either immediately or after a short delay. Theoretical implications for the functional development of the hippocampus and practical implications for the use of eye tracking to measure memory during early life are discussed. © 2014 Wiley Periodicals, Inc.
Processing and Representation of Ambiguous Words in Chinese Reading: Evidence from Eye Movements.
Shen, Wei; Li, Xingshan
2016-01-01
In the current study, we used eye tracking to investigate whether senses of polysemous words and meanings of homonymous words are represented and processed similarly or differently in Chinese reading. Readers read sentences containing target words that were either homonymous or polysemous. The context preceding the target words was manipulated to bias participants toward the dominant or subordinate meaning of the ambiguous words, or was kept neutral. Similarly, disambiguating regions following the target words were also manipulated to favor either the dominant or subordinate meanings of the ambiguous words. The results showed that there were similar eye movement patterns when Chinese participants read sentences containing homonymous and polysemous words. The study also found that participants took longer to read the target word and the disambiguating text following it when the prior context and disambiguating regions favored divergent meanings rather than the same meaning. These results suggest that homonymy and polysemy are represented similarly in the mental lexicon when a particular meaning (sense) is fully specified by disambiguating information. Furthermore, multiple meanings (senses) are represented as separate entries in the mental lexicon.
Fukushima, Kikuro; Fukushima, Junko; Warabi, Tateo
2011-01-01
Smooth-pursuit eye movements are voluntary responses to small slow-moving objects in the fronto-parallel plane. They evolved in primates, who possess high-acuity foveae, to ensure clear vision about the moving target. The primate frontal cortex contains two smooth-pursuit related areas; the caudal part of the frontal eye fields (FEF) and the supplementary eye fields (SEF). Both areas receive vestibular inputs. We review functional differences between the two areas in smooth-pursuit. Most FEF pursuit neurons signal pursuit parameters such as eye velocity and gaze-velocity, and are involved in canceling the vestibulo-ocular reflex by linear addition of vestibular and smooth-pursuit responses. In contrast, gaze-velocity signals are rarely represented in the SEF. Most FEF pursuit neurons receive neck velocity inputs, while discharge modulation during pursuit and trunk-on-head rotation adds linearly. Linear addition also occurs between neck velocity responses and vestibular responses during head-on-trunk rotation in a task-dependent manner. During cross-axis pursuit–vestibular interactions, vestibular signals effectively initiate predictive pursuit eye movements. Most FEF pursuit neurons discharge during the interaction training after the onset of pursuit eye velocity, making their involvement unlikely in the initial stages of generating predictive pursuit. Comparison of representative signals in the two areas and the results of chemical inactivation during a memory-based smooth-pursuit task indicate they have different roles; the SEF plans smooth-pursuit including working memory of motion–direction, whereas the caudal FEF generates motor commands for pursuit eye movements. Patients with idiopathic Parkinson’s disease were asked to perform this task, since impaired smooth-pursuit and visual working memory deficit during cognitive tasks have been reported in most patients. Preliminary results suggested specific roles of the basal ganglia in memory-based smooth-pursuit. PMID:22174706
ERIC Educational Resources Information Center
Barmao, Catherine
2013-01-01
This paper analyses factors contributing to the under-representation of female teachers in headship positions in Eldoret Municipality, Kenya. The study was guided by socialization theory of hierarchical gender prescriptions, which offers three distinct theoretical traditions that help in understanding sex and gender. A descriptive survey was adopted for the…
Lexical Representation of Schwa Words: Two Mackerels, but Only One Salami
ERIC Educational Resources Information Center
Burki, Audrey; Gaskell, M. Gareth
2012-01-01
The present study investigated the lexical representations underlying the production of English schwa words. Two types of schwa words were compared: words with a schwa in poststress position (e.g., mack"e"rel), whose schwa and reduced variants differ in a categorical way, and words with a schwa in prestress position (e.g.,…
Moreno-López, Bernardo; Escudero, Miguel; Estrada, Carmen
2002-01-01
Nitric oxide (NO) synthesis by prepositus hypoglossi (PH) neurons is necessary for the normal performance of horizontal eye movements. We have previously shown that unilateral injections of NO synthase (NOS) inhibitors into the PH nucleus of alert cats produce velocity imbalance without alteration of the eye position control, both during spontaneous eye movements and the vestibulo-ocular reflex (VOR). This NO effect is exerted on the dorsal PH neuropil, whose fibres increase their cGMP content when stimulated by NO. In an attempt to determine whether NO acts by modulation of a specific neurotransmission system, we have now compared the oculomotor effects of NOS inhibition with those produced by local blockade of glutamatergic, GABAergic or glycinergic receptors in the PH nucleus of alert cats. Both glutamatergic antagonists used, 2-amino-5-phosphonovaleric acid (APV) and 2,3-dihydro-6-nitro-7-sulphamoyl-benzo quinoxaline (NBQX), induced a nystagmus contralateral to that observed upon NOS inhibition, and caused exponential eye position drift. In contrast, bicuculline and strychnine induced eye velocity alterations similar to those produced by NOS inhibitors, suggesting that NO oculomotor effects were due to facilitation of some inhibitory input to the PH nucleus. To investigate the anatomical location of the putative NO target neurons, the retrograde tracer Fast Blue was injected in one PH nucleus, and the brainstem sections containing Fast Blue-positive neurons were stained with double immunohistochemistry for NO-sensitive cGMP and glutamic acid decarboxylase. GABAergic neurons projecting to the PH nucleus and containing NO-sensitive cGMP were found almost exclusively in the ipsilateral medial vestibular nucleus and marginal zone. The results suggest that the nitrergic PH neurons control their own firing rate by a NO-mediated facilitation of GABAergic afferents from the ipsilateral medial vestibular nucleus. This self-control mechanism could play an important role in the maintenance of the vestibular balance necessary to generate a stable and adequate eye position signal. PMID:11927688
Susceptibility of proliferating cells to benzo[a]pyrene-induced homologous recombination in mice.
Bishop, A J; Kosaras, B; Carls, N; Sidman, R L; Schiestl, R H
2001-04-01
The pink-eyed unstable mutation, p(un), is the result of a 70 kb tandem duplication within the murine pink-eyed, p, gene. Deletion of one copy of the duplicated region by homologous deletion/recombination occurs spontaneously in embryos and results in pigmented spots in the fur and eye. Such deletion events are inducible by a variety of DNA damaging agents, as we have observed previously with both fur- and eye-spot assays. Here we describe a study of the effect of exposure to benzo[a]pyrene (B[a]P) at different times of development on reversion induction in the eye. Previously we, among others, have reported that the retinal pigment epithelium (RPE) displays a position effect variegation phenotype in the pattern of pink-eyed unstable reversions. Following an acute exposure to B[a]P or X-rays on the tenth day of gestation an increased frequency of reversion events was detected in a distinct region of the adult RPE. Examining exposure at different times of eye development reveals that both B[a]P and X-rays result in an increased frequency of reversion events, though the increase was only significant following B[a]P exposure, similar to our previous report limited to exposure on the tenth day of gestation. Examination of B[a]P-exposed RPE in the present study revealed distinct regions where the induced events lie and that the positions of these regions are found at increasing distances from the optic nerve the later the time of exposure. This position effect directly reflects the previously observed developmental pattern of the RPE, namely that cells in the regions most distal from the optic nerve are proliferating most vigorously. The numbers and positions of RPE cells displaying the transformed (pigmented) phenotype strongly advocate the proposal that dividing cells are at highest risk to deletions induced by carcinogens.
Suzuki, Nami; Noh, Jaeduk Yoshimura; Kameda, Toshiaki; Yoshihara, Ai; Ohye, Hidemi; Suzuki, Miho; Matsumoto, Masako; Kunii, Yo; Iwaku, Kenji; Watanabe, Natsuko; Mukasa, Koji; Kozaki, Ai; Inoue, Toshu; Sugino, Kiminori; Ito, Koichi
2018-01-01
Euthyroid Graves' disease (EGD) is a rare condition defined as the presence of thyroid-associated ophthalmopathy (TAO) in patients with normal thyroid function. Due to the rarity of this disease, only a limited number of studies and case reports are available for further evaluation of the characteristics of the disease. The aim of this study was to examine the changes in the thyroid function, thyrotropin receptor antibodies (TRAb) and eye symptoms, and then determine whether TRAb is related to TAO in EGD patients. TRAb in this study was defined as including both thyrotropin-binding inhibitory immunoglobulin (TBII) and thyroid-stimulating immunoglobulin (TSAb). Medical records of patients diagnosed with EGD were reviewed. Ophthalmologists specializing in TAO examined the eyes of all subjects. Of the 58 patients diagnosed with EGD, 24.1% developed hyperthyroidism, while 3.4% developed hypothyroidism. A total of 72.4% of the 58 patients remained euthyroid throughout the entire follow-up period. At the initial presentation, TBII and TSAb were positive in 74.5% and 70.5%, respectively. Ophthalmic treatments were administered to 30 (51.7%) out of the 58 patients. A significant spontaneous improvement of the eye symptoms was found in 28 of the EGD patients who did not require eye treatments. EGD patients exhibited positive rates for both TBII and TSAb, with the number of the TRAb-positive patients gradually decreasing while the eye symptoms spontaneously improved over time. There were no correlations found between TRAb at initial presentation and the eye symptoms. TBII and TSAb were positive in about 70% of EGD patients at their initial visit. Thyroid functions of EGD patients who have been euthyroid for more than 6.7 years may continue to remain euthyroid in the future.
Primary position and listing's law in acquired and congenital trochlear nerve palsy.
Straumann, Dominik; Steffen, Heimo; Landau, Klara; Bergamin, Oliver; Mudgil, Ananth V; Walker, Mark F; Guyton, David L; Zee, David S
2003-10-01
In ocular kinematics, the primary position (PP) of the eye is defined by the position from which movements do not induce ocular rotations around the line of sight (Helmholtz). PP is mathematically linked to the orientation of Listing's plane. This study was conducted to determine whether PP is affected differently in patients with clinically diagnosed congenital (conTNP) and acquired (acqTNP) trochlear nerve palsy. Patients with unilateral conTNP (n = 25) and acqTNP (n = 9) performed a modified Hess screen test. Three-dimensional eye positions were recorded with dual search coils. PP in eyes with acqTNP was significantly more temporal (mean: 21.2 degrees ) than in eyes with conTNP (6.8 degrees ) or healthy eyes (7.2 degrees ). In the pooled data of all patients, the horizontal location of PP significantly correlated with vertical noncomitance with the paretic eye in adduction (R = 0.59). Using a computer model, PP in acqTNP could be reproduced by a neural lesion of the superior oblique (SO) muscle. An additional simulated overaction of the inferior oblique (IO) muscle moved PP back to normal, as in conTNP. Lengthening the SO and shortening the IO muscles could also simulate PP in conTNP. The temporal displacement of PP in acqTNP is a direct consequence of the reduced force of the SO muscle. The reversal of this temporal displacement of PP, which occurs in some patients with conTNP, can be explained by a secondary overaction of the IO muscle. Alternatively, length changes in the SO and IO muscles, or other anatomic anomalies within the orbit, without a neural lesion, may also explain the difference in location of PP between conTNP and acqTNP.
... a defect in, the control of voluntary purposeful eye movement. Children with this condition have difficulty moving their ... to compensate for this inability to initiate horizontal eye movements away from the straight-ahead gaze position. Typically, ...
Lee, Sun Young; Cheng, Vincent; Rodger, Damien; Rao, Narsing
2015-12-01
Ocular syphilis is reemerging as an important cause of uveitis in the new era of common co-infection with HIV. This study describes the clinical and laboratory characteristics of individuals co-infected with ocular syphilis and HIV compared with HIV-negative individuals. In this retrospective observational case series, medical records of patients diagnosed with ocular syphilis with serologic support from 2008 to 2014 were reviewed. Ocular and systemic manifestations and laboratory profiles were reviewed. Twenty-nine eyes of 16 consecutive patients (10 HIV-positive and 6 HIV-negative) were included. All patients were male, and the mean age at onset of ocular syphilis was 43 years (42.65 ± 13.13). In both HIV-positive and HIV-negative groups, ocular manifestations of syphilis were variable, including anterior uveitis (4 eyes), posterior uveitis (8 eyes), panuveitis (13 eyes), and isolated papillitis (4 eyes). In HIV-positive patients, panuveitis was the most common feature (12/18 eyes, 67 %) and serum rapid plasma reagin (RPR) titers were significantly higher (range 1:64-1:16,348; mean 1:768; p = 0.018) than in HIV-negative patients. Upon the diagnosis of ocular syphilis in HIV-positive patients, HIV-1 viral load was high (median 206,887 copies/ml) and CD4 cell count ranged from 127 to 535 cells/ml (mean 237 ± 142; median 137). Regardless of HIV status, cerebrospinal fluid (CSF) examination was frequently abnormal: positive CSF fluorescent treponemal antibody absorption (FTA-ABS) or Venereal Disease Research Laboratory (VDRL) test results in seven patients, and elevated CSF WBC count or elevated CSF protein in six patients. Our results indicate that patients with ocular syphilis and high serum RPR titers may have concomitant HIV infection requiring further testing for HIV status, and that ocular syphilis is likely associated with central nervous system involvement and therefore should be managed according to the treatment recommendations for neurosyphilis.
Yamada, T; Suzuki, D A; Yee, R D
1996-11-01
1. Smooth pursuit-like eye movements were evoked with low-current microstimulation delivered to rostral portions of the nucleus reticularis tegmenti pontis (rNRTP) in alert macaques. Microstimulation sites were selected by the observation of modulations in single-cell firing rates that were correlated with periodic smooth-pursuit eye movements. Current intensities ranged from 10 to 120 microA and were routinely < 40 microA. Microstimulation was delivered either in the dark with no fixation, 100 ms after a fixation target was extinguished, or during maintained fixation of a stationary or moving target. Evoked eye movements also were studied under open-loop conditions with the target image stabilized on the retina. 2. Eye movements evoked in the absence of a target rapidly accelerated to a constant velocity that was maintained for the duration of the microstimulation. Evoked eye speeds ranged from 3.7 to 23 deg/s and averaged 11 deg/s. Evoked eye speed appeared to be linearly related to initial eye position, with a sensitivity that averaged 0.23 deg.s-1.deg-1. While some horizontal and oblique smooth eye movements were elicited, microstimulation resulted in upward eye movements at 89% of the sites. 3. Evoked eye speed was found to be dependent on microstimulation pulse frequency and current intensity. Within limits, evoked eye speed increased with increases in stimulation frequency or current intensity. For stimulation frequencies < 300-400 Hz, only smooth pursuit-like eye movements were evoked. At higher stimulation frequencies, accompanying saccades were consistently elicited. 4. Feedback of retinal image motion interacted with the evoked eye movements to decrease eye speed if the visual motion was in the direction opposite to the evoked, pursuit-like eye movements. 5. The results implicate rNRTP as part of the neuronal substrate that controls smooth-pursuit eye movements. NRTP appears to be divided functionally into a rostral, pursuit-related portion and a caudal, saccade-related area. rNRTP is a component of a corticopontocerebellar circuit that presumably involves the pursuit area of the frontal eye field and that parallels the middle and medial superior temporal cerebral cortical/dorsolateral pontine nucleus (MT/MST-DLPN-cerebellum) pathway known to be involved also in regulating smooth-pursuit eye movements.
Effects of aircraft windscreens and canopies on HMT/D aiming accuracy: III
NASA Astrophysics Data System (ADS)
Task, H. Lee; Goodyear, Chuck
1999-07-01
Modern fighter aircraft windscreens and canopies are typically made of curved, transparent plastic for improved aerodynamics and bird-strike protection. Since they are curved, these transparencies often refract light in such a way that a pilot looking through the transparency will see a target in a location other than where it really is. This effect has been known for many years, and methods to correct the aircraft head-up display (HUD) for these angular deviations have been developed and employed. The same problem occurs for helmet-mounted display/trackers (HMD/Ts) used for target acquisition. However, in this case, the pilot can look through any part of the transparency instead of being constrained to just the forward section as in the case of the HUD, and his/her head position can be anywhere in a rather large motion box. To explore the magnitude of these aiming errors, several F-15, F-16, F-18, and F-22 transparency systems were measured from a total of 12 different eye positions centered around the HMD Eye (the HMD Eye was defined to be a point 1.25 inches to the right of the aircraft Design Eye). The collection of eye points for assessing HMT/D aiming accuracy was: HMD Eye, 3 inches left and right of HMD Eye, 2 inches above HMD Eye, and 2 inches forward of HMD Eye, plus all combinations of these. Results from these measurements, along with recommendations regarding means of assessing 'goodness' of correction algorithms, are presented.
Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes
Boulos, Maged N Kamel; Robinson, Larry R
2009-01-01
Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual cues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and more realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system. PMID:19849837
Quality of life of eye amputated patients.
Rasmussen, Marie L R; Ekholm, Ola; Prause, Jan U; Toft, Peter B
2012-08-01
To evaluate eye-amputated patients' health-related quality of life, perceived stress, self-rated health, job separation because of illness or disability, and socioeconomic position. Patients were recruited from a tertiary referral centre situated in Copenhagen. Inclusion criteria were eye amputation, i.e. evisceration, enucleation, orbital exenteration or secondary implantation of an orbital implant during the period 1996-2003, and participation in a previous investigation (2005). In total, 159 eye-amputated patients were included and completed a self-administered questionnaire containing health-related quality of life (SF-36), the perceived stress scale, and questions about self-rated health, job changes because of illness or disability and socioeconomic status. These results were compared with findings from the Danish Health Interview Survey 2005. The eye-amputated patients had significantly (p < 0.05) lower scores (poorer health) on all SF-36 subscales and more perceived stress compared to the general population. In all, 43.3% of the patients rated their health as excellent or very good compared to 52.1% of the general population. In total, 25% of the study population had retired or changed to a part-time job because of eye disease. The percentage of eye-amputated patients who were divorced or separated was twice as high as in the general population. The impact of an eye amputation is considerable. The quality of life, perceived stress and self-rated health of many eye-amputated patients are drastically changed. Eye amputation has a marked negative influence on job separation because of illness or disability and on socioeconomic position.
Computational models of location-invariant orthographic processing
NASA Astrophysics Data System (ADS)
Dandurand, Frédéric; Hannagan, Thomas; Grainger, Jonathan
2013-03-01
We trained three topologies of backpropagation neural networks to discriminate 2000 words (lexical representations) presented at different positions of a horizontal letter array. The first topology (zero-deck) contains no hidden layer, the second (one-deck) has a single hidden layer, and for the last topology (two-deck), the task is divided into two subtasks implemented as two stacked neural networks, with explicit word-centred letters as intermediate representations. All topologies successfully simulated two key benchmark phenomena observed in skilled human reading: transposed-letter priming and relative-position priming. However, the two-deck topology most accurately simulated the ability to discriminate words from nonwords, while containing the fewest connection weights. We analysed the internal representations after training. Zero-deck networks implement a letter-based scheme with a position bias to differentiate anagrams. One-deck networks implement a holographic overlap coding in which representations are essentially letter-based and words are linear combinations of letters. Two-deck networks also implement holographic coding.
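To make the three topologies concrete, the following is a minimal sketch in Python/NumPy of untrained forward passes for zero-deck, one-deck and two-deck networks. The layer sizes (26-letter alphabet, 10 array slots, 7 word-centred slots, 100 hidden units) and the random weights are illustrative assumptions, not the parameters used by Dandurand, Hannagan, and Grainger.

    # Hedged sketch of the three network topologies described in the abstract.
    # Dimensions and weights are illustrative assumptions, not the reported values.
    import numpy as np

    ALPHABET = 26          # one-hot letter code
    ARRAY_SLOTS = 10       # retinotopic slots of the horizontal letter array
    WORD_SLOTS = 7         # word-centred letter slots (intermediate code)
    WORDS = 2000           # lexical output units
    HIDDEN = 100           # hidden units (assumed size)

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    rng = np.random.default_rng(0)
    x = rng.random(ALPHABET * ARRAY_SLOTS)   # position-specific letter input

    # Zero-deck: no hidden layer, direct mapping from letter slots to words.
    W0 = rng.normal(size=(WORDS, x.size))
    zero_deck = softmax(W0 @ x)

    # One-deck: a single hidden layer between input and lexical output.
    W1a = rng.normal(size=(HIDDEN, x.size))
    W1b = rng.normal(size=(WORDS, HIDDEN))
    one_deck = softmax(W1b @ np.tanh(W1a @ x))

    # Two-deck: two stacked networks; the first recodes the retinotopic input
    # into explicit word-centred letter slots, the second maps those to words.
    W2a = rng.normal(size=(ALPHABET * WORD_SLOTS, x.size))
    word_centred = np.tanh(W2a @ x)          # intermediate representation
    W2b = rng.normal(size=(WORDS, ALPHABET * WORD_SLOTS))
    two_deck = softmax(W2b @ word_centred)

    print(zero_deck.shape, one_deck.shape, two_deck.shape)

In this sketch, location invariance in the two-deck case would come from training the first network to map any retinal position of a word onto the same word-centred slots, which is why the second network can stay comparatively small.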
Hao, Jie; Zhen, Yi; Wang, Hao; Yang, Diya; Wang, Ningli
2014-01-01
To investigate the effect of lateral decubitus position (LDP) on nocturnal intraocular pressure (IOP) and the effect of LDP on the 24-hour habitual IOP pattern in healthy subjects. Intraocular pressure was measured every 2 hours using an Accupen applanation tonometer (Accutome, USA). During the diurnal period (7:30 am, 9:30 am, 11:30 am, 1:30 pm, 3:30 pm, 5:30 pm, 7:30 pm, and 9:30 pm), IOP was measured in the sitting position under bright light (500-1000 lux) after the subjects had been seated for 5 min. The nocturnal IOP was measured in the supine position, right LDP, and left LDP, in randomized sequences, under dim light (<10 lux) at 11:30 pm, 1:30 am, 3:30 am, and 5:30 am. The subjects were awakened and maintained each position for 5 min before the measurement. The 24-hour habitual IOP patterns were obtained according to the nocturnal position (supine, right LDP and left LDP) for either eye. P<0.05 was considered significant. Nineteen healthy subjects were included, with a mean age of 51.3±5.8 years. During the nocturnal period, a significant IOP difference was found between the dependent eye (the eye on the lower side) in LDP and the supine position, but not at all nocturnal time points. Over a 24-hour period, the effect of LDP on the habitual IOP pattern was not statistically significant, although the mean nocturnal IOP and the diurnal-nocturnal IOP change for the right and the left eye in the LDP pattern were slightly higher than in the sitting-supine pattern. Significant nocturnal IOP differences existed between the dependent eye and the supine position, but did not occur consistently at all time points. Over a 24-hour period, the effect of LDP on the habitual IOP pattern was not statistically significant in healthy subjects.
A Microcomputer-Based Software Package for Eye-Monitoring Research. Technical Report No. 434.
ERIC Educational Resources Information Center
McConkie, George W.; And Others
A software package is described that collects and reduces eye behavior data (eye position and pupil size) using an IBM-PC compatible computer. Written in C language for speed and portability, it includes several features: (1) data can be simultaneously collected from other sources (such as electroencephalography and electromyography); (2)…
Delgado-Garcia, J M; Evinger, C; Escudero, M; Baker, R
1990-08-01
1. The activity of both accessory abducens (Acc Abd) and abducens (Abd) motoneurons (Mns) was recorded in the alert cat during eye retraction and rotational eye movements. Cats were fitted with two scleral coils: one measured rotational eye movements directly, and the other measured retraction by isolating the translational component. 2. Acc Abd and Abd Mns were identified following antidromic activation from electrical stimulation of the ipsilateral VIth nerve. 3. In response to corneal air puffs, bursts of spikes were produced in all (n = 30) Acc Abd Mns. The burst began 7.2 +/- 1.2 (SD) ms after onset of the air puff and 8.9 +/- 1.9 ms before eye retraction. 4. Acc Abd Mns were silent throughout all types of rotational eye movements, and tonic activity was not observed during intervals without air-puff stimulation. 5. In contrast, all (n = 50) identified Abd Mns exhibited a burst and/or pause in activity preceding and during horizontal saccades, as well as a tonic activity proportional to eye position. 6. Only 10% of Abd Mns fired a weak burst of spikes in response to air-puff stimulation. 7. We conclude that Acc Abd Mns are exclusively involved in eye retraction in the cat and that only a few Abd Mns have an eye-retraction signal added to their eye position and velocity signals. Thus any rotational eye-movement response described in the retractor bulbi muscle must result from innervation by Mns located in the Abd and/or the oculomotor nuclei. 8. The organization of the prenuclear circuitry and species variation are discussed in view of the nictitating membrane extension response measured in associative learning.
[An image of Saint Ottilia with reading stones].
Daxecker, F; Broucek, A
1995-01-01
Reading stones to facilitate reading in cases of presbyopia are mentioned in the literature, for example in the works of the Middle High German poet Albrecht and of Konrad of Würzburg. Most representations of the abbess, Saint Ottilia, show her holding a book with a pair of eyes in her hands. A gothic altarpiece (1485-1490), kept in the museum of the Premonstratensian Canons of Wilten in Innsbruck, Tyrol, shows a triune representation of St. Anne, the mother of the Virgin, with Mary and Jesus and St. Ursula with her companions. St. Ottilia is depicted on the edge of the painting. Two lenses, one on either side of the open book in her hand, magnify the letters underneath. As the two lenses are not held together by bows or similar devices, they are probably a rare representation of reading stones. The altar, showing scenes of the lives of St. Mary and St. Ursula, was painted by Ludwig Konraiter. A panel on the same altar, depicting the death of the Virgin, shows an apostle with rivet spectacles.
Time course of action representations evoked during sentence comprehension.
Heard, Alison W; Masson, Michael E J; Bub, Daniel N
2015-03-01
The nature of hand-action representations evoked during language comprehension was investigated using a variant of the visual-world paradigm in which eye fixations were monitored while subjects viewed a screen displaying four hand postures and listened to sentences describing an actor using or lifting a manipulable object. Displayed postures were related to either a functional (using) or volumetric (lifting) interaction with an object that matched or did not match the object mentioned in the sentence. Subjects were instructed to select the hand posture that matched the action described in the sentence. Even before the manipulable object was mentioned in the sentence, some sentence contexts allowed subjects to infer the object's identity and the type of action performed with it, and eye fixations immediately favored the corresponding hand posture. This effect was assumed to be the result of ongoing motor or perceptual imagery in which the action described in the sentence was mentally simulated. In addition, the hand posture related to the manipulable object mentioned in a sentence, but not related to the described action (e.g., a writing posture in the context of a sentence that describes lifting, but not using, a pencil), was favored over other hand postures not related to the object. This effect was attributed to motor resonance arising from conceptual processing of the manipulable object, without regard to the remainder of the sentence context.
Matching between the light spots and lenslets of an artificial compound eye system
NASA Astrophysics Data System (ADS)
He, Jianzheng; Jian, Huijie; Zhu, Qitao; Ma, Mengchao; Wang, Keyi
2017-10-01
As the visual organ of many arthropods, the compound eye has attracted a lot of attention for its wide field of view, multi-channel imaging ability and high agility. Extending this concept, a new kind of artificial compound eye device has been developed. It has 141 lenslets distributed evenly on a curved surface and sharing one image sensor, which makes it difficult to determine which lenslet a given light spot belongs to during the calibration and positioning processes. A matching algorithm is therefore proposed based on the device structure and the principles of calibration and positioning. Region partition of the lenslet array is performed first: each lenslet and its adjacent lenslets are defined as a cluster of eyes and recorded in an index table. In the calibration process, a polar coordinate system is established, and matching is accomplished by comparing the rotary table position in the polar coordinate system with the angle of the central light spot in the image. In the positioning process, the spot is first paired to the correct region according to the spot distribution, and the final result is determined by the dispersion of the distances from the target point to the incident rays during a traversal of the lenslets in that region. Experimental results show that the presented algorithms provide a feasible and efficient way to match spots to lenslets and meet the needs of practical applications of the compound eye system.
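As a rough illustration of the positioning-stage criterion described above, the sketch below assigns a detected spot to the lenslet whose calibrated incident ray passes closest to the candidate target point. The lenslet ray table, the target point and the simple minimum-distance rule (rather than the paper's full dispersion-based region traversal) are assumptions made for illustration, not the device's actual calibration data.

    # Hedged sketch: match a spot to the lenslet whose incident ray passes
    # closest to a candidate target point. All geometry is made up.
    import numpy as np

    def point_to_ray_distance(point, origin, direction):
        """Perpendicular distance from a 3-D point to a ray (origin + t*direction)."""
        d = direction / np.linalg.norm(direction)
        v = point - origin
        return np.linalg.norm(v - np.dot(v, d) * d)

    rng = np.random.default_rng(1)
    # Assumed calibration table: one incident ray (origin, direction) per lenslet
    # in the candidate region around the detected spot.
    lenslet_origins = rng.random((5, 3))
    lenslet_dirs = rng.normal(size=(5, 3))

    target_point = np.array([0.4, 0.5, 2.0])   # hypothetical reconstructed point

    # Traverse the region and keep the lenslet with the smallest point-to-ray distance.
    distances = [point_to_ray_distance(target_point, o, d)
                 for o, d in zip(lenslet_origins, lenslet_dirs)]
    best_lenslet = int(np.argmin(distances))
    print("spot matched to lenslet", best_lenslet, "distance", distances[best_lenslet])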
Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro
2014-01-01
As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication.
Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition
NASA Astrophysics Data System (ADS)
Rouabhia, C.; Tebbikh, H.
2008-06-01
Face recognition is a specialized image-processing task that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, operating on video sequence images, dedicated to identifying persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, and then used a Multi-Layer Perceptron classifier. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition (99.41% for the eyes part, 98.16% for the nose part and 97.25% for the whole face).
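The following sketch illustrates the general flavour of such a hybrid scheme: 2DPCA feature extraction on small eye-region images followed by a multi-layer perceptron classifier. The 2DLDA stage of the authors' ACPDL2D method is omitted for brevity, and the random images, labels and layer sizes are placeholders rather than the study's video data or parameters.

    # Hedged sketch of 2DPCA features feeding an MLP classifier; the data and
    # sizes are placeholders, not the ACPDL2D pipeline of the paper.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(2)
    images = rng.random((60, 32, 32))          # 60 fake eye-region images, 32x32
    labels = np.repeat(np.arange(6), 10)       # 6 hypothetical identities

    def twod_pca(imgs, n_components=8):
        """2DPCA: project each image onto the top eigenvectors of the image covariance."""
        mean = imgs.mean(axis=0)
        centred = imgs - mean
        # image covariance matrix G = (1/M) * sum(A_i^T A_i), size n x n
        G = np.einsum('kij,kil->jl', centred, centred) / len(imgs)
        eigvals, eigvecs = np.linalg.eigh(G)
        proj = eigvecs[:, ::-1][:, :n_components]   # keep the top components
        return (imgs @ proj).reshape(len(imgs), -1), proj

    # 'proj' would be reused to project any new test images before classification.
    features, proj = twod_pca(images)
    clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
    clf.fit(features, labels)
    print("training accuracy:", clf.score(features, labels))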
Viewer-centered and body-centered frames of reference in direct visuomotor transformations.
Carrozzo, M; McIntyre, J; Zago, M; Lacquaniti, F
1999-11-01
It has been hypothesized that the end-point position of reaching may be specified in an egocentric frame of reference. In most previous studies, however, reaching was toward a memorized target rather than an actual target. Thus, the role played by sensorimotor transformation could not be dissociated from the role played by storage in short-term memory. In the present study, the direct process of sensorimotor transformation was investigated in reaching toward continuously visible targets that need not be stored in memory. A virtual reality system was used to present visual targets in different three-dimensional (3D) locations in two different tasks, one with visual feedback of the hand and arm position (Seen Hand) and the other without such feedback (Unseen Hand). In the Seen Hand task, the axes of maximum variability and of maximum contraction converge toward the mid-point between the eyes. In the Unseen Hand task, only the maximum contraction correlates with the sight-line, and the axes of maximum variability are not viewer-centered but rotate anti-clockwise around the body and the effector arm during the move from the right to the left workspace. The bulk of findings from these and previous experiments supports the hypothesis of a two-stage process, with a gradual transformation from viewer-centered to body-centered and arm-centered coordinates. Retinal, extra-retinal and arm-related signals appear to be progressively combined in superior and inferior parietal areas, giving rise to egocentric representations of the end-point position of reaching.
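As a schematic illustration of the two-stage transformation argued for above, the sketch below re-expresses a viewer-centred target position first in head-centred and then in body-centred coordinates, using assumed eye-in-head and head-on-body rotations and offsets; the numbers are arbitrary examples, not values from the study.

    # Hedged sketch: viewer-centred -> head-centred -> body-centred coordinates.
    # Angles and offsets are arbitrary example values.
    import numpy as np

    def rotation_z(theta):
        """Rotation about the vertical axis by theta radians."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    target_in_eye = np.array([0.10, 0.00, 0.50])    # metres, viewer-centred

    # Stage 1: eye-centred -> head-centred (eye rotated 15 deg, offset from head centre)
    eye_in_head_rot = rotation_z(np.deg2rad(15))
    eye_offset = np.array([0.03, 0.00, 0.10])
    target_in_head = eye_in_head_rot @ target_in_eye + eye_offset

    # Stage 2: head-centred -> body-centred (head turned 20 deg on the trunk)
    head_on_body_rot = rotation_z(np.deg2rad(20))
    head_offset = np.array([0.00, 0.00, 0.45])
    target_in_body = head_on_body_rot @ target_in_head + head_offset

    print("body-centred target position:", target_in_body)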
Trattler, William B; Majmudar, Parag A; Donnenfeld, Eric D; McDonald, Marguerite B; Stonecipher, Karl G; Goldberg, Damien F
2017-01-01
Purpose: To determine the incidence and severity of dry eye, as determined by the International Task Force (ITF) scale, in patients being screened for cataract surgery. Patients and methods: This was a prospective, multi-center, observational study of 136 patients, at least 55 years of age, who were scheduled to undergo cataract surgery. The primary outcome measure was the incidence of dry eye as evaluated by grade on the ITF scale; secondary outcome measures included tear break-up time (TBUT), ocular surface disease index score, corneal staining with fluorescein, conjunctival staining with lissamine green, and a patient questionnaire to evaluate symptoms of dry eye. Results: Mean patient age was 70.7 years. A total of 73.5% of patients were Caucasian and 50% were female. Almost 60% had never complained of a foreign body sensation; only 13% complained of a foreign body sensation half or most of the time. The majority of patients (62.9%) had a TBUT ≤5 seconds, 77% of eyes had positive corneal staining and 50% of the eyes had positive central corneal staining. Eighteen percent had a Schirmer's score with anesthesia ≤5 mm. Conclusion: The incidence of dry eye in patients scheduled to undergo cataract surgery in a real-world setting is higher than anticipated. PMID:28848324
Ultrasonographic biometry of the normal eye of the Persian cat.
Mirshahi, A; Shafigh, S H; Azizzadeh, M
2014-07-01
To describe the normal ultrasonographic biometry of the Persian cat's eyes using B-mode ultrasonography. In a cross-sectional study, 20 healthy Persian cats with no history of previous ophthalmic disease were examined. Ocular biometry of the left and right eyes was measured using B-mode ultrasonography. Comparison of the average measurements between left and right eyes and between vertical and horizontal planes was performed using a paired-sample t test. Correlation of ocular parameters with sex, age, head circumference and eye colour was evaluated. Mean ± standard deviation (SD) measurements of the anterior chamber, lens thickness, vitreous chamber and anterior-to-posterior dimension of the globe in 40 eyes were 4.1 ± 0.7, 7.7 ± 0.5, 8.2 ± 0.4 and 20.7 ± 1.0 mm, respectively. No significant difference was found between the ocular biometry of the left and right eyes or the horizontal and vertical planes. Of the ocular parameters, the following had a significant positive correlation with head circumference: axial globe length, anterior chamber and lens thickness. The vitreous body had a positive correlation with age. Given the breed predisposition of Persian cats to ocular problems, the present study provides baseline information for further clinical investigations of ocular abnormalities using B-mode ultrasonography.
Onerci Celebi, Ozlem; Celebi, Ali Riza Cenk
2018-04-09
The aim of this study was to investigate the effect of the topically applied ocular anesthetic proparacaine on conjunctival and nasal bacterial mucosal flora in patients with dry eye disease. A Schirmer test was performed with (group 1) or without (group 2) topical anesthetic proparacaine in 40 patients per group. Conjunctival and nasal cultures were obtained before and 10 min after performing the Schirmer test. The bacterial culture results and the isolated bacteria were recorded in the two groups. Patients' mean age was 62 years (70 female, 10 male). Before the application of topical anesthetic, 50 (62.5%) and 62 (77.5%) patients had positive conjunctival and nasal cultures, respectively, with the most commonly isolated organism being coagulase-negative Staphylococcus in each group. In group 1 the conjunctival bacterial culture positivity rate decreased from 26 (65%) to six (15%) eyes (p < 0.001); however, this rate decreased only slightly, from 24 (60%) to 20 (50%) eyes, in group 2 (p > 0.05). For the nasal cultures, the bacterial culture positivity rate decreased from 80% to 20% and from 75% to 65% in groups 1 (p < 0.001) and 2 (p > 0.05), respectively. Topical ocular anesthetic proparacaine has antibacterial activity in both conjunctival and nasal flora in patients with dry eye disease.
Image-size differences worsen stereopsis independent of eye position
Vlaskamp, Björn N. S.; Filippini, Heather R.; Banks, Martin S.
2010-01-01
With the eyes in forward gaze, stereo performance worsens when one eye’s image is larger than the other’s. Near, eccentric objects naturally create retinal images of different sizes. Does this mean that stereopsis exhibits deficits for such stimuli? Or does the visual system compensate for the predictable image-size differences? To answer this, we measured discrimination of a disparity-defined shape for different relative image sizes. We did so for different gaze directions, some compatible with the image-size difference and some not. Magnifications of 10–15% caused a clear worsening of stereo performance. The worsening was determined only by relative image size and not by eye position. This shows that no neural compensation for image-size differences accompanies eye-position changes, at least prior to disparity estimation. We also found that a local cross-correlation model for disparity estimation performs like humans in the same task, suggesting that the decrease in stereo performance due to image-size differences is a byproduct of the disparity-estimation method. Finally, we looked for compensation in an observer who has constantly different image sizes due to differing eye lengths. She performed best when the presented images were roughly the same size, indicating that she has compensated for the persistent image-size difference. PMID:19271927
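A minimal sketch of a local cross-correlation disparity estimator in the spirit of the model mentioned above is given below: each patch of a one-dimensional "left image" is matched to horizontally shifted patches of the "right image" by normalised correlation. The signals, window size and disparity range are illustrative assumptions, not the stimuli or parameters of the study.

    # Hedged sketch of windowed normalised cross-correlation disparity estimation.
    # The 1-D random "images" and the known shift are stand-ins for real stereo data.
    import numpy as np

    rng = np.random.default_rng(3)
    left = rng.random(200)
    true_disparity = 4
    right = np.roll(left, true_disparity)       # right image shifted by a known amount

    def local_xcorr_disparity(left, right, window=15, max_disp=10):
        """Estimate disparity at each position by maximising windowed correlation."""
        half = window // 2
        disparities = np.zeros(left.size)
        for i in range(half + max_disp, left.size - half - max_disp):
            patch = left[i - half:i + half + 1]
            patch = (patch - patch.mean()) / (patch.std() + 1e-9)
            scores = []
            for d in range(-max_disp, max_disp + 1):
                cand = right[i + d - half:i + d + half + 1]
                cand = (cand - cand.mean()) / (cand.std() + 1e-9)
                scores.append(np.dot(patch, cand))
            disparities[i] = np.argmax(scores) - max_disp
        return disparities

    est = local_xcorr_disparity(left, right)
    print("median estimated disparity:", np.median(est[30:-30]))

Magnifying one "eye's" signal relative to the other before correlation degrades the match scores in such a model, which is one way to see how an image-size difference could reduce stereo performance without any eye-position signal being involved.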
The Handicapped Can Dance Too!
ERIC Educational Resources Information Center
Lloyd, Marcia L.
1978-01-01
A program of dance therapy activities can offer handicapped individuals positive experiences in such areas as body image, spatial awareness, self-confidence, hand-eye/foot-eye coordination, visual focusing, balance and social relations. (Author/MJB)