Object-based attention underlies the rehearsal of feature binding in visual working memory.
Shen, Mowei; Huang, Xiang; Gao, Zaifeng
2015-04-01
Feature binding is a core concept in many research fields, including the study of working memory (WM). Over the past decade, it has been debated whether maintaining feature bindings in visual WM consumes more visual attention than maintaining the constituent single features. Previous studies have explored only the contribution of domain-general or space-based attention to the binding process; no study so far has examined the role of object-based attention in retaining bindings in visual WM. We hypothesized that object-based attention underlies the rehearsal of feature bindings in visual WM. Therefore, during the maintenance phase of a visual WM task, we inserted a secondary mental rotation task (Experiments 1-3), a transparent-motion task (Experiment 4), or an object-based feature report task (Experiment 5) to consume the object-based attention available for binding. In line with the prediction of the object-based attention hypothesis, Experiments 1-5 revealed a larger impairment for bindings than for the constituent single features. This selective binding impairment was not observed, however, when a space-based visual search task was inserted (Experiment 6). We conclude that object-based attention underlies the rehearsal of binding representations in visual WM. (c) 2015 APA, all rights reserved.
Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study
ERIC Educational Resources Information Center
Bulf, Hermann; Valenza, Eloisa
2013-01-01
Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…
ERIC Educational Resources Information Center
Bogon, Johanna; Finke, Kathrin; Schulte-Körne, Gerd; Müller, Hermann J.; Schneider, Werner X.; Stenneken, Prisca
2014-01-01
People with developmental dyslexia (DD) have been shown to be impaired in tasks that require the processing of multiple visual elements in parallel. It has been suggested that this deficit originates from disturbed visual attentional functions. The parameter-based assessment of visual attention based on Bundesen's (1990) theory of visual…
Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model
Wagatsuma, Nobuhiko; Potjans, Tobias C.; Diesmann, Markus; Sakai, Ko; Fukai, Tomoki
2013-01-01
Directing attention to the spatial location or the distinguishing feature of a visual object modulates neuronal responses in the visual cortex and the stimulus discriminability of subjects. However, the spatial and feature-based modes of attention differently influence visual processing by changing the tuning properties of neurons. Intriguingly, neurons' tuning curves are modulated similarly across different visual areas under both these modes of attention. Here, we explored the mechanism underlying the effects of these two modes of visual attention on the orientation selectivity of visual cortical neurons. To do this, we developed a layered microcircuit model. This model describes multiple orientation-specific microcircuits sharing their receptive fields and consisting of layers 2/3, 4, 5, and 6. These microcircuits represent a functional grouping of cortical neurons and mutually interact via lateral inhibition and excitatory connections between groups with similar selectivity. The individual microcircuits receive bottom-up visual stimuli and top-down attention in different layers. A crucial assumption of the model is that feature-based attention activates orientation-specific microcircuits for the relevant feature selectively, whereas spatial attention activates all microcircuits homogeneously, irrespective of their orientation selectivity. Consequently, our model simultaneously accounts for the multiplicative scaling of neuronal responses in spatial attention and the additive modulations of orientation tuning curves in feature-based attention, which have been observed widely in various visual cortical areas. Simulations of the model predict contrasting differences between excitatory and inhibitory neurons in the two modes of attentional modulations. Furthermore, the model replicates the modulation of the psychophysical discriminability of visual stimuli in the presence of external noise. Our layered model with a biologically suggested laminar structure describes the basic circuit mechanism underlying the attention-mode specific modulations of neuronal responses and visual perception. PMID:24324628
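As a quick illustration of the two modulation patterns contrasted in this abstract (multiplicative scaling of responses under spatial attention versus additive shifts of orientation tuning under feature-based attention), the sketch below applies both to a toy Gaussian tuning curve. The tuning shape, gain, and offset values are illustrative assumptions, not parameters of the layered microcircuit model itself.

```python
import numpy as np

def orientation_tuning(theta_deg, preferred=0.0, width=30.0, baseline=2.0, amplitude=20.0):
    """Toy Gaussian orientation tuning curve (spikes/s)."""
    d = theta_deg - preferred
    d = (d + 90.0) % 180.0 - 90.0          # wrap orientation difference into [-90, 90)
    return baseline + amplitude * np.exp(-0.5 * (d / width) ** 2)

thetas = np.linspace(-90, 90, 181)
unattended = orientation_tuning(thetas)

# Spatial attention: multiplicative scaling of the whole response.
spatial_gain = 1.3                          # illustrative gain
spatial_attended = spatial_gain * unattended

# Feature-based attention: additive shift of the tuning curve.
feature_offset = 4.0                        # illustrative offset (spikes/s)
feature_attended = unattended + feature_offset

# Multiplicative scaling preserves the response ratio at all orientations,
# while additive modulation preserves the absolute difference instead.
print("ratio at 0 deg and 60 deg (spatial):",
      spatial_attended[90] / unattended[90], spatial_attended[150] / unattended[150])
print("difference at 0 deg and 60 deg (feature):",
      feature_attended[90] - unattended[90], feature_attended[150] - unattended[150])
```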
Guerin, Scott A.; Robbins, Clifford A.; Gilmore, Adrian W.; Schacter, Daniel L.
2012-01-01
The interaction between episodic retrieval and visual attention is relatively unexplored. Given that systems mediating attention and episodic memory appear to be segregated, and perhaps even in competition, it is unclear how visual attention is recruited during episodic retrieval. We investigated the recruitment of visual attention during the suppression of gist-based false recognition, the tendency to falsely recognize items that are similar to previously encountered items. Recruitment of visual attention was associated with activity in the dorsal attention network. The inferior parietal lobule, often implicated in episodic retrieval, tracked veridical retrieval of perceptual detail and showed reduced activity during the engagement of visual attention, consistent with a competitive relationship with the dorsal attention network. These findings suggest that the contribution of the parietal cortex to interactions between visual attention and episodic retrieval entails distinct systems that contribute to different components of the task while also suppressing each other. PMID:22998879
ERIC Educational Resources Information Center
Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan
2006-01-01
In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…
Visual Attention Model Based on Statistical Properties of Neuron Responses
Duan, Haibin; Wang, Xiaohua
2015-01-01
Visual attention is a mechanism of the visual system that can select relevant objects from a specific scene. Interactions among neurons in multiple cortical areas are considered to be involved in attentional allocation. However, the characteristics of the encoded features and neuron responses in those attention related cortices are indefinite. Therefore, further investigations carried out in this study aim at demonstrating that unusual regions arousing more attention generally cause particular neuron responses. We suppose that visual saliency is obtained on the basis of neuron responses to contexts in natural scenes. A bottom-up visual attention model is proposed based on the self-information of neuron responses to test and verify the hypothesis. Four different color spaces are adopted and a novel entropy-based combination scheme is designed to make full use of color information. Valuable regions are highlighted while redundant backgrounds are suppressed in the saliency maps obtained by the proposed model. Comparative results reveal that the proposed model outperforms several state-of-the-art models. This study provides insights into the neuron responses based saliency detection and may underlie the neural mechanism of early visual cortices for bottom-up visual attention. PMID:25747859
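The following is a rough sketch of a self-information style saliency computation in the spirit of the model described above: saliency at each location is taken as -log p of the local response, with p estimated from the image's own statistics. The grayscale simplification, per-pixel intensity coding, and histogram binning are assumptions made for brevity; the actual model combines neuron-response statistics across four color spaces with an entropy-based scheme.

```python
import numpy as np

def self_information_saliency(image, n_bins=64):
    """Per-pixel saliency as self-information, -log p(intensity), with p
    estimated from the image's own intensity histogram."""
    img = np.asarray(image, dtype=float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)             # normalize to [0, 1]
    bins = np.clip((img * n_bins).astype(int), 0, n_bins - 1)
    counts = np.bincount(bins.ravel(), minlength=n_bins).astype(float)
    p = counts / counts.sum()
    saliency = -np.log(p[bins] + 1e-12)                          # rare intensities -> high saliency
    return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)

# Toy scene: uniform background with one odd bright patch.
scene = np.zeros((64, 64))
scene[28:36, 28:36] = 1.0
smap = self_information_saliency(scene)
print("mean saliency inside the odd patch:", smap[28:36, 28:36].mean())
print("mean saliency of the background   :", smap[:20, :20].mean())
```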
Activity in human visual and parietal cortex reveals object-based attention in working memory.
Peters, Benjamin; Kaiser, Jochen; Rahm, Benjamin; Bledowski, Christoph
2015-02-25
Visual attention enables observers to select behaviorally relevant information based on spatial locations, features, or objects. Attentional selection is not limited to physically present visual information, but can also operate on internal representations maintained in working memory (WM) in service of higher-order cognition. However, little is known about whether attention to WM contents follows the same principles as attention to sensory stimuli. To address this question, we investigated in humans whether the typically observed effects of object-based attention in perception are also evident for object-based attentional selection of internal object representations in WM. In full accordance with effects in visual perception, the key behavioral and neuronal characteristics of object-based attention were observed in WM. Specifically, we found that reaction times were shorter when shifting attention to memory positions located on the currently attended object compared with equidistant positions on a different object. Furthermore, functional magnetic resonance imaging and multivariate pattern analysis of visuotopic activity in visual (areas V1-V4) and parietal cortex revealed that directing attention to one position of an object held in WM also enhanced brain activation for other positions on the same object, suggesting that attentional selection in WM activates the entire object. This study demonstrated that all characteristic features of object-based attention are present in WM, and that attentional selection in WM thus follows the same principles as in perception. Copyright © 2015 the authors 0270-6474/15/353360-10$15.00/0.
Global motion compensated visual attention-based video watermarking
NASA Astrophysics Data System (ADS)
Oakes, Matthew; Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions resulting in poor visual quality in host media. This paper proposes a unique video watermarking algorithm that offers a fine balance between imperceptibility and robustness using motion compensated wavelet-based visual attention model (VAM). The proposed VAM includes spatial cues for visual saliency as well as temporal cues. The spatial modeling uses the spatial wavelet coefficients while the temporal modeling accounts for both local and global motion to arrive at the spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm, where a two-level watermarking weighting parameter map is generated from the VAM saliency maps using the saliency model and data are embedded into the host image according to the visual attentiveness of each region. By avoiding higher strength watermarking in the visually attentive region, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms the state-of-the-art video visual attention methods in joint saliency detection and low computational complexity performance. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use the VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and a robustness similar to those of high-strength watermarking.
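The sketch below illustrates only the general idea of saliency-guided embedding strength described above: a two-level weight map that embeds weakly in visually attentive regions and strongly elsewhere. The simplified additive, spatial-domain embedding, thresholds, and strengths are assumptions; the algorithm in the paper operates on wavelet coefficients with a motion-compensated attention model.

```python
import numpy as np

def two_level_weight_map(saliency, threshold=0.5, low=0.5, high=2.0):
    """Two-level embedding-strength map: weaker embedding where saliency is
    high (visually attentive regions), stronger where it is low."""
    return np.where(saliency >= threshold, low, high)

def embed_watermark(host, watermark_bits, saliency, base_strength=1.0):
    """Simplified additive, spatial-domain embedding driven by the weight map
    (only an illustration; the paper embeds in the wavelet domain)."""
    weights = two_level_weight_map(saliency)
    wm = np.where(watermark_bits > 0, 1.0, -1.0)       # map bits to +/-1
    return host + base_strength * weights * wm

rng = np.random.default_rng(0)
host = rng.uniform(0, 255, size=(8, 8))
bits = rng.integers(0, 2, size=(8, 8))
saliency = rng.uniform(0, 1, size=(8, 8))
marked = embed_watermark(host, bits, saliency)
print("max distortion in salient regions    :", np.abs(marked - host)[saliency >= 0.5].max())
print("max distortion in non-salient regions:", np.abs(marked - host)[saliency < 0.5].max())
```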
Davidesco, Ido; Harel, Michal; Ramot, Michal; Kramer, Uri; Kipervasser, Svetlana; Andelman, Fani; Neufeld, Miri Y; Goelman, Gadi; Fried, Itzhak; Malach, Rafael
2013-01-16
One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation of both spatial and object-based attention. They were presented with composite stimuli, consisting of a small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one of the objects. We found a consistent increase in broadband high-frequency (30-90 Hz) power, but not in visual evoked potentials, associated with spatial attention starting with V1/V2 and continuing throughout the visual hierarchy. The magnitude of the attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed both effects of spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties.
Smid, Henderikus G. O. M.; Bruggeman, Richard; Martens, Sander
2013-01-01
Background: Schizophrenia is associated with impairments of the perception of objects, but how this affects higher cognitive functions, whether this impairment is already present after recent onset of psychosis, and whether it is specific for schizophrenia related psychosis, is not clear. We therefore tested the hypothesis that because schizophrenia is associated with impaired object perception, schizophrenia patients should differ in shifting attention between objects compared to healthy controls. To test this hypothesis, a task was used that allowed us to separately observe space-based and object-based covert orienting of attention. To examine whether impairment of object-based visual attention is related to higher order cognitive functions, standard neuropsychological tests were also administered. Method: Patients with recent onset psychosis and normal controls performed the attention task, in which space- and object-based attention shifts were induced by cue-target sequences that required reorienting of attention within an object, or reorienting attention between objects. Results: Patients with and without schizophrenia showed slower than normal spatial attention shifts, but the object-based component of attention shifts in patients was smaller than normal. Schizophrenia was specifically associated with slowed right-to-left attention shifts. Reorienting speed was significantly correlated with verbal memory scores in controls, and with visual attention scores in patients, but not with speed-of-processing scores in either group. Conclusions: Deficits of object-perception and spatial attention shifting are not only associated with schizophrenia, but are common to all psychosis patients. Schizophrenia patients only differed by having abnormally slow right-to-left visual field reorienting. Deficits of object-perception and spatial attention shifting are already present after recent onset of psychosis. Studies investigating visual spatial attention should take into account the separable effects of space-based and object-based shifting of attention. Impaired reorienting in patients was related to impaired visual attention, but not to deficits of processing speed and verbal memory. PMID:23536901
Buchholz, Judy; Aimola Davies, Anne
2005-02-01
Performance on a covert visual attention task is compared between a group of adults with developmental dyslexia (specifically phonological difficulties) and a group of age- and IQ-matched controls. The group with dyslexia were generally slower to detect validly cued targets. Costs of shifting attention toward the periphery when the target was invalidly cued were significantly higher for the group with dyslexia, while costs associated with shifts toward the fovea tended to be lower. Higher costs were also shown by the group with dyslexia for up-down shifts of attention in the periphery. A visual field processing difference was found, in that the group with dyslexia showed higher costs associated with shifting attention between objects in the LVF. These findings indicate that these adults with dyslexia have difficulty in both the space-based and the object-based components of covert visual attention, and more specifically with stimuli located in the periphery.
Object-based attentional selection modulates anticipatory alpha oscillations
Knakker, Balázs; Weiss, Béla; Vidnyánszky, Zoltán
2015-01-01
Visual cortical alpha oscillations are involved in attentional gating of incoming visual information. It has been shown that spatial and feature-based attentional selection result in increased alpha oscillations over the cortical regions representing sensory input originating from the unattended visual field and task-irrelevant visual features, respectively. However, whether attentional gating in the case of object-based selection is also associated with alpha oscillations has not been investigated before. Here we measured anticipatory electroencephalography (EEG) alpha oscillations while participants were cued to attend to foveal face or word stimuli, the processing of which is known to have right and left hemispheric lateralization, respectively. The results revealed that in the case of simultaneously displayed, overlapping face and word stimuli, attending to the words led to increased power of parieto-occipital alpha oscillations over the right hemisphere as compared to when faces were attended. This object category-specific modulation of the hemispheric lateralization of anticipatory alpha oscillations was maintained during sustained attentional selection of sequentially presented face and word stimuli. These results imply that in the case of object-based attentional selection, as with spatial and feature-based attention, gating of visual information processing might involve visual cortical alpha oscillations. PMID:25628554
Sneve, Markus H; Sreenivasan, Kartik K; Alnæs, Dag; Endestad, Tor; Magnussen, Svein
2015-01-01
Retention of features in visual short-term memory (VSTM) involves maintenance of sensory traces in early visual cortex. However, the mechanism through which this is accomplished is not known. Here, we formulate specific hypotheses derived from studies on feature-based attention to test the prediction that visual cortex is recruited by attentional mechanisms during VSTM of low-level features. Functional magnetic resonance imaging (fMRI) of human visual areas revealed that neural populations coding for task-irrelevant feature information are suppressed during maintenance of detailed spatial frequency memory representations. The narrow spectral extent of this suppression agrees well with known effects of feature-based attention. Additionally, analyses of effective connectivity during maintenance between retinotopic areas in visual cortex show that the observed highlighting of task-relevant parts of the feature spectrum originates in V4, a visual area strongly connected with higher-level control regions and known to convey top-down influence to earlier visual areas during attentional tasks. In line with this property of V4 during attentional operations, we demonstrate that modulations of earlier visual areas during memory maintenance have behavioral consequences, and that these modulations are a result of influences from V4. Copyright © 2014 Elsevier Ltd. All rights reserved.
Bleckley, M Kathryn; Foster, Jeffrey L; Engle, Randall W
2015-04-01
Bleckley, Durso, Crutchfield, Engle, and Khanna (Psychonomic Bulletin & Review, 10, 884-889, 2003) found that visual attention allocation differed between groups high or low in working memory capacity (WMC). High-span, but not low-span, subjects showed an invalid-cue cost during a letter localization task in which the letter appeared closer to fixation than the cue, but not when the letter appeared farther from fixation than the cue. This suggests that low-spans allocated attention as a spotlight, whereas high-spans allocated their attention to objects. In this study, we tested whether utilizing object-based visual attention is a resource-limited process that is difficult for low-span individuals. In the first experiment, we tested the use of object- versus location-based attention in high- and low-span subjects, with half of the subjects completing a demanding secondary load task. Under load, high-spans were no longer able to use object-based visual attention. A second experiment supported the hypothesis that these differences in allocation were due to high-spans using object-based allocation, whereas low-spans used location-based allocation.
Simultaneous selection by object-based attention in visual and frontal cortex
Pooresmaeili, Arezoo; Poort, Jasper; Roelfsema, Pieter R.
2014-01-01
Models of visual attention hold that top-down signals from frontal cortex influence information processing in visual cortex. It is unknown whether situations exist in which visual cortex actively participates in attentional selection. To investigate this question, we simultaneously recorded neuronal activity in the frontal eye fields (FEF) and primary visual cortex (V1) during a curve-tracing task in which attention shifts are object-based. We found that accurate performance was associated with similar latencies of attentional selection in both areas and that the latency in both areas increased if the task was made more difficult. The amplitude of the attentional signals in V1 saturated early during a trial, whereas these selection signals kept increasing for a longer time in FEF, until the moment of an eye movement, as if FEF integrated attentional signals present in early visual cortex. In erroneous trials, we observed an interareal latency difference because FEF selected the wrong curve before V1 and imposed its erroneous decision onto visual cortex. The neuronal activity in visual and frontal cortices was correlated across trials, and this trial-to-trial coupling was strongest for the attended curve. These results imply that selective attention relies on reciprocal interactions within a large network of areas that includes V1 and FEF. PMID:24711379
Rolke, Bettina; Festl, Freya; Seibold, Verena C
2016-11-01
We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.
Space-based visual attention: a marker of immature selective attention in toddlers?
Rivière, James; Brisson, Julie
2014-11-01
Various studies suggested that attentional difficulties cause toddlers' failure in some spatial search tasks. However, attention is not a unitary construct and this study investigated two attentional mechanisms: location selection (space-based attention) and object selection (object-based attention). We investigated how toddlers' attention is distributed in the visual field during a manual search task for objects moving out of sight, namely the moving boxes task. Results show that 2.5-year-olds who failed this task allocated more attention to the location of the relevant object than to the object itself. These findings suggest that in some manual search tasks the primacy of space-based attention over object-based attention could be a marker of immature selective attention in toddlers. © 2014 Wiley Periodicals, Inc.
Gaze-independent brain-computer interfaces based on covert attention and feature attention
NASA Astrophysics Data System (ADS)
Treder, M. S.; Schmidt, N. M.; Blankertz, B.
2011-10-01
There is evidence that conventional visual brain-computer interfaces (BCIs) based on event-related potentials cannot be operated efficiently when eye movements are not allowed. To overcome this limitation, the aim of this study was to develop a visual speller that does not require eye movements. Three different variants of a two-stage visual speller based on covert spatial attention and non-spatial feature attention (i.e. attention to colour and form) were tested in an online experiment with 13 healthy participants. All participants achieved highly accurate BCI control. They could select one out of thirty symbols (chance level 3.3%) with mean accuracies of 88%-97% for the different spellers. The best results were obtained for a speller that was operated using non-spatial feature attention only. These results show that, using feature attention, it is possible to realize high-accuracy, fast-paced visual spellers that have a large vocabulary and are independent of eye gaze.
The effects of visual search efficiency on object-based attention
Rosen, Maya; Cutrone, Elizabeth; Behrmann, Marlene
2017-01-01
The attentional prioritization hypothesis of object-based attention (Shomstein & Yantis in Perception & Psychophysics, 64, 41–51, 2002) suggests a two-stage selection process comprising an automatic spatial gradient and flexible strategic (prioritization) selection. The combined attentional priorities of these two stages of object-based selection determine the order in which participants will search the display for the presence of a target. The strategic process has often been likened to a prioritized visual search. By modifying the double-rectangle cueing paradigm (Egly, Driver, & Rafal in Journal of Experimental Psychology: General, 123, 161–177, 1994) and placing it in the context of a larger-scale visual search, we examined how the prioritization search is affected by search efficiency. By probing both targets located on the cued object and targets external to the cued object, we found that the attentional priority surrounding a selected object is strongly modulated by search mode. However, the ordering of the prioritization search is unaffected by search mode. The data also provide evidence that standard spatial visual search and object-based prioritization search may rely on distinct mechanisms. These results provide insight into the interactions between the mode of visual search and object-based selection, and help define the modulatory consequences of search efficiency for object-based attention. PMID:25832192
Mastering algebra retrains the visual system to perceive hierarchical structure in equations.
Marghetis, Tyler; Landy, David; Goldstone, Robert L
2016-01-01
Formal mathematics is a paragon of abstractness. It thus seems natural to assume that the mathematical expert should rely more on symbolic or conceptual processes, and less on perception and action. We argue instead that mathematical proficiency relies on perceptual systems that have been retrained to implement mathematical skills. Specifically, we investigated whether the visual system (in particular, object-based attention) is retrained so that parsing algebraic expressions and evaluating algebraic validity are accomplished by visual processing. Object-based attention occurs when the visual system organizes the world into discrete objects, which then guide the deployment of attention. One classic signature of object-based attention is better perceptual discrimination within, rather than between, visual objects. The current study reports that object-based attention occurs not only for simple shapes but also for symbolic mathematical elements within algebraic expressions, but only among individuals who have mastered the hierarchical syntax of algebra. Moreover, among these individuals, increased object-based attention within algebraic expressions is associated with a better ability to evaluate algebraic validity. These results suggest that, in mastering the rules of algebra, people retrain their visual system to represent and evaluate abstract mathematical structure. We thus argue that algebraic expertise involves the regimentation and reuse of evolutionarily ancient perceptual processes. Our findings implicate the visual system as central to learning and reasoning in mathematics, leading us to favor educational approaches to mathematics and related STEM fields that encourage students to adapt, not abandon, their use of perception.
TVA-based assessment of visual attentional functions in developmental dyslexia
Bogon, Johanna; Finke, Kathrin; Stenneken, Prisca
2014-01-01
There is an ongoing debate whether an impairment of visual attentional functions constitutes an additional or even an isolated deficit of developmental dyslexia (DD). Especially performance in tasks that require the processing of multiple visual elements in parallel has been reported to be impaired in DD. We review studies that used parameter-based assessment for identifying and quantifying impaired aspect(s) of visual attention that underlie this multi-element processing deficit in DD. These studies used the mathematical framework provided by the “theory of visual attention” (Bundesen, 1990) to derive quantitative measures of general attentional resources and attentional weighting aspects on the basis of behavioral performance in whole- and partial-report tasks. Based on parameter estimates in children and adults with DD, the reviewed studies support a slowed perceptual processing speed as an underlying primary deficit in DD. Moreover, a reduction in visual short term memory storage capacity seems to present a modulating component, contributing to difficulties in written language processing. Furthermore, comparing the spatial distributions of attentional weights in children and adults suggests that having limited reading and writing skills might impair the development of a slight leftward bias, that is typical for unimpaired adult readers. PMID:25360129
Xie, Jun; Xu, Guanghua; Luo, Ailing; Li, Min; Zhang, Sicong; Han, Chengcheng; Yan, Wenqiang
2017-08-14
As a spatial selective attention-based brain-computer interface (BCI) paradigm, the steady-state visual evoked potential (SSVEP) BCI has the advantages of a high information transfer rate, high tolerance to artifacts, and robust performance across users. However, these benefits come at the cost of the mental load and fatigue caused by sustained concentration on the visual stimuli. Noise, as a ubiquitous random perturbation, may be exploited by the human visual system to enhance higher-level brain functions. In this study, a novel BCI paradigm based on the steady-state motion visual evoked potential (SSMVEP, one kind of SSVEP) with spatiotemporal visual noise was used to investigate whether noise can compensate for mental load and fatigue deterioration during prolonged attention tasks. Changes in α, θ, and θ+α power, the θ/α ratio, and the electroencephalography (EEG) properties of amplitude, signal-to-noise ratio (SNR), and online accuracy were used to evaluate mental load and fatigue. We showed that presenting moderate visual noise to participants could reliably alleviate mental load and fatigue during online operation of a visual BCI that places demands on attentional processes. This demonstrates that noise could offer a superior solution for implementing visual attention-controlled BCI applications.
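As a small illustration of the spectral fatigue measures mentioned in this abstract (θ, α, and θ+α power and the θ/α ratio), the sketch below estimates them from a single synthetic EEG channel using Welch's method. The sampling rate, band edges, and synthetic signal are assumptions, not details taken from the study.

```python
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    """Integrate the power spectral density over a frequency band."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

fs = 250.0                                            # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic single-channel EEG: alpha (10 Hz) + theta (6 Hz) components plus noise.
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 1.0 * np.sin(2 * np.pi * 6 * t) + rng.normal(0, 1, t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
theta = band_power(freqs, psd, 4, 8)
alpha = band_power(freqs, psd, 8, 13)
print("theta power      :", theta)
print("alpha power      :", alpha)
print("theta+alpha power:", theta + alpha)
print("theta/alpha ratio:", theta / alpha)
```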
Madden, David J.
2007-01-01
Older adults are often slower and less accurate than are younger adults in performing visual-search tasks, suggesting an age-related decline in attentional functioning. Age-related decline in attention, however, is not entirely pervasive. Visual search that is based on the observer’s expectations (i.e., top-down attention) is relatively preserved as a function of adult age. Neuroimaging research suggests that age-related decline occurs in the structure and function of brain regions mediating the visual sensory input, whereas activation of regions in the frontal and parietal lobes is often greater for older adults than for younger adults. This increased activation may represent an age-related increase in the role of top-down attention during visual tasks. To obtain a more complete account of age-related decline and preservation of visual attention, current research is beginning to explore the relation of neuroimaging measures of brain structure and function to behavioral measures of visual attention. PMID:18080001
A Componential Analysis of Visual Attention in Children With ADHD.
McAvinue, Laura P; Vangkilde, Signe; Johnson, Katherine A; Habekost, Thomas; Kyllingsbæk, Søren; Bundesen, Claus; Robertson, Ian H
2015-10-01
Inattentive behaviour is a defining characteristic of ADHD. Researchers have wondered about the nature of the attentional deficit underlying these symptoms. The primary purpose of the current study was to examine this attentional deficit using a novel paradigm based upon the Theory of Visual Attention (TVA). The TVA paradigm enabled a componential analysis of visual attention through the use of a mathematical model to estimate parameters relating to attentional selectivity and capacity. Children's ability to sustain attention was also assessed using the Sustained Attention to Response Task. The sample included a comparison between 25 children with ADHD and 25 control children aged 9-13. Children with ADHD had significantly impaired sustained attention and visual processing speed but intact attentional selectivity, perceptual threshold and visual short-term memory capacity. The results of this study lend support to the notion of differential impairment of attentional functions in children with ADHD. © 2012 SAGE Publications.
Alvarez, George A; Gill, Jonathan; Cavanagh, Patrick
2012-01-01
Previous studies have shown independent attentional selection of targets in the left and right visual hemifields during attentional tracking (Alvarez & Cavanagh, 2005) but not during a visual search (Luck, Hillyard, Mangun, & Gazzaniga, 1989). Here we tested whether multifocal spatial attention is the critical process that operates independently in the two hemifields. It is explicitly required in tracking (attend to a subset of object locations, suppress the others) but not in the standard visual search task (where all items are potential targets). We used a modified visual search task in which observers searched for a target within a subset of display items, where the subset was selected based on location (Experiments 1 and 3A) or based on a salient feature difference (Experiments 2 and 3B). The results show hemifield independence in this subset visual search task with location-based selection but not with feature-based selection; this effect cannot be explained by general difficulty (Experiment 4). Combined, these findings suggest that hemifield independence is a signature of multifocal spatial attention and highlight the need for cognitive and neural theories of attention to account for anatomical constraints on selection mechanisms. PMID:22637710
Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica
2016-01-01
Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high–low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. PMID:25146374
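The cross-task decoding logic described here can be sketched as follows: a linear classifier is trained on activation patterns labeled by visual WM load and tested on patterns labeled by verbal WM load, and vice versa. The synthetic patterns, feature dimensionality, and choice of scikit-learn's LinearSVC are assumptions; the study used real fMRI activation patterns and its own classification pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)
n_trials, n_voxels = 80, 200

# Synthetic stand-ins for per-trial activation patterns (real data would come
# from voxel-wise estimates in dorsal attention network regions).
shared_load_axis = rng.normal(size=n_voxels)            # assumed load-related pattern

def make_patterns(load_labels, noise=3.0):
    return np.outer(load_labels, shared_load_axis) + rng.normal(0, noise, (len(load_labels), n_voxels))

load_visual = rng.integers(0, 2, n_trials)              # 0 = low load, 1 = high load
load_verbal = rng.integers(0, 2, n_trials)
X_visual = make_patterns(load_visual)
X_verbal = make_patterns(load_verbal)

# Train on visual WM load, test on verbal WM load, and vice versa.
clf = LinearSVC(dual=False)
clf.fit(X_visual, load_visual)
print("visual -> verbal accuracy:", clf.score(X_verbal, load_verbal))
clf.fit(X_verbal, load_verbal)
print("verbal -> visual accuracy:", clf.score(X_visual, load_visual))
```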
Cognitive load reducing in destination decision system
NASA Astrophysics Data System (ADS)
Wu, Chunhua; Wang, Cong; Jiang, Qien; Wang, Jian; Chen, Hong
2007-12-01
Because cognitive resources are limited, the quantity of information a person can process is limited; when that limit is exceeded, the whole cognitive process, and ultimately the final decision, is affected. This research explores two ways to reduce cognitive load: cutting down the number of alternatives and guiding the user to allocate limited attentional resources according to selective visual attention theory. Decision-making is a complex process, and people usually have difficulty expressing their requirements completely. This paper puts forward an effective method for eliciting users' hidden requirements; with more requirements captured, the destination decision system can filter out more inappropriate alternatives. Different pieces of information have different utility, and if high-utility information attracts attention easily, the decision can be made more easily. After analyzing current selective visual attention theory, the paper also proposes a new presentation style based on the user's visual attention. This model arranges the presentation of information according to the movement of the user's line of sight, so that limited attentional resources are directed to the important information. Eliciting hidden requirements and presenting information according to selective visual attention are effective ways of reducing cognitive load.
Visual attention capacity: a review of TVA-based patient studies.
Habekost, Thomas; Starrfelt, Randi
2009-02-01
Psychophysical studies have identified two distinct limitations of visual attention capacity: processing speed and apprehension span. Using a simple test, these cognitive factors can be analyzed by Bundesen's Theory of Visual Attention (TVA). The method has strong specificity and sensitivity, and measurements are highly reliable. As the method is theoretically founded, it also has high validity. TVA-based assessment has recently been used to investigate a broad range of neuropsychological and neurological conditions. We present the method, including the experimental paradigm and practical guidelines to patient testing, and review existing TVA-based patient studies organized by lesion anatomy. Lesions in three anatomical regions affect visual capacity: The parietal lobes, frontal cortex and basal ganglia, and extrastriate cortex. Visual capacity thus depends on large, bilaterally distributed anatomical networks that include several regions outside the visual system. The two visual capacity parameters are functionally separable, but seem to rely on largely overlapping brain areas.
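A Monte Carlo sketch of the TVA whole-report logic behind the two capacity limits discussed here (processing speed and apprehension span) is given below: items race with exponential encoding rates, and only those finishing within the effective exposure and fitting into the available VSTM slots are reported. Equal attentional weights, a fixed perceptual threshold t0, and the particular parameter values are illustrative assumptions, not a reimplementation of the assessment procedure.

```python
import numpy as np

def simulate_whole_report(exposure_ms, C=40.0, K=3.5, t0_ms=20.0,
                          n_items=6, n_trials=20000, seed=0):
    """Mean whole-report score under simplified TVA assumptions: each item's
    encoding time is exponential with rate C/n_items (equal attentional
    weights); only items finishing before the effective exposure
    (exposure - t0) and fitting into the K available VSTM slots are reported."""
    rng = np.random.default_rng(seed)
    eff = max(exposure_ms - t0_ms, 0.0) / 1000.0         # effective exposure (s)
    rate = C / n_items                                   # encoding rate per item (1/s)
    # Fractional span: K = 3.5 means 4 slots on half the trials, 3 on the rest.
    spans = np.floor(K) + (rng.random(n_trials) < (K - np.floor(K)))
    finish_times = rng.exponential(1.0 / rate, size=(n_trials, n_items))
    encoded = (finish_times <= eff).sum(axis=1)
    return np.mean(np.minimum(encoded, spans))

for exposure in (50, 100, 200):
    print(f"exposure {exposure:3d} ms -> mean report score "
          f"{simulate_whole_report(exposure):.2f}")
```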
Online decoding of object-based attention using real-time fMRI.
Niazi, Adnan M; van den Broek, Philip L C; Klanke, Stefan; Barth, Markus; Poel, Mannes; Desain, Peter; van Gerven, Marcel A J
2014-01-01
Visual attention is used to selectively filter relevant information depending on current task demands and goals. Visual attention is called object-based attention when it is directed to coherent forms or objects in the visual field. This study used real-time functional magnetic resonance imaging for moment-to-moment decoding of attention to spatially overlapped objects belonging to two different object categories. First, a whole-brain classifier was trained on pictures of faces and places. Subjects then saw transparently overlapped pictures of a face and a place, and attended to only one of them while ignoring the other. The category of the attended object, face or place, was decoded on a scan-by-scan basis using the previously trained decoder. The decoder performed at 77.6% accuracy indicating that despite competing bottom-up sensory input, object-based visual attention biased neural patterns towards that of the attended object. Furthermore, a comparison between different classification approaches indicated that the representation of faces and places is distributed rather than focal. This implies that real-time decoding of object-based attention requires a multivariate decoding approach that can detect these distributed patterns of cortical activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
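The scan-by-scan decoding loop described above can be imitated offline roughly as follows: a classifier trained on localizer patterns for the two categories is applied to each incoming pattern in turn. The synthetic data, noise levels, and use of scikit-learn's LogisticRegression are assumptions; the study trained a whole-brain classifier on real face and place scans and decoded volumes in real time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_voxels = 300
face_axis = rng.normal(size=n_voxels)                 # assumed distributed category patterns
place_axis = rng.normal(size=n_voxels)

def scan(category, noise=4.0):
    base = face_axis if category == "face" else place_axis
    return base + rng.normal(0, noise, n_voxels)

# "Localizer" run: separate face and place scans used to train the decoder.
X_train = np.array([scan(c) for c in ["face", "place"] * 40])
y_train = np.array(["face", "place"] * 40)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Attention" run: each volume is decoded as it arrives; the label is the attended category.
attended = ["face", "place", "face", "place", "face"]
hits = 0
for i, label in enumerate(attended):
    volume = scan(label, noise=6.0)                   # noisier: competing overlapped input
    pred = decoder.predict(volume[None, :])[0]
    hits += pred == label
    print(f"scan {i}: attended={label:5s} decoded={pred}")
print("accuracy:", hits / len(attended))
```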
Feature-based attentional modulations in the absence of direct visual stimulation.
Serences, John T; Boynton, Geoffrey M
2007-07-19
When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field, even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.
(C)overt attention and visual speller design in an ERP-based brain-computer interface.
Treder, Matthias S; Blankertz, Benjamin
2010-05-28
In a visual oddball paradigm, attention to an event usually modulates the event-related potential (ERP). An ERP-based brain-computer interface (BCI) exploits this neural mechanism for communication. Hitherto, it was unclear to what extent the accuracy of such a BCI requires eye movements (overt attention) or whether it is also feasible for targets in the visual periphery (covert attention). Also unclear was how the visual design of the BCI can be improved to meet peculiarities of peripheral vision such as low spatial acuity and crowding. Healthy participants (N = 13) performed a copy-spelling task wherein they had to count target intensifications. EEG and eye movements were recorded concurrently. First, (c)overt attention was investigated by way of a target fixation condition and a central fixation condition. In the latter, participants had to fixate a dot in the center of the screen and allocate their attention to a target in the visual periphery. Second, the effect of visual speller layout was investigated by comparing the symbol Matrix to an ERP-based Hex-o-Spell, a two-level speller consisting of six discs arranged on an invisible hexagon. We assessed counting errors, ERP amplitudes, and offline classification performance. There is an advantage (i.e., fewer errors, larger ERP amplitude modulation, better classification) of overt attention over covert attention, and there is also an advantage of the Hex-o-Spell over the Matrix. Using overt attention, P1, N1, P2, N2, and P3 components are enhanced by attention. Using covert attention, only N2 and P3 are enhanced for both spellers, and N1 and P2 are modulated when using the Hex-o-Spell but not when using the Matrix. Consequently, classifiers rely mainly on early evoked potentials in overt attention and on later cognitive components in covert attention. Both overt and covert attention can be used to drive an ERP-based BCI, but performance is markedly lower for covert attention. The Hex-o-Spell outperforms the Matrix, especially when eye movements are not permitted, illustrating that performance can be increased if one accounts for peculiarities of peripheral vision.
Spatial Attention Reduces Burstiness in Macaque Visual Cortical Area MST.
Xue, Cheng; Kaping, Daniel; Ray, Sonia Baloni; Krishna, B Suresh; Treue, Stefan
2017-01-01
Visual attention modulates the firing rate of neurons in many primate cortical areas. In V4, a cortical area in the ventral visual pathway, spatial attention has also been shown to reduce the tendency of neurons to fire closely separated spikes (burstiness). A recent model proposes that a single mechanism accounts for both the firing rate enhancement and the burstiness reduction in V4, but this has not been empirically tested. It is also unclear if the burstiness reduction by spatial attention is found in other visual areas and for other attentional types. We therefore recorded from single neurons in the medial superior temporal area (MST), a key motion-processing area along the dorsal visual pathway, of two rhesus monkeys while they performed a task engaging both spatial and feature-based attention. We show that in MST, spatial attention is associated with a clear reduction in burstiness that is independent of the concurrent enhancement of firing rate. In contrast, feature-based attention enhances firing rate but is not associated with a significant reduction in burstiness. These results establish burstiness reduction as a widespread effect of spatial attention. They also suggest that in contrast to the recently proposed model, the effects of spatial attention on burstiness and firing rate emerge from different mechanisms. © The Author 2016. Published by Oxford University Press.
Kraft, Antje; Dyrholm, Mads; Kehrer, Stefanie; Kaufmann, Christian; Bruening, Jovita; Kathmann, Norbert; Bundesen, Claus; Irlbacher, Kerstin; Brandt, Stephan A
2015-01-01
Several studies have demonstrated a bilateral field advantage (BFA) in early visual attentional processing, that is, enhanced visual processing when stimuli are spread across both visual hemifields. The results are reminiscent of a hemispheric resource model of parallel visual attentional processing, suggesting more attentional resources on an early level of visual processing for bilateral displays [e.g. Sereno AB, Kosslyn SM. Discrimination within and between hemifields: a new constraint on theories of attention. Neuropsychologia 1991;29(7):659-75.]. Several studies have shown that the BFA extends beyond early stages of visual attentional processing, demonstrating that visual short term memory (VSTM) capacity is higher when stimuli are distributed bilaterally rather than unilaterally. Here we examine whether hemisphere-specific resources are also evident on later stages of visual attentional processing. Based on the Theory of Visual Attention (TVA) [Bundesen C. A theory of visual attention. Psychol Rev 1990;97(4):523-47.] we used a whole report paradigm that allows investigating visual attention capacity variability in unilateral and bilateral displays during navigated repetitive transcranial magnetic stimulation (rTMS) of the precuneus region. A robust BFA in VSTM storage capacity was apparent after rTMS over the left precuneus and in the control condition without rTMS. In contrast, the BFA diminished with rTMS over the right precuneus. This finding indicates that the right precuneus plays a causal role in VSTM capacity, particularly in bilateral visual displays. Copyright © 2015 Elsevier Inc. All rights reserved.
Cognitive Control Network Contributions to Memory-Guided Visual Attention
Rosen, Maya L.; Stern, Chantal E.; Michalka, Samantha W.; Devaney, Kathryn J.; Somers, David C.
2016-01-01
Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. PMID:25750253
Visual attention based bag-of-words model for image classification
NASA Astrophysics Data System (ADS)
Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che
2014-04-01
Bag-of-words is a classical method for image classification. The core problems are how to count the frequencies of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model uses a visual attention method to generate a saliency map and then uses the saliency map as a weighting matrix to guide the counting of visual-word frequencies. In addition, the VABOW model combines shape, color and texture cues and uses L1-regularized logistic regression to select the most relevant and most efficient features. We compare our approach with a traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
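A minimal sketch of the saliency-weighted counting step described above; the descriptor type, codebook construction and normalization are not specified in the abstract, so all names and details here are illustrative assumptions:

    import numpy as np

    def vabow_histogram(descriptors, positions, saliency_map, codebook):
        """Saliency-weighted bag-of-words histogram (illustrative sketch).

        descriptors  : (N, D) local feature descriptors
        positions    : (N, 2) integer (row, col) location of each descriptor
        saliency_map : (H, W) saliency values, e.g. in [0, 1]
        codebook     : (K, D) visual-word centers
        """
        hist = np.zeros(codebook.shape[0])
        for d, (r, c) in zip(descriptors, positions):
            word = np.argmin(np.linalg.norm(codebook - d, axis=1))  # nearest visual word
            hist[word] += saliency_map[r, c]                        # weight the count by local saliency
        return hist / (hist.sum() + 1e-8)                           # normalize to a distribution

Feature selection over the concatenated shape, color and texture histograms could then be done with an L1-regularized logistic regression (for example, scikit-learn's LogisticRegression with penalty='l1' and a liblinear solver), though the paper's exact pipeline is not given in the abstract.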
Modality-specificity of Selective Attention Networks.
Stewart, Hannah J; Amitay, Sygal
2015-01-01
To establish the modality specificity and generality of selective attention networks. Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled "general attention." The third component was labeled "auditory attention," as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled "spatial orienting" and "spatial conflict," respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task, all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). These results do not support our a-priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific.
Theory of Visual Attention (TVA) applied to mice in the 5-choice serial reaction time task.
Fitzpatrick, C M; Caballero-Puntiverio, M; Gether, U; Habekost, T; Bundesen, C; Vangkilde, S; Woldbye, D P D; Andreasen, J T; Petersen, A
2017-03-01
The 5-choice serial reaction time task (5-CSRTT) is widely used to measure rodent attentional functions. In humans, many attention studies in healthy and clinical populations have used testing based on Bundesen's Theory of Visual Attention (TVA) to estimate visual processing speeds and other parameters of attentional capacity. We aimed to bridge these research fields by modifying the 5-CSRTT's design and by mathematically modelling data to derive attentional parameters analogous to human TVA-based measures. C57BL/6 mice were tested in two 1-h sessions on consecutive days with a version of the 5-CSRTT where stimulus duration (SD) probe length was varied based on information from previous TVA studies. Thereafter, a scopolamine hydrobromide (HBr; 0.125 or 0.25 mg/kg) pharmacological challenge was undertaken, using a Latin square design. Mean score values were modelled using a new three-parameter version of TVA to obtain estimates of visual processing speeds, visual thresholds and motor response baselines in each mouse. The parameter estimates for each animal were reliable across sessions, showing that the data were stable enough to support analysis on an individual level. Scopolamine HBr dose-dependently reduced 5-CSRTT attentional performance while also increasing reward collection latency at the highest dose. Upon TVA modelling, scopolamine HBr significantly reduced visual processing speed at both doses, while having less pronounced effects on visual thresholds and motor response baselines. This study shows for the first time how 5-CSRTT performance in mice can be mathematically modelled to yield estimates of attentional capacity that are directly comparable to estimates from human studies.
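The abstract does not spell out the three-parameter model; a plausible TVA-style form for the probability of a correct response as a function of stimulus duration t, assuming an exponential processing race with visual processing speed v, visual threshold t_0, and a motor/guessing baseline p_0 (all symbols are assumptions here, not taken from the paper), would be:

$$P(\mathrm{correct} \mid t) = p_0 + (1 - p_0)\,\bigl(1 - e^{-v\,(t - t_0)}\bigr) \quad \text{for } t > t_0, \qquad P(\mathrm{correct} \mid t) = p_0 \quad \text{for } t \le t_0.$$

Under such a model, the reported scopolamine effect would appear mainly as a reduction of v, with smaller changes in t_0 and p_0.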
Research on metallic material defect detection based on bionic sensing of human visual properties
NASA Astrophysics Data System (ADS)
Zhang, Pei Jiang; Cheng, Tao
2018-05-01
Because the human visual system can quickly lock onto regions of interest in a complex natural environment and focus processing on them, this paper proposes a bionic-sensing visual inspection model for detecting defects in metallic materials in the mechanical field, built by simulating the imaging characteristics of the human visual attention mechanism. First, biologically salient low-level visual features are extracted, and manually marked defect annotations are used as the intermediate features of the simulated visual perception. An SVM is then trained on the high-level features of metal-material defects. Finally, the feature channels are combined according to their weights to obtain a defect-detection model for metallic materials that simulates human visual characteristics.
Visual attention: The past 25 years
Carrasco, Marisa
2012-01-01
This review focuses on covert attention and how it alters early vision. I explain why attention is considered a selective process, the constructs of covert attention, spatial endogenous and exogenous attention, and feature-based attention. I explain how in the last 25 years research on attention has characterized the effects of covert attention on spatial filters and how attention influences the selection of stimuli of interest. This review includes the effects of spatial attention on discriminability and appearance in tasks mediated by contrast sensitivity and spatial resolution; the effects of feature-based attention on basic visual processes, and a comparison of the effects of spatial and feature-based attention. The emphasis of this review is on psychophysical studies, but relevant electrophysiological and neuroimaging studies and models regarding how and where neuronal responses are modulated are also discussed. PMID:21549742
Toward a Unified Theory of Visual Area V4
Roe, Anna W.; Chelazzi, Leonardo; Connor, Charles E.; Conway, Bevil R.; Fujita, Ichiro; Gallant, Jack L.; Lu, Haidong; Vanduffel, Wim
2016-01-01
Visual area V4 is a midtier cortical area in the ventral visual pathway. It is crucial for visual object recognition and has been a focus of many studies on visual attention. However, there is no unifying view of V4’s role in visual processing. Neither is there an understanding of how its role in feature processing interfaces with its role in visual attention. This review captures our current knowledge of V4, largely derived from electrophysiological and imaging studies in the macaque monkey. Based on recent discovery of functionally specific domains in V4, we propose that the unifying function of V4 circuitry is to enable selective extraction of specific functional domain-based networks, whether it be by bottom-up specification of object features or by top-down attentionally driven selection. PMID:22500626
Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka
2008-01-01
This paper focuses on alpha-band modulation during visual spatial attention in the absence of visual stimuli. Visual spatial attention has been expected to provide a new channel for non-invasive, independent brain-computer interfaces (BCI), but little work has been done on this interfacing method. The flickering stimuli used in previous work reduce independence and are difficult to use in practice. We therefore investigated whether visual spatial attention could be detected without such stimuli. In addition, common spatial patterns (CSP) were applied for the first time to brain states during visual spatial attention. The performance evaluation was based on three brain states: attention to the left, right, and center directions. Thirty-channel scalp electroencephalographic (EEG) signals over occipital cortex were recorded from five subjects. Without CSP, the analyses achieved an average classification accuracy of 66.44% (range 55.42 to 72.27%) in discriminating the left and right attention classes. With CSP, the average classification accuracy was 75.39% (range 63.75 to 86.13%). The results suggest that CSP is useful in the context of visual spatial attention, and that alpha-band modulation during visual spatial attention without flickering stimuli could provide a new channel for independent BCI, alongside motor imagery.
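A minimal sketch of a standard two-class CSP computation of the kind described above; the paper's exact preprocessing, band-pass filtering and classifier are not given in the abstract, so the function names and parameters below are illustrative assumptions:

    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a, trials_b, n_filters=4):
        """Common spatial patterns for two-class EEG (illustrative sketch).

        trials_a, trials_b : lists of (n_channels, n_samples) arrays, one per trial
        Returns spatial filters (n_filters, n_channels) that maximize the variance
        ratio between the two classes.
        """
        mean_cov = lambda trials: np.mean([np.cov(t) for t in trials], axis=0)
        C_a, C_b = mean_cov(trials_a), mean_cov(trials_b)
        # Generalized eigenproblem: C_a w = lambda (C_a + C_b) w
        vals, vecs = eigh(C_a, C_a + C_b)
        order = np.argsort(vals)
        # The most discriminative filters sit at both ends of the eigenvalue spectrum
        idx = np.r_[order[:n_filters // 2], order[-(n_filters - n_filters // 2):]]
        return vecs[:, idx].T

    def log_variance_features(trials, W):
        """Log-variance of CSP-filtered signals, a standard feature for a linear classifier."""
        return np.array([np.log(np.var(W @ t, axis=1)) for t in trials])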
Serial grouping of 2D-image regions with object-based attention in humans.
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-06-13
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.
Behavior Selection of Mobile Robot Based on Integration of Multimodal Information
NASA Astrophysics Data System (ADS)
Chen, Bin; Kaneko, Masahide
Recently, biologically inspired robots have been developed to acquire the capacity for directing visual attention to salient stimuli generated in the audiovisual environment. To realize this behavior, a general method is to calculate saliency maps that represent how much the external information attracts the robot's visual attention, where the audiovisual information and the robot's motion status should be involved. In this paper, we present a visual attention model in which three modalities are considered, namely audio information, visual information, and the robot's motor status, whereas previous research has not considered all of them. First, we introduce a 2-D density map, on which the value denotes how much the robot pays attention to each spatial location. We then model the attention density using a Bayesian network in which the robot's motion statuses are involved. Second, the information from both the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons. The robot directs its attention to the locations where the integrate-and-fire neurons are fired. Finally, the visual attention model is applied to make the robot select visual information from the environment and react to the selected content. Experimental results show that it is possible for robots to acquire the visual information related to their behaviors by using the attention model that considers motion statuses. The robot can select its behaviors to adapt to the dynamic environment as well as switch to another task according to the recognition results of visual attention.
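A minimal sketch of the integration step described above, assuming per-location leaky integrate-and-fire units driven by a motion-status-dependent attention-density map and two modality saliency maps; the weights, names and leak/threshold values are assumptions for illustration, not the paper's parameters:

    import numpy as np

    def update_attention(visual_saliency, audio_saliency, density, potential,
                         w_visual=0.6, w_audio=0.4, leak=0.9, threshold=1.0):
        """One update step: integrate modality saliency, gated by attention density.

        visual_saliency, audio_saliency : (H, W) saliency maps
        density   : (H, W) attention-density map (e.g., from a Bayesian network
                    conditioned on the robot's motion status)
        potential : (H, W) membrane potentials of the integrate-and-fire units
        Returns updated potentials and a boolean map of locations that fired.
        """
        drive = density * (w_visual * visual_saliency + w_audio * audio_saliency)
        potential = leak * potential + drive      # leaky integration
        fired = potential >= threshold            # units crossing threshold fire
        potential[fired] = 0.0                    # reset fired units
        return potential, fired

Fired locations would then be the candidates toward which the robot directs its attention and selects its behavior.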
Visual attention spreads broadly but selects information locally.
Shioiri, Satoshi; Honjyo, Hajime; Kashiwase, Yoshiyuki; Matsumiya, Kazumichi; Kuriki, Ichiro
2016-10-19
Visual attention spreads over a range around the focus, as the spotlight metaphor describes. The spatial spread of attentional enhancement and local selection/inhibition are crucial factors determining the profile of spatial attention. Enhancement and ignorance/suppression are opposite effects of attention and appear to be mutually exclusive. Yet no unified view of these factors has been provided, despite their necessity for understanding the functions of spatial attention. This report provides electroencephalographic and behavioral evidence for attentional spread at an early stage and selection/inhibition at a later stage of visual processing. The steady-state visual evoked potential showed broad spatial tuning, whereas the P3 component of the event-related potential showed local selection or inhibition of the adjacent areas. Based on these results, we propose a two-stage model of spatial attention with broad spread at an early stage and local selection at a later stage.
Feature-based attention elicits surround suppression in feature space.
Störmer, Viola S; Alvarez, George A
2014-09-08
It is known that focusing attention on a particular feature (e.g., the color red) facilitates the processing of all objects in the visual field containing that feature [1-7]. Here, we show that such feature-based attention not only facilitates processing but also actively inhibits processing of similar, but not identical, features globally across the visual field. We combined behavior and electrophysiological recordings of frequency-tagged potentials in human observers to measure this inhibitory surround in feature space. We found that sensory signals of an attended color (e.g., red) were enhanced, whereas sensory signals of colors similar to the target color (e.g., orange) were suppressed relative to colors more distinct from the target color (e.g., yellow). Importantly, this inhibitory effect spreads globally across the visual field, thus operating independently of location. These findings suggest that feature-based attention comprises an excitatory peak surrounded by a narrow inhibitory zone in color space to attenuate the most distracting and potentially confusable stimuli during visual perception. This selection profile is akin to what has been reported for location-based attention [8-10] and thus suggests that such center-surround mechanisms are an overarching principle of attention across different domains in the human brain. Copyright © 2014 Elsevier Ltd. All rights reserved.
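One way to formalize the center-surround profile reported above is a difference of Gaussians over the feature-space distance Δ between a color and the attended color; the amplitudes and widths below are assumptions for illustration, not estimates from the study:

$$g(\Delta) = A_e\, e^{-\Delta^2 / (2\sigma_e^2)} - A_i\, e^{-\Delta^2 / (2\sigma_i^2)}, \qquad \sigma_i > \sigma_e,\ A_e > A_i.$$

With a narrow excitatory center and a somewhat broader inhibitory component, colors close to the target (e.g., orange when attending red) fall into the suppressive dip, while more distant colors (e.g., yellow) are affected less, matching the pattern described in the abstract.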
Memory-Based Attention Capture when Multiple Items Are Maintained in Visual Working Memory
Hollingworth, Andrew; Beck, Valerie M.
2016-01-01
Efficient visual search requires that attention is guided strategically to relevant objects, and most theories of visual search implement this function by means of a target template maintained in visual working memory (VWM). However, there is currently debate over the architecture of VWM-based attentional guidance. We contrasted a single-item-template hypothesis with a multiple-item-template hypothesis, which differ in their claims about structural limits on the interaction between VWM representations and perceptual selection. Recent evidence from van Moorselaar, Theeuwes, and Olivers (2014) indicated that memory-based capture during search—an index of VWM guidance—is not observed when memory set size is increased beyond a single item, suggesting that multiple items in VWM do not guide attention. In the present study, we maximized the overlap between multiple colors held in VWM and the colors of distractors in a search array. Reliable capture was observed when two colors were held in VWM and both colors were present as distractors, using both the original van Moorselaar et al. singleton-shape search task and a search task that required focal attention to array elements (gap location in outline square stimuli). In the latter task, memory-based capture was consistent with the simultaneous guidance of attention by multiple VWM representations. PMID:27123681
The role of lightness, hue and saturation in feature-based visual attention.
Stuart, Geoffrey W; Barsdell, Wendy N; Day, Ross H
2014-03-01
Visual attention is used to select part of the visual array for higher-level processing. Visual selection can be based on spatial location, but it has also been demonstrated that multiple locations can be selected simultaneously on the basis of a visual feature such as color. One task that has been used to demonstrate feature-based attention is the judgement of the symmetry of simple four-color displays. In a typical task, when symmetry is violated, four squares on either side of the display do not match. When four colors are involved, symmetry judgements are made more quickly than when only two of the four colors are involved. This indicates that symmetry judgements are made one color at a time. Previous studies have confounded lightness, hue, and saturation when defining the colors used in such displays. In three experiments, symmetry was defined by lightness alone, lightness plus hue, or by hue or saturation alone, with lightness levels randomised. The difference between judgements of two- and four-color asymmetry was maintained, showing that hue and saturation can provide the sole basis for feature-based attentional selection. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Schneider, Werner X.
2013-01-01
The goal of this review is to introduce a theory of task-driven visual attention and working memory (TRAM). Based on a specific biased competition model, the 'theory of visual attention' (TVA) and its neural interpretation (NTVA), TRAM introduces the following assumptions. First, selective visual processing over time is structured in competition episodes. Within an episode, that is, during its first two phases, a limited number of proto-objects are competitively encoded, modulated by the current task, in activation-based visual working memory (VWM). In processing phase 3, relevant VWM objects are transferred via a short-term consolidation into passive VWM. Second, each time attentional priorities change (e.g. after an eye movement), a new competition episode is initiated. Third, if a phase 3 VWM process (e.g. short-term consolidation) is not finished when a new episode is called, a protective maintenance process allows its completion. After a VWM object change, its protective maintenance process is followed by an encapsulation of the VWM object, causing attentional resource costs in trailing competition episodes. Viewed from this perspective, a new explanation of key findings of the attentional blink will be offered. Finally, a new suggestion will be made as to how VWM items might interact with visual search processes. PMID:24018722
Robotic Attention Processing And Its Application To Visual Guidance
NASA Astrophysics Data System (ADS)
Barth, Matthew; Inoue, Hirochika
1988-03-01
This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using the attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.
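A minimal sketch of the local-window idea described above: processing is restricted to a small window, and the window is re-centered on the motion it detects. The window size, the frame-difference rule and the variable names are illustrative assumptions, not the multi-window system's actual implementation:

    import numpy as np

    def track_in_window(frame, prev_frame, center, size=32):
        """Track motion inside a local attention window (illustrative sketch).

        frame, prev_frame : 2-D grayscale images (window assumed to stay inside them)
        center            : (row, col) of the current window
        Returns the updated window center, shifted toward the centroid of the
        frame differences inside the window.
        """
        r, c = center
        h = size // 2
        win = frame[r - h:r + h, c - h:c + h].astype(float)
        prev = prev_frame[r - h:r + h, c - h:c + h].astype(float)
        diff = np.abs(win - prev)
        if diff.sum() < 1e-6:
            return center                           # no motion detected: keep the window in place
        rows, cols = np.indices(diff.shape)
        dr = (rows * diff).sum() / diff.sum() - h   # centroid offset within the window
        dc = (cols * diff).sum() / diff.sum() - h
        return (int(r + dr), int(c + dc))           # move the window toward the motion

Because only the pixels inside the window are touched, per-frame cost stays small, which is the property the multi-window system exploits for high-speed attention processing.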
Dissociable Electroencephalograph Correlates of Visual Awareness and Feature-Based Attention
Chen, Yifan; Wang, Xiaochun; Yu, Yanglan; Liu, Ying
2017-01-01
Background: The relationship between awareness and attention is complex and controversial. A growing body of literature has shown that the neural bases of consciousness and endogenous attention (voluntary attention) are independent. The important role of exogenous attention (reflexive attention) on conscious experience has been noted in several studies. However, exogenous attention can also modulate subliminal processing, suggesting independence between the two processes. The question of whether visual awareness and exogenous attention rely on independent mechanisms under certain circumstances remains unanswered. Methods: In the current study, electroencephalograph recordings were conducted using 64 channels from 16 subjects while subjects attempted to detect faint speed changes of colored rotating dots. Awareness and attention were manipulated throughout trials in order to test whether exogenous attention and visual awareness rely on independent mechanisms. Results: Neural activity related to consciousness was recorded in the following cue-locked time windows (event-related potential, cluster-based permutation test): 0–50, 150–200, and 750–800 ms. With a more liberal threshold, the inferior occipital lobe was found to be the source of awareness-related activity in the 0–50 ms range. In the later 150–200 ms range, activity in the fusiform and post-central gyrus was related to awareness. Awareness-related activation in the later 750–800 ms range was more widely distributed. This awareness-related activation pattern was quite different from that of attention. Attention-related neural activity was emphasized in the 750–800 ms time window and the main source of attention-related activity was localized to the right angular gyrus. These results suggest that exogenous attention and visual consciousness correspond to different and relatively independent neural mechanisms and are distinct processes under certain conditions. PMID:29180950
Attention affects visual perceptual processing near the hand.
Cosman, Joshua D; Vecera, Shaun P
2010-09-01
Specialized, bimodal neural systems integrate visual and tactile information in the space near the hand. Here, we show that visuo-tactile representations allow attention to influence early perceptual processing, namely, figure-ground assignment. Regions that were reached toward were more likely than other regions to be assigned as foreground figures, and hand position competed with image-based information to bias figure-ground assignment. Our findings suggest that hand position allows attention to influence visual perceptual processing and that visual processes typically viewed as unimodal can be influenced by bimodal visuo-tactile representations.
Olivers, Christian N L; Meijer, Frank; Theeuwes, Jan
2006-10-01
In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by an additional memory task. Singleton distractors interfered even more when they were identical or related to the object held in memory, but only when it was difficult to verbalize the memory content. Furthermore, this content-specific interaction occurred for features that were relevant to the memory task but not for irrelevant features of the same object or for once-remembered objects that could be forgotten. Finally, memory-related distractors attracted more eye movements but did not result in longer fixations. The results demonstrate memory-driven attentional capture on the basis of content-specific representations. Copyright 2006 APA.
Neural Correlates of Individual Differences in Infant Visual Attention and Recognition Memory
ERIC Educational Resources Information Center
Reynolds, Greg D.; Guy, Maggie W.; Zhang, Dantong
2011-01-01
Past studies have identified individual differences in infant visual attention based upon peak look duration during initial exposure to a stimulus. Colombo and colleagues found that infants that demonstrate brief visual fixations (i.e., short lookers) during familiarization are more likely to demonstrate evidence of recognition memory during…
Behavioral and Brain Measures of Phasic Alerting Effects on Visual Attention.
Wiegand, Iris; Petersen, Anders; Finke, Kathrin; Bundesen, Claus; Lansner, Jon; Habekost, Thomas
2017-01-01
In the present study, we investigated effects of phasic alerting on visual attention in a partial report task, in which half of the displays were preceded by an auditory warning cue. Based on the computational Theory of Visual Attention (TVA), we estimated parameters of spatial and non-spatial aspects of visual attention and measured event-related lateralizations (ERLs) over visual processing areas. We found that the TVA parameter sensory effectiveness a, which is thought to reflect visual processing capacity, significantly increased with phasic alerting. By contrast, the distribution of visual processing resources according to task relevance and spatial position, as quantified in the parameters top-down control α and spatial bias w_index, was not modulated by phasic alerting. On the electrophysiological level, the latencies of ERLs in response to the task displays were reduced following the warning cue. These results suggest that phasic alerting facilitates visual processing in a general, unselective manner and that this effect originates in early stages of visual information processing.
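For reference, the core rate equation of TVA (Bundesen, 1990), on which parameter estimates of this kind are based, gives the rate at which object x is categorized as i from its sensory evidence η(x,i), the perceptual bias β_i, and its attentional weight w_x relative to all objects z in the display S:

$$v(x, i) = \eta(x, i)\,\beta_i\,\frac{w_x}{\sum_{z \in S} w_z}.$$

In TVA-based partial report, top-down control α is conventionally estimated as the ratio of a distractor's attentional weight to a target's weight, and the spatial bias w_index as the relative weight of one hemifield (e.g., w_left / (w_left + w_right)); these are the standard readings of the parameters rather than a restatement of this particular study's methods.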
Intensive video gaming improves encoding speed to visual short-term memory in young male adults.
Wilms, Inge L; Petersen, Anders; Vangkilde, Signe
2013-01-01
The purpose of this study was to measure the effect of action video gaming on central elements of visual attention using Bundesen's (1990) Theory of Visual Attention. To examine the cognitive impact of action video gaming, we tested basic functions of visual attention in 42 young male adults. Participants were divided into three groups depending on the amount of time spent playing action video games: non-players (<2h/month, N=12), casual players (4-8h/month, N=10), and experienced players (>15h/month, N=20). All participants were tested in three tasks which tap central functions of visual attention and short-term memory: a test based on the Theory of Visual Attention (TVA), an enumeration test and finally the Attentional Network Test (ANT). The results show that action video gaming does not seem to impact the capacity of visual short-term memory. However, playing action video games does seem to improve the encoding speed of visual information into visual short-term memory and the improvement does seem to depend on the time devoted to gaming. This suggests that intense action video gaming improves basic attentional functioning and that this improvement generalizes into other activities. The implications of these findings for cognitive rehabilitation training are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
Cholinergic enhancement of visual attention and neural oscillations in the human brain.
Bauer, Markus; Kluge, Christian; Bach, Dominik; Bradbury, David; Heinze, Hans Jochen; Dolan, Raymond J; Driver, Jon
2012-03-06
Cognitive processes such as visual perception and selective attention induce specific patterns of brain oscillations. The neurochemical bases of these spectral changes in neural activity are largely unknown, but neuromodulators are thought to regulate processing. The cholinergic system is linked to attentional function in vivo, whereas separate in vitro studies show that cholinergic agonists induce high-frequency oscillations in slice preparations. This has led to theoretical proposals that cholinergic enhancement of visual attention might operate via gamma oscillations in visual cortex, although low-frequency alpha/beta modulation may also play a key role. Here we used MEG to record cortical oscillations in the context of administration of a cholinergic agonist (physostigmine) during a spatial visual attention task in humans. This cholinergic agonist enhanced spatial attention effects on low-frequency alpha/beta oscillations in visual cortex, an effect correlating with a drug-induced speeding of performance. By contrast, the cholinergic agonist did not alter high-frequency gamma oscillations in visual cortex. Thus, our findings show that cholinergic neuromodulation enhances attentional selection via an impact on oscillatory synchrony in visual cortex, for low rather than high frequencies. We discuss this dissociation between high- and low-frequency oscillations in relation to proposals that lower-frequency oscillations are generated by feedback pathways within visual cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.
Attentional bias to food-related visual cues: is there a role in obesity?
Doolan, K J; Breslin, G; Hanna, D; Gallagher, A M
2015-02-01
The incentive sensitisation model of obesity suggests that modification of the dopaminergic associated reward systems in the brain may result in increased awareness of food-related visual cues present in the current food environment. Having a heightened awareness of these visual food cues may impact on food choices and eating behaviours with those being most aware of or demonstrating greater attention to food-related stimuli potentially being at greater risk of overeating and subsequent weight gain. To date, research related to attentional responses to visual food cues has been both limited and conflicting. Such inconsistent findings may in part be explained by the use of different methodological approaches to measure attentional bias and the impact of other factors such as hunger levels, energy density of visual food cues and individual eating style traits that may influence visual attention to food-related cues outside of weight status alone. This review examines the various methodologies employed to measure attentional bias with a particular focus on the role that attentional processing of food-related visual cues may have in obesity. Based on the findings of this review, it appears that it may be too early to clarify the role visual attention to food-related cues may have in obesity. Results however highlight the importance of considering the most appropriate methodology to use when measuring attentional bias and the characteristics of the study populations targeted while interpreting results to date and in designing future studies.
Koda, Hiroki; Sato, Anna; Kato, Akemi
2013-09-01
Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations on human participants have shown visual attentional prioritisation to newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in subject monkeys, our results, unlike those for humans, showed no evidence of an attentional prioritisation for newborn faces by monkeys. Our demonstrations showed the validity of the dot-probe task for visual attention studies in monkeys and proposed a novel approach to bridge the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not common to macaques, but it is unclear if nursing experiences influence their perception and recognition of infantile appraisal stimuli. We need additional comparative studies to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.
Visual Motion Perception and Visual Attentive Processes.
1988-04-01
[Technical report record; the scanned text is garbled. Recoverable details: "Visual Motion Perception and Visual Attentive Processes," George Sperling, New York University, supported by AFOSR grant 85-0364; the fragment cites Sperling's HIPS, a Unix-based image processing system (Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347), and van Santen and Sperling's elaborated Reichardt detectors (1985, Journal of the Optical Society of America).]
Steady-State Somatosensory Evoked Potential for Brain-Computer Interface—Present and Future
Ahn, Sangtae; Kim, Kiwoong; Jun, Sung Chan
2016-01-01
Brain-computer interface (BCI) performance has achieved continued improvement over recent decades, and sensorimotor rhythm-based BCIs that use motor function have been popular subjects of investigation. However, it remains problematic to introduce them to the public market because of their low reliability. As an alternative resolution to this issue, visual-based BCIs that use P300 or steady-state visually evoked potentials (SSVEPs) seem promising; however, the inherent visual fatigue that occurs with these BCIs may be unavoidable. For these reasons, steady-state somatosensory evoked potential (SSSEP) BCIs, which are based on tactile selective attention, have gained increasing attention recently. These may reduce the fatigue induced by visual attention and overcome the low reliability of motor activity. In this literature survey, recent findings on SSSEP and its methodological uses in BCI are reviewed. Further, existing limitations of SSSEP BCI and potential future directions for the technique are discussed. PMID:26834611
Botly, Leigh C P; De Rosa, Eve
2012-10-01
The visual search task established the feature integration theory of attention in humans and measures visuospatial attentional contributions to feature binding. We recently demonstrated that the neuromodulator acetylcholine (ACh), from the nucleus basalis magnocellularis (NBM), supports the attentional processes required for feature binding using a rat digging-based task. Additional research has demonstrated cholinergic contributions from the NBM to visuospatial attention in rats. Here, we combined these lines of evidence and employed visual search in rats to examine whether cortical cholinergic input supports visuospatial attention specifically for feature binding. We trained 18 male Long-Evans rats to perform visual search using touch screen-equipped operant chambers. Sessions comprised Feature Search (no feature binding required) and Conjunctive Search (feature binding required) trials using multiple stimulus set sizes. Following acquisition of visual search, 8 rats received bilateral NBM lesions using 192 IgG-saporin to selectively reduce cholinergic afferentation of the neocortex, which we hypothesized would selectively disrupt the visuospatial attentional processes needed for efficient conjunctive visual search. As expected, relative to sham-lesioned rats, ACh-NBM-lesioned rats took significantly longer to locate the target stimulus on Conjunctive Search, but not Feature Search trials, thus demonstrating that cholinergic contributions to visuospatial attention are important for feature binding in rats.
Exploring conflict- and target-related movement of visual attention.
Wendt, Mike; Garling, Marco; Luna-Rodriguez, Aquiles; Jacobsen, Thomas
2014-01-01
Intermixing trials of a visual search task with trials of a modified flanker task, the authors investigated whether the presentation of conflicting distractors at only one side (left or right) of a target stimulus triggers shifts of visual attention towards the contralateral side. Search time patterns provided evidence for lateral attention shifts only when participants performed the flanker task under an instruction assumed to widen the focus of attention, demonstrating that instruction-based control settings of an otherwise identical task can impact performance in an unrelated task. Contrasting conditions with response-related and response-unrelated distractors showed that shifting attention does not depend on response conflict and may be explained as stimulus-conflict-related withdrawal or target-related deployment of attention.
Painter, David R; Dux, Paul E; Mattingley, Jason B
2015-09-01
When visual attention is set for a particular target feature, such as color or shape, neural responses to that feature are enhanced across the visual field. This global feature-based enhancement is hypothesized to underlie the contingent attentional capture effect, in which task-irrelevant items with the target feature capture spatial attention. In humans, however, different cortical regions have been implicated in global feature-based enhancement and contingent capture. Here, we applied intermittent theta-burst stimulation (iTBS) to assess the causal roles of two regions of extrastriate cortex - right area MT and the right temporoparietal junction (TPJ) - in both global feature-based enhancement and contingent capture. We recorded cortical activity using EEG while participants monitored centrally for targets defined by color and ignored peripheral checkerboards that matched the distractor or target color. In central vision, targets were preceded by colored cues designed to capture attention. Stimuli flickered at unique frequencies, evoking distinct cortical oscillations. Analyses of these oscillations and behavioral performance revealed contingent capture in central vision and global feature-based enhancement in the periphery. Stimulation of right area MT selectively increased global feature-based enhancement, but did not influence contingent attentional capture. By contrast, stimulation of the right TPJ left both processes unaffected. Our results reveal a causal role for the right area MT in feature-based attention, and suggest that global feature-based enhancement does not underlie the contingent capture effect. Copyright © 2015 Elsevier Inc. All rights reserved.
Eccentricity effects in vision and attention.
Staugaard, Camilla Funch; Petersen, Anders; Vangkilde, Signe
2016-11-01
Stimulus eccentricity affects visual processing in multiple ways. Performance on a visual task is often better when target stimuli are presented near or at the fovea compared to the retinal periphery. For instance, reaction times and error rates are often reported to increase with increasing eccentricity. Such findings have been interpreted as purely visual, reflecting neurophysiological differences in central and peripheral vision, as well as attentional, reflecting a central bias in the allocation of attentional resources. Other findings indicate that in some cases, information from the periphery is preferentially processed. Specifically, it has been suggested that visual processing speed increases with increasing stimulus eccentricity, and that this positive correlation is reduced, but not eliminated, when the amount of cortex activated by a stimulus is kept constant by magnifying peripheral stimuli (Carrasco et al., 2003). In this study, we investigated effects of eccentricity on visual attentional capacity with and without magnification, using computational modeling based on Bundesen's (1990) theory of visual attention. Our results suggest a general decrease in attentional capacity with increasing stimulus eccentricity, irrespective of magnification. We discuss these results in relation to the physiology of the visual system, the use of different paradigms for investigating visual perception across the visual field, and the use of different stimulus materials (e.g. Gabor patches vs. letters). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
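The magnification referred to above (via Carrasco et al., 2003) is commonly implemented by enlarging peripheral stimuli so that they activate roughly equal amounts of cortex. A standard approximation, with parameter values that vary across studies and are therefore only indicative here, scales stimulus size S with eccentricity E as

$$S(E) = S_0\,\left(1 + \frac{E}{E_2}\right),$$

where S_0 is the foveal size and E_2 is the eccentricity at which the required size doubles.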
ERIC Educational Resources Information Center
Huang, Y-M.; Liu, C-J.; Shadiev, Rustam; Shen, M-H.; Hwang, W-Y.
2015-01-01
One major drawback of previous research on speech-to-text recognition (STR) is that most findings showing the effectiveness of STR for learning were based upon subjective evidence. Very few studies have used eye-tracking techniques to investigate visual attention of students on STR-generated text. Furthermore, not much attention was paid to…
Association of blood antioxidants status with visual and auditory sustained attention.
Shiraseb, Farideh; Siassi, Fereydoun; Sotoudeh, Gity; Qorbani, Mostafa; Rostami, Reza; Sadeghi-Firoozabadi, Vahid; Narmaki, Elham
2015-01-01
A low antioxidants status has been shown to result in oxidative stress and cognitive impairment. Because antioxidants can protect the nervous system, it is expected that a better blood antioxidant status might be related to sustained attention. However, the relationship between the blood antioxidant status and visual and auditory sustained attention has not been investigated. The aim of this study was to evaluate the association of fruits and vegetables intake and the blood antioxidant status with visual and auditory sustained attention in women. This cross-sectional study was performed on 400 healthy women (20-50 years) who attended the sports clubs of Tehran Municipality. Sustained attention was evaluated based on the Integrated Visual and Auditory Continuous Performance Test using the Integrated Visual and Auditory (IVA) software. The 24-hour food recall questionnaire was used for estimating fruits and vegetables intake. Serum total antioxidant capacity (TAC), and erythrocyte superoxide dismutase (SOD) and glutathione peroxidase (GPx) activities were measured in 90 participants. After adjusting for energy intake, age, body mass index (BMI), years of education and physical activity, higher reported fruits and vegetables intake was associated with better visual and auditory sustained attention (P < 0.001). A high intake of some subgroups of fruits and vegetables (i.e. berries, cruciferous vegetables, green leafy vegetables, and other vegetables) was also associated with better sustained attention (P < 0.02). Serum TAC, and erythrocyte SOD and GPx activities increased with the increase in the tertiles of visual and auditory sustained attention after adjusting for age, years of education, physical activity, energy, BMI, and caffeine intake (P < 0.05). Improved visual and auditory sustained attention is associated with a better blood antioxidant status. Therefore, improvement of the antioxidant status through an appropriate dietary intake can possibly enhance sustained attention.
Gillebert, Celine R; Petersen, Anders; Van Meel, Chayenne; Müller, Tanja; McIntyre, Alexandra; Wagemans, Johan; Humphreys, Glyn W
2016-06-01
Previous studies have shown that the perceptual organization of the visual scene constrains the deployment of attention. Here we investigated how the organization of multiple elements into larger configurations alters their attentional weight, depending on the "pertinence" or behavioral importance of the elements' features. We assessed object-based effects on distinct aspects of the attentional priority map: top-down control, reflecting the tendency to encode targets rather than distracters, and the spatial distribution of attention weights across the visual scene, reflecting the tendency to report elements belonging to the same rather than different objects. In 2 experiments participants had to report the letters in briefly presented displays containing 8 letters and digits, in which pairs of characters could be connected with a line. Quantitative estimates of top-down control were obtained using Bundesen's Theory of Visual Attention (1990). The spatial distribution of attention weights was assessed using the "paired response index" (PRI), indicating responses for within-object pairs of letters. In Experiment 1, grouping along the task-relevant dimension (targets with targets and distracters with distracters) increased top-down control and enhanced the PRI; in contrast, task-irrelevant grouping (targets with distracters) did not affect performance. In Experiment 2, we disentangled the effect of target-target and distracter-distracter grouping: Pairwise grouping of distracters enhanced top-down control whereas pairwise grouping of targets changed the PRI. We conclude that object-based perceptual representations interact with pertinence values (of the elements' features and location) in the computation of attention weights, thereby creating a widespread pattern of attentional facilitation across the visual scene. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
The contributions of visual and central attention to visual working memory.
Souza, Alessandra S; Oberauer, Klaus
2017-10-01
We investigated the role of two kinds of attention-visual and central attention-for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention-visual or central-was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.
The Attentional Field Revealed by Single-Voxel Modeling of fMRI Time Courses
DeYoe, Edgar A.
2015-01-01
The spatial topography of visual attention is a distinguishing and critical feature of many theoretical models of visuospatial attention. Previous fMRI-based measurements of the topography of attention have typically been too crude to adequately test the predictions of different competing models. This study demonstrates a new technique to make detailed measurements of the topography of visuospatial attention from single-voxel, fMRI time courses. Briefly, this technique involves first estimating a voxel's population receptive field (pRF) and then “drifting” attention through the pRF such that the modulation of the voxel's fMRI time course reflects the spatial topography of attention. The topography of the attentional field (AF) is then estimated using a time-course modeling procedure. Notably, we are able to make these measurements in many visual areas including smaller, higher order areas, thus enabling a more comprehensive comparison of attentional mechanisms throughout the full hierarchy of human visual cortex. Using this technique, we show that the AF scales with eccentricity and varies across visual areas. We also show that voxels in multiple visual areas exhibit suppressive attentional effects that are well modeled by an AF having an enhancing Gaussian center with a suppressive surround. These findings provide extensive, quantitative neurophysiological data for use in modeling the psychological effects of visuospatial attention. PMID:25810532
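A compact way to write the enhancing-center, suppressive-surround attentional field (AF) described above, and its predicted effect on a voxel, is a difference of Gaussians centered on the attended location x_a whose overlap with the voxel's pRF scales the response modulation; the symbols below follow generic modeling conventions rather than the paper's exact parameterization:

$$\mathrm{AF}(\mathbf{x}) = A_c\, e^{-\lVert\mathbf{x}-\mathbf{x}_a\rVert^2/(2\sigma_c^2)} - A_s\, e^{-\lVert\mathbf{x}-\mathbf{x}_a\rVert^2/(2\sigma_s^2)}, \qquad \text{modulation} \propto \int \mathrm{pRF}(\mathbf{x})\,\mathrm{AF}(\mathbf{x})\,d\mathbf{x}.$$

As attention is "drifted" across the pRF, the modulation traced out over time reflects this overlap, which is what allows the AF profile to be estimated from single-voxel fMRI time courses.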
Kiyonaga, Anastasia; Egner, Tobias
2014-01-01
It is unclear why and under what circumstances working memory (WM) and attention interact. Here, we apply the logic of the time-based resource-sharing (TBRS) model of WM (e.g., Barrouillet et al., 2004) to explore the mixed findings of a separate, but related, literature that studies the guidance of visual attention by WM contents. Specifically, we hypothesize that the linkage between WM representations and visual attention is governed by a time-shared cognitive resource that alternately refreshes internal (WM) and selects external (visual attention) information. If this were the case, WM content should guide visual attention (involuntarily), but only when there is time for it to be refreshed in an internal focus of attention. To provide an initial test for this hypothesis, we examined whether the amount of unoccupied time during a WM delay could impact the magnitude of attentional capture by WM contents. Participants were presented with a series of visual search trials while they maintained a WM cue for a delayed-recognition test. WM cues could coincide with the search target, a distracter, or neither. We varied both the number of searches to be performed, and the amount of available time to perform them. Slowing of visual search by a WM matching distracter—and facilitation by a matching target—were curtailed when the delay was filled with fast-paced (refreshing-preventing) search trials, as was subsequent memory probe accuracy. WM content may, therefore, only capture visual attention when it can be refreshed, suggesting that internal (WM) and external attention demands reciprocally impact one another because they share a limited resource. The TBRS rationale can thus be applied in a novel context to explain why WM contents capture attention, and under what conditions that effect should be observed. PMID:25221499
Color impact in visual attention deployment considering emotional images
NASA Astrophysics Data System (ADS)
Chamaret, C.
2012-03-01
Color is a predominant factor in the human visual attention system. Even if it is not sufficient for a global or complete understanding of a scene, it may affect the deployment of visual attention. We propose to study the impact of color, as well as the emotional content of pictures, on the deployment of visual attention. An eye-tracking campaign was conducted in which twenty people watched half of the pictures in the database in full color and the other half in greyscale. The eye fixations on color and greyscale images were highly correlated, raising the question of whether such cues need to be integrated in the design of visual attention models. Indeed, the predictions of two state-of-the-art computational models were similar for the two color categories. Similarly, analyses of saccade amplitude and fixation duration as a function of viewing time did not reveal any significant differences between the two categories. In addition, the spatial coordinates of eye fixations proved to be an interesting indicator for investigating differences in the deployment of visual attention over time and fixation number. The second factor, emotion category, showed evidence of inter-category differences between color and greyscale eye fixations for the passive and positive emotion category. The particular character of this category induces a specific viewing behavior, based more on high frequencies, in which the color components influence the deployment of visual attention.
Reward speeds up and increases consistency of visual selective attention: a lifespan comparison.
Störmer, Viola; Eppinger, Ben; Li, Shu-Chen
2014-06-01
Children and older adults often show less favorable reward-based learning and decision making, relative to younger adults. It is unknown, however, whether reward-based processes that influence relatively early perceptual and attentional processes show similar lifespan differences. In this study, we investigated whether stimulus-reward associations affect selective visual attention differently across the human lifespan. Children, adolescents, younger adults, and older adults performed a visual search task in which the target colors were associated with either high or low monetary rewards. We discovered that high reward value speeded up response times across all four age groups, indicating that reward modulates attentional selection across the lifespan. This speed-up in response time was largest in younger adults, relative to the other three age groups. Furthermore, only younger adults benefited from high reward value in increasing response consistency (i.e., reduction of trial-by-trial reaction time variability). Our findings suggest that reward-based modulations of relatively early and implicit perceptual and attentional processes are operative across the lifespan, and the effects appear to be greater in adulthood. The age-specific effect of reward on reducing intraindividual response variability in younger adults likely reflects mechanisms underlying the development and aging of reward processing, such as lifespan age differences in the efficacy of dopaminergic modulation. Overall, the present results indicate that reward shapes visual perception across different age groups by biasing attention to motivationally salient events.
The visual attention span deficit in Chinese children with reading fluency difficulty.
Zhao, Jing; Liu, Menglian; Liu, Hanlong; Huang, Chen
2018-02-01
With reading development, some children fail to learn to read fluently. However, reading fluency difficulty (RFD) has not been fully investigated. The present study explored the underlying mechanism of RFD from the aspect of visual attention span. Fourteen Chinese children with RFD and fourteen age-matched normal readers participated. The visual 1-back task was adopted to examine visual attention span. Reaction time and accuracy were recorded, and relevant d-prime (d') scores were computed. Results showed that children with RFD exhibited lower accuracy and lower d' values than the controls did in the visual 1-back task, revealing a visual attention span deficit. Further analyses on d' values revealed that the attention distribution seemed to exhibit an inverted U-shaped pattern without lateralization for normal readers, but a W-shaped pattern with a rightward bias for children with RFD, which was discussed based on between-group variation in reading strategies. Results of the correlation analyses showed that visual attention span was associated with reading fluency at the sentence level for normal readers, but was related to reading fluency at the single-character level for children with RFD. The different patterns in correlations between groups revealed that visual attention span might be affected by the variation in reading strategies. The current findings extend previous data from alphabetic languages to Chinese, a logographic language with a particularly deep orthography, and have implications for reading-dysfluency remediation. Copyright © 2017 Elsevier Ltd. All rights reserved.
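The d' scores mentioned above follow standard signal-detection arithmetic, z(hit rate) minus z(false-alarm rate); a minimal sketch is given below, assuming a common log-linear correction for extreme rates (the paper's exact correction is not specified here).

# Standard signal-detection d' for a 1-back task: z(hit rate) - z(false-alarm
# rate). A log-linear correction keeps rates of 0 or 1 from producing infinite
# z-scores (one common convention; an assumption here).
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(f"d' = {d_prime(hits=42, misses=8, false_alarms=6, correct_rejections=44):.2f}")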
NASA Astrophysics Data System (ADS)
Nagai, Yukie; Hosoda, Koh; Morita, Akio; Asada, Minoru
This study examines how human infants acquire the ability of joint attention through interactions with their caregivers, from the viewpoint of cognitive developmental robotics. In this paper, a mechanism by which a robot acquires sensorimotor coordination for joint attention through bootstrap learning is described. Bootstrap learning is a process by which a learner acquires higher capabilities through interactions with its environment, based on embedded lower capabilities, even if the learner does not receive any external evaluation and the environment is not controlled. The proposed mechanism for bootstrap learning of joint attention consists of two mechanisms embedded in the robot: visual attention, and learning with self-evaluation. The former finds and attends to a salient object in the robot's field of view; the latter evaluates the success of visual attention (not joint attention) and then learns the sensorimotor coordination. Since the object that the robot looks at on the basis of visual attention does not always correspond to the object that the caregiver is looking at in an environment containing multiple objects, the robot encounters incorrect learning situations for joint attention as well as correct ones. However, the robot is expected to statistically discard the data from the incorrect situations as outliers, because the correlation between sensor input and motor output is weaker for them than for the correct situations, and consequently to acquire appropriate sensorimotor coordination for joint attention even if the caregiver does not provide any task evaluation to the robot. The experimental results show the validity of the proposed mechanism. It is suggested that the proposed mechanism could explain the developmental mechanism of infants' joint attention, because the learning process of the robot's joint attention can be regarded as equivalent to the developmental process of infants' joint attention.
Object-Based Attention and Cognitive Tunneling
ERIC Educational Resources Information Center
Jarmasz, Jerzy; Herdman, Chris M.; Johannsdottir, Kamilla Run
2005-01-01
Simulator-based research has shown that pilots cognitively tunnel their attention on head-up displays (HUDs). Cognitive tunneling has been linked to object-based visual attention on the assumption that HUD symbology is perceptually grouped into an object that is perceived and attended separately from the external scene. The present research…
Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees
2014-01-01
Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation. PMID:24639586
Wiegand, Iris; Töllner, Thomas; Habekost, Thomas; Dyrholm, Mads; Müller, Hermann J; Finke, Kathrin
2014-08-01
An individual's visual attentional capacity is characterized by 2 central processing resources, visual perceptual processing speed and visual short-term memory (vSTM) storage capacity. Based on Bundesen's theory of visual attention (TVA), independent estimates of these parameters can be obtained from mathematical modeling of performance in a whole report task. The framework's neural interpretation (NTVA) further suggests distinct brain mechanisms underlying these 2 functions. Using an interindividual difference approach, the present study was designed to establish the respective ERP correlates of both parameters. Participants with higher compared to participants with lower processing speed were found to show significantly reduced visual N1 responses, indicative of higher efficiency in early visual processing. By contrast, for participants with higher relative to lower vSTM storage capacity, contralateral delay activity over visual areas was enhanced while overall nonlateralized delay activity was reduced, indicating that holding (the maximum number of) items in vSTM relies on topographically specific sustained activation within the visual system. Taken together, our findings show that the 2 main aspects of visual attentional capacity are reflected in separable neurophysiological markers, validating a central assumption of NTVA. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Preparatory attention in visual cortex.
Battistoni, Elisa; Stein, Timo; Peelen, Marius V
2017-05-01
Top-down attention is the mechanism that allows us to selectively process goal-relevant aspects of a scene while ignoring irrelevant aspects. A large body of research has characterized the effects of attention on neural activity evoked by a visual stimulus. However, attention also includes a preparatory phase before stimulus onset in which the attended dimension is internally represented. Here, we review neurophysiological, functional magnetic resonance imaging, magnetoencephalography, electroencephalography, and transcranial magnetic stimulation (TMS) studies investigating the neural basis of preparatory attention, both when attention is directed to a location in space and when it is directed to nonspatial stimulus attributes (content-based attention) ranging from low-level features to object categories. Results show that both spatial and content-based attention lead to increased baseline activity in neural populations that selectively code for the attended attribute. TMS studies provide evidence that this preparatory activity is causally related to subsequent attentional selection and behavioral performance. Attention thus acts by preactivating selective neurons in the visual cortex before stimulus onset. This appears to be a general mechanism that can operate on multiple levels of representation. We discuss the functional relevance of this mechanism, its limitations, and its relation to working memory, imagery, and expectation. We conclude by outlining open questions and future directions. © 2017 New York Academy of Sciences.
TVA-Based Assessment of Visual Attention Using Line-Drawings of Fruits and Vegetables
Wang, Tianlu; Gillebert, Celine R.
2018-01-01
Visuospatial attention and short-term memory allow us to prioritize, select, and briefly maintain part of the visual information that reaches our senses. These cognitive abilities are quantitatively accounted for by Bundesen’s theory of visual attention (TVA; Bundesen, 1990). Previous studies have suggested that TVA-based assessments are sensitive to inter-individual differences in spatial bias, visual short-term memory capacity, top-down control, and processing speed in healthy volunteers as well as in patients with various neurological and psychiatric conditions. However, most neuropsychological assessments of attention and executive functions, including TVA-based assessment, make use of alphanumeric stimuli and/or are performed verbally, which can pose difficulties for individuals who have troubles processing letters or numbers. Here we examined the reliability of TVA-based assessments when stimuli are used that are not alphanumeric, but instead based on line-drawings of fruits and vegetables. We compared five TVA parameters quantifying the aforementioned cognitive abilities, obtained by modeling accuracy data on a whole/partial report paradigm using conventional alphabet stimuli versus the food stimuli. Significant correlations were found for all TVA parameters, indicating a high parallel-form reliability. Split-half correlations assessing internal reliability, and correlations between predicted and observed data assessing goodness-of-fit were both significant. Our results provide an indication that line-drawings of fruits and vegetables can be used for a reliable assessment of attention and short-term memory. PMID:29535660
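The reliability computations named above are conventional correlational statistics; the sketch below illustrates them with synthetic data, using Pearson correlations for parallel-form reliability and the Spearman-Brown correction for split-half reliability. The data, sample size, and parameter values are assumptions, not the study's results.

# Sketch of the reliability analyses (synthetic data): parallel-form
# reliability = correlation between a TVA parameter estimated from letter vs.
# food stimuli; split-half reliability uses the Spearman-Brown correction.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
speed_letters = rng.normal(25, 5, size=30)               # e.g., parameter C, letter form
speed_food = speed_letters + rng.normal(0, 2, size=30)   # same parameter, food form

r_parallel, p = pearsonr(speed_letters, speed_food)
print(f"Parallel-form reliability: r = {r_parallel:.2f} (p = {p:.3g})")

# Split-half: correlate estimates from odd vs. even trials, then correct.
odd_half = speed_letters + rng.normal(0, 3, size=30)
even_half = speed_letters + rng.normal(0, 3, size=30)
r_half, _ = pearsonr(odd_half, even_half)
r_split = 2 * r_half / (1 + r_half)                      # Spearman-Brown
print(f"Split-half reliability (corrected): {r_split:.2f}")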
Yamin, Stephanie; Stinchcombe, Arne; Gagnon, Sylvain
2016-06-01
This study sought to predict the driving performance of drivers with Alzheimer's disease (AD) using measures of attention, visual processing, and global cognition. Simulated driving performance of individuals with mild AD (n = 20) was contrasted with that of a group of healthy controls (n = 21). Performance on measures of global cognitive function and on specific tests of attention and visual processing was examined in relation to simulated driving performance. Strong associations were observed between measures of attention, notably the Test of Everyday Attention (sustained attention; r = -.651, P = .002) and the Useful Field of View (r = .563, P = .010), and driving performance among drivers with mild AD. The Visual Object and Space Perception Test-object was significantly correlated with the occurrence of crashes (r = .652, P = .002). Tests of global cognition did not correlate with simulated driving outcomes. The results suggest that professionals should exercise caution when extrapolating driving performance from global cognitive indicators. © The Author(s) 2015.
Attention Determines Contextual Enhancement versus Suppression in Human Primary Visual Cortex.
Flevaris, Anastasia V; Murray, Scott O
2015-09-02
Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus the exact same contextual stimulus arrangement can have completely opposite effects on neural responses-in some cases leading to orientation-tuned suppression and in other cases leading to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depends on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in the primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and underlying neural mechanisms are debated. Here we show that the response in V1 to a stimulus in the same context can either be suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well established properties of visual cortical responses: response normalization and feature-based enhancement. Copyright © 2015 the authors 0270-6474/15/3512273-08$15.00/0.
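As a rough illustration of how one model can yield both suppression and enhancement, the sketch below implements a minimal one-dimensional normalization model with attentional gain, in the spirit of Reynolds and Heeger's (2009) normalization model of attention rather than the authors' exact model; kernel widths, gain values, and stimulus layout are assumptions.

# Minimal 1-D normalization-with-attention sketch:
# response = excitatory drive / (broad suppressive drive + sigma),
# where attentional gain multiplies the input before both stages. The net
# contextual effect flips between enhancement and suppression depending on
# whether the center or the flankers receive the gain. Values are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter1d

stimulus = np.zeros(200)
stimulus[[80, 100, 120]] = 1.0                 # center element with two flankers

def center_response(attn_gain):
    drive = gaussian_filter1d(stimulus * attn_gain, sigma=3)          # narrow excitation
    suppression = gaussian_filter1d(stimulus * attn_gain, sigma=20)   # broad pool
    return (drive / (suppression + 0.05))[100]

attend_center = np.ones(200)
attend_center[95:105] = 3.0
attend_flankers = np.ones(200)
attend_flankers[75:85] = 3.0
attend_flankers[115:125] = 3.0

print(f"Center response, attention on center:   {center_response(attend_center):.2f}")
print(f"Center response, attention on flankers: {center_response(attend_flankers):.2f}")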
Wilaiprasitporn, Theerawit; Yagi, Tohru
2015-01-01
This research demonstrates the orientation-modulated attention effect on visual evoked potential. We combined this finding with our previous findings about the motion-modulated attention effect and used the result to develop novel visual stimuli for a personal identification number (PIN) application based on a brain-computer interface (BCI) framework. An electroencephalography amplifier with a single electrode channel was sufficient for our application. A computationally inexpensive algorithm and small datasets were used in processing. Seven healthy volunteers participated in experiments to measure offline performance. Mean accuracy was 83.3% at 13.9 bits/min. Encouraged by these results, we plan to continue developing the BCI-based personal identification application toward real-time systems.
Object-based target templates guide attention during visual search.
Berggren, Nick; Eimer, Martin
2018-05-03
During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (sustained posterior contralateral negativity; SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms poststimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Johnson, Aaron W; Duda, Kevin R; Sheridan, Thomas B; Oman, Charles M
2017-03-01
This article describes a closed-loop, integrated human-vehicle model designed to help understand the underlying cognitive processes that influenced changes in subject visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment. Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator's estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness. We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator's estimates of system states. The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention ≤3.6% for all simulator instruments. The model's predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data. Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates. Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator's visual attention during control mode transitions can produce reallocations in situation awareness of certain states.
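The SEEV component can be read as a weighted sum over areas of interest (AOIs), here extended with a state-uncertainty term as the abstract describes. The sketch below is a schematic of that idea only; the AOI names, scores, and coefficients are invented for illustration and are not the authors' model parameters.

# Schematic SEEV-style attention allocation across areas of interest (AOIs),
# extended with a state-uncertainty term: attention share is proportional to a
# weighted sum of salience, (negative) effort, expectancy, value, and the
# uncertainty of the corresponding state estimate. All numbers are illustrative.
import numpy as np

aois = ["altitude", "attitude", "fuel", "horizontal velocity"]
scores = np.array([
    # salience, effort, expectancy, value, uncertainty
    [0.8, 0.2, 0.9, 1.0, 0.3],
    [0.6, 0.1, 0.7, 0.9, 0.2],
    [0.3, 0.4, 0.2, 0.5, 0.1],
    [0.5, 0.3, 0.6, 0.8, 0.9],   # uncertainty grows after a control mode transition
])
weights = np.array([1.0, -0.5, 1.0, 1.0, 1.5])   # effort enters negatively

priority = scores @ weights
p_attend = priority / priority.sum()
for name, p in zip(aois, p_attend):
    print(f"{name:20s} predicted attention share: {p:.2f}")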
Visual attention: Linking prefrontal sources to neuronal and behavioral correlates.
Clark, Kelsey; Squire, Ryan Fox; Merrikhi, Yaser; Noudoost, Behrad
2015-09-01
Attention is a means of flexibly selecting and enhancing a subset of sensory input based on the current behavioral goals. Numerous signatures of attention have been identified throughout the brain, and now experimenters are seeking to determine which of these signatures are causally related to the behavioral benefits of attention, and the source of these modulations within the brain. Here, we review the neural signatures of attention throughout the brain, their theoretical benefits for visual processing, and their experimental correlations with behavioral performance. We discuss the importance of measuring cue benefits as a way to distinguish between impairments on an attention task, which may instead be visual or motor impairments, and true attentional deficits. We examine evidence for various areas proposed as sources of attentional modulation within the brain, with a focus on the prefrontal cortex. Lastly, we look at studies that aim to link sources of attention to its neuronal signatures elsewhere in the brain. Copyright © 2015. Published by Elsevier Ltd.
Object formation in visual working memory: Evidence from object-based attention.
Zhou, Jifan; Zhang, Haihang; Ding, Xiaowei; Shui, Rende; Shen, Mowei
2016-09-01
We report on how visual working memory (VWM) forms intact perceptual representations of visual objects from sub-object elements. Specifically, when objects were divided into fragments and sequentially encoded into VWM, the fragments were involuntarily integrated into objects in VWM, as evidenced by the occurrence of both positive and negative object-based attention effects: in Experiment 1, when subjects' attention was cued to a location occupied by the VWM object, a target presented at the location of that object was perceived as occurring earlier than one presented at the location of a different object; in Experiment 2, responses to a target were significantly slower when a distractor was presented at the same location as the cued object. These results suggest that object fragments can be integrated into objects within VWM in a manner similar to that of visual perception. Copyright © 2016 Elsevier B.V. All rights reserved.
Inhibition of Return and Object-Based Attentional Selection
ERIC Educational Resources Information Center
List, Alexandra; Robertson, Lynn C.
2007-01-01
Visual attention research has revealed that attentional allocation can occur in space- and/or object-based coordinates. Using the direct and elegant design of R. Egly, J. Driver, and R. Rafal (1994), the present experiments tested whether space- and object-based inhibition of return (IOR) emerge under similar time courses. The experiments were…
Effect of display size on visual attention.
Chen, I-Ping; Liao, Chia-Ning; Yeh, Shih-Hao
2011-06-01
Attention plays an important role in the design of human-machine interfaces. However, current knowledge about attention is largely based on data obtained when using devices of moderate display size. With advancement in display technology comes the need for understanding attention behavior over a wider range of viewing sizes. The effect of display size on test participants' visual search performance was studied. The participants (N = 12) performed two types of visual search tasks, that is, parallel and serial search, under three display-size conditions (16 degrees, 32 degrees, and 60 degrees). Serial, but not parallel, search was affected by display size. In the serial task, mean reaction time for detecting a target increased with the display size.
Visual attention to food cues in obesity: an eye-tracking study.
Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M
2014-12-01
Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts however no between weight group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.
The effect of visual salience on memory-based choices.
Pooresmaeili, Arezoo; Bach, Dominik R; Dolan, Raymond J
2014-02-01
Deciding whether a stimulus is the "same" as or "different" from a previously presented one involves integrating incoming sensory information, working memory, and perceptual decision making. Visual selective attention plays a crucial role in selecting the relevant information that informs a subsequent course of action. Previous studies have mainly investigated the role of visual attention during the encoding phase of working memory tasks. In this study, we investigate whether manipulating bottom-up attention by changing stimulus visual salience impacts later stages of memory-based decisions. In two experiments, we asked subjects to identify whether a stimulus had the same or a different feature as a memorized sample. We manipulated the visual salience of the test stimuli by varying a task-irrelevant feature contrast. Subjects chose a visually salient item more often when they looked for matching features and less often when they looked for a nonmatch. This pattern of results indicates that salient items are more likely to be identified as a match. We interpret the findings in terms of capacity limitations at a comparison stage, where a visually salient item is more likely to exhaust resources, leading it to be prematurely parsed as a match.
Humphreys, Glyn W
2016-10-01
The Treisman Bartlett lecture, reported in the Quarterly Journal of Experimental Psychology in 1988, provided a major overview of the feature integration theory of attention. This has continued to be a dominant account of human visual attention to this day. The current paper provides a summary of the work reported in the lecture and an update on critical aspects of the theory as applied to visual object perception. The paper highlights the emergence of findings that pose significant challenges to the theory and which suggest that revisions are required that allow for (a) several rather than a single form of feature integration, (b) some forms of feature integration to operate preattentively, (c) stored knowledge about single objects and interactions between objects to modulate perceptual integration, (d) the application of feature-based inhibition to object files where visual features are specified, which generates feature-based spreading suppression and scene segmentation, and (e) a role for attention in feature confirmation rather than feature integration in visual selection. A feature confirmation account of attention in object perception is outlined.
Wilkinson, Krista M; Light, Janice
2011-12-01
Many individuals with complex communication needs may benefit from visual aided augmentative and alternative communication systems. In visual scene displays (VSDs), language concepts are embedded into a photograph of a naturalistic event. Humans play a central role in communication development and might be important elements in VSDs. However, many VSDs omit human figures. In this study, the authors sought to describe the distribution of visual attention to humans in naturalistic scenes as compared with other elements. Nineteen college students observed 8 photographs in which a human figure appeared near 1 or more items that might be expected to compete for visual attention (such as a Christmas tree or a table loaded with food). Eye-tracking technology allowed precise recording of participants' gaze. The fixation duration over a 7-s viewing period and latency to view elements in the photograph were measured. Participants fixated on the human figures more rapidly and for longer than expected based on the size of these figures, regardless of the other elements in the scene. Human figures attract attention in a photograph even when presented alongside other attractive distracters. Results suggest that humans may be a powerful means to attract visual attention to key elements in VSDs.
Majerus, Steve; Attout, Lucie; D'Argembeau, Arnaud; Degueldre, Christian; Fias, Wim; Maquet, Pierre; Martinez Perez, Trecy; Stawarczyk, David; Salmon, Eric; Van der Linden, Martial; Phillips, Christophe; Balteau, Evelyne
2012-05-01
Interactions between the neural correlates of short-term memory (STM) and attention have been actively studied in the visual STM domain but much less in the verbal STM domain. Here we show that the same attention mechanisms that have been shown to shape the neural networks of visual STM also shape those of verbal STM. Based on previous research in visual STM, we contrasted the involvement of a dorsal attention network centered on the intraparietal sulcus supporting task-related attention and a ventral attention network centered on the temporoparietal junction supporting stimulus-related attention. We observed that, with increasing STM load, the dorsal attention network was activated while the ventral attention network was deactivated, especially during early maintenance. Importantly, activation in the ventral attention network increased in response to task-irrelevant stimuli briefly presented during the maintenance phase of the STM trials but only during low-load STM conditions, which were associated with the lowest levels of activity in the dorsal attention network during encoding and early maintenance. By demonstrating a trade-off between task-related and stimulus-related attention networks during verbal STM, this study highlights the dynamics of attentional processes involved in verbal STM.
Organizational and Spatial Dynamics of Attentional Focusing in Hierarchically Structured Objects
ERIC Educational Resources Information Center
Yeari, Menahem; Goldsmith, Morris
2011-01-01
Is the focusing of visual attention object-based, space-based, both, or neither? Attentional focusing latencies in hierarchically structured compound-letter objects were examined, orthogonally manipulating global size (larger vs. smaller) and organizational complexity (two-level structure vs. three-level structure). In a dynamic focusing task,…
Remote sensing image ship target detection method based on visual attention model
NASA Astrophysics Data System (ADS)
Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong
2017-11-01
The traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can allocate computing resources selectively according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. In view of this, a method for ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method reduces computational complexity while improving detection accuracy, and improves the efficiency of ship target detection in remote sensing images.
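The abstract does not specify the attention model used; as a generic illustration of bottom-up candidate selection, the sketch below computes a simple intensity center-surround saliency map on a synthetic sea scene and flags connected salient regions. The scales, threshold, and synthetic image are assumptions, not the paper's implementation.

# Generic bottom-up saliency sketch for candidate ship regions: intensity
# center-surround differences across scales, followed by thresholding and
# connected-component labeling. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(1)
image = rng.normal(0.3, 0.05, size=(256, 256))   # synthetic sea clutter
image[120:128, 60:90] += 0.6                     # bright ship-like target

saliency = np.zeros_like(image)
for center, surround in [(1, 4), (2, 8), (4, 16)]:
    saliency += np.abs(gaussian_filter(image, center) -
                       gaussian_filter(image, surround))
saliency /= saliency.max()

candidates, n = label(saliency > 0.5)            # connected salient regions
print(f"{n} candidate region(s) flagged for further verification")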
Turatto, Massimo; Pascucci, David
2016-04-01
Attention is known to be crucial for learning and to regulate activity-dependent brain plasticity. Here we report the opposite scenario, with plasticity affecting the onset-driven automatic deployment of spatial attention. Specifically, we showed that attentional capture is subject to habituation, a fundamental form of plasticity consisting in a response decrement to repeated stimulations. Participants performed a visual discrimination task with focused attention, while being occasionally exposed to a distractor consisting of a high-luminance peripheral onset. With practice, short-term and long-term habituation of attentional capture emerged, making the visual-attention system fully immune to distraction. Furthermore, spontaneous recovery of attentional capture was found when the distractor was temporarily removed. Capture, however, once habituated was surprisingly resistant to spontaneous recovery, taking from several minutes to days to recover. The results suggest that the mechanisms subserving exogenous attentional orienting are subject to profound and enduring plastic changes based on previous experience, and that habituation can impact high-order cognitive functions. Copyright © 2016 Elsevier Inc. All rights reserved.
Is goal-directed attentional guidance just intertrial priming? A review.
Lamy, Dominique F; Kristjánsson, Arni
2013-07-01
According to most models of selective visual attention, our goals at any given moment and saliency in the visual field determine attentional priority. But selection is not carried out in isolation--we typically track objects through space and time. This is not well captured within the distinction between goal-directed and saliency-based attentional guidance. Recent studies have shown that selection is strongly facilitated when the characteristics of the objects to be attended and of those to be ignored remain constant between consecutive selections. These studies have generated the proposal that goal-directed or top-down effects are best understood as intertrial priming effects. Here, we provide a detailed overview and critical appraisal of the arguments, experimental strategies, and findings that have been used to promote this idea, along with a review of studies providing potential counterarguments. We divide this review according to different types of attentional control settings that observers are thought to adopt during visual search: feature-based settings, dimension-based settings, and singleton detection mode. We conclude that priming accounts for considerable portions of effects attributed to top-down guidance, but that top-down guidance can be independent of intertrial priming.
Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition
Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua
2015-01-01
Humans can easily understand other people’s actions through visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper aiming for automatic action recognition. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of the three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cell tuned to different speeds and orientations in time for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field caused by lateral connections of spiking neuron networks in V1, we propose surround suppressive operator to further process spatiotemporal information. Visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider the characteristic of the neural code: mean motion map based on analysis of spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with the state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
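A core ingredient of such V1 models is the spatiotemporal Gabor filter bank; the sketch below builds a small bank of drifting Gabor kernels tuned to different orientations and speeds. This is a generic construction under assumed parameter values, not the authors' specific filter family.

# Sketch of a spatiotemporal Gabor kernel drifting at a given speed, of the
# kind used to model V1 simple-cell receptive fields tuned to orientation and
# motion. Parameter values are illustrative assumptions.
import numpy as np

def spatiotemporal_gabor(size=15, frames=7, orientation=0.0, speed=1.0,
                         wavelength=6.0, sigma=3.0, sigma_t=2.0):
    """Return a (frames, size, size) Gabor kernel drifting at `speed` px/frame."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    kernel = np.empty((frames, size, size))
    for t in range(frames):
        phase = 2 * np.pi * (xr - speed * (t - frames // 2)) / wavelength
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2)
                          - (t - frames // 2) ** 2 / (2 * sigma_t**2))
        kernel[t] = envelope * np.cos(phase)
    return kernel

bank = [spatiotemporal_gabor(orientation=o, speed=s)
        for o in np.linspace(0, np.pi, 4, endpoint=False) for s in (0.5, 1.0, 2.0)]
print(f"Filter bank: {len(bank)} filters, each of shape {bank[0].shape}")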
Predicting Visual Distraction Using Driving Performance Data
Kircher, Katja; Ahlstrom, Christer
2010-01-01
Behavioral variables are often used as performance indicators (PIs) of visual or internal distraction induced by secondary tasks. The objective of this study is to investigate whether visual distraction can be predicted from driving-performance PIs in a naturalistic setting. Visual distraction is here defined by a gaze-based real-time distraction detection algorithm called AttenD. Seven drivers each used an instrumented vehicle for one month in a small-scale field operational test. For each of the visual distraction events detected by AttenD, seven PIs, such as steering wheel reversal rate and throttle hold, were calculated. Corresponding data were also calculated for time periods during which the drivers were classified as attentive. For each PI, means for the distracted and attentive states were compared using t-tests over different time-window sizes (2-40 s), and the window width with the smallest resulting p-value was selected as optimal. Based on the optimized PIs, logistic regression was used to predict whether the drivers were attentive or distracted. The logistic regression resulted in predictions that were 76% correct (sensitivity = 77%, specificity = 76%). The conclusion is that there is a relationship between behavioral variables and visual distraction, but the relationship is not strong enough to accurately predict visual driver distraction. Instead, behavioral PIs are probably best suited as a complement to eye-tracking-based algorithms, making them more accurate and robust. PMID:21050615
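The classification step described above is a standard logistic regression evaluated by accuracy, sensitivity, and specificity; a minimal sketch with synthetic stand-in data is shown below. The feature distributions and effect sizes are assumptions, not the study's data.

# Sketch of the classification step: predict distracted vs. attentive windows
# from driving-performance PIs with logistic regression, then report accuracy,
# sensitivity, and specificity. Data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)
n = 400
distracted = rng.integers(0, 2, size=n)                  # 1 = distracted window
X = np.column_stack([
    rng.normal(4.0 + 1.0 * distracted, 1.5, size=n),     # e.g., steering reversal rate
    rng.normal(0.3 + 0.15 * distracted, 0.1, size=n),    # e.g., throttle hold share
])

model = LogisticRegression().fit(X, distracted)
pred = model.predict(X)
tn, fp, fn, tp = confusion_matrix(distracted, pred).ravel()
print(f"accuracy = {(tp + tn) / n:.2f}, "
      f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")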
Katzner, Steffen; Busse, Laura; Treue, Stefan
2009-01-01
Directing visual attention to spatial locations or to non-spatial stimulus features can strongly modulate responses of individual cortical sensory neurons. Effects of attention typically vary in magnitude, not only between visual cortical areas but also between individual neurons from the same area. Here, we investigate whether the size of attentional effects depends on the match between the tuning properties of the recorded neuron and the perceptual task at hand. We recorded extracellular responses from individual direction-selective neurons in the middle temporal area (MT) of rhesus monkeys trained to attend either to the color or the motion signal of a moving stimulus. We found that effects of spatial and feature-based attention in MT, which are typically observed in tasks allocating attention to motion, were very similar even when attention was directed to the color of the stimulus. We conclude that attentional modulation can occur in extrastriate cortex, even under conditions without a match between the tuning properties of the recorded neuron and the perceptual task at hand. Our data are consistent with theories of object-based attention describing a transfer of attention from relevant to irrelevant features, within the attended object and across the visual field. These results argue for a unified attentional system that modulates responses to a stimulus across cortical areas, even if a given area is specialized for processing task-irrelevant aspects of that stimulus.
The attentive brain: insights from developmental cognitive neuroscience.
Amso, Dima; Scerif, Gaia
2015-10-01
Visual attention functions as a filter to select environmental information for learning and memory, making it the first step in the eventual cascade of thought and action systems. Here, we review studies of typical and atypical visual attention development and explain how they offer insights into the mechanisms of adult visual attention. We detail interactions between visual processing and visual attention, as well as the contribution of visual attention to memory. Finally, we discuss genetic mechanisms underlying attention disorders and how attention may be modified by training.
Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon
2014-11-01
Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased the blood oxygen level-dependent (BOLD) signals in visual areas processing "target feature" but not in "distracter feature"-processing regions. TMS-induced BOLD signals increase in motion-responsive visual cortex (MT+) when motion was attended in a display with moving dots superimposed on face stimuli, but in face-responsive fusiform area (FFA) when faces were attended to. These TMS effects on BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property. © The Author 2013. Published by Oxford University Press.
Spering, Miriam; Carrasco, Marisa
2012-01-01
Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids–stimuli composed of two orthogonally-drifting gratings, presented separately to each eye–in human observers. Monocular adaptation to one grating prior to the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating’s motion direction or to both (neutral condition). We show that observers were better in detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating’s motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted towards the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. PMID:22649238
Traffic Sign Detection Based on Biologically Visual Mechanism
NASA Astrophysics Data System (ADS)
Hu, X.; Zhu, X.; Li, D.
2012-07-01
Traffic sign recognition (TSR) is an important problem in intelligent transportation systems (ITS) and is receiving increasing attention for driver-assistance systems, unmanned vehicles, and related applications. TSR consists of two steps, detection and recognition, and this paper describes a new traffic sign detection method. Because traffic signs are designed to comply with the visual attention mechanisms of human observers, it is reasonable to use a visual attention mechanism to detect them. In our method, the whole scene is first analyzed by a visual attention model to obtain candidate areas where traffic signs might be located. These candidate areas are then analyzed according to the shape characteristics of traffic signs to detect them. Traffic sign detection experiments show that the proposed method is more effective and robust than other existing saliency-based detection methods.
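As a loose illustration of the shape-verification step applied to candidate areas, the sketch below checks region circularity (4*pi*area / perimeter^2), which is close to 1 for round sign boards. The threshold and the sample measurements are invented for illustration and are not taken from the paper.

# Schematic shape check for candidate regions: keep regions whose circularity
# is near 1, as expected for circular signs. Values are illustrative.
import math

def is_circular(area_px, perimeter_px, tolerance=0.2):
    circularity = 4 * math.pi * area_px / perimeter_px ** 2
    return abs(circularity - 1.0) <= tolerance

candidates = {"region_1": (1950, 158), "region_2": (1200, 260)}  # (area, perimeter)
for name, (area, perim) in candidates.items():
    print(f"{name}: {'keep' if is_circular(area, perim) else 'reject'}")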
Parafoveal magnification: visual acuity does not modulate the perceptual span in reading.
Miellet, Sébastien; O'Donnell, Patrick J; Sereno, Sara C
2009-06-01
Models of eye guidance in reading rely on the concept of the perceptual span, the amount of information perceived during a single eye fixation, which is considered to be a consequence of visual and attentional constraints. To directly investigate attentional mechanisms underlying the perceptual span, we implemented a new reading paradigm, parafoveal magnification (PM), that compensates for how visual acuity drops off as a function of retinal eccentricity. On each fixation and in real time, parafoveal text is magnified to equalize its perceptual impact with that of concurrent foveal text. Experiment 1 demonstrated that PM does not increase the amount of text that is processed, supporting an attention-based account of eye movements in reading. Experiment 2 explored a contentious issue that differentiates competing models of eye movement control and showed that, even when parafoveal information is enlarged, visual attention in reading is allocated in a serial fashion from word to word.
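The abstract does not give the magnification function; the sketch below assumes a common inverse-linear scaling of letter size with eccentricity as a stand-in for the acuity compensation PM requires. The base size and the E2 constant are illustrative assumptions, not the paper's calibration.

# Sketch of eccentricity-dependent text scaling: character size grows with
# distance from the current fixation so that parafoveal letters roughly match
# the perceptual impact of foveal letters. Constants are assumptions.
def magnified_size(eccentricity_deg, base_size_deg=0.3, e2=2.0):
    """Character size (deg) to render at a given eccentricity from fixation."""
    return base_size_deg * (1.0 + eccentricity_deg / e2)

for ecc in (0, 2, 4, 8):
    print(f"letter at {ecc} deg from fixation -> render at "
          f"{magnified_size(ecc):.2f} deg")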
The Role of Visual Processing Speed in Reading Speed Development
Lobier, Muriel; Dubois, Matthieu; Valdois, Sylviane
2013-01-01
A steady increase in reading speed is the hallmark of normal reading acquisition. However, little is known of the influence of visual attention capacity on children's reading speed. The number of distinct visual elements that can be simultaneously processed at a glance (dubbed the visual attention span), predicts single-word reading speed in both normal reading and dyslexic children. However, the exact processes that account for the relationship between the visual attention span and reading speed remain to be specified. We used the Theory of Visual Attention to estimate visual processing speed and visual short-term memory capacity from a multiple letter report task in eight and nine year old children. The visual attention span and text reading speed were also assessed. Results showed that visual processing speed and visual short term memory capacity predicted the visual attention span. Furthermore, visual processing speed predicted reading speed, but visual short term memory capacity did not. Finally, the visual attention span mediated the effect of visual processing speed on reading speed. These results suggest that visual attention capacity could constrain reading speed in elementary school children. PMID:23593117
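The TVA quantities referred to here (visual processing speed and visual short-term memory capacity) can be pictured as an exponential race in which only the first K finishers before exposure offset are stored. The sketch below simulates that race under assumed parameter values; it is a schematic of the TVA logic, not the study's fitting procedure.

# Schematic TVA-style race: each of n_items letters finishes processing after
# an exponential time with rate v = C / n_items, and only the first K
# finishers before the exposure ends are encoded. Values are illustrative.
import numpy as np

def simulate_whole_report(C=48.0, n_items=6, K=3, exposure_ms=100, t0_ms=20,
                          n_trials=20000, seed=3):
    """Mean number of letters encoded per trial under the race model."""
    rng = np.random.default_rng(seed)
    v = C / n_items                                   # items per second each
    times = t0_ms + rng.exponential(1000.0 / v, size=(n_trials, n_items))
    times.sort(axis=1)
    encoded = (times[:, :K] <= exposure_ms).sum(axis=1)
    return encoded.mean()

for exposure in (50, 100, 200):
    print(f"exposure {exposure:3d} ms -> mean report "
          f"{simulate_whole_report(exposure_ms=exposure):.2f} letters")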
Is that disgust I see? Political ideology and biased visual attention.
Oosterhoff, Benjamin; Shook, Natalie J; Ford, Cameron
2018-01-15
Considerable evidence suggests that political liberals and conservatives vary in the way they process and respond to valenced (i.e., negative versus positive) information, with conservatives generally displaying greater negativity biases than liberals. Less is known about whether liberals and conservatives differentially prioritize certain forms of negative information over others. Across two studies using eye-tracking methodology, we examined differences in visual attention to negative scenes and facial expressions based on self-reported political ideology. In Study 1, scenes rated high in fear, disgust, sadness, and neutrality were presented simultaneously. Greater endorsement of socially conservative political attitudes was associated with less attentional engagement with disgust scenes (i.e., lower dwell time) and more attentional engagement with neutral scenes. Socially conservative political attitudes were not significantly associated with visual attention to fear or sad scenes. In Study 2, images depicting facial expressions of fear, disgust, sadness, and neutrality were presented simultaneously. Greater endorsement of socially conservative political attitudes was associated with greater attentional engagement with facial expressions depicting disgust and less attentional engagement with neutral faces. Visual attention to fearful or sad faces was not related to social conservatism. Endorsement of economically conservative political attitudes was not consistently associated with biases in visual attention across both studies. These findings support disease-avoidance models and suggest that social conservatism may be rooted in a greater sensitivity to disgust-related information. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Peyrin, C.; Lallier, M.; Demonet, J. F.; Pernet, C.; Baciu, M.; Le Bas, J. F.; Valdois, S.
2012-01-01
A dissociation between phonological and visual attention (VA) span disorders has been reported in dyslexic children. This study investigates whether this cognitively-based dissociation has a neurobiological counterpart through the investigation of two cases of developmental dyslexia. LL showed a phonological disorder but preserved VA span whereas…
ERIC Educational Resources Information Center
Stenneken, Prisca; Egetemeir, Johanna; Schulte-Körne, Gerd; Müller, Hermann J.; Schneider, Werner X.; Finke, Kathrin
2011-01-01
The cognitive causes as well as the neurological and genetic basis of developmental dyslexia, a complex disorder of written language acquisition, are intensely discussed with regard to multiple-deficit models. Accumulating evidence has revealed dyslexics' impairments in a variety of tasks requiring visual attention. The heterogeneity of these…
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2011-02-01
The ability to attend to multiple objects that move in the visual field is important for many aspects of daily functioning. The attentional capacity for such dynamic tracking, however, is highly limited and undergoes age-related decline. Several aspects of the tracking process can influence performance. Here, we investigated effects of feature-based interference from distractor objects that appear in unattended regions of the visual field with a hemifield-tracking task. Younger and older participants performed an attentional tracking task in one hemifield while distractor objects were concurrently presented in the unattended hemifield. Feature similarity between objects in the attended and unattended hemifields as well as motion speed and the number of to-be-tracked objects were parametrically manipulated. The results show that increasing feature overlap leads to greater interference from the unattended visual field. This effect of feature-based interference was only present in the slow speed condition, indicating that the interference is mainly modulated by perceptual demands. High-performing older adults showed a similar interference effect as younger adults, whereas low-performing older adults showed poor tracking performance overall.
Selective Attention and Sensory Modality in Aging: Curses and Blessings.
Van Gerven, Pascal W M; Guerreiro, Maria J S
2016-01-01
The notion that selective attention is compromised in older adults as a result of impaired inhibitory control is well established. Yet it is primarily based on empirical findings covering the visual modality. Auditory and, especially, cross-modal selective attention are remarkably underexposed in the literature on aging. In the past 5 years, we have attempted to fill these voids by investigating performance of younger and older adults on equivalent tasks covering all four combinations of visual or auditory target, and visual or auditory distractor information. In doing so, we have demonstrated that older adults are especially impaired in auditory selective attention with visual distraction. This pattern of results was not mirrored by the results from our psychophysiological studies, however, in which both enhancement of target processing and suppression of distractor processing appeared to be age equivalent. We currently conclude that: (1) age-related differences of selective attention are modality dependent; (2) age-related differences of selective attention are limited; and (3) it remains an open question whether modality-specific age differences in selective attention are due to impaired distractor inhibition, impaired target enhancement, or both. These conclusions put the longstanding inhibitory deficit hypothesis of aging in a new perspective.
Audiovisual associations alter the perception of low-level visual motion
Kafaligonul, Hulusi; Oluk, Can
2015-01-01
Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception are dependent on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing also plays some role. PMID:25873869
Modeling global scene factors in attention
NASA Astrophysics Data System (ADS)
Torralba, Antonio
2003-07-01
Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. (c) 2003 Optical Society of America
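As a rough, hedged illustration of the contextual-cueing idea above (a minimal sketch, not the published model), the snippet below pools low-level gradient energy over a coarse grid as a stand-in for global scene statistics and learns a linear mapping from those statistics to an object's expected vertical location; the images, the feature choice, and the regression setup are all toy assumptions.

```python
import numpy as np

# Toy stand-in for "global scene features": coarse spatial pooling of gradient energy.
def global_features(image, grid=4):
    gy, gx = np.gradient(image.astype(float))
    energy = np.hypot(gx, gy)
    h, w = energy.shape
    return np.array([energy[i * h // grid:(i + 1) * h // grid,
                            j * w // grid:(j + 1) * w // grid].mean()
                     for i in range(grid) for j in range(grid)])

rng = np.random.default_rng(0)
images = rng.random((50, 64, 64))   # hypothetical training scenes
locations = rng.random(50)          # known normalized vertical object positions

# Learn a linear mapping from global features to expected object location.
X = np.stack([global_features(im) for im in images])
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], locations, rcond=None)

# For a new scene, the mapping yields a location prior before any local search.
new_scene = rng.random((64, 64))
prior = np.r_[global_features(new_scene), 1.0] @ w
print(f"predicted vertical location prior: {float(prior):.2f}")
```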
Horowitz-Kraus, Tzipi
2017-10-01
Reading difficulty (RD; or dyslexia) is a heritable condition characterized by slow, inaccurate reading accompanied by executive dysfunction, specifically with respect to visual attention. The current study was designed to examine the effect of familial history of RD on the relationship between reading and visual attention abilities in children with RD using a functional MRI reading task. Seventy-one children with RD participated in the study. Based on parental reports of the existence of RD in one or both of each child's parents, children with RD were divided into two groups: (1) those with a familial history of RD and (2) those without a familial history of RD. Reading and visual attention measures were collected from all participants. Functional MRI data during word reading were acquired in 30 participants of the entire cohort. Children with or without a familial history of RD demonstrated below-average reading and visual attention scores, with a greater interaction between these measures in the group with a familial history of RD. Greater bilateral and more diffuse activation during word reading was also found in this group. We suggest that a familial history of RD is related to a greater association between lower reading abilities and visual attention abilities. Parental history of RD therefore may be an important preschool screener (before reading age) to prompt early intervention focused on executive functions and reading-related skills.
Neural Correlates of Individual Differences in Infant Visual Attention and Recognition Memory
Reynolds, Greg D.; Guy, Maggie W.; Zhang, Dantong
2010-01-01
Past studies have identified individual differences in infant visual attention based upon peak look duration during initial exposure to a stimulus. Colombo and colleagues (e.g., Colombo & Mitchell, 1990) found that infants that demonstrate brief visual fixations (i.e., short lookers) during familiarization are more likely to demonstrate evidence of recognition memory during subsequent stimulus exposure than infants that demonstrate long visual fixations (i.e., long lookers). The current study utilized event-related potentials to examine possible neural mechanisms associated with individual differences in visual attention and recognition memory for 6- and 7.5-month-old infants. Short- and long-looking infants viewed images of familiar and novel objects during ERP testing. There was a stimulus type by looker type interaction at temporal and frontal electrodes on the late slow wave (LSW). Short lookers demonstrated a LSW that was significantly greater in amplitude in response to novel stimulus presentations. No significant differences in LSW amplitude were found based on stimulus type for long lookers. These results indicate deeper processing and recognition memory of the familiar stimulus for short lookers. PMID:21666833
Category-based guidance of spatial attention during visual search for feature conjunctions.
Nako, Rebecca; Grubert, Anna; Eimer, Martin
2016-10-01
The question whether alphanumerical category is involved in the control of attentional target selection during visual search remains a contentious issue. We tested whether category-based attentional mechanisms would guide the allocation of attention under conditions where targets were defined by a combination of alphanumerical category and a basic visual feature, and search displays could contain both targets and partially matching distractor objects. The N2pc component was used as an electrophysiological marker of attentional object selection in tasks where target objects were defined by a conjunction of color and category (Experiment 1) or shape and category (Experiment 2). Some search displays contained the target or a nontarget object that matched either the target color/shape or its category among 3 nonmatching distractors. In other displays, the target and a partially matching nontarget object appeared together. N2pc components were elicited not only by targets and by color- or shape-matching nontargets, but also by category-matching nontarget objects, even on trials where a target was present in the same display. On these trials, the summed N2pc components to the 2 types of partially matching nontargets were initially equal in size to the target N2pc, suggesting that attention was allocated simultaneously and independently to all objects with target-matching features during the early phase of attentional processing. Results demonstrate that alphanumerical category is a genuine guiding feature that can operate in parallel with color or shape information to control the deployment of attention during visual search. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Dissociable spatial and non-spatial attentional deficits after circumscribed thalamic stroke.
Kraft, Antje; Irlbacher, Kerstin; Finke, Kathrin; Kaufmann, Christian; Kehrer, Stefanie; Liebermann, Daniela; Bundesen, Claus; Brandt, Stephan A
2015-03-01
Thalamic nuclei act as sensory, motor and cognitive relays between multiple subcortical areas and the cerebral cortex. They play a crucial role in cognitive functions such as executive functioning, memory and attention. In the acute period after thalamic stroke, attentional deficits are common. The precise functional relevance of specific nuclei or vascular subregions of the thalamus for attentional subfunctions is still unclear. The theory of visual attention (TVA) allows the measurement of four independent attentional parameters (visual short-term memory storage capacity (VSTM), visual perceptual processing speed, selective control, and spatial weighting). We combined TVA-based parameter assessment with lesion-symptom mapping in standard stereotactic space in sixteen patients (mean age 41.2 ± 11.0 SD; 6 females) with focal thalamic lesions in the medial (N = 9), lateral (N = 5), anterior (N = 1) or posterior (N = 1) vascular territories of the thalamus. Compared with an age-matched control group of 52 subjects (mean age 40.1 ± 6.4, 35 females), the patients with thalamic lesions were, on the group level, mildly impaired in visual processing speed and VSTM. Patients with lateral thalamic lesions showed a deficit in processing speed while all other TVA parameters were within the normal range. Medial thalamic lesions can be associated with a spatial bias and extinction of targets either in the ipsilesional or the contralesional field. A posterior case with a thalamic lesion of the pulvinar replicated a finding of Habekost and Rostrup (2006), demonstrating a spatial bias to the ipsilesional field, as suggested by the neural theory of visual attention (NTVA) (Bundesen, Habekost, & Kyllingsbæk, 2011). A case with an anterior-medial thalamic lesion showed reduced selective attentional control. We conclude that lesions in distinct vascular subregions of the thalamus are associated with distinct attentional syndromes (medial = spatial bias, lateral = processing speed). Copyright © 2015 Elsevier Ltd. All rights reserved.
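For readers unfamiliar with how TVA parameters such as processing speed (C) and VSTM capacity (K) map onto report performance, the following is a minimal, hedged simulation of a whole-report trial structure; the parameter values, exposure durations, and equal-weight assumption are illustrative and are not the fitting procedure used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_whole_report(n_items=6, C=40.0, K=3.5, t0=0.02,
                          exposure=0.1, n_trials=5000):
    """Mean items reported: items race exponentially at total rate C after
    a perceptual threshold t0, and at most K items fit into VSTM."""
    rate = C / n_items                          # equal attentional weights assumed
    effective = max(exposure - t0, 0.0)
    finish_times = rng.exponential(1.0 / rate, size=(n_trials, n_items))
    encoded = (finish_times <= effective).sum(axis=1)
    # Fractional K realized probabilistically (e.g., K = 3.5 -> capacity 3 or 4).
    capacity = np.floor(K) + (rng.random(n_trials) < (K - np.floor(K)))
    return np.minimum(encoded, capacity).mean()

for exposure in (0.05, 0.10, 0.20):
    print(f"exposure {exposure * 1000:.0f} ms -> mean score "
          f"{simulate_whole_report(exposure=exposure):.2f}")
```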
Subtyping of Toddlers with ASD Based on Patterns of Social Attention Deficits
2014-10-01
Award number: W81XWH-13-1-0179. Subject terms: ASD, subgrouping, toddlers, heterogeneity, eye-tracking, visual attention, dyadic orienting, hierarchical.
ERIC Educational Resources Information Center
Fagioli, Sabrina; Macaluso, Emiliano
2009-01-01
Behavioral studies indicate that subjects are able to divide attention between multiple streams of information at different locations. However, it is still unclear to what extent the observed costs reflect processes specifically associated with spatial attention, versus more general interference due to the concurrent monitoring of multiple streams of…
Exploring the relationship between object realism and object-based attention effects.
Roque, Nelson; Boot, Walter R
2015-09-01
Visual attention prioritizes processing of locations in space, and evidence also suggests that the benefits of attention can be shaped by the presence of objects (object-based attention). However, the prevalence of object-based attention effects has been called into question recently by evidence from a large-sampled study employing classic attention paradigms (Pilz et al., 2012). We conducted two experiments to explore factors that might determine when and if object-based attention effects are observed, focusing on the degree to which the concreteness and realism of objects might contribute to these effects. We adapted the classic attention paradigm first reported by Egly, Driver, and Rafal (1994) by replacing abstract bar stimuli in some conditions with objects that were more concrete and familiar to participants: items of silverware. Furthermore, we varied the realism of these items of silverware, presenting either cartoon versions or photo-realistic versions. Contrary to predictions, increased realism did not increase the size of object-based effects. In fact, no clear object-based effects were observed in either experiment, consistent with previous failures to replicate these effects in similar paradigms. While object-based attention may exist, and may have important influences on how we parse the visual world, these and other findings suggest that the two-object paradigm typically relied upon to study object-based effects may not be the best paradigm to investigate these issues. Copyright © 2015 Elsevier B.V. All rights reserved.
Common capacity-limited neural mechanisms of selective attention and spatial working memory encoding
Fusser, Fabian; Linden, David E J; Rahm, Benjamin; Hampel, Harald; Haenschel, Corinna; Mayer, Jutta S
2011-01-01
One characteristic feature of visual working memory (WM) is its limited capacity, and selective attention has been implicated as a limiting factor. A possible reason why attention constrains the number of items that can be encoded into WM is that the two processes share limited neural resources. Functional magnetic resonance imaging (fMRI) studies have indeed demonstrated commonalities between the neural substrates of WM and attention. Here we investigated whether such overlapping activations reflect interacting neural mechanisms that could result in capacity limitations. To independently manipulate the demands on attention and WM encoding within a single task, we combined visual search and delayed discrimination of spatial locations. Participants were presented with a search array and performed easy or difficult visual search in order to encode one, three or five positions of target items into WM. Our fMRI data revealed colocalised activation for attention-demanding visual search and WM encoding in distributed posterior and frontal regions. However, further analysis yielded two patterns of results. Activity in prefrontal regions increased additively with increased demands on WM and attention, indicating regional overlap without functional interaction. Conversely, the WM load-dependent activation in visual, parietal and premotor regions was severely reduced during high attentional demand. We interpret this interaction as indicating the sites of shared capacity-limited neural resources. Our findings point to differential contributions of prefrontal and posterior regions to the common neural mechanisms that support spatial WM encoding and attention, providing new imaging evidence for attention-based models of WM encoding. PMID:21781193
Development of a Computerized Visual Search Test
ERIC Educational Resources Information Center
Reid, Denise; Babani, Harsha; Jon, Eugenia
2009-01-01
Visual attention and visual search are features of visual perception, essential for attending to and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information, including the format of the test, will be described. The test was designed…
Two different mechanisms support selective attention at different phases of training.
Itthipuripat, Sirawaj; Cha, Kexin; Byers, Anna; Serences, John T
2017-06-01
Selective attention supports the prioritized processing of relevant sensory information to facilitate goal-directed behavior. Studies in human subjects demonstrate that attentional gain of cortical responses can sufficiently account for attention-related improvements in behavior. On the other hand, studies using highly trained nonhuman primates suggest that reductions in neural noise can better explain attentional facilitation of behavior. Given the importance of selective information processing in nearly all domains of cognition, we sought to reconcile these competing accounts by testing the hypothesis that extensive behavioral training alters the neural mechanisms that support selective attention. We tested this hypothesis using electroencephalography (EEG) to measure stimulus-evoked visual responses from human subjects while they performed a selective spatial attention task over the course of ~1 month. Early in training, spatial attention led to an increase in the gain of stimulus-evoked visual responses. Gain was apparent within ~100 ms of stimulus onset, and a quantitative model based on signal detection theory (SDT) successfully linked the magnitude of this gain modulation to attention-related improvements in behavior. However, after extensive training, this early attentional gain was eliminated even though there were still substantial attention-related improvements in behavior. Accordingly, the SDT-based model required noise reduction to account for the link between the stimulus-evoked visual responses and attentional modulations of behavior. These findings suggest that training can lead to fundamental changes in the way attention alters the early cortical responses that support selective information processing. Moreover, these data facilitate the translation of results across different species and across experimental procedures that employ different behavioral training regimes.
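To make the contrast between the two accounts above concrete, here is a hedged signal-detection illustration (not the authors' fitted model) showing that multiplying the evoked responses by a gain factor or shrinking the noise by the same factor yields the same behavioral sensitivity d'; all numbers are arbitrary.

```python
def d_prime(resp_target, resp_nontarget, sigma):
    """Sensitivity for discriminating target from non-target responses."""
    return (resp_target - resp_nontarget) / sigma

base_target, base_nontarget, base_sigma = 1.0, 0.6, 0.5

# Early in training: attentional gain multiplies the stimulus-evoked responses.
gain = 1.8
d_gain = d_prime(gain * base_target, gain * base_nontarget, base_sigma)

# Late in training: responses unchanged, but trial-to-trial noise is reduced.
d_noise = d_prime(base_target, base_nontarget, base_sigma / gain)

print(f"d' via response gain:   {d_gain:.2f}")
print(f"d' via noise reduction: {d_noise:.2f}")  # same behavioral benefit
```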
Object-based spatial attention when objects have sufficient depth cues.
Takeya, Ryuji; Kasai, Tetsuko
2015-01-01
Attention directed to a part of an object tends to obligatorily spread over all of the spatial regions that belong to the object, which may be critical for rapid object-recognition in cluttered visual scenes. Previous studies have generally used simple rectangles as objects and have shown that attention spreading is reflected by amplitude modulation in the posterior N1 component (150-200 ms poststimulus) of event-related potentials, while other interpretations (i.e., rectangular holes) may arise implicitly in early visual processing stages. By using modified Kanizsa-type stimuli that provided less ambiguity of depth ordering, the present study examined early event-related potential spatial-attention effects for connected and separated objects, both of which were perceived in front of (Experiment 1) and in back of (Experiment 2) the surroundings. Typical P1 (100-140 ms) and N1 (150-220 ms) attention effects of ERP in response to unilateral probes were observed in both experiments. Importantly, the P1 attention effect was decreased for connected objects compared to separated objects only in Experiment 1, and the typical object-based modulations of N1 were not observed in either experiment. These results suggest that spatial attention spreads over a figural object at earlier stages of processing than previously indicated, in three-dimensional visual scenes with multiple depth cues.
Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent?
Wahn, Basil; König, Peter
2017-01-01
Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select a limited amount of sensory input to process while other sensory input is neglected. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for each sensory modality or whether attentional resources are shared across sensory modalities. Recent studies have suggested that attentional resource allocation across sensory modalities is in part task-dependent. That is, the recruitment of attentional resources across the sensory modalities depends on whether processing involves object-based attention (e.g., the discrimination of stimulus attributes) or spatial attention (e.g., the localization of stimuli). In the present paper, we review findings in multisensory research related to this view. For the visual and auditory sensory modalities, findings suggest that distinct resources are recruited when humans perform object-based attention tasks, whereas for the visual and tactile sensory modalities, partially shared resources are recruited. If object-based attention tasks are time-critical, shared resources are recruited across the sensory modalities. When humans perform an object-based attention task in combination with a spatial attention task, partly shared resources are recruited across the sensory modalities as well. Conversely, for spatial attention tasks, attentional processing does consistently involve shared attentional resources for the sensory modalities. Generally, findings suggest that the attentional system flexibly allocates attentional resources depending on task demands. We propose that such flexibility reflects a large-scale optimization strategy that minimizes the brain's costly resource expenditures and simultaneously maximizes capability to process currently relevant information.
A relational structure of voluntary visual-attention abilities
Skogsberg, KatieAnn; Grabowecky, Marcia; Wilt, Joshua; Revelle, William; Iordanescu, Lucica; Suzuki, Satoru
2015-01-01
Many studies have examined attention mechanisms involved in specific behavioral tasks (e.g., search, tracking, distractor inhibition). However, relatively little is known about the relationships among those attention mechanisms. Is there a fundamental attention faculty that makes a person superior or inferior at most types of attention tasks, or do relatively independent processes mediate different attention skills? We focused on individual differences in voluntary visual-attention abilities using a battery of eleven representative tasks. An application of parallel analysis, hierarchical-cluster analysis, and multidimensional scaling to the inter-task correlation matrix revealed four functional clusters, representing spatiotemporal attention, global attention, transient attention, and sustained attention, organized along two dimensions, one contrasting spatiotemporal and global attention and the other contrasting transient and sustained attention. Comparison with the neuroscience literature suggests that the spatiotemporal-global dimension corresponds to the dorsal frontoparietal circuit and the transient-sustained dimension corresponds to the ventral frontoparietal circuit, with distinct sub-regions mediating the separate clusters within each dimension. We also obtained highly specific patterns of gender difference, and of deficits for college students with elevated ADHD traits. These group differences suggest that different mechanisms of voluntary visual attention can be selectively strengthened or weakened based on genetic, experiential, and/or pathological factors. PMID:25867505
Lawton, Teri; Shelley-Tremblay, John
2017-01-01
The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background, figure-ground discrimination, improves vision and cognitive functioning in dyslexics, as well as typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination (PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills following neurotraining exercises, more than following computer-based guided reading (Raz-Kids (RK)). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training the entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12-weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency and working memory for both dyslexic and normal students. Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement discrimination training in improving high-level cognitive functions such as attention, reading acquisition and working memory. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways in the dorsal stream is a fundamental cause of dyslexia and being at-risk for reading problems in normal students, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological or language deficits, requiring a paradigm shift from phonologically-based treatment of dyslexia to a visually-based treatment. This study shows that visual movement-discrimination can be used not only to diagnose dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning. PMID:28555097
An object-based visual attention model for robotic applications.
Yu, Yuanlong; Mann, George K I; Gosine, Raymond G
2010-10-01
By extending the integrated competition hypothesis, this paper presents an object-based visual attention model, which selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, proto-object-based saliency is evaluated. The most salient proto-object is selected for attention, and it is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.
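The final selection step described above lends itself to a simple sketch. The following hedged Python illustration (not the authors' implementation) pools a location-based saliency map within each preattentively segmented proto-object and attends the proto-object with the highest pooled saliency; the saliency map and segmentation here are toy placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy location-based saliency map (stand-in for the top-down/bottom-up mediation output).
saliency = rng.random((32, 32))

# Toy proto-object labels from a preattentive segmentation (0 = background).
labels = np.zeros((32, 32), dtype=int)
labels[4:12, 4:12] = 1
labels[18:28, 20:30] = 2

def most_salient_proto_object(saliency, labels):
    """Average the saliency within each proto-object and return the winner."""
    ids = [i for i in np.unique(labels) if i != 0]
    scores = {i: float(saliency[labels == i].mean()) for i in ids}
    return max(scores, key=scores.get), scores

winner, scores = most_salient_proto_object(saliency, labels)
print("proto-object saliencies:", {k: round(v, 3) for k, v in scores.items()})
print("attended proto-object:", winner)
```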
Visual attention in egocentric field-of-view using RGB-D data
NASA Astrophysics Data System (ADS)
Olesova, Veronika; Benesova, Wanda; Polatsek, Patrik
2017-03-01
Most of the existing solutions for predicting visual attention focus solely on 2D images and disregard any depth information. This has always been a weak point, since depth is an inseparable part of biological vision. This paper presents a novel method of saliency map generation based on the results of our experiments with egocentric visual attention and an investigation of its correlation with perceived depth. We propose a model to predict attention using a superpixel representation, with the assumption that contrast objects are usually salient and have a sparser spatial distribution of superpixels than their background. To incorporate depth information into this model, we propose three different depth techniques. The evaluation is done on our new RGB-D dataset created with SMI eye-tracker glasses and a KinectV2 device.
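As a hedged sketch of how depth could be folded into a region-based saliency score (an illustration of the general idea, not one of the paper's three techniques), the snippet below scores coarse image regions, used here as crude stand-ins for superpixels, by feature contrast against the scene mean and boosts nearer regions using the depth map; all data and the weighting are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
color = rng.random((48, 48))   # toy luminance channel of an egocentric frame
depth = rng.random((48, 48))   # toy depth map, rescaled so 1 = near, 0 = far

def grid_regions(h, w, step=8):
    """Yield coarse rectangular regions as crude superpixel stand-ins."""
    for i in range(0, h, step):
        for j in range(0, w, step):
            yield (slice(i, i + step), slice(j, j + step))

def region_saliency(color, depth, depth_weight=0.5):
    global_mean = color.mean()
    scores = []
    for region in grid_regions(*color.shape):
        contrast = abs(color[region].mean() - global_mean)  # feature contrast term
        nearness = depth[region].mean()                     # simple depth prior
        scores.append((1 - depth_weight) * contrast + depth_weight * nearness)
    return np.array(scores)

scores = region_saliency(color, depth)
print("most salient region index:", int(scores.argmax()))
```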
Norup, Anne; Guldberg, Anne-Mette; Friis, Claus Radmer; Deurell, Eva Maria; Forchhammer, Hysse Birgitte
2016-07-15
The aims were to describe the work of an interdisciplinary visual team in a stroke unit providing early identification and assessment of patients with visual symptoms and, secondly, to investigate the frequency and type of visual deficits after stroke and their self-evaluated impact on everyday life. For a period of three months, all stroke patients with visual or visuo-attentional deficits were registered, and data concerning etiology, severity and localization of the stroke and initial visual symptoms were recorded. One month after discharge, patients were contacted for follow-up. Of 349 acute stroke admissions, 84 (24.1%) had visual or visuo-attentional deficits initially. Of these 84 patients, informed consent was obtained from 22 patients with a mean age of 67.7 years (SD 10.1), and the majority were female (59.1%). Based on the initial neurological examination, 45.4% had some kind of visual field defect, 27.2% had some kind of oculomotor nerve palsy, and about 31.8% had some kind of inattention or visual neglect. The patients were contacted for a phone-based follow-up one month after discharge, where 85.7% reported changes in their vision since their stroke. In this consecutive sample, a quarter of all stroke patients had visual or visuo-attentional deficits initially. This emphasizes that professionals should have increased awareness of the existence of such deficits after stroke in order to provide the necessary interdisciplinary assessment and rehabilitation.
Feature-selective attention enhances color signals in early visual areas of the human brain.
Müller, M M; Andersen, S; Trujillo, N J; Valdés-Sosa, P; Malinowski, P; Hillyard, S A
2006-09-19
We used an electrophysiological measure of selective stimulus processing (the steady-state visual evoked potential, SSVEP) to investigate feature-specific attention to color cues. Subjects viewed a display consisting of spatially intermingled red and blue dots that continually shifted their positions at random. The red and blue dots flickered at different frequencies and thereby elicited distinguishable SSVEP signals in the visual cortex. Paying attention selectively to either the red or blue dot population produced an enhanced amplitude of its frequency-tagged SSVEP, which was localized by source modeling to early levels of the visual cortex. A control experiment showed that this selection was based on color rather than flicker frequency cues. This signal amplification of attended color items provides an empirical basis for the rapid identification of feature conjunctions during visual search, as proposed by "guided search" models.
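The frequency-tagging logic described above can be illustrated with a short, hedged simulation (not the authors' analysis pipeline): each flickering dot population imprints its own frequency on the EEG, so attention to one population can be read out as the spectral amplitude at its tag frequency. The sampling rate, tag frequencies, and amplitudes below are arbitrary assumptions.

```python
import numpy as np

fs, dur = 500.0, 4.0                 # sampling rate (Hz), epoch length (s)
t = np.arange(0, dur, 1 / fs)
f_red, f_blue = 7.5, 12.0            # hypothetical flicker (tag) frequencies

rng = np.random.default_rng(3)
# Simulated occipital signal: the attended (red) tag is amplified relative to blue.
eeg = 2.0 * np.sin(2 * np.pi * f_red * t) + 1.0 * np.sin(2 * np.pi * f_blue * t)
eeg += rng.normal(scale=1.5, size=t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amplitude_at(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

print(f"SSVEP amplitude at {f_red} Hz (attended):   {amplitude_at(f_red):.3f}")
print(f"SSVEP amplitude at {f_blue} Hz (unattended): {amplitude_at(f_blue):.3f}")
```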
Visual search, visual streams, and visual architectures.
Green, M
1991-10-01
Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.
Using Computer Based Intervention
ERIC Educational Resources Information Center
Aliee, Zeinab Shams; Jomhari, Nazean; Rezaei, Reza; Alias, Norlidah
2013-01-01
One of the most common problems in autistic children is split attention. Split attention prevents autistic children from being able to focus attention on their learning and tasks. As a result, it is important to identify how to make autistic individuals focus attention on learning. Considering autistic individuals have higher visual abilities in…
Effect of attention therapy on reading comprehension.
Solan, Harold A; Shelley-Tremblay, John; Ficarra, Anthony; Silverman, Michael; Larson, Steven
2003-01-01
This study quantified the influence of visual attention therapy on the reading comprehension of Grade 6 children with moderate reading disabilities (RD) in the absence of specific reading remediation. Thirty students with below-average reading scores were identified using standardized reading comprehension tests. Fifteen children were placed randomly in the experimental group and 15 in the control group. The Attention Battery of the Cognitive Assessment System was administered to all participants. The experimental group received 12 one-hour sessions of individually monitored, computer-based attention therapy programs; the control group received no therapy during their 12-week period. Each group was retested on attention and reading comprehension measures. In order to stimulate selective and sustained visual attention, the vision therapy stressed various aspects of arousal, activation, and vigilance. At the completion of attention therapy, the mean standard attention and reading comprehension scores of the experimental group had improved significantly. The control group, however, showed no significant improvement in reading comprehension scores after 12 weeks. Although uncertainties still exist, this investigation supports the notion that visual attention is malleable and that attention therapy has a significant effect on reading comprehension in this often neglected population.
Keeping your eyes on the prize: anger and visual attention to threats and rewards.
Ford, Brett Q; Tamir, Maya; Brunyé, Tad T; Shirer, William R; Mahoney, Caroline R; Taylor, Holly A
2010-08-01
People's emotional states influence what they focus their attention on in their environment. For example, fear focuses people's attention on threats, whereas excitement may focus their attention on rewards. This study examined the effect of anger on overt visual attention to threats and rewards. Anger is an unpleasant emotion associated with approach motivation. If the effect of emotion on visual attention depends on valence, we would expect anger to focus people's attention on threats. If, however, the effect of emotion on visual attention depends on motivation, we would expect anger to focus people's attention on rewards. Using an eye tracker, we examined the effects of anger, fear, excitement, and a neutral emotional state on participants' overt visual attention to threatening, rewarding, and control images. We found that anger increased visual attention to rewarding information, but not to threatening information. These findings demonstrate that anger increases attention to potential rewards and suggest that the effects of emotions on visual attention are motivationally driven.
Visual attention measures predict pedestrian detection in central field loss: a pilot study.
Alberti, Concetta F; Horowitz, Todd; Bronstad, P Matthew; Bowers, Alex R
2014-01-01
The ability of visually impaired people to deploy attention effectively to maximize use of their residual vision in dynamic situations is fundamental to safe mobility. We conducted a pilot study to evaluate whether tests of dynamic attention (multiple object tracking; MOT) and static attention (Useful Field of View; UFOV) were predictive of the ability of people with central field loss (CFL) to detect pedestrian hazards in simulated driving. 11 people with bilateral CFL (visual acuity 20/30-20/200) and 11 age-similar normally-sighted drivers participated. Dynamic and static attention were evaluated with brief, computer-based MOT and UFOV tasks, respectively. Dependent variables were the log speed threshold for 60% correct identification of targets (MOT) and the increase in the presentation duration for 75% correct identification of a central target when a concurrent peripheral task was added (UFOV divided and selective attention subtests). Participants drove in a simulator and pressed the horn whenever they detected pedestrians that walked or ran toward the road. The dependent variable was the proportion of timely reactions (could have stopped in time to avoid a collision). UFOV and MOT performance of CFL participants was poorer than that of controls, and the proportion of timely reactions was also lower (worse) (84% and 97%, respectively; p = 0.001). For CFL participants, higher proportions of timely reactions correlated significantly with higher (better) MOT speed thresholds (r = 0.73, p = 0.01), with better performance on the UFOV divided and selective attention subtests (r = -0.66 and -0.62, respectively, p<0.04), with better contrast sensitivity scores (r = 0.54, p = 0.08) and smaller scotomas (r = -0.60, p = 0.05). Our results suggest that brief laboratory-based tests of visual attention may provide useful measures of functional visual ability of individuals with CFL relevant to more complex mobility tasks.
The detrimental influence of attention on time-to-contact perception.
Baurès, Robin; Balestra, Marianne; Rosito, Maxime; VanRullen, Rufin
2018-04-23
To what extent is attention necessary to estimate the time-to-contact (TTC) of a moving object, that is, to determine when the object will reach a specific point? While numerous studies have aimed at determining the visual cues and gaze strategy that allow this estimation, little is known about whether and how attention is involved or required in this process. To answer this question, we carried out an experiment in which the participants estimated the TTC of a moving ball, either alone (single-task condition) or concurrently with a Rapid Serial Visual Presentation task embedded within the ball (dual-task condition). The results showed that participants produced better estimates when attention was drawn away from the TTC task. This suggests that drawing attention away from TTC estimation limits cognitive interference, intrusion of knowledge, or expectations that would otherwise significantly modify the visually based TTC estimate, and argues that only limited attention is needed to estimate TTC correctly.
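For context, a purely visual TTC estimate is often described with the classic tau heuristic: TTC is approximately the object's angular size divided by its rate of angular expansion. The worked example below is a hedged illustration of that relation with made-up distances and speeds; it is not a claim about the cue the participants actually used.

```python
import math

distance = 10.0   # metres from the approaching ball to the observer (hypothetical)
speed = 5.0       # approach speed in m/s (hypothetical)
diameter = 0.22   # ball diameter in metres (hypothetical)
dt = 0.01         # small time step for the numerical expansion rate

theta = 2 * math.atan(diameter / (2 * distance))                      # angular size (rad)
theta_later = 2 * math.atan(diameter / (2 * (distance - speed * dt)))
theta_dot = (theta_later - theta) / dt                                # expansion rate (rad/s)

tau = theta / theta_dot   # tau-based TTC estimate
print(f"tau estimate: {tau:.2f} s (ground truth: {distance / speed:.2f} s)")
```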
Neuronal basis of covert spatial attention in the frontal eye field.
Thompson, Kirk G; Biscoe, Keri L; Sato, Takashi R
2005-10-12
The influential "premotor theory of attention" proposes that developing oculomotor commands mediate covert visual spatial attention. A likely source of this attentional bias is the frontal eye field (FEF), an area of the frontal cortex involved in converting visual information into saccade commands. We investigated the link between FEF activity and covert spatial attention by recording from FEF visual and saccade-related neurons in monkeys performing covert visual search tasks without eye movements. Here we show that the source of attention signals in the FEF is enhanced activity of visually responsive neurons. At the time attention is allocated to the visual search target, nonvisually responsive saccade-related movement neurons are inhibited. Therefore, in the FEF, spatial attention signals are independent of explicit saccade command signals. We propose that spatially selective activity in FEF visually responsive neurons corresponds to the mental spotlight of attention via modulation of ongoing visual processing.
Attentional sensitivity and asymmetries of vertical saccade generation in monkey
NASA Technical Reports Server (NTRS)
Zhou, Wu; King, W. M.; Shelhamer, M. J. (Principal Investigator)
2002-01-01
The first goal of this study was to systematically document asymmetries in vertical saccade generation. We found that visually guided upward saccades have not only shorter latencies, but higher peak velocities, shorter durations and smaller errors. The second goal was to identify possible mechanisms underlying the asymmetry in vertical saccade latencies. Based on a recent model of saccade generation, three stages of saccade generation were investigated using specific behavioral paradigms: attention shift to a visual target (CUED paradigm), initiation of saccade generation (GAP paradigm) and release of the motor command to execute the saccade (DELAY paradigm). Our results suggest that initiation of a saccade (or "ocular disengagement") and its motor release contribute little to the asymmetry in vertical saccade latency. However, analysis of saccades made in the CUED paradigm indicated that it took less time to shift attention to a target in the upper visual field than to a target in the lower visual field. These data suggest that higher attentional sensitivity to targets in the upper visual field may contribute to shorter latencies of upward saccades.
Bindings in working memory: The role of object-based attention.
Gao, Zaifeng; Wu, Fan; Qiu, Fangfang; He, Kaifeng; Yang, Yue; Shen, Mowei
2017-02-01
Over the past decade, it has been debated whether retaining bindings in working memory (WM) requires more attention than retaining constituent features, focusing on domain-general attention and space-based attention. Recently, we proposed that retaining bindings in WM needs more object-based attention than retaining constituent features (Shen, Huang, & Gao, 2015, Journal of Experimental Psychology: Human Perception and Performance, doi: 10.1037/xhp0000018). However, only unitized visual bindings were examined; to establish the role of object-based attention in retaining bindings in WM, more empirical evidence is required. We tested 4 new bindings that had been suggested to require no more attention than the constituent features in the WM maintenance phase: the two constituent features of a binding were stored in different WM modules (cross-module binding, Experiment 1), came from auditory and visual modalities (cross-modal binding, Experiment 2), or were temporally (cross-time binding, Experiment 3) or spatially (cross-space binding, Experiments 4-6) separated. In the critical condition, we added a secondary object feature-report task during the delay interval of the change-detection task, such that the secondary task competed for object-based attention with the to-be-memorized stimuli. If more object-based attention is required for retaining bindings than for retaining constituent features, the secondary task should impair the binding performance to a larger degree relative to the performance of constituent features. Indeed, Experiments 1-6 consistently revealed a significantly larger impairment for bindings than for the constituent features, suggesting that object-based attention plays a pivotal role in retaining bindings in WM.
Category-based attentional guidance can operate in parallel for multiple target objects.
Jenkins, Michael; Grubert, Anna; Eimer, Martin
2018-05-01
The question whether the control of attention during visual search is always feature-based or can also be based on the category of objects remains unresolved. Here, we employed the N2pc component as an on-line marker for target selection processes to compare the efficiency of feature-based and category-based attentional guidance. Two successive displays containing pairs of real-world objects (line drawings of kitchen or clothing items) were separated by a 10 ms SOA. In Experiment 1, target objects were defined by their category. In Experiment 2, one specific visual object served as target (exemplar-based search). On different trials, targets appeared either in one or in both displays, and participants had to report the number of targets (one or two). Target N2pc components were larger and emerged earlier during exemplar-based search than during category-based search, demonstrating the superior efficiency of feature-based attentional guidance. On trials where target objects appeared in both displays, both targets elicited N2pc components that overlapped in time, suggesting that attention was allocated in parallel to these target objects. Critically, this was the case not only in the exemplar-based task, but also when targets were defined by their category. These results demonstrate that attention can be guided by object categories, and that this type of category-based attentional control can operate concurrently for multiple target objects. Copyright © 2018 Elsevier B.V. All rights reserved.
Sakurada, Takeshi; Hirai, Masahiro; Watanabe, Eiju
2016-01-01
Motor learning performance has been shown to be affected by various cognitive factors such as the focus of attention and motor imagery ability. Most previous studies on motor learning have shown that directing participants' attention externally, such as to the outcome of an assigned body movement, can be more effective than directing it internally, such as to the body movement itself. However, to the best of our knowledge, no findings have been reported on how a focus of attention selected according to an individual's motor imagery ability affects motor learning performance. We measured individual motor imagery ability with the Movement Imagery Questionnaire and classified the participants into kinesthetic-dominant (n = 12) and visual-dominant (n = 8) groups based on the questionnaire score. Subsequently, the participants performed a motor learning task in which they traced a trajectory under visuomotor rotation. When the participants were required to direct their attention internally, the after-effects of the learning task in the kinesthetic-dominant group were significantly greater than those in the visual-dominant group. Conversely, when the participants were required to direct their attention externally, the after-effects of the visual-dominant group were significantly greater than those of the kinesthetic-dominant group. Furthermore, we found a significant positive correlation between the size of the after-effects and the modality dominance of motor imagery. These results suggest that an attention strategy suited to an individual's intrinsic motor imagery ability can improve performance during motor learning tasks.
Effects of visual attention on chromatic and achromatic detection sensitivities.
Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko
2014-05-01
Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on the sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual-task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while concurrently carrying out an attention task in the central visual field. Experiment 1 confirmed that the central attention task reduced peripheral spectral sensitivities more for short and long wavelengths than for middle wavelengths, so that visual attention changed the shape of the spectral sensitivity function. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. Experiment 2 showed that, in the dual-task condition, detection thresholds increased to a greater degree in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction. In Experiment 3 we showed that the peripheral threshold elevations depended on the combination of color directions of the central and peripheral stimuli. Since chromatic and achromatic responses are processed separately in the early visual pathways, the present results provide additional evidence that visual attention affects responses in the early visual pathways.
Modulation of visual physiology by behavioral state in monkeys, mice, and flies.
Maimon, Gaby
2011-08-01
When a monkey attends to a visual stimulus, neurons in visual cortex respond differently to that stimulus than when the monkey attends elsewhere. In the 25 years since the initial discovery, the study of attention in primates has been central to understanding flexible visual processing. Recent experiments demonstrate that visual neurons in mice and fruit flies are modulated by locomotor behaviors, like running and flying, in a manner that resembles attention-based modulations in primates. The similar findings across species argue for a more generalized view of state-dependent sensory processing and for a renewed dialogue among vertebrate and invertebrate research communities. Copyright © 2011 Elsevier Ltd. All rights reserved.
Feature-based attentional modulation increases with stimulus separation in divided-attention tasks.
Sally, Sharon L; Vidnyánsky, Zoltán; Papathomas, Thomas V
2009-01-01
Attention modifies our visual experience by selecting certain aspects of a scene for further processing. It is therefore important to understand the factors that govern the deployment of selective attention over the visual field. Both location- and feature-specific mechanisms of attention have been identified, and their modulatory effects can interact at a neural level (Treue and Martinez-Trujillo, 1999). The effects of spatial parameters on feature-based attentional modulation were examined for the feature dimensions of orientation, motion, and color using three divided-attention tasks. Subjects performed concurrent discriminations of two briefly presented targets (Gabor patches) to the left and right of a central fixation point at eccentricities of +/-2.5, 5, 10, and 15 degrees in the horizontal plane. Gabors were size-scaled to maintain consistent single-task performance across eccentricities. For all feature dimensions, the data show a linear increase in attentional effects with target separation. In a control experiment, Gabors were presented on an isoeccentric viewing arc at 10 and 15 degrees at the closest spatial separation (+/-2.5 degrees) of the main experiment. Under these conditions, feature-based attentional effects were largely eliminated. Our results are consistent with the hypothesis that feature-based attention prioritizes the processing of attended features. Feature-based attentional mechanisms may have helped direct the attentional focus to the appropriate target locations at greater separations, whereas similar assistance may not have been necessary at closer target spacings. The results of the present study specify conditions under which dual-task performance benefits from sharing similar target features and may therefore help elucidate the processes by which feature-based attention operates.
Visual Attention during Spatial Language Comprehension
Burigo, Michele; Knoeferle, Pia
2015-01-01
Spatial terms such as “above”, “in front of”, and “on the left of” are all essential for describing the location of one object relative to another object in everyday communication. Apprehending such spatial relations involves relating linguistic to object representations by means of attention. This requires at least one attentional shift, and models such as the Attentional Vector Sum (AVS) predict the direction of that attention shift, from the sausage to the box for spatial utterances such as “The box is above the sausage”. To the extent that this prediction generalizes to overt gaze shifts, a listener’s visual attention should shift from the sausage to the box. However, listeners tend to rapidly look at referents in their order of mention and even anticipate them based on linguistic cues, a behavior that predicts a converse attentional shift from the box to the sausage. Four eye-tracking experiments assessed the role of overt attention in spatial language comprehension by examining to what extent visual attention is guided by the words in the utterance and to what extent it also shifts “against the grain” of the unfolding sentence. The outcome suggests that comprehenders’ visual attention is predominantly guided by their interpretation of the spatial description. Visual shifts against the grain occurred only when comprehenders had some extra time, and their absence did not affect comprehension accuracy. However, the timing of this reverse gaze shift on a trial correlated with that trial’s verification time. Thus, while the timing of these gaze shifts is subtly related to the verification time, their presence is not necessary for successful verification of spatial relations. PMID:25607540
Intermodal Attention Shifts in Multimodal Working Memory.
Katus, Tobias; Grubert, Anna; Eimer, Martin
2017-04-01
Attention maintains task-relevant information in working memory (WM) in an active state. We investigated whether the attention-based maintenance of stimulus representations that were encoded through different modalities is flexibly controlled by top-down mechanisms that depend on behavioral goals. Distinct components of the ERP reflect the maintenance of tactile and visual information in WM. We concurrently measured tactile (tCDA) and visual contralateral delay activity (CDA) to track the attentional activation of tactile and visual information during multimodal WM. Participants simultaneously received tactile and visual sample stimuli on the left and right sides and memorized all stimuli on one task-relevant side. After 500 msec, an auditory retrocue indicated whether the sample set's tactile or visual content had to be compared with a subsequent test stimulus set. tCDA and CDA components that emerged simultaneously during the encoding phase were consistently reduced after retrocues that marked the corresponding (tactile or visual) modality as task-irrelevant. The absolute size of cue-dependent modulations was similar for the tCDA/CDA components and did not depend on the number of tactile/visual stimuli that were initially encoded into WM. Our results suggest that modality-specific maintenance processes in sensory brain regions are flexibly modulated by top-down influences that optimize multimodal WM representations for behavioral goals.
Hüttermann, Stefanie; Memmert, Daniel
2015-01-01
A great number of studies have shown that different motivational and mood states can influence human attentional processes in a variety of ways. Yet none of these studies have reliably quantified the exact changes of the attentional focus in order to compare attentional performance under different motivational and mood influences and, beyond that, to evaluate their effectiveness. In two studies, we explored subjects' differences in the breadth and distribution of attention as a function of motivational and mood manipulations. In Study 1, motivational orientation was classified in terms of regulatory focus (promotion vs. prevention), and in Study 2, mood was classified in terms of valence (positive vs. negative). Study 1 found a 10% wider distribution of visual attention in promotion-oriented subjects compared to prevention-oriented ones. Study 2 revealed a 22% widening of subjects' visual attentional breadth when they listened to happy music and a 36% narrowing when they listened to melancholic music. In total, the findings show that systematic differences and casual changes in the shape and scope of focused attention may be associated with different motivational and mood states.
Snyder, Adam C.; Foxe, John J.
2010-01-01
Retinotopically specific increases in alpha-band (~10 Hz) oscillatory power have been strongly implicated in the suppression of processing for irrelevant parts of the visual field during the deployment of visuospatial attention. Here, we asked whether this alpha suppression mechanism also plays a role in the nonspatial anticipatory biasing of feature-based attention. Visual word cues informed subjects what the task-relevant feature of an upcoming visual stimulus (S2) was, while high-density electroencephalographic recordings were acquired. We examined anticipatory oscillatory activity in the Cue-to-S2 interval (~2 s). Subjects were cued on a trial-by-trial basis to attend to either the color or direction of motion of an upcoming dot field array, and to respond when they detected that a subset of the dots differed from the majority along the target feature dimension. We used the features of color and motion, expressly because they have well known, spatially separated cortical processing areas, to distinguish shifts in alpha power over areas processing each feature. Alpha power from dorsal regions increased when motion was the irrelevant feature (i.e., color was cued), and alpha power from ventral regions increased when color was irrelevant. Thus, alpha-suppression mechanisms appear to operate during feature-based selection in much the same manner as has been shown for space-based attention. PMID:20237273
ERIC Educational Resources Information Center
Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.
2005-01-01
Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…
Orienting Attention to Sound Object Representations Attenuates Change Deafness
ERIC Educational Resources Information Center
Backer, Kristina C.; Alain, Claude
2012-01-01
According to the object-based account of attention, multiple objects coexist in short-term memory (STM), and we can selectively attend to a particular object of interest. Although there is evidence that attention can be directed to visual object representations, the assumption that attention can be oriented to sound object representations has yet…
Lin, Hung-Yu; Hsieh, Hsieh-Chun; Lee, Posen; Hong, Fu-Yuan; Chang, Wen-Dien; Liu, Kuo-Cheng
2017-08-01
This study explored auditory and visual attention in children with ADHD. In a randomized, two-period crossover design, 50 children with ADHD and 50 age- and sex-matched typically developing peers were assessed with the Test of Variables of Attention (TOVA). Deficits in visual attention were more severe than deficits in auditory attention in children with ADHD. In the auditory modality, the deficit in attentional inconsistency alone was sufficient to account for most cases of ADHD; in the visual modality, however, most of the children with ADHD showed deficits in sustained attention, response inhibition, and attentional inconsistency. Our results also showed that the deficit in attentional inconsistency is the most important indicator for diagnosing and intervening in ADHD when both auditory and visual modalities are considered. The findings provide strong evidence that the deficits of auditory attention differ from those of visual attention in children with ADHD.
Shalev, Nir; De Wandel, Linde; Dockree, Paul; Demeyere, Nele; Chechlacz, Magdalena
2017-10-03
The Theory of Visual Attention (TVA) provides a mathematical formalisation of the "biased competition" account of visual attention. Applying this model to individual performance in a free recall task allows the estimation of 5 independent attentional parameters: visual short-term memory (VSTM) capacity, speed of information processing, perceptual threshold of visual detection, attentional weights representing spatial distribution of attention (spatial bias), and the top-down selectivity index. While the TVA focuses on selection in space, complementary accounts of attention describe how attention is maintained over time, and how temporal processes interact with selection. A growing body of evidence indicates that different facets of attention interact and share common neural substrates. The aim of the current study was to modulate a spatial attentional bias via transfer effects, based on a mechanistic understanding of the interplay between spatial, selective and temporal aspects of attention. Specifically, we examined here: (i) whether a single administration of a lateralized sustained attention task could prime spatial orienting and lead to transferable changes in attentional weights (assigned to the left vs right hemi-field) and/or other attentional parameters assessed within the framework of TVA (Experiment 1); (ii) whether the effects of such spatial-priming on TVA parameters could be further enhanced by bi-parietal high-frequency transcranial random noise stimulation (tRNS) (Experiment 2). Our results demonstrate that spatial attentional bias, as assessed within the TVA framework, was primed by sustaining attention towards the right hemi-field, but this spatial-priming effect did not occur when sustaining attention towards the left. Furthermore, we show that bi-parietal high-frequency tRNS combined with the rightward spatial-priming resulted in an increased attentional selectivity. To conclude, we present a novel, theory-driven method for attentional modulation providing important insights into how the spatial and temporal processes in attention interact with attentional selection. Copyright © 2017 Elsevier Ltd. All rights reserved.
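For reference, the TVA rate equation that underlies this kind of parameter estimation can be stated compactly. The notation below follows Bundesen (1990) but is our own compressed sketch, not the authors' exact fitting model.

```latex
% Standard TVA rate equation (Bundesen, 1990); notation ours, not the authors'.
% v(x,i) is the rate at which object x is encoded as a member of category i.
\[
  v(x,i) \;=\; \eta(x,i)\,\beta_i\,\frac{w_x}{\sum_{z \in S} w_z},
  \qquad
  w_x \;=\; \sum_{j \in R} \eta(x,j)\,\pi_j .
\]
% The five parameters typically derived from fits of this race model are:
% K (VSTM capacity), C = sum over all v(x,i) (processing speed), t0 (perceptual
% threshold), w_lambda = w_left / (w_left + w_right) (spatial bias), and
% alpha = w_distractor / w_target (top-down selectivity).
```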
Improved Visual Cognition through Stroboscopic Training
Appelbaum, L. Gregory; Schroeder, Julia E.; Cain, Matthew S.; Mitroff, Stephen R.
2011-01-01
Humans have a remarkable capacity to learn and adapt, but surprisingly little research has demonstrated generalized learning in which new skills and strategies can be used flexibly across a range of tasks and contexts. In the present work we examined whether generalized learning could result from visual–motor training under stroboscopic visual conditions. Individuals were assigned to either an experimental condition that trained with stroboscopic eyewear or to a control condition that underwent identical training with non-stroboscopic eyewear. The training consisted of multiple sessions of athletic activities during which participants performed simple drills such as throwing and catching. To determine if training led to generalized benefits, we used computerized measures to assess perceptual and cognitive abilities on a variety of tasks before and after training. Computer-based assessments included measures of visual sensitivity (central and peripheral motion coherence thresholds), transient spatial attention (a useful field of view – dual task paradigm), and sustained attention (multiple-object tracking). Results revealed that stroboscopic training led to significantly greater re-test improvement in central visual field motion sensitivity and transient attention abilities. No training benefits were observed for peripheral motion sensitivity or peripheral transient attention abilities, nor were benefits seen for sustained attention during multiple-object tracking. These findings suggest that stroboscopic training can effectively improve some, but not all aspects of visual perception and attention. PMID:22059078
Carlisle, Nancy B.; Woodman, Geoffrey F.
2014-01-01
Biased competition theory proposes that representations in working memory drive visual attention to select similar inputs. However, behavioral tests of this hypothesis have led to mixed results. These inconsistent findings could be due to the inability of behavioral measures to reliably detect the early, automatic effects on attentional deployment that the memory representations exert. Alternatively, executive mechanisms may govern how working memory representations influence attention based on higher-level goals. In the present study, we tested these hypotheses using the N2pc component of participants’ event-related potentials (ERPs) to directly measure the early deployments of covert attention. Participants searched for a target in an array that sometimes contained a memory-matching distractor. In Experiments 1–3, we manipulated the difficulty of the target discrimination and the proximity of distractors, but consistently observed that covert attention was deployed to the search targets and not the memory-matching distractors. In Experiment 4, we showed that when participants’ goal involved attending to memory-matching items that these items elicited a large and early N2pc. Our findings demonstrate that working memory representations alone are not sufficient to guide early deployments of visual attention to matching inputs and that goal-dependent executive control mediates the interactions between working memory representations and visual attention. PMID:21254796
Visual memory and sustained attention impairment in youths with autism spectrum disorders.
Chien, Y-L; Gau, S S-F; Shang, C-Y; Chiu, Y-N; Tsai, W-C; Wu, Y-Y
2015-08-01
An uneven neurocognitive profile is a hallmark of autism spectrum disorder (ASD). Studies focusing on visual memory performance in ASD have yielded inconsistent results. We investigated visual memory and sustained attention in youths with ASD and typically developing (TD) youths. We recruited 143 pairs of youths with ASD (males 93.7%; mean age 13.1, s.d. 3.5 years) and age- and sex-matched TD youths. The ASD group consisted of 67 youths with autistic disorder (autism) and 76 with Asperger's disorder (AS) based on the DSM-IV criteria. They were assessed using the Cambridge Neuropsychological Test Automated Battery, comprising visual memory tasks [spatial recognition memory (SRM), delayed matching to sample (DMS), and paired associates learning (PAL)] and a sustained attention task (rapid visual information processing; RVP). Youths with ASD performed significantly worse than TD youths on most of the tasks; the significance disappeared in the superior intelligence quotient (IQ) subgroup. Response latency on the tasks did not differ between the ASD and TD groups. Age had significant main effects on the SRM, DMS, RVP, and some PAL measures, and interacted with diagnosis for DMS and RVP performance. There was no significant difference between autism and AS on the visual tasks. Our findings imply that youths with ASD had wide-ranging visual memory and sustained attention impairments that were moderated by age and IQ, which supports temporal and frontal lobe dysfunction in ASD. The lack of difference between autism and AS implies that visual memory and sustained attention cannot distinguish these two ASD subtypes, which supports the DSM-5 ASD criteria.
Attention improves encoding of task-relevant features in the human visual cortex.
Jehee, Janneke F M; Brady, Devin K; Tong, Frank
2011-06-01
When spatial attention is directed toward a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer's task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in visual cortical areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature and not when the contrast of the grating had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks, but color-selective responses were enhanced only when color was task relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information.
Exogenous temporal cues enhance recognition memory in an object-based manner.
Ohyama, Junji; Watanabe, Katsumi
2010-11-01
Exogenous attention enhances the perception of attended items in both a space-based and an object-based manner. Exogenous attention also improves recognition memory for attended items in the space-based mode. However, it has not been examined whether object-based exogenous attention enhances recognition memory. To address this issue, we examined whether a sudden visual change in a task-irrelevant stimulus (an exogenous cue) would affect participants' recognition memory for items that were serially presented around a cued time. The results showed that recognition accuracy for an item was strongly enhanced when the visual cue occurred at the same location and time as the item (Experiments 1 and 2). The memory enhancement effect occurred when the exogenous visual cue and an item belonged to the same object (Experiments 3 and 4) and even when the cue was counterpredictive of the timing of an item to be asked about (Experiment 5). The present study suggests that an exogenous temporal cue automatically enhances the recognition accuracy for an item that is presented at close temporal proximity to the cue and that recognition memory enhancement occurs in an object-based manner.
Haeger, Mathias; Bock, Otmar; Memmert, Daniel; Hüttermann, Stefanie
2018-01-01
Virtual reality offers a good possibility for implementing real-life tasks in a laboratory-based training or testing scenario. Thus, computerized training in a driving simulator offers an ecologically valid training approach. Visual attention has been shown to influence driving performance, so we used the reverse approach and tested the influence of driving training on visual attention and executive functions. Thirty-seven healthy older participants (mean age: 71.46 ± 4.09 years; 17 men and 20 women) took part in our controlled experimental study. We examined transfer effects of a four-week driving training (three sessions per week) on visual attention, executive function, and motor skill. Effects were analyzed using repeated-measures analysis of variance with group and time as main factors to test for training-related benefits of the intervention. Results revealed improvements for the intervention group in divided visual attention; however, there were benefits neither in the other cognitive domains nor in the additional motor task. Thus, there were no broad training-induced transfer effects from this ecologically valid training regime. This lack of findings could be attributed to insufficient training intensity or to a participant-induced bias following the cancelled randomization process.
Kyllingsbæk, Søren; Sy, Jocelyn L; Giesbrecht, Barry
2011-05-01
The allocation of visual processing capacity is a key topic in studies and theories of visual attention. The load theory of Lavie (1995) proposes that allocation happens in two steps, in which processing resources are first allocated to task-relevant stimuli and remaining capacity then 'spills over' to task-irrelevant distractors. In contrast, the Theory of Visual Attention (TVA) proposed by Bundesen (1990) assumes that allocation happens in a single step, in which processing capacity is allocated to all stimuli, both task-relevant and task-irrelevant, in proportion to their relative attentional weight. Here we present data from two partial report experiments in which we varied the number and discriminability of the task-irrelevant stimuli (Experiment 1) and perceptual load (Experiment 2). The TVA fitted the data of the two experiments well, thus favoring the simpler explanation with a single step of capacity allocation. We also show that the effects of varying perceptual load can only be explained by a combination of processing-capacity allocation and limits in visual working memory. Finally, we link the results to processing capacity understood at the neural level, based on the neural theory of visual attention by Bundesen et al. (2005). Copyright © 2010 Elsevier Ltd. All rights reserved.
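To make the contrast between the two allocation schemes concrete, the following is a minimal, illustrative sketch (not the authors' code); the weights, capacity value, and per-target demand are hypothetical.

```python
# Illustrative sketch (not the authors' code): contrast the two allocation schemes
# described above for a display with task-relevant and task-irrelevant items.

def tva_allocation(weights, capacity):
    """Single-step allocation: every item gets capacity in proportion
    to its relative attentional weight (in the spirit of Bundesen, 1990)."""
    total = sum(weights.values())
    return {item: capacity * w / total for item, w in weights.items()}

def load_theory_allocation(weights, capacity, demand_per_target=20.0):
    """Two-step allocation: targets are served first; only leftover
    capacity 'spills over' to distractors (in the spirit of Lavie, 1995)."""
    targets = {k: w for k, w in weights.items() if k.startswith("target")}
    distractors = {k: w for k, w in weights.items() if not k.startswith("target")}
    used = min(capacity, demand_per_target * len(targets))
    alloc = {k: used / len(targets) for k in targets}
    spill = capacity - used
    d_total = sum(distractors.values()) or 1.0
    alloc.update({k: spill * w / d_total for k, w in distractors.items()})
    return alloc

if __name__ == "__main__":
    # Hypothetical attentional weights (arbitrary units).
    weights = {"target_1": 1.0, "target_2": 1.0, "distractor_1": 0.4}
    print(tva_allocation(weights, capacity=60.0))
    print(load_theory_allocation(weights, capacity=60.0))
```

With these toy numbers, the single-step scheme gives the distractor a share of capacity from the outset, whereas the two-step scheme gives it only what remains after the targets have been served.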
Attention Priority Map of Face Images in Human Early Visual Cortex.
Mo, Ce; He, Dongjun; Fang, Fang
2018-01-03
Attention priority maps are topographic representations that are used for attention selection and guidance of task-related behavior during visual processing. Previous studies have identified attention priority maps of simple artificial stimuli in multiple cortical and subcortical areas, but investigating neural correlates of priority maps of natural stimuli is complicated by the complexity of their spatial structure and the difficulty of behaviorally characterizing their priority map. To overcome these challenges, we reconstructed the topographic representations of upright/inverted face images from fMRI BOLD signals in human early visual areas primary visual cortex (V1) and the extrastriate cortex (V2 and V3) based on a voxelwise population receptive field model. We characterized the priority map behaviorally as the first saccadic eye movement pattern when subjects performed a face-matching task relative to the condition in which subjects performed a phase-scrambled face-matching task. We found that the differential first saccadic eye movement pattern between upright/inverted and scrambled faces could be predicted from the reconstructed topographic representations in V1-V3 in humans of either sex. The coupling between the reconstructed representation and the eye movement pattern increased from V1 to V2/3 for the upright faces, whereas no such effect was found for the inverted faces. Moreover, face inversion modulated the coupling in V2/3, but not in V1. Our findings provide new evidence for priority maps of natural stimuli in early visual areas and extend traditional attention priority map theories by revealing another critical factor that affects priority maps in extrastriate cortex in addition to physical salience and task goal relevance: image configuration. SIGNIFICANCE STATEMENT Prominent theories of attention posit that attention sampling of visual information is mediated by a series of interacting topographic representations of visual space known as attention priority maps. Until now, neural evidence of attention priority maps has been limited to studies involving simple artificial stimuli and much remains unknown about the neural correlates of priority maps of natural stimuli. Here, we show that attention priority maps of face stimuli could be found in primary visual cortex (V1) and the extrastriate cortex (V2 and V3). Moreover, representations in extrastriate visual areas are strongly modulated by image configuration. These findings extend our understanding of attention priority maps significantly by showing that they are modulated, not only by physical salience and task-goal relevance, but also by the configuration of stimuli images. Copyright © 2018 the authors 0270-6474/18/380149-09$15.00/0.
Project DyAdd: Visual Attention in Adult Dyslexia and ADHD
ERIC Educational Resources Information Center
Laasonen, Marja; Salomaa, Jonna; Cousineau, Denis; Leppamaki, Sami; Tani, Pekka; Hokkanen, Laura; Dye, Matthew
2012-01-01
In this study of the project DyAdd, three aspects of visual attention were investigated in adults (18-55 years) with dyslexia (n = 35) or attention deficit/hyperactivity disorder (ADHD, n = 22), and in healthy controls (n = 35). Temporal characteristics of visual attention were assessed with Attentional Blink (AB), capacity of visual attention…
A probabilistic model of overt visual attention for cognitive robots.
Begum, Momotaz; Karray, Fakhri; Mann, George K I; Gosine, Raymond G
2010-10-01
Visual attention is one of the major requirements for a robot to serve as a cognitive companion for human. The robotic visual attention is mostly concerned with overt attention which accompanies head and eye movements of a robot. In this case, each movement of the camera head triggers a number of events, namely transformation of the camera and the image coordinate systems, change of content of the visual field, and partial appearance of the stimuli. All of these events contribute to the reduction in probability of meaningful identification of the next focus of attention. These events are specific to overt attention with head movement and, therefore, their effects are not addressed in the classical models of covert visual attention. This paper proposes a Bayesian model as a robot-centric solution for the overt visual attention problem. The proposed model, while taking inspiration from the primates visual attention mechanism, guides a robot to direct its camera toward behaviorally relevant and/or visually demanding stimuli. A particle filter implementation of this model addresses the challenges involved in overt attention with head movement. Experimental results demonstrate the performance of the proposed model.
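For readers unfamiliar with this class of model, the following is a minimal, generic particle-filter sketch of tracking a focus of attention in image coordinates across head movements. It is not the authors' model; the motion and observation models are invented for illustration only.

```python
# Minimal, generic particle-filter sketch of a robot-centric estimate of the next
# focus of attention under camera (head) movement. This is NOT the authors' model;
# the motion and observation models below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def init_particles(n, width, height):
    # Particles are candidate attention foci in camera image coordinates.
    return np.column_stack([rng.uniform(0, width, n), rng.uniform(0, height, n)])

def predict(particles, head_shift, noise_std=5.0):
    # A head/camera movement shifts the whole image; add diffusion noise.
    return particles - head_shift + rng.normal(0, noise_std, particles.shape)

def update_weights(particles, saliency_peak, obs_std=20.0):
    # Weight particles by how well they match the currently observed salient stimulus.
    d2 = np.sum((particles - saliency_peak) ** 2, axis=1)
    w = np.exp(-d2 / (2 * obs_std ** 2))
    return w / w.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = init_particles(500, width=640, height=480)
for head_shift, peak in [((10.0, 0.0), (320.0, 240.0)), ((0.0, -5.0), (340.0, 250.0))]:
    particles = predict(particles, np.array(head_shift))
    weights = update_weights(particles, np.array(peak))
    particles = resample(particles, weights)
    print("estimated focus:", particles.mean(axis=0))
```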
2017-01-01
Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex. SIGNIFICANCE STATEMENT Although changes in the gain and size of RFs have dominated our view of how attention modulates visual information codes, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in population-level representations. We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings. PMID:28242794
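To illustrate the kinds of modulation being compared above, here is a minimal sketch of an isotropic Gaussian voxel receptive field with a position shift, a size change, and a gain change. The parameter values are hypothetical, and this is not the study's analysis code.

```python
# Illustrative sketch (not the study's analysis code): an isotropic Gaussian voxel
# receptive field (vRF) and the three kinds of attentional modulation discussed
# above -- a position shift, a size change, and a gain change.
import numpy as np

def gaussian_vrf(x, y, x0, y0, sigma, gain=1.0, baseline=0.0):
    """Predicted voxel response to a stimulus at visual-field location (x, y)."""
    return baseline + gain * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

stim = (2.0, 0.0)                          # stimulus location, degrees of visual angle
base = dict(x0=3.0, y0=1.0, sigma=1.5)     # hypothetical fitted vRF, attention elsewhere

print(gaussian_vrf(*stim, **base))                               # baseline response
print(gaussian_vrf(*stim, x0=2.5, y0=0.5, sigma=1.5))            # position shift toward the target
print(gaussian_vrf(*stim, x0=3.0, y0=1.0, sigma=1.2))            # size (sigma) change
print(gaussian_vrf(*stim, x0=3.0, y0=1.0, sigma=1.5, gain=1.3))  # multiplicative gain change
```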
Sex differences in adults' relative visual interest in female and male faces, toys, and play styles.
Alexander, Gerianne M; Charles, Nora
2009-06-01
An individual's reproductive potential appears to influence response to attractive faces of the opposite sex. Otherwise, relatively little is known about the characteristics of the adult observer that may influence his or her affective evaluation of male and female faces. An untested hypothesis (based on the proposed role of attractive faces in mate selection) is that most women would show greater interest in male faces whereas most men would show greater interest in female faces. Further, evidence from individuals with preferences for same-sex sexual partners suggests that response to attractive male and female faces may be influenced by gender-linked play preferences. To test these hypotheses, visual attention directed to sex-linked stimuli (faces, toys, play styles) was measured in 39 men and 44 women using eye tracking technology. Consistent with our predictions, men directed greater visual attention to all male-typical stimuli and visual attention to male and female faces was associated with visual attention to gender conforming or nonconforming stimuli in a manner consistent with previous research on sexual orientation. In contrast, women showed a visual preference for female-typical toys, but no visual preference for male faces or female-typical play styles. These findings indicate that sex differences in visual processing extend beyond stimuli associated with adult sexual behavior. We speculate that sex differences in visual processing are a component of the expression of gender phenotypes across the lifespan that may reflect sex differences in the motivational properties of gender-linked stimuli.
Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory
Vega, Julio; Perdices, Eduardo; Cañas, José M.
2013-01-01
Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333
Grubert, Anna; Eimer, Martin
2013-10-01
To find out whether attentional target selection can be effectively guided by top-down task sets for multiple colors, we measured behavioral and ERP markers of attentional target selection in an experiment where participants had to identify color-defined target digits that were accompanied by a single gray distractor object in the opposite visual field. In the One Color task, target color was constant. In the Two Color task, targets could have one of two equally likely colors. Color-guided target selection was less efficient during multiple-color relative to single-color search, and this was reflected by slower response times and delayed N2pc components. Nontarget-color items that were presented in half of all trials captured attention and gained access to working memory when participants searched for two colors, but were excluded from attentional processing in the One Color task. Results demonstrate qualitative differences in the guidance of attentional target selection between single-color and multiple-color visual search. They suggest that top-down attentional control can be applied much more effectively when it is based on a single feature-specific attentional template. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Spatial attention enhances the selective integration of activity from area MT.
Masse, Nicolas Y; Herrington, Todd M; Cook, Erik P
2012-09-01
Distinguishing which of the many proposed neural mechanisms of spatial attention actually underlies behavioral improvements in visually guided tasks has been difficult. One attractive hypothesis is that attention allows downstream neural circuits to selectively integrate responses from the most informative sensory neurons. This would allow behavioral performance to be based on the highest-quality signals available in visual cortex. We examined this hypothesis by asking how spatial attention affects both the stimulus sensitivity of middle temporal (MT) neurons and their corresponding correlation with behavior. Analyzing a data set pooled from two experiments involving four monkeys, we found that spatial attention did not appreciably affect either the stimulus sensitivity of the neurons or the correlation between their activity and behavior. However, for those sessions in which there was a robust behavioral effect of attention, focusing attention inside the neuron's receptive field significantly increased the correlation between these two metrics, an indication of selective integration. These results suggest that, similar to mechanisms proposed for the neural basis of perceptual learning, the behavioral benefits of focusing spatial attention are attributable to selective integration of neural activity from visual cortical areas by their downstream targets.
Global Enhancement but Local Suppression in Feature-based Attention.
Forschack, Norman; Andersen, Søren K; Müller, Matthias M
2017-04-01
A key property of feature-based attention is global facilitation of the attended feature throughout the visual field. Previously, we presented superimposed red and blue randomly moving dot kinematograms (RDKs), each flickering at a different frequency, to elicit frequency-specific steady-state visual evoked potentials (SSVEPs) that allowed us to analyze neural dynamics in early visual cortex when participants shifted attention to one of the two colors. Results showed amplification of the attended and suppression of the unattended color as measured by SSVEP amplitudes. Here, we tested whether the suppression of the unattended color also operates globally. To this end, we presented superimposed flickering red and blue RDKs in the center of a screen and a red and blue RDK in the left and right periphery, respectively, also flickering at different frequencies. Participants shifted attention to one color of the superimposed RDKs in the center to discriminate coherent motion events in the attended from the unattended color RDK, whereas the peripheral RDKs were task irrelevant. SSVEP amplitudes elicited by the centrally presented RDKs confirmed the previous findings of amplification and suppression. For peripherally located RDKs, we found the expected SSVEP amplitude increase relative to precue baseline when the color matched that of the centrally attended RDK. We found no reduction in SSVEP amplitude relative to precue baseline when the peripheral color matched the unattended color of the central RDKs, indicating that, while facilitation in feature-based attention operates globally, suppression seems to be linked to the location of focused attention.
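As a brief aside on the frequency-tagging method referred to above, the following is a minimal, illustrative sketch of how an SSVEP amplitude can be read off at each tagged flicker frequency. The sampling rate, tag frequencies, and signal are synthetic and are not taken from the study.

```python
# Illustrative sketch of frequency tagging: estimate SSVEP amplitude at each
# RDK's flicker frequency from an EEG epoch via its Fourier amplitude spectrum.
# All numbers below are synthetic, not the study's data.
import numpy as np

fs = 500.0                      # sampling rate (Hz), hypothetical
t = np.arange(0, 2.0, 1 / fs)   # 2-s analysis epoch
tag_freqs = {"red_RDK": 10.0, "blue_RDK": 12.0}   # hypothetical flicker frequencies

# Synthetic EEG: stronger response at the "attended" (10 Hz) than "unattended" (12 Hz) frequency.
eeg = 2.0 * np.sin(2 * np.pi * 10.0 * t) + 0.8 * np.sin(2 * np.pi * 12.0 * t)
eeg += np.random.default_rng(1).normal(0, 1.0, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2   # approximate single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for label, f in tag_freqs.items():
    amp = spectrum[np.argmin(np.abs(freqs - f))]   # amplitude at the nearest frequency bin
    print(f"{label}: SSVEP amplitude at {f:.1f} Hz ~ {amp:.2f} (arbitrary units)")
```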
Bourgeois, Alexia; Neveu, Rémi; Vuilleumier, Patrik
2016-01-01
In order to behave adaptively, attention can be directed in space either voluntarily (i.e., endogenously) according to strategic goals, or involuntarily (i.e., exogenously) through reflexive capture by salient or novel events. The emotional or motivational value of stimuli can also strongly influence attentional orienting. However, little is known about how reward-related effects compete or interact with endogenous and exogenous attention mechanisms, particularly outside of awareness. Here we developed a visual search paradigm to study subliminal value-based attentional orienting. We systematically manipulated goal-directed or stimulus-driven attentional orienting and examined whether an irrelevant, but previously rewarded stimulus could compete with both types of spatial attention during search. Critically, reward was learned without conscious awareness in a preceding phase where one among several visual symbols was consistently paired with a subliminal monetary reinforcement cue. Our results demonstrated that symbols previously associated with a monetary reward received higher attentional priority in the subsequent visual search task, even though these stimuli and reward were no longer task-relevant, and despite reward being unconsciously acquired. Thus, motivational processes operating independent of conscious awareness may provide powerful influences on mechanisms of attentional selection, which could mitigate both stimulus-driven and goal-directed shifts of attention. PMID:27483371
Perceptual and response-dependent profiles of attention in children with ADHD.
Caspersen, Ida Dyhr; Petersen, Anders; Vangkilde, Signe; Plessen, Kerstin Jessica; Habekost, Thomas
2017-05-01
Attention-deficit hyperactivity disorder (ADHD) is a complex developmental neuropsychiatric disorder, characterized by inattentiveness, impulsivity, and hyperactivity. Recent literature suggests a potential core deficit underlying these behaviors may involve inefficient processing when contextual stimulation is low. In order to specify this inefficiency, the aim of the present study was to disentangle perceptual and response-based deficits of attention by supplementing classic reaction time (RT) measures with an accuracy-only test. Moreover, it was explored whether ADHD symptom severity was systematically related to perceptual and response-based processes. We applied an RT-independent paradigm (Bundesen, 1990) and a sustained attention task (Dockree et al., 2006) to test visual attention in 24 recently diagnosed, medication-naïve children with ADHD, 14 clinical controls with pervasive developmental disorder, and 57 healthy controls. Outcome measures included perceptual processing speed, capacity of visual short-term memory, and errors of commission and omission. Children with ADHD processed information abnormally slow (d = 0.92), and performed poorly on RT variability and response stability (d's ranging from 0.60 to 1.08). In the ADHD group only, slowed visual processing speed was significantly related to response lapses (omission errors). This correlation was not explained by behavioral ratings of ADHD severity. Based on combined assessment of perceptual and response-dependent variables of attention, the present study demonstrates a specific cognitive profile in children with ADHD. This profile distinguishes the disorder at a basic level of attentional functioning, and may define subgroups of children with ADHD in a way that is more sensitive than clinical rating scales. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Executive and Perceptual Distraction in Visual Working Memory
2017-01-01
The contents of visual working memory are likely to reflect the influence of both executive control resources and information present in the environment. We investigated whether executive attention is critical in the ability to exclude unwanted stimuli by introducing concurrent potentially distracting irrelevant items to a visual working memory paradigm, and manipulating executive load using simple or more demanding secondary verbal tasks. Across 7 experiments varying in presentation format, timing, stimulus set, and distractor number, we observed clear disruptive effects of executive load and visual distraction, but relatively minimal evidence supporting an interactive relationship between these factors. These findings are in line with recent evidence using delay-based interference, and suggest that different forms of attentional selection operate relatively independently in visual working memory. PMID:28414499
A review of the findings and theories on surface size effects on visual attention
Peschel, Anne O.; Orquin, Jacob L.
2013-01-01
That surface size has an impact on attention has been well-known in advertising research for almost a century; however, theoretical accounts of this effect have been sparse. To address this issue, we review studies on surface size effects on eye movements in this paper. While most studies find that large objects are more likely to be fixated, receive more fixations, and are fixated faster than small objects, a comprehensive explanation of this effect is still lacking. To bridge the theoretical gap, we relate the findings from this review to three theories of surface size effects suggested in the literature: a linear model based on the assumption of random fixations (Lohse, 1997), a theory of surface size as visual saliency (Pieters et al., 2007), and a theory based on competition for attention (CA; Janiszewski, 1998). We furthermore suggest a fourth model – demand for attention – which we derive from the theory of CA by revising the underlying model assumptions. In order to test the models against each other, we reanalyze data from an eye tracking study investigating surface size and saliency effects on attention. The reanalysis revealed little support for the first three theories, while the demand for attention model showed a much better alignment with the data. We conclude that surface size effects may best be explained as an increase in object signal strength which depends on object size, number of objects in the visual scene, and object distance to the center of the scene. Our findings suggest that advertisers should take into account how objects in the visual scene interact in order to optimize attention to, for instance, brands and logos. PMID:24367343
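On one simple reading of the random-fixation baseline mentioned above, each fixation lands on an object with probability proportional to its share of total surface area, so expected fixation counts scale linearly with area. The sketch below is our illustration of that idea, not Lohse's (1997) exact formulation, and all numbers are hypothetical.

```python
# Illustrative only: a purely random-fixation baseline in which the chance that a
# fixation lands on an object is proportional to its share of total surface area.
# Our simplified reading of the "linear model" idea, not Lohse's (1997) exact model.
surface_areas_cm2 = {"brand_logo": 4.0, "product_image": 36.0, "text_block": 20.0}

total_area = sum(surface_areas_cm2.values())
n_fixations = 30  # hypothetical number of fixations on the scene

for obj, area in surface_areas_cm2.items():
    p = area / total_area
    print(f"{obj}: fixation probability {p:.2f}, expected fixations {n_fixations * p:.1f}")
```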
Bauer, Anika; Schneider, Silvia; Waldorf, Manuel; Adolph, Dirk; Vocks, Silja
2017-01-01
Previous research indicates that body image disturbance is transmitted from mother to daughter via modeling of maternal body-related behaviors and attitudes (indirect transmission) and via maternal body-related feedback (direct transmission). So far, the transmission of body-related attentional biases, which according to cognitive-behavioral theories play a prominent role in the development and maintenance of eating disorders, has not been analyzed. The current eye-tracking study applied the concepts of direct and indirect transmission to body-related attentional biases by examining body-related viewing patterns on self- and other-pictures within mother-daughter dyads. Eye movements of N = 82 participants (n = 41 healthy female adolescents, mean age 15.82 years, SD = 1.80, and their mothers, mean age 47.78 years, SD = 4.52) were recorded while looking at whole-body pictures of themselves and a control peer. Based on fixations on self-defined attractive and unattractive body areas, visual attention bias scores were calculated for mothers and daughters, representing the pattern of body-related attention allocation. Based on mothers' fixations on their own daughter's and the adolescent peer's body, a second visual attention bias score was calculated, reflecting the mothers' viewing pattern on their own daughter. Analysis of variance revealed an attentional bias for self-defined unattractive body areas in adolescents. The girls' visual attention bias score correlated significantly with their mothers' bias score, indicating indirect transmission, and with their mothers' second bias score, indicating direct transmission. Moreover, the girls' bias score correlated significantly with negative body-related feedback from their mothers. Female adolescents show a deficit-oriented attentional bias for one's own and a peer's body. The correlated body-related attention patterns imply that attentional biases might be transmitted directly and indirectly from mothers to daughters. Results underline the potential relevance of maternal influences for the development of body image disturbance in girls and suggest specific family-based approaches for the prevention and treatment of eating disorders.
Gomez-Ramirez, Manuel; Trzcinski, Natalie K.; Mihalas, Stefan; Niebur, Ernst
2014-01-01
Studies in vision show that attention enhances the firing rates of cells when it is directed towards their preferred stimulus feature. However, it is unknown whether other sensory systems employ this mechanism to mediate feature selection within their modalities. Moreover, whether feature-based attention modulates the correlated activity of a population is unclear. Indeed, temporal correlation codes such as spike-synchrony and spike-count correlations (rsc) are believed to play a role in stimulus selection by increasing the signal and reducing the noise in a population, respectively. Here, we investigate (1) whether feature-based attention biases the correlated activity between neurons when attention is directed towards their common preferred feature, (2) the interplay between spike-synchrony and rsc during feature selection, and (3) whether feature attention effects are common across the visual and tactile systems. Single-unit recordings were made in secondary somatosensory cortex of three non-human primates while animals engaged in tactile feature (orientation and frequency) and visual discrimination tasks. We found that both firing rate and spike-synchrony between neurons with similar feature selectivity were enhanced when attention was directed towards their preferred feature. However, attention effects on spike-synchrony were twice as large as those on firing rate, and had a tighter relationship with behavioral performance. Further, we observed increased rsc when attention was directed towards the visual modality (i.e., away from touch). These data suggest that similar feature selection mechanisms are employed in vision and touch, and that temporal correlation codes such as spike-synchrony play a role in mediating feature selection. We posit that feature-based selection operates by implementing multiple mechanisms that reduce the overall noise levels in the neural population and synchronize activity across subpopulations that encode the relevant features of sensory stimuli. PMID:25423284
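To make the spike-count correlation (rsc) measure mentioned above concrete, here is a minimal, illustrative computation on synthetic spike counts; it is not the authors' analysis code.

```python
# Illustrative sketch (not the study's analysis code): spike-count correlation (r_sc)
# between two simultaneously recorded neurons, computed across repeated trials of the
# same stimulus/attention condition. The spike counts below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
shared = rng.normal(0, 1, 200)                          # shared trial-to-trial fluctuation
counts_a = np.clip(np.round(10 + 2 * shared + rng.normal(0, 1, 200)), 0, None)
counts_b = np.clip(np.round(12 + 2 * shared + rng.normal(0, 1, 200)), 0, None)

r_sc = np.corrcoef(counts_a, counts_b)[0, 1]            # Pearson correlation of spike counts
print(f"spike-count correlation r_sc = {r_sc:.2f}")
```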
Incidental biasing of attention from visual long-term memory.
Fan, Judith E; Turk-Browne, Nicholas B
2016-06-01
Holding recently experienced information in mind can help us achieve our current goals. However, such immediate and direct forms of guidance from working memory are less helpful over extended delays or when other related information in long-term memory is useful for reaching these goals. Here we show that information that was encoded in the past but is no longer present or relevant to the task also guides attention. We examined this by associating multiple unique features with novel shapes in visual long-term memory (VLTM), and subsequently testing how memories for these objects biased the deployment of attention. In Experiment 1, VLTM for associated features guided visual search for the shapes, even when these features had never been task-relevant. In Experiment 2, associated features captured attention when presented in isolation during a secondary task that was completely unrelated to the shapes. These findings suggest that long-term memory enables a durable and automatic type of memory-based attentional control. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Cultural differences in attention: Eye movement evidence from a comparative visual search task.
Alotaibi, Albandri; Underwood, Geoffrey; Smith, Alastair D
2017-10-01
Individual differences in visual attention have been linked to thinking style: analytic thinking (common in individualistic cultures) is thought to promote attention to detail and focus on the most important part of a scene, whereas holistic thinking (common in collectivist cultures) promotes attention to the global structure of a scene and the relationship between its parts. However, this theory is primarily based on relatively simple judgement tasks. We compared groups from Great Britain (an individualist culture) and Saudi Arabia (a collectivist culture) on a more complex comparative visual search task, using simple natural scenes. A higher overall number of fixations for Saudi participants, along with longer search times, indicated less efficient search behaviour than that of British participants. Furthermore, intra-group comparisons of scan-paths revealed less similarity among Saudi participants than among British participants. Together, these findings suggest that there is a positive relationship between an analytic cognitive style and controlled attention. Copyright © 2017 Elsevier Inc. All rights reserved.
Context and competition in the capture of visual attention.
Hickey, Clayton; Theeuwes, Jan
2011-10-01
Competition-based models of visual attention propose that perceptual ambiguity is resolved through inhibition, which is stronger when objects share a greater number of neural receptive fields (RFs). According to this theory, the misallocation of attention to a salient distractor (that is, the capture of attention) can be indexed in RF-scaled interference costs. We used this pattern to investigate distractor-related costs in visual search across several manipulations of temporal context. Distractor costs are generally larger under circumstances in which the distractor can be defined by features that have recently characterised the target, suggesting that capture occurs in these trials. However, our results show that search for a target in the presence of a salient distractor also produces RF-scaled costs when the features defining the target and distractor do not vary from trial to trial. Contextual differences in distractor costs appear to reflect something other than capture, perhaps a qualitative difference in the type of attentional mechanism deployed to the distractor.
Threat captures attention but does not affect learning of contextual regularities.
Yamaguchi, Motonori; Harwood, Sarah L
2017-04-01
Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.
Inhibition of Return and Object-based Attentional Selection
List, Alexandra; Robertson, Lynn C.
2008-01-01
Visual attention research has revealed that attentional allocation can occur in space- and/or object-based coordinates. Using the direct and elegant design of R. Egly, J. Driver and R. Rafal (1994), we examine whether space- and object-based inhibition of return (IOR) emerge under similar time courses. The present experiments were capable of isolating both space- and object-based effects induced by peripheral and back-to-center cues. They generally support the contention that spatially non-predictive cues are effective in producing space-based IOR at a variety of SOAs, and under a variety of stimulus conditions. Whether facilitatory or inhibitory in direction, the object-based effects occurred over a very different time course than did the space-based effects. Reliable object-based IOR was only found under limited conditions and was tied to the time since the most recent cue (peripheral or central). The finding that object-based effects are generally determined by SOA from the most recent cue may help to resolve discrepancies in the IOR literature. These findings also have implications for the search facilitator role IOR is purported to play in the guidance of visual attention. PMID:18085946
Attention, Intention, and Priority in the Parietal Lobe
Bisley, James W.; Goldberg, Michael E.
2013-01-01
For many years there has been a debate about the role of the parietal lobe in the generation of behavior. Does it generate movement plans (intention) or choose objects in the environment for further processing? To answer this, we focus on the lateral intraparietal area (LIP), an area that has been shown to play independent roles in target selection for saccades and the generation of visual attention. Based on results from a variety of tasks, we propose that LIP acts as a priority map in which objects are represented by activity proportional to their behavioral priority. We present evidence to show that the priority map combines bottom-up inputs like a rapid visual response with an array of top-down signals like a saccade plan. The spatial location representing the peak of the map is used by the oculomotor system to target saccades and by the visual system to guide visual attention. PMID:20192813
The putative visual word form area is functionally connected to the dorsal attention network.
Vogel, Alecia C; Miezin, Fran M; Petersen, Steven E; Schlaggar, Bradley L
2012-03-01
The putative visual word form area (pVWFA) is the most consistently activated region in single word reading studies (i.e., Vigneau et al. 2006), yet its function remains a matter of debate. The pVWFA may be predominantly used in reading or it could be a more general visual processor used in reading but also in other visual tasks. Here, resting-state functional connectivity magnetic resonance imaging (rs-fcMRI) is used to characterize the functional relationships of the pVWFA to help adjudicate between these possibilities. rs-fcMRI defines relationships based on correlations in slow fluctuations of blood oxygen level-dependent activity occurring at rest. In this study, rs-fcMRI correlations show little relationship between the pVWFA and reading-related regions but a strong relationship between the pVWFA and dorsal attention regions thought to be related to spatial and feature attention. The rs-fcMRI correlations between the pVWFA and regions of the dorsal attention network increase with age and reading skill, while the correlations between the pVWFA and reading-related regions do not. These results argue the pVWFA is not used predominantly in reading but is a more general visual processor used in other visual tasks, as well as reading.
Multiple foci of spatial attention in multimodal working memory.
Katus, Tobias; Eimer, Martin
2016-11-15
The maintenance of sensory information in working memory (WM) is mediated by the attentional activation of stimulus representations that are stored in perceptual brain regions. Using event-related potentials (ERPs), we measured tactile and visual contralateral delay activity (tCDA/CDA components) in a bimodal WM task to concurrently track the attention-based maintenance of information stored in anatomically segregated (somatosensory and visual) brain areas. Participants received tactile and visual sample stimuli on both sides, and in different blocks, memorized these samples on the same side or on opposite sides. After a retention delay, memory was unpredictably tested for touch or vision. In the same side blocks, tCDA and CDA components simultaneously emerged over the same hemisphere, contralateral to the memorized tactile/visual sample set. In opposite side blocks, these two components emerged over different hemispheres, but had the same sizes and onset latencies as in the same side condition. Our results reveal distinct foci of tactile and visual spatial attention that were concurrently maintained on task-relevant stimulus representations in WM. The independence of spatially-specific biasing mechanisms for tactile and visual WM content suggests that multimodal information is stored in distributed perceptual brain areas that are activated through modality-specific processes that can operate simultaneously and largely independently of each other. Copyright © 2016 Elsevier Inc. All rights reserved.
Hollingworth, Andrew; Hwang, Seongmin
2013-10-19
We examined the conditions under which a feature value in visual working memory (VWM) recruits visual attention to matching stimuli. Previous work has suggested that VWM supports two qualitatively different states of representation: an active state that interacts with perceptual selection and a passive (or accessory) state that does not. An alternative hypothesis is that VWM supports a single form of representation, with the precision of feature memory controlling whether or not the representation interacts with perceptual selection. The results of three experiments supported the dual-state hypothesis. We established conditions under which participants retained a relatively precise representation of a particular colour. If the colour was immediately task relevant, it reliably recruited attention to matching stimuli. However, if the colour was not immediately task relevant, it failed to interact with perceptual selection. Feature maintenance in VWM is not necessarily equivalent to feature-based attentional selection.
Single Canonical Model of Reflexive Memory and Spatial Attention
Patel, Saumil S.; Red, Stuart; Lin, Eric; Sereno, Anne B.
2015-01-01
Many neurons in the dorsal and ventral visual stream have the property that after a brief visual stimulus presentation in their receptive field, the spiking activity in these neurons persists above their baseline levels for several seconds. This maintained activity is not always correlated with the monkey’s task and its origin is unknown. We have previously proposed a simple neural network model, based on shape selective neurons in monkey lateral intraparietal cortex, which predicts the valence and time course of reflexive (bottom-up) spatial attention. In the same simple model, we demonstrate here that passive maintained activity or short-term memory of specific visual events can result without need for an external or top-down modulatory signal. Mutual inhibition and neuronal adaptation play distinct roles in reflexive attention and memory. This modest 4-cell model provides the first simple and unified physiologically plausible mechanism of reflexive spatial attention and passive short-term memory processes. PMID:26493949
Liu, Sisi; Liu, Duo; Pan, Zhihui; Xu, Zhengye
2018-03-25
A growing body of research suggests that visual-spatial attention is important for reading achievement. However, few studies have been conducted in non-alphabetic orthographies. This study extended the current research to reading development in Chinese, a logographic writing system known for its visual complexity. Eighty Hong Kong Chinese children were selected and divided into poor reader and typical reader groups, based on their performance on the measures of reading fluency, Chinese character reading, and reading comprehension. The poor and typical readers were matched on age and nonverbal intelligence. A Posner spatial cueing task was adopted to measure the exogenous and endogenous orienting of visual-spatial attention. Although the typical readers showed the cueing effect in the central cue condition (i.e., responses to targets following valid cues were faster than those to targets following invalid cues), the poor readers did not respond differently in valid and invalid conditions, suggesting an impairment of the endogenous orienting of attention. The two groups, however, showed a similar cueing effect in the peripheral cue condition, indicating intact exogenous orienting in the poor readers. These findings generally supported a link between the orienting of covert attention and Chinese reading, providing evidence for the attentional-deficit theory of dyslexia. Copyright © 2018 John Wiley & Sons, Ltd.
Stenneken, Prisca; Egetemeir, Johanna; Schulte-Körne, Gerd; Müller, Hermann J; Schneider, Werner X; Finke, Kathrin
2011-10-01
The cognitive causes as well as the neurological and genetic basis of developmental dyslexia, a complex disorder of written language acquisition, are intensely discussed with regard to multiple-deficit models. Accumulating evidence has revealed dyslexics' impairments in a variety of tasks requiring visual attention. The heterogeneity of these experimental results, however, points to the need for measures that are sufficiently sensitive to differentiate between impaired and preserved attentional components within a unified framework. This first parameter-based group study of attentional components in developmental dyslexia addresses potentially altered attentional components that have recently been associated with parietal dysfunctions in dyslexia. We aimed to isolate the general attentional resources that might underlie reduced span performance, i.e., either a deficient working memory storage capacity, or a slowing in visual perceptual processing speed, or both. Furthermore, by analysing attentional selectivity in dyslexia, we addressed a potential lateralized abnormality of visual attention, i.e., a previously suggested rightward spatial deviation compared to normal readers. We investigated a group of high-achieving young adults with persisting dyslexia and matched normal readers in an experimental whole report and a partial report of briefly presented letter arrays. Possible deviations in the parametric values of the dyslexic compared to the control group were taken as markers for the underlying deficit. The dyslexic group showed a striking reduction in perceptual processing speed (by 26% compared to controls) while their working memory storage capacity was in the normal range. In addition, a spatial deviation of attentional weighting compared to the control group was confirmed in dyslexic readers, which was larger in participants with a more severe dyslexic disorder. In general, the present study supports the relevance of perceptual processing speed in disorders of written language acquisition and demonstrates that the parametric assessment provides a suitable tool for specifying the underlying deficit within a unitary framework. Copyright © 2011 Elsevier Ltd. All rights reserved.
Chechlacz, Magdalena; Gillebert, Celine R; Vangkilde, Signe A; Petersen, Anders; Humphreys, Glyn W
2015-07-29
Visuospatial attention allows us to select and act upon a subset of behaviorally relevant visual stimuli while ignoring distraction. Bundesen's theory of visual attention (TVA) (Bundesen, 1990) offers a quantitative analysis of the different facets of attention within a unitary model and provides a powerful analytic framework for understanding individual differences in attentional functions. Visuospatial attention is contingent upon large networks, distributed across both hemispheres, consisting of several cortical areas interconnected by long-association frontoparietal pathways, including three branches of the superior longitudinal fasciculus (SLF I-III) and the inferior fronto-occipital fasciculus (IFOF). Here we examine whether structural variability within human frontoparietal networks mediates differences in attention abilities as assessed by the TVA. Structural measures were based on spherical deconvolution and tractography-derived indices of tract volume and hindrance-modulated orientational anisotropy (HMOA). Individual differences in visual short-term memory (VSTM) were linked to variability in the microstructure (HMOA) of SLF II, SLF III, and IFOF within the right hemisphere. Moreover, VSTM and speed of information processing were linked to hemispheric lateralization within the IFOF. Differences in spatial bias were mediated by both variability in microstructure and volume of the right SLF II. Our data indicate that the microstructural and macrostructural organization of white matter pathways differentially contributes both to the anatomical lateralization of frontoparietal attentional networks and to individual differences in attentional functions. We conclude that individual differences in VSTM capacity, processing speed, and spatial bias, as assessed by TVA, link to variability in structural organization within frontoparietal pathways. Copyright © 2015 Chechlacz et al.
Aging affects the balance between goal-guided and habitual spatial attention.
Twedell, Emily L; Koutstaal, Wilma; Jiang, Yuhong V
2017-08-01
Visual clutter imposes significant challenges to older adults in everyday tasks and often calls on selective processing of relevant information. Previous research has shown that both visual search habits and task goals influence older adults' allocation of spatial attention, but has not examined the relative impact of these two sources of attention when they compete. To examine how aging affects the balance between goal-driven and habitual attention, and to inform our understanding of different attentional subsystems, we tested young and older adults in an adapted visual search task involving a display laid flat on a desk. To induce habitual attention, unbeknownst to participants, the target was more often placed in one quadrant than in the others. All participants rapidly acquired habitual attention toward the high-probability quadrant. We then informed participants where the high-probability quadrant was and instructed them to search that screen location first, but pitted their habit-based, viewer-centered search against this instruction by requiring participants to change their physical position relative to the desk. Both groups prioritized search in the instructed location, but this effect was stronger in young adults than in older adults. In contrast, age did not influence viewer-centered search habits: the two groups showed similar attentional preference for the visual field where the target was most often found before. Aging disrupted goal-guided but not habitual attention. Product, work, and home design for people of all ages, but especially for older individuals, should take into account the strong viewer-centered nature of habitual attention.
Task specificity of attention training: the case of probability cuing
Jiang, Yuhong V.; Swallow, Khena M.; Won, Bo-Yeong; Cistera, Julia D.; Rosenbaum, Gail M.
2014-01-01
Statistical regularities in our environment enhance perception and modulate the allocation of spatial attention. Surprisingly little is known about how learning-induced changes in spatial attention transfer across tasks. In this study, we investigated whether a spatial attentional bias learned in one task transfers to another. Most of the experiments began with a training phase in which a search target was more likely to be located in one quadrant of the screen than in the other quadrants. An attentional bias toward the high-probability quadrant developed during training (probability cuing). In a subsequent testing phase, the target's location distribution became random. In addition, the training and testing phases were based on different tasks. Probability cuing did not transfer between visual search and a foraging-like task. However, it did transfer between various types of visual search tasks that differed in stimuli and difficulty. These data suggest that different visual search tasks share a common and transferable learned attentional bias. However, this bias is not shared by high-level, decision-making tasks such as foraging. PMID:25113853
The Effects of Context and Attention on Spiking Activity in Human Early Visual Cortex.
Self, Matthew W; Peters, Judith C; Possel, Jessy K; Reithler, Joel; Goebel, Rainer; Ris, Peterjan; Jeurissen, Danique; Reddy, Leila; Claus, Steven; Baayen, Johannes C; Roelfsema, Pieter R
2016-03-01
Here we report the first quantitative analysis of spiking activity in human early visual cortex. We recorded multi-unit activity from two electrodes in area V2/V3 of a human patient implanted with depth electrodes as part of her treatment for epilepsy. We observed well-localized multi-unit receptive fields with tunings for contrast, orientation, spatial frequency, and size, similar to those reported in the macaque. We also observed pronounced gamma oscillations in the local-field potential that could be used to estimate the underlying spiking response properties. Spiking responses were modulated by visual context and attention. We observed orientation-tuned surround suppression: responses were suppressed by image regions with a uniform orientation and enhanced by orientation contrast. Additionally, responses were enhanced on regions that perceptually segregated from the background, indicating that neurons in the human visual cortex are sensitive to figure-ground structure. Spiking responses were also modulated by object-based attention. When the patient mentally traced a curve through the neurons' receptive fields, the accompanying shift of attention enhanced neuronal activity. These results demonstrate that the tuning properties of cells in the human early visual cortex are similar to those in the macaque and that responses can be modulated by both contextual factors and behavioral relevance. Our results, therefore, imply that the macaque visual system is an excellent model for the human visual cortex.
Studying visual attention using the multiple object tracking paradigm: A tutorial review.
Meyerhoff, Hauke S; Papenmeier, Frank; Huff, Markus
2017-07-01
Human observers are capable of tracking multiple objects among identical distractors based only on their spatiotemporal information. Since the first report of this ability in the seminal work of Pylyshyn and Storm (1988, Spatial Vision, 3, 179-197), multiple object tracking has attracted many researchers. A reason for this is that it is commonly argued that the attentional processes studied with the multiple object tracking paradigm apparently match the attentional processing during real-world tasks such as driving or team sports. We argue that multiple object tracking provides a good means of studying the broader topic of continuous and dynamic visual attention. Indeed, several (partially contradicting) theories of attentive tracking have been proposed within the almost 30 years since its first report, and a large body of research has been conducted to test these theories. Given the richness and diversity of this literature, the aim of this tutorial review is to provide researchers who are new to the field of multiple object tracking with an overview of the multiple object tracking paradigm, its basic manipulations, as well as links to other paradigms investigating visual attention and working memory. Further, we aim to review current theories of tracking as well as their empirical evidence. Finally, we review the state of the art in the most prominent research fields of multiple object tracking and how this research has helped to understand visual attention in dynamic settings.
Interactions between space-based and feature-based attention.
Leonard, Carly J; Balestreri, Angela; Luck, Steven J
2015-02-01
Although early research suggested that attention to nonspatial features (i.e., red) was confined to stimuli appearing at an attended spatial location, more recent research has emphasized the global nature of feature-based attention. For example, a distractor sharing a target feature may capture attention even if it occurs at a task-irrelevant location. Such findings have been used to argue that feature-based attention operates independently of spatial attention. However, feature-based attention may nonetheless interact with spatial attention, yielding larger feature-based effects at attended locations than at unattended locations. The present study tested this possibility. In 2 experiments, participants viewed a rapid serial visual presentation (RSVP) stream and identified a target letter defined by its color. Target-colored distractors were presented at various task-irrelevant locations during the RSVP stream. We found that feature-driven attentional capture effects were largest when the target-colored distractor was closer to the attended location. These results demonstrate that spatial attention modulates the strength of feature-based attention capture, calling into question the prior evidence that feature-based attention operates in a global manner that is independent of spatial attention.
The Influence of Selective and Divided Attention on Audiovisual Integration in Children.
Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong
2016-01-24
This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference was found between the unimodal visual and bimodal audiovisual stimuli in response speed. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation was significantly different between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.
Common neural substrates for visual working memory and attention.
Mayer, Jutta S; Bittner, Robert A; Nikolić, Danko; Bledowski, Christoph; Goebel, Rainer; Linden, David E J
2007-06-01
Humans are severely limited in their ability to memorize visual information over short periods of time. Selective attention has been implicated as a limiting factor. Here we used functional magnetic resonance imaging to test the hypothesis that this limitation is due to common neural resources shared by visual working memory (WM) and selective attention. We combined visual search and delayed discrimination of complex objects and independently modulated the demands on selective attention and WM encoding. Participants were presented with a search array and performed easy or difficult visual search in order to encode one or three complex objects into visual WM. Overlapping activation for attention-demanding visual search and WM encoding was observed in distributed posterior and frontal regions. In the right prefrontal cortex and bilateral insula blood oxygen-level-dependent activation additively increased with increased WM load and attentional demand. Conversely, several visual, parietal and premotor areas showed overlapping activation for the two task components and were severely reduced in their WM load response under the condition with high attentional demand. Regions in the left prefrontal cortex were selectively responsive to WM load. Areas selectively responsive to high attentional demand were found within the right prefrontal and bilateral occipital cortex. These results indicate that encoding into visual WM and visual selective attention require to a high degree access to common neural resources. We propose that competition for resources shared by visual attention and WM encoding can limit processing capabilities in distributed posterior brain regions.
Recovery of Visual Search following Moderate to Severe Traumatic Brain Injury
Schmitter-Edgecombe, Maureen; Robertson, Kayela
2015-01-01
Introduction: Deficits in attentional abilities can significantly impact rehabilitation and recovery from traumatic brain injury (TBI). This study investigated the nature and recovery of pre-attentive (parallel) and attentive (serial) visual search abilities after TBI. Methods: Participants were 40 individuals with moderate to severe TBI who were tested following emergence from post-traumatic amnesia and approximately 8 months post-injury, as well as 40 age- and education-matched controls. Pre-attentive (automatic) and attentive (controlled) visual search situations were created by manipulating the saliency of the target item amongst distractor items in visual displays. The relationship between pre-attentive and attentive visual search rates and follow-up community integration was also explored. Results: The results revealed intact parallel (automatic) processing skills in the TBI group both post-acutely and at follow-up. In contrast, when attentional demands on visual search were increased by reducing the saliency of the target, the TBI group demonstrated poorer performances compared to the control group both post-acutely and 8 months post-injury. Neither pre-attentive nor attentive visual search slope values correlated with follow-up community integration. Conclusions: These results suggest that utilizing intact pre-attentive visual search skills during rehabilitation may help to reduce high mental workload situations, thereby improving the rehabilitation process. For example, making commonly used objects more salient in the environment should increase reliance on more automatic visual search processes and reduce visual search time for individuals with TBI. PMID:25671675
A world unglued: simultanagnosia as a spatial restriction of attention
Dalrymple, Kirsten A.; Barton, Jason J. S.; Kingstone, Alan
2013-01-01
Simultanagnosia is a disorder of visual attention that leaves a patient's world unglued: scenes and objects are perceived in a piecemeal manner. It is generally agreed that simultanagnosia is related to an impairment of attention, but it is unclear whether this impairment is object- or space-based in nature. We first consider the findings that support a concept of simultanagnosia as deficit of object-based attention. We then examine the evidence suggesting that simultanagnosia results from damage to a space-based attentional system, and in particular a model of simultanagnosia as a narrowed spatial window of attention. We ask whether seemingly object-based deficits can be explained by space-based mechanisms, and consider the evidence that object processing influences spatial deficits in this condition. Finally, we discuss limitations of a space-based attentional explanation. PMID:23616758
Infant Visual Attention and Object Recognition
Reynolds, Greg D.
2015-01-01
This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. PMID:25596333
Feature-selective attention in healthy old age: a selective decline in selective attention?
Quigley, Cliodhna; Müller, Matthias M
2014-02-12
Deficient selection against irrelevant information has been proposed to underlie age-related cognitive decline. We recently reported evidence for maintained early sensory selection when older and younger adults used spatial selective attention to perform a challenging task. Here we explored age-related differences when spatial selection is not possible and feature-selective attention must be deployed. We additionally compared the integrity of feedforward processing by exploiting the well established phenomenon of suppression of visual cortical responses attributable to interstimulus competition. Electroencephalogram was measured while older and younger human adults responded to brief occurrences of coherent motion in an attended stimulus composed of randomly moving, orientation-defined, flickering bars. Attention was directed to horizontal or vertical bars by a pretrial cue, after which two orthogonally oriented, overlapping stimuli or a single stimulus were presented. Horizontal and vertical bars flickered at different frequencies and thereby elicited separable steady-state visual-evoked potentials, which were used to examine the effect of feature-based selection and the competitive influence of a second stimulus on ongoing visual processing. Age differences were found in feature-selective attentional modulation of visual responses: older adults did not show consistent modulation of magnitude or phase. In contrast, the suppressive effect of a second stimulus was robust and comparable in magnitude across age groups, suggesting that bottom-up processing of the current stimuli is essentially unchanged in healthy old age. Thus, it seems that visual processing per se is unchanged, but top-down attentional control is compromised in older adults when space cannot be used to guide selection.
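The study above separates the two overlapping stimuli by frequency tagging, reading out each stimulus's steady-state visual evoked potential at its own flicker frequency. Below is a minimal sketch of that spectral readout on simulated data; the sampling rate, tagging frequencies, and single-channel signal are assumptions, not the authors' analysis pipeline.

```python
import numpy as np

# Minimal sketch of SSVEP amplitude extraction by frequency tagging, assuming
# hypothetical tagging frequencies (10 Hz and 12 Hz) and one simulated EEG channel.

fs = 500.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 3.0, 1 / fs)    # 3 s analysis window
eeg = (0.8 * np.sin(2 * np.pi * 10 * t)      # simulated 10 Hz SSVEP
       + 0.5 * np.sin(2 * np.pi * 12 * t)    # simulated 12 Hz SSVEP
       + np.random.randn(t.size))            # broadband noise

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for f_tag in (10.0, 12.0):
    amp = spectrum[np.argmin(np.abs(freqs - f_tag))]
    print(f"SSVEP amplitude at {f_tag:.0f} Hz: {amp:.3f} (a.u.)")
```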
Auditory and Visual Capture during Focused Visual Attention
ERIC Educational Resources Information Center
Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan
2009-01-01
It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets…
Value-driven attentional capture in the auditory domain.
Anderson, Brian A
2016-01-01
It is now well established that the visual attention system is shaped by reward learning. When visual features are associated with a reward outcome, they acquire high priority and can automatically capture visual attention. To date, evidence for value-driven attentional capture has been limited entirely to the visual system. In the present study, I demonstrate that previously reward-associated sounds also capture attention, interfering more strongly with the performance of a visual task. This finding suggests that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value-driven attention can bias cross-modal stimulus competition.
Smith, Philip L; Sewell, David K; Lilburn, Simon D
2015-11-01
Normalization models of visual sensitivity assume that the response of a visual mechanism is scaled divisively by the sum of the activity in the excitatory and inhibitory mechanisms in its neighborhood. Normalization models of attention assume that the weighting of excitatory and inhibitory mechanisms is modulated by attention. Such models have provided explanations of the effects of attention in both behavioral and single-cell recording studies. We show how normalization models can be obtained as the asymptotic solutions of shunting differential equations, in which stimulus inputs and the activity in the mechanism control growth rates multiplicatively rather than additively. The value of the shunting equation approach is that it characterizes the entire time course of the response, not just its asymptotic strength. We describe two models of attention based on shunting dynamics, the integrated system model of Smith and Ratcliff (2009) and the competitive interaction theory of Smith and Sewell (2013). These models assume that attention, stimulus salience, and the observer's strategy for the task jointly determine the selection of stimuli into visual short-term memory (VSTM) and the way in which stimulus representations are weighted. The quality of the VSTM representation determines the speed and accuracy of the decision. The models provide a unified account of a variety of attentional phenomena found in psychophysical tasks using single-element and multi-element displays. Our results show the generality and utility of the normalization approach to modeling attention. Copyright © 2014 Elsevier B.V. All rights reserved.
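As a sketch of the class of models described above (not the exact equations of the cited papers), a shunting differential equation and its normalization-like steady state can be written as:

```latex
% General form of a shunting equation with excitatory drive E and inhibitory
% drive I, and its asymptotic (steady-state) solution; a sketch of the model
% class discussed above, not the papers' specific parameterization.
\frac{dV}{dt} = -\alpha V + (\beta - V)\,E - (\gamma + V)\,I
\qquad\Longrightarrow\qquad
V_{\infty} = \frac{\beta E - \gamma I}{\alpha + E + I}
```

The divisive denominator in the steady-state solution is what gives these models their normalization character, while the full differential equation describes the entire response time course, which is the point emphasized in the abstract above.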
Li, Chunlin; Chen, Kewei; Han, Hongbin; Chui, Dehua; Wu, Jinglong
2012-01-01
Top-down attention to spatial and temporal cues has been thoroughly studied in the visual domain. However, because the neural systems that are important for auditory top-down temporal attention (i.e., attention based on time interval cues) remain undefined, the differences in brain activity between attention directed to auditory spatial locations and attention directed to time intervals are unclear. Using fMRI (functional magnetic resonance imaging), we measured the activations caused by cue-target paradigms by inducing the visual cueing of attention to an auditory target within a spatial or temporal domain. Imaging results showed that the dorsal frontoparietal network (dFPN), which consists of the bilateral intraparietal sulcus and the frontal eye field, responded to spatial orienting of attention, but activity was absent in the bilateral frontal eye field (FEF) during temporal orienting of attention. Furthermore, the fMRI results indicated that activity in the right ventrolateral prefrontal cortex (VLPFC) was significantly stronger during spatial orienting of attention than during temporal orienting of attention, while the dorsolateral prefrontal cortex (DLPFC) showed no significant differences between the two processes. We conclude that the bilateral dFPN and the right VLPFC contribute to auditory spatial orienting of attention. Furthermore, specific activations related to temporal cognition were confirmed within the superior occipital gyrus, tegmentum, motor area, thalamus and putamen. PMID:23166800
A visual model for object detection based on active contours and level-set method.
Satoh, Shunji
2006-09-01
A visual model for object detection is proposed. In order to make the detection ability comparable with existing technical methods for object detection, an evolution equation of neurons in the model is derived from the computational principle of active contours. The hierarchical structure of the model emerges naturally from the evolution equation. A known drawback of active contours, their dependence on initial values, is alleviated by introducing and formulating convexity, which is a visual property. Numerical experiments show that the proposed model detects objects with complex topologies and that it is tolerant of noise. A visual attention model is introduced into the proposed model. Other simulations show that the visual properties of the model are consistent with the results of psychological experiments that disclose the relation between figure-ground reversal and visual attention. We also demonstrate that the model tends to perceive smaller regions as figures, which is a characteristic observed in human visual perception.
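For readers unfamiliar with the level-set machinery referenced above, the sketch below shows one generic explicit update step of a level-set function under a speed field; it illustrates the mathematical principle only and is not Satoh's derived neuron evolution equation.

```python
import numpy as np

# Generic single step of level-set curve evolution, phi_t = F * |grad phi|;
# a sketch of the mathematical machinery referenced above, not the model's
# actual evolution equation.

def level_set_step(phi, speed, dt=0.1):
    """Advance the level-set function phi by one explicit Euler step."""
    gy, gx = np.gradient(phi)                 # spatial gradient of phi
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)     # |grad phi|
    return phi + dt * speed * grad_mag

# Example: a circle (zero level set) expanding under a uniform speed field.
y, x = np.mgrid[-32:32, -32:32]
phi = np.sqrt(x ** 2 + y ** 2) - 10.0         # signed distance to a radius-10 circle
phi = level_set_step(phi, speed=-1.0)         # negative speed grows the enclosed region
print((phi < 0).sum(), "pixels now inside the contour")
```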
Duncan, Lesley A; Park, Justin H; Faulkner, Jason; Schaller, Mark; Neuberg, Steven L; Kenrick, Douglas T
2007-09-01
We tested the hypothesis that, compared with sociosexually restricted individuals, those with an unrestricted approach to mating would selectively allocate visual attention to attractive opposite-sex others. We also tested for sex differences in this effect. Seventy-four participants completed the Sociosexual Orientation Inventory, and performed a computer-based task that assessed the speed with which they detected changes in attractive and unattractive male and female faces. Differences in reaction times served as indicators of selective attention. Results revealed a Sex X Sociosexuality interaction: Compared with sociosexually restricted men, unrestricted men selectively allocated attention to attractive opposite-sex others; no such effect emerged among women. This finding was specific to opposite-sex targets and did not occur in attention to same-sex others. These results contribute to a growing literature on the adaptive allocation of attention in social environments.
2012-01-01
Background: There is at present growing empirical evidence, deriving from different lines of ERP research, that, contrary to earlier views, the earliest sensory visual response, known as the C1 component or P/N80 and generated within the striate cortex, might be modulated by selective attention to visual stimulus features. Up to now, evidence of this modulation has been related to spatial location and to simple features such as spatial frequency, luminance, and texture. Additionally, neurophysiological conditions, such as emotion, vigilance, the reflexive or voluntary nature of input attentional selection, and workload, have also been related to C1 modulations, although at least for workload the findings remain controversial. No information is available, at present, for the attentional selection of objects. Methods: In this study object- and space-based attention mechanisms were conjointly investigated by presenting complex, familiar shapes of artefacts and animals, intermixed with distracters, in different tasks requiring the selection of a relevant target category within a relevant spatial location, while ignoring the other shape categories within this location and, overall, all the categories at an irrelevant location. EEG was recorded from 30 scalp electrode sites in 21 right-handed participants. Results and Conclusions: ERP findings showed that visual processing was modulated by both shape- and location-relevance per se, beginning separately at the latency of the early phase of an early negativity (60-80 ms) at mesial scalp sites, consistent with the C1 component, and of a positivity at more lateral sites. The data also showed that the attentional modulation progressed conjointly at the latency of the subsequent P1 (100-120 ms) and N1 (120-180 ms), as well as of later-latency components. These findings support the views that (1) V1 may be modulated early on by direct top-down influences and participates in object-based, besides simple-feature, attentional selection; (2) the selection of object spatial and non-spatial features might begin with an early, parallel detection of a target object in the visual field, followed by the progressive focusing of spatial attention onto the location of an actual target for its identification, somehow in line with neural mechanisms reported in the literature as "object-based space selection", or with those proposed for visual search. PMID:22300540
Harris, Jill; Kamke, Marc R
2014-11-01
Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Redel, P; Bublak, P; Sorg, C; Kurz, A; Förstl, H; Müller, H J; Schneider, W X; Perneczky, R; Finke, K
2012-01-01
Visual selective attention was assessed with a partial-report task in patients with probable Alzheimer's disease (AD), amnestic mild cognitive impairment (MCI), and healthy elderly controls. Based on Bundesen's "theory of visual attention" (TVA), two parameters were derived: top-down control of attentional selection, representing task-related attentional weighting for prioritizing relevant visual objects, and spatial distribution of attentional weights across the left and the right hemifield. Compared with controls, MCI patients showed significantly reduced top-down controlled selection, which was further deteriorated in AD subjects. Moreover, attentional weighting was significantly unbalanced across hemifields in MCI and tended to be more lateralized in AD. Across MCI and AD patients, carriers of the apolipoprotein E ε4 allele (ApoE4) displayed a leftward spatial bias, which was the more pronounced the younger the ApoE4-positive patients and the earlier disease onset. These results indicate that impaired top-down control may be linked to early dysfunction of fronto-parietal networks. An early temporo-parietal interhemispheric asymmetry might cause a pathological spatial bias which is associated with ApoE4 genotype and may therefore function as early cognitive marker of upcoming AD. Copyright © 2012 Elsevier Inc. All rights reserved.
A foreground object features-based stereoscopic image visual comfort assessment model
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.
2014-11-01
Since stereoscopic images provide observers with both realistic and uncomfortable viewing experiences, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws most attention when humans observe stereoscopic images, this paper proposes a new foreground-object-based visual comfort assessment (VCA) metric. In the first place, a suitable segmentation method is applied to the disparity map, and the foreground object is then identified as the one having the largest average disparity. In the second place, three visual features, namely the average disparity, average width, and spatial complexity of the foreground object, are computed from the perspective of visual attention. Nevertheless, the object's width and complexity do not influence the perception of visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, we divide the images into four categories on the basis of different disparities and widths, and in the third place apply four different models to predict visual comfort more precisely. Experimental results show that the proposed VCA metric outperforms other existing metrics and can achieve a high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
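The reported PLCC and SROCC values are standard linear and rank correlations between objective predictions and subjective comfort scores; a minimal sketch of their computation is shown below, with made-up score values for illustration only.

```python
from scipy.stats import pearsonr, spearmanr

# Minimal sketch of how PLCC and SROCC are computed between objective VCA
# predictions and subjective mean opinion scores; the values below are
# made up for illustration.

objective = [3.1, 4.2, 2.5, 4.8, 3.6, 2.9]   # model-predicted comfort scores
subjective = [3.0, 4.5, 2.2, 4.9, 3.8, 3.1]  # subjective mean opinion scores

plcc, _ = pearsonr(objective, subjective)
srocc, _ = spearmanr(objective, subjective)
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```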
Video-Based Eye Tracking to Detect the Attention Shift: A Computer Classroom Context-Aware System
ERIC Educational Resources Information Center
Kuo, Yung-Lung; Lee, Jiann-Shu; Hsieh, Min-Chai
2014-01-01
Eye and head movements are evoked in response to obvious visual attention shifts. However, there has been little progress so far on the causes of absent-mindedness. The paper proposes an attention awareness system that captures the conditions regarding the interaction of eye gaze and head pose under various attentional switching in a computer classroom.…
ERIC Educational Resources Information Center
Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio
2012-01-01
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued…
Vatovec, Christine
2013-01-01
Theory-based research is needed to understand how maps of environmental health risk information influence risk beliefs and protective behavior. Using theoretical concepts from multiple fields of study including visual cognition, semiotics, health behavior, and learning and memory supports a comprehensive assessment of this influence. We report results from thirteen cognitive interviews that provide theory-based insights into how visual features influenced what participants saw and the meaning of what they saw as they viewed three formats of water test results for private wells (choropleth map, dot map, and a table). The unit of perception, color, proximity to hazards, geographic distribution, and visual salience had substantial influences on what participants saw and their resulting risk beliefs. These influences are explained by theoretical factors that shape what is seen, properties of features that shape cognition (pre-attentive, symbolic, visual salience), information processing (top-down and bottom-up), and the strength of concrete compared to abstract information. Personal relevance guided top-down attention to proximal and larger hazards that shaped stronger risk beliefs. Meaning was more local for small perceptual units and global for large units. Three aspects of color were important: pre-attentive “incremental risk” meaning of sequential shading, symbolic safety meaning of stoplight colors, and visual salience that drew attention. The lack of imagery, geographic information, and color diminished interest in table information. Numeracy and prior beliefs influenced comprehension for some participants. Results guided the creation of an integrated conceptual framework for application to future studies. Ethics should guide the selection of map features that support appropriate communication goals. PMID:22715919
The role of attention in figure-ground segregation in areas V1 and V4 of the visual cortex.
Poort, Jasper; Raudies, Florian; Wannig, Aurel; Lamme, Victor A F; Neumann, Heiko; Roelfsema, Pieter R
2012-07-12
Our visual system segments images into objects and background. Figure-ground segregation relies on the detection of feature discontinuities that signal boundaries between the figures and the background and on a complementary region-filling process that groups together image regions with similar features. The neuronal mechanisms for these processes are not well understood and it is unknown how they depend on visual attention. We measured neuronal activity in V1 and V4 in a task where monkeys either made an eye movement to texture-defined figures or ignored them. V1 activity predicted the timing and the direction of the saccade if the figures were task relevant. We found that boundary detection is an early process that depends little on attention, whereas region filling occurs later and is facilitated by visual attention, which acts in an object-based manner. Our findings are explained by a model with local, bottom-up computations for boundary detection and feedback processing for region filling. Copyright © 2012 Elsevier Inc. All rights reserved.
Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.
Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick
2017-10-01
In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our newly built large-scale eye tracking database and on one other database (DML-ITRACK-3D).
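The final fusion step described above can be illustrated with a simplified sketch in which the uncertainty weights are treated as given scalars (the paper derives them from Gestalt cues); the map sizes and weight values below are assumptions.

```python
import numpy as np

# Simplified sketch of fusing spatial and temporal saliency maps with
# uncertainty weights; here the weights are fixed scalars, whereas the paper
# estimates them from proximity, continuity, and common-fate cues.

def fuse_saliency(spatial, temporal, w_spatial=0.6, w_temporal=0.4):
    """Weighted fusion of two saliency maps, renormalized to [0, 1]."""
    fused = w_spatial * spatial + w_temporal * temporal
    rng = fused.max() - fused.min()
    return (fused - fused.min()) / rng if rng > 0 else fused

spatial = np.random.rand(36, 64)    # stand-in spatial saliency map
temporal = np.random.rand(36, 64)   # stand-in temporal saliency map
print(fuse_saliency(spatial, temporal).shape)  # (36, 64)
```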
Feature-based and object-based attention orientation during short-term memory maintenance.
Ku, Yixuan
2015-12-01
Top-down attention biases the short-term memory (STM) processing at multiple stages. Orienting attention during the maintenance period of STM by a retrospective cue (retro-cue) strengthens the representation of the cued item and improves the subsequent STM performance. In a recent article, Backer et al. (Backer KC, Binns MA, Alain C. J Neurosci 35: 1307-1318, 2015) extended these findings from the visual to the auditory domain and combined electroencephalography to dissociate neural mechanisms underlying feature-based and object-based attention orientation. Both event-related potentials and neural oscillations explained the behavioral benefits of retro-cues and favored the theory that feature-based and object-based attention orientation were independent. Copyright © 2015 the American Physiological Society.
An integrative, experience-based theory of attentional control.
Wilder, Matthew H; Mozer, Michael C; Wickens, Christopher D
2011-02-09
Although diverse, theories of visual attention generally share the notion that attention is controlled by some combination of three distinct strategies: (1) exogenous cuing from locally contrasting primitive visual features, such as abrupt onsets or color singletons (e.g., L. Itti, C. Koch, & E. Niebur, 1998), (2) endogenous gain modulation of exogenous activations, used to guide attention to task-relevant features (e.g., V. Navalpakkam & L. Itti, 2007; J. Wolfe, 1994, 2007), and (3) endogenous prediction of likely locations of interest, based on task and scene gist (e.g., A. Torralba, A. Oliva, M. Castelhano, & J. Henderson, 2006). However, little work has been done to synthesize these disparate theories. In this work, we propose a unifying conceptualization in which attention is controlled along two dimensions: the degree of task focus and the contextual scale of operation. Previously proposed strategies, and their combinations, can be viewed as instances of this one mechanism. Thus, this theory serves not as a replacement for existing models but as a means of bringing them into a coherent framework. We present an implementation of this theory and demonstrate its applicability to a wide range of attentional phenomena. The model accounts for key results in visual search with synthetic images and makes reasonable predictions for human eye movements in search tasks involving real-world images. In addition, the theory offers an unusual perspective on attention that places a fundamental emphasis on the role of experience and task-related knowledge.
A computational visual saliency model based on statistics and machine learning.
Lin, Ru-Je; Lin, Wei-Song
2014-08-01
Identifying the types of stimuli that attract human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful both to psychologists and in many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, a support vector regression (SVR) technique, statistical analysis of training samples, and information theory applied to low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better than 12 other models at predicting human visual attention regions in two test databases. © 2014 ARVO.
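The combination step lends itself to a short illustration. The sketch below is an assumption-laden reading of the abstract, treating the "simple intersection operation" as an elementwise product of three probability-like maps; the estimation of each property (similarity computation, SVR, statistics, information theory) is not reproduced here.

```python
import numpy as np

def combine_properties(feature_prior, position_prior, feature_distribution):
    """Combine three per-pixel property maps into a saliency map.

    The elementwise product is used as a stand-in for the paper's
    'simple intersection operation'; each input is assumed to be a
    probability-like map in [0, 1] of the same shape.
    """
    saliency = feature_prior * position_prior * feature_distribution
    return saliency / saliency.max() if saliency.max() > 0 else saliency

# Toy maps: a center-weighted position prior and random feature-based maps.
h, w = 48, 64
ys, xs = np.mgrid[0:h, 0:w]
position_prior = np.exp(-(((ys - h / 2) ** 2) + ((xs - w / 2) ** 2)) / (2 * (h / 3) ** 2))
feature_prior = np.random.rand(h, w)
feature_distribution = np.random.rand(h, w)
saliency_map = combine_properties(feature_prior, position_prior, feature_distribution)
```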
Finke, Kathrin; Schwarzkopf, Wolfgang; Müller, Ulrich; Frodl, Thomas; Müller, Hermann J; Schneider, Werner X; Engel, Rolf R; Riedel, Michael; Möller, Hans-Jürgen; Hennig-Fast, Kristina
2011-11-01
Attention deficit hyperactivity disorder (ADHD) persists frequently into adulthood. The decomposition of endophenotypes by means of experimental neuro-cognitive assessment has the potential to improve diagnostic assessment, evaluation of treatment response, and disentanglement of genetic and environmental influences. We assessed four parameters of attentional capacity and selectivity derived from simple psychophysical tasks (verbal report of briefly presented letter displays) and based on a "theory of visual attention." These parameters are mathematically independent, quantitative measures, and previous studies have shown that they are highly sensitive for subtle attention deficits. Potential reductions of attentional capacity, that is, of perceptual processing speed and working memory storage capacity, were assessed with a whole report paradigm. Furthermore, possible pathologies of attentional selectivity, that is, selection of task-relevant information and bias in the spatial distribution of attention, were measured with a partial report paradigm. A group of 30 unmedicated adult ADHD patients and a group of 30 demographically matched healthy controls were tested. ADHD patients showed significant reductions of working memory storage capacity of a moderate to large effect size. Perceptual processing speed, task-based, and spatial selection were unaffected. The results imply a working memory deficit as an important source of behavioral impairments. The theory of visual attention parameter working memory storage capacity might constitute a quantifiable and testable endophenotype of ADHD.
Tünnermann, Jan; Petersen, Anders; Scharlau, Ingrid
2015-03-02
Selective visual attention improves performance in many tasks. Among others, it leads to "prior entry"--earlier perception of an attended compared to an unattended stimulus. Whether this phenomenon is purely based on an increase of the processing rate of the attended stimulus or if a decrease in the processing rate of the unattended stimulus also contributes to the effect is, up to now, unanswered. Here we describe a novel approach to this question based on Bundesen's Theory of Visual Attention, which we use to overcome the limitations of earlier prior-entry assessment with temporal order judgments (TOJs) that only allow relative statements regarding the processing speed of attended and unattended stimuli. Prevalent models of prior entry in TOJs either indirectly predict a pure acceleration or cannot model the difference between acceleration and deceleration. In a paradigm that combines a letter-identification task with TOJs, we show that indeed acceleration of the attended and deceleration of the unattended stimuli conjointly cause prior entry. © 2015 ARVO.
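To make the rate-based account concrete, here is a minimal sketch (an assumption-laden illustration, not the authors' full model): if encoding times are exponentially distributed with TVA rate parameters $v_a$ for the attended and $v_u$ for the unattended stimulus, and both stimuli onset simultaneously, then

$$P(\text{attended encoded first}) = \frac{v_a}{v_a + v_u}.$$

The same probability results whether $v_a$ is raised or $v_u$ is lowered, which is why TOJs alone cannot separate acceleration from deceleration; combining the TOJ with a letter-identification task is what allows the two rates to be estimated in absolute rather than merely relative terms.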
Lin, Zhicheng
2013-11-01
Visual attention can be deployed to stimuli based on our willful, top-down goal (endogenous attention) or on their intrinsic saliency against the background (exogenous attention). Flexibility is thought to be a hallmark of endogenous attention, whereas decades of research show that exogenous attention is attracted to the retinotopic locations of the salient stimuli. However, to the extent that salient stimuli in the natural environment usually form specific spatial relations with the surrounding context and are dynamic, exogenous attention, to be adaptive, should embrace these structural regularities. Here we test a non-retinotopic, object-centered mechanism in exogenous attention, in which exogenous attention is dynamically attracted to a relative, object-centered location. Using a moving frame configuration, we presented two frames in succession, forming either apparent translational motion or a mirror reflection, with a completely uninformative, transient cue presented at one of the item locations in the first frame. Even though the cue is presented in a spatially separate frame, in both translation and mirror reflection, behavioral performance in visual search is better when the target in the second frame appears at the same relative location as the cue than when it appears at other locations. These results provide unambiguous evidence for non-retinotopic exogenous attention and further reveal an object-centered mechanism supporting flexible exogenous attention. Moreover, attentional generalization across mirror reflection may constitute an attentional correlate of perceptual generalization across lateral mirror images, supporting an adaptive, functional account of mirror-image confusion. Copyright © 2013 Elsevier B.V. All rights reserved.
Visual attention shifting in autism spectrum disorders.
Richard, Annette E; Lajiness-O'Neill, Renee
2015-01-01
Abnormal visual attention has been frequently observed in autism spectrum disorders (ASD). Abnormal shifting of visual attention is related to abnormal development of social cognition and has been identified as a key neuropsychological finding in ASD. Better characterizing attention shifting in ASD and its relationship with social functioning may help to identify new targets for intervention and improving social communication in these disorders. Thus, the current study investigated deficits in attention shifting in ASD as well as relationships between attention shifting and social communication in ASD and neurotypicals (NT). To investigate deficits in visual attention shifting in ASD, 20 ASD and 20 age- and gender-matched NT completed visual search (VS) and Navon tasks with attention-shifting demands as well as a set-shifting task. VS was a feature search task with targets defined in one of two dimensions; Navon required identification of a target letter presented at the global or local level. Psychomotor and processing speed were entered as covariates. Relationships between visual attention shifting, set shifting, and social functioning were also examined. ASD and NT showed comparable costs of shifting attention. However, psychomotor and processing speed were slower in ASD than in NT, and psychomotor and processing speed were positively correlated with attention-shifting costs on Navon and VS, respectively, for both groups. Attention shifting on VS and Navon were correlated among NT, while attention shifting on Navon was correlated with set shifting among ASD. Attention-shifting costs on Navon were positively correlated with restricted and repetitive behaviors among ASD. Relationships between attention shifting and psychomotor and processing speed, as well as relationships between measures of different aspects of visual attention shifting, suggest inefficient top-down influences over preattentive visual processing in ASD. Inefficient attention shifting may be related to restricted and repetitive behaviors in these disorders.
Gherri, Elena; Eimer, Martin
2011-04-01
The ability to drive safely is disrupted by cell phone conversations, and this has been attributed to a diversion of attention from the visual environment. We employed behavioral and ERP measures to study whether the attentive processing of spoken messages is, in itself, sufficient to produce visual-attentional deficits. Participants searched for visual targets defined by a unique feature (Experiment 1) or feature conjunction (Experiment 2), and simultaneously listened to narrated text passages that had to be recalled later (encoding condition), or heard backward-played speech sounds that could be ignored (control condition). Responses to targets were slower in the encoding condition, and ERPs revealed that the visual processing of search arrays and the attentional selection of target stimuli were less efficient in the encoding relative to the control condition. Results demonstrate that the attentional processing of visual information is impaired when concurrent spoken messages are encoded and maintained, in line with cross-modal links in selective attention, but inconsistent with the view that attentional resources are modality-specific. The distraction of visual attention by active listening could contribute to the adverse effects of cell phone use on driving performance.
Changes in the distribution of sustained attention alter the perceived structure of visual space.
Fortenbaugh, Francesca C; Robertson, Lynn C; Esterman, Michael
2017-02-01
Visual spatial attention is a critical process that allows for the selection and enhanced processing of relevant objects and locations. While studies have shown attentional modulations of perceived location and the representation of distance information across multiple objects, there remains disagreement regarding what influence spatial attention has on the underlying structure of visual space. The present study utilized a method of magnitude estimation in which participants judged the location of briefly presented targets within the boundaries of their individual visual fields in the absence of any other objects or boundaries. Spatial uncertainty of target locations was used to assess perceived locations across distributed and focused attention conditions without the use of external stimuli, such as visual cues. Across two experiments we tested locations along the cardinal and 45° oblique axes. We demonstrate that focusing attention within a region of space can expand the perceived size of visual space, even in cases where doing so makes performance less accurate. Moreover, the results of the present studies show that when fixation is actively maintained, focusing attention along a visual axis leads to an asymmetrical stretching of visual space that is predominantly focused across the central half of the visual field, consistent with an expansive gradient along the focus of voluntary attention. These results demonstrate that focusing sustained attention peripherally during active fixation leads to an asymmetrical expansion of visual space within the central visual field. Published by Elsevier Ltd.
Higher dietary diversity is related to better visual and auditory sustained attention.
Shiraseb, Farideh; Siassi, Fereydoun; Qorbani, Mostafa; Sotoudeh, Gity; Rostami, Reza; Narmaki, Elham; Yavari, Parvaneh; Aghasi, Mohadeseh; Shaibu, Osman Mohammed
2016-04-01
Attention is a complex cognitive function that is necessary for learning, for following social norms of behaviour and for effective performance of responsibilities and duties. It is especially important in sensitive occupations requiring sustained attention. Improvement of dietary diversity (DD) is recognised as an important factor in health promotion, but its association with sustained attention is unknown. The aim of this study was to determine the association between auditory and visual sustained attention and DD. A cross-sectional study was carried out on 400 women aged 20-50 years who attended sports clubs at Tehran Municipality. Sustained attention was evaluated on the basis of the Integrated Visual and Auditory Continuous Performance Test using Integrated Visual and Auditory software. A single 24-h dietary recall questionnaire was used for DD assessment. Dietary diversity scores (DDS) were determined using the FAO guidelines. The mean visual and auditory sustained attention scores were 40.2 (sd 35.2) and 42.5 (sd 38), respectively. The mean DDS was 4.7 (sd 1.5). After adjusting for age, education years, physical activity, energy intake and BMI, mean visual and auditory sustained attention showed a significant increase as the quartiles of DDS increased (P=0.001). In addition, the mean subscales of attention, including auditory consistency and vigilance, visual persistence, visual and auditory focus, speed, comprehension and full attention, increased significantly with increasing DDS (P<0.05). In conclusion, higher DDS is associated with better visual and auditory sustained attention.
Attending Globally or Locally: Incidental Learning of Optimal Visual Attention Allocation
ERIC Educational Resources Information Center
Beck, Melissa R.; Goldstein, Rebecca R.; van Lamsweerde, Amanda E.; Ericson, Justin M.
2018-01-01
Attention allocation determines the information that is encoded into memory. Can participants learn to optimally allocate attention based on what types of information are most likely to change? The current study examined whether participants could incidentally learn that changes to either high spatial frequency (HSF) or low spatial frequency (LSF)…
Designing Computer-Based Learning Contents: Influence of Digital Zoom on Attention
ERIC Educational Resources Information Center
Glaser, Manuela; Lengyel, Dominik; Toulouse, Catherine; Schwan, Stephan
2017-01-01
In the present study, we investigated the role of digital zoom as a tool for directing attention while looking at visual learning material. In particular, we analyzed whether minimal digital zoom functions similarly to a rhetorical device by cueing mental zooming of attention accordingly. Participants were presented either static film clips, film…
Object-Based Control of Attention Is Sensitive to Recent Experience
ERIC Educational Resources Information Center
Lee, Hyunkyu; Mozer, Michael C.; Kramer, Arthur F.; Vecera, Shaun P.
2012-01-01
How is attention guided by past experience? In visual search, numerous studies have shown that recent trials influence responses to the current trial. Repeating features such as color, shape, or location of a target facilitates performance. Here we examine whether recent experience also modulates a more abstract dimension of attentional control,…
Ruiz-Rizzo, Adriana L; Neitzel, Julia; Müller, Hermann J; Sorg, Christian; Finke, Kathrin
2018-01-01
Separable visual attention functions are assumed to rely on distinct but interacting neural mechanisms. Bundesen's "theory of visual attention" (TVA) allows the mathematical estimation of independent parameters that characterize individuals' visual attentional capacity (i.e., visual processing speed and visual short-term memory storage capacity) and selectivity functions (i.e., top-down control and spatial laterality). However, it is unclear whether these parameters distinctively map onto different brain networks obtained from intrinsic functional connectivity, which organizes slowly fluctuating ongoing brain activity. In our study, 31 demographically homogeneous healthy young participants performed whole- and partial-report tasks and underwent resting-state functional magnetic resonance imaging (rs-fMRI). Report accuracy was modeled using TVA to estimate, individually, the four TVA parameters. Networks encompassing cortical areas relevant for visual attention were derived from independent component analysis of rs-fMRI data: visual, executive control, right and left frontoparietal, and ventral and dorsal attention networks. Two TVA parameters were mapped on particular functional networks. First, participants with higher (vs. lower) visual processing speed showed lower functional connectivity within the ventral attention network. Second, participants with more (vs. less) efficient top-down control showed higher functional connectivity within the dorsal attention network and lower functional connectivity within the visual network. Additionally, higher performance was associated with higher functional connectivity between networks: specifically, between the ventral attention and right frontoparietal networks for visual processing speed, and between the visual and executive control networks for top-down control. The higher inter-network functional connectivity was related to lower intra-network connectivity. These results demonstrate that separable visual attention parameters that are assumed to constitute relatively stable traits correspond distinctly to the functional connectivity both within and between particular functional networks. This implies that individual differences in basic attention functions are represented by differences in the coherence of slowly fluctuating brain activity.
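As a rough illustration of the kind of analysis described, the sketch below computes within-network functional connectivity as the mean pairwise correlation of node time series and then correlates it with a TVA parameter across participants. All variable names and data are placeholders, not the study's materials or fitted values.

```python
import numpy as np

def within_network_fc(timeseries):
    """Mean pairwise Pearson correlation among a network's node time series.

    `timeseries` is a (nodes, timepoints) array; the upper triangle of the
    correlation matrix (excluding the diagonal) is averaged.
    """
    r = np.corrcoef(timeseries)
    iu = np.triu_indices_from(r, k=1)
    return r[iu].mean()

# Illustrative across-participant association (synthetic data, not the study's):
rng = np.random.default_rng(0)
n_subjects = 31
fc_ventral_attention = np.array(
    [within_network_fc(rng.standard_normal((10, 200))) for _ in range(n_subjects)]
)
processing_speed = rng.standard_normal(n_subjects)   # hypothetical TVA parameter C
r = np.corrcoef(fc_ventral_attention, processing_speed)[0, 1]
```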
Visual search and attention: an overview.
Davis, Elizabeth T; Palmer, John
2004-01-01
This special feature issue is devoted to attention and visual search. Attention is a central topic in psychology and visual search is both a versatile paradigm for the study of visual attention and a topic of study in itself. Visual search depends on sensory, perceptual, and cognitive processes. As a result, the search paradigm has been used to investigate a diverse range of phenomena. Manipulating the search task can vary the demands on attention. In turn, attention modulates visual search by selecting and limiting the information available at various levels of processing. Focusing on the intersection of attention and search provides a relatively structured window into the wide world of attentional phenomena. In particular, the effects of divided attention are illustrated by the effects of set size (the number of stimuli in a display) and the effects of selective attention are illustrated by cueing subsets of stimuli within the display. These two phenomena provide the starting point for the articles in this special issue. The articles are organized into four general topics to help structure the issues of attention and search.
Andersen, Søren K; Müller, Matthias M; Hillyard, Steven A
2015-07-08
Experiments that study feature-based attention have often examined situations in which selection is based on a single feature (e.g., the color red). However, in more complex situations relevant stimuli may not be set apart from other stimuli by a single defining property but by a specific combination of features. Here, we examined sustained attentional selection of stimuli defined by conjunctions of color and orientation. Human observers attended to one out of four concurrently presented superimposed fields of randomly moving horizontal or vertical bars of red or blue color to detect brief intervals of coherent motion. Selective stimulus processing in early visual cortex was assessed by recordings of steady-state visual evoked potentials (SSVEPs) elicited by each of the flickering fields of stimuli. We directly contrasted attentional selection of single features and feature conjunctions and found that SSVEP amplitudes on conditions in which selection was based on a single feature only (color or orientation) exactly predicted the magnitude of attentional enhancement of SSVEPs when attending to a conjunction of both features. Furthermore, enhanced SSVEP amplitudes elicited by attended stimuli were accompanied by equivalent reductions of SSVEP amplitudes elicited by unattended stimuli in all cases. We conclude that attentional selection of a feature-conjunction stimulus is accomplished by the parallel and independent facilitation of its constituent feature dimensions in early visual cortex. The ability to perceive the world is limited by the brain's processing capacity. Attention affords adaptive behavior by selectively prioritizing processing of relevant stimuli based on their features (location, color, orientation, etc.). We found that attentional mechanisms for selection of different features belonging to the same object operate independently and in parallel: concurrent attentional selection of two stimulus features is simply the sum of attending to each of those features separately. This result is key to understanding attentional selection in complex (natural) scenes, where relevant stimuli are likely to be defined by a combination of stimulus features. Copyright © 2015 the authors 0270-6474/15/359912-08$15.00/0.
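A toy numeric sketch of the additive account supported by these data (the numbers are invented for illustration, not measured amplitudes): the attentional enhancement observed when attending a conjunction is predicted by summing the enhancements measured when attending each feature alone.

```python
# Illustrative SSVEP amplitudes in arbitrary units; not data from the study.
baseline = 1.00              # amplitude when the stimulus is unattended
delta_color = 0.20           # enhancement when only its color is attended
delta_orientation = 0.15     # enhancement when only its orientation is attended

predicted_conjunction = baseline + delta_color + delta_orientation
print(predicted_conjunction)  # 1.35: parallel, independent facilitation of both features
```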
Dissociation between the neural correlates of conscious face perception and visual attention.
Navajas, Joaquin; Nitka, Aleksander W; Quian Quiroga, Rodrigo
2017-08-01
Given the higher chance to recognize attended compared to unattended stimuli, the specific neural correlates of these two processes, attention and awareness, tend to be intermingled in experimental designs. In this study, we dissociated the neural correlates of conscious face perception from the effects of visual attention. To do this, we presented faces at the threshold of awareness and manipulated attention through the use of exogenous prestimulus cues. We show that the N170 component, a scalp EEG marker of face perception, was modulated independently by attention and by awareness. An earlier P1 component was not modulated by either of the two effects and a later P3 component was indicative of awareness but not of attention. These claims are supported by converging evidence from (a) modulations observed in the average evoked potentials, (b) correlations between neural and behavioral data at the single-subject level, and (c) single-trial analyses. Overall, our results show a clear dissociation between the neural substrates of attention and awareness. Based on these results, we argue that conscious face perception is triggered by a boost in face-selective cortical ensembles that can be modulated by, but are still independent from, visual attention. © 2017 Society for Psychophysiological Research.
Neural oscillatory deficits in schizophrenia predict behavioral and neurocognitive impairments
Martínez, Antígona; Gaspar, Pablo A.; Hillyard, Steven A.; Bickel, Stephan; Lakatos, Peter; Dias, Elisa C.; Javitt, Daniel C.
2015-01-01
Paying attention to visual stimuli is typically accompanied by event-related desynchronizations (ERD) of ongoing alpha (7–14 Hz) activity in visual cortex. The present study used time-frequency based analyses to investigate the role of impaired alpha ERD in visual processing deficits in schizophrenia (Sz). Subjects viewed sinusoidal gratings of high (HSF) and low (LSF) spatial frequency (SF) designed to test functioning of the parvo- vs. magnocellular pathways, respectively. Patients with Sz and healthy controls paid attention selectively to either the LSF or HSF gratings which were presented in random order. Event-related brain potentials (ERPs) were recorded to all stimuli. As in our previous study, it was found that Sz patients were selectively impaired at detecting LSF target stimuli and that ERP amplitudes to LSF stimuli were diminished, both for the early sensory-evoked components and for the attend minus unattend difference component (the Selection Negativity), which is generally regarded as a specific index of feature-selective attention. In the time-frequency domain, the differential ERP deficits to LSF stimuli were echoed in a virtually absent theta-band phase locked response to both unattended and attended LSF stimuli (along with relatively intact theta-band activity for HSF stimuli). In contrast to the theta-band evoked responses which were tightly stimulus locked, stimulus-induced desynchronizations of ongoing alpha activity were not tightly stimulus locked and were apparent only in induced power analyses. Sz patients were significantly impaired in the attention-related modulation of ongoing alpha activity for both HSF and LSF stimuli. These deficits correlated with patients’ behavioral deficits in visual information processing as well as with visually based neurocognitive deficits. These findings suggest an additional, pathway-independent, mechanism by which deficits in early visual processing contribute to overall cognitive impairment in Sz. PMID:26190988
Learning-based saliency model with depth information.
Ma, Chih-Yao; Hang, Hsueh-Ming
2015-01-01
Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.
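To illustrate the general learning setup, and only as a hedged sketch rather than the paper's pipeline, the code below trains a support vector regression on low-level 2D features plus depth-derived features to predict per-patch fixation density; the feature set and data are placeholders.

```python
import numpy as np
from sklearn.svm import SVR

# Illustrative training setup: each row holds a patch's low-level 2D features
# plus depth-derived features; the target is that patch's fixation density.
# Feature choices are placeholders, not the paper's exact descriptors.
rng = np.random.default_rng(1)
n_patches = 500
X = np.column_stack([
    rng.random(n_patches),   # luminance contrast
    rng.random(n_patches),   # color contrast
    rng.random(n_patches),   # texture energy
    rng.random(n_patches),   # depth
    rng.random(n_patches),   # depth contrast vs. surround
])
y = rng.random(n_patches)    # fixation density from eye-tracking data

model = SVR(kernel="rbf", C=1.0, epsilon=0.05)
model.fit(X, y)
predicted_saliency = model.predict(X)
```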
Infant visual attention and object recognition.
Reynolds, Greg D
2015-05-15
This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. Copyright © 2015 Elsevier B.V. All rights reserved.
Fu, Kun; Jin, Junqi; Cui, Runpeng; Sha, Fei; Zhang, Changshui
2017-12-01
Recent progress on automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image captioning system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience where the attention shifts among the visual regions-such transitions impose a thread of ordering in visual perception. This alignment characterizes the flow of latent meaning, which encodes what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system and contrast to published results on several popular datasets, using both automatic evaluation metrics and human evaluation. We show that either region-based attention or scene-specific contexts improves systems without those components. Furthermore, combining these two modeling ingredients attains the state-of-the-art performance.
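A minimal sketch of one region-based attention step, under the usual soft-attention assumptions rather than the paper's exact architecture: region features are scored against the decoder state, normalized with a softmax, and averaged into a context vector used to generate the next word. The scene-specific context component is omitted, and all shapes and names are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(region_features, decoder_state, W):
    """One region-based attention step (illustrative, not the paper's model).

    region_features: (n_regions, d) visual features, one row per image region.
    decoder_state:   (h,) hidden state of the word-generation RNN.
    W:               (d, h) learned projection scoring regions against the state.
    Returns the attention weights and the context vector for the next word.
    """
    scores = region_features @ W @ decoder_state   # (n_regions,)
    alpha = softmax(scores)                        # attention over regions
    context = alpha @ region_features              # (d,) weighted visual context
    return alpha, context

# Toy usage with random tensors standing in for CNN region features.
rng = np.random.default_rng(2)
regions = rng.standard_normal((14, 64))
state = rng.standard_normal(32)
W = rng.standard_normal((64, 32))
alpha, context = attend(regions, state, W)
```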
An independent brain-computer interface using covert non-spatial visual selective attention
NASA Astrophysics Data System (ADS)
Zhang, Dan; Maye, Alexander; Gao, Xiaorong; Hong, Bo; Engel, Andreas K.; Gao, Shangkai
2010-02-01
In this paper, a novel independent brain-computer interface (BCI) system based on covert non-spatial visual selective attention of two superimposed illusory surfaces is described. Perception of two superimposed surfaces was induced by two sets of dots with different colors rotating in opposite directions. The surfaces flickered at different frequencies and elicited distinguishable steady-state visual evoked potentials (SSVEPs) over parietal and occipital areas of the brain. By selectively attending to one of the two surfaces, the SSVEP amplitude at the corresponding frequency was enhanced. An online BCI system utilizing the attentional modulation of SSVEP was implemented and a 3-day online training program with healthy subjects was carried out. The study was conducted with Chinese subjects at Tsinghua University, and German subjects at University Medical Center Hamburg-Eppendorf (UKE) using identical stimulation software and equivalent technical setup. A general improvement of control accuracy with training was observed in 8 out of 18 subjects. An averaged online classification accuracy of 72.6 ± 16.1% was achieved on the last training day. The system renders SSVEP-based BCI paradigms possible for paralyzed patients with substantial head or ocular motor impairments by employing covert attention shifts instead of changing gaze direction.
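The classification principle can be sketched in a few lines. This is an illustrative reduction, not the study's online pipeline (which involves training, channel selection, and a multi-day protocol): compare spectral amplitudes at the two surfaces' flicker frequencies and pick the larger. The frequencies below are example values, not those used in the experiment.

```python
import numpy as np

def classify_attended_surface(eeg_epoch, fs, freqs=(10.0, 12.0)):
    """Pick which flicker frequency dominates an EEG epoch's spectrum.

    eeg_epoch: (n_samples,) single-channel EEG from parieto-occipital sites.
    fs:        sampling rate in Hz.
    freqs:     the two surfaces' flicker frequencies (example values only).
    Returns the index (0 or 1) of the frequency with the larger SSVEP amplitude.
    """
    spectrum = np.abs(np.fft.rfft(eeg_epoch))
    faxis = np.fft.rfftfreq(len(eeg_epoch), d=1.0 / fs)
    amplitudes = [spectrum[np.argmin(np.abs(faxis - f))] for f in freqs]
    return int(np.argmax(amplitudes))

# Toy epoch: noise plus a 12 Hz component, as if the second surface were attended.
fs = 250
t = np.arange(0, 3, 1.0 / fs)
epoch = np.sin(2 * np.pi * 12.0 * t) + 0.5 * np.random.randn(t.size)
print(classify_attended_surface(epoch, fs))   # expected: 1
```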
Spatial Scaling of the Profile of Selective Attention in the Visual Field.
Gannon, Matthew A; Knapp, Ashley A; Adams, Thomas G; Long, Stephanie M; Parks, Nathan A
2016-01-01
Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provides insights into the attentional mechanisms associated with such spatial scaling.
Vaidya, Avinash R; Fellows, Lesley K
2015-09-16
Adaptively interacting with our environment requires extracting information that will allow us to successfully predict reward. This can be a challenge, particularly when there are many candidate cues, and when rewards are probabilistic. Recent work has demonstrated that visual attention is allocated to stimulus features that have been associated with reward on previous trials. The ventromedial frontal lobe (VMF) has been implicated in learning in dynamic environments of this kind, but the mechanism by which this region influences this process is not clear. Here, we hypothesized that the VMF plays a critical role in guiding attention to reward-predictive stimulus features based on feedback. We tested the effects of VMF damage in human subjects on a visual search task in which subjects were primed to attend to task-irrelevant colors associated with different levels of reward, incidental to the search task. Consistent with previous work, we found that distractors had a greater influence on reaction time when they appeared in colors associated with high reward in the previous trial compared with colors associated with low reward in healthy control subjects and patients with prefrontal damage sparing the VMF. However, this reward modulation of attentional priming was absent in patients with VMF damage. Thus, an intact VMF is necessary for directing attention based on experience with cue-reward associations. We suggest that this region plays a role in selecting reward-predictive cues to facilitate future learning. There has been a swell of interest recently in the ventromedial frontal cortex (VMF), a brain region critical to associative learning. However, the underlying mechanism by which this region guides learning is not well understood. Here, we tested the effects of damage to this region in humans on a task in which rewards were linked incidentally to visual features, resulting in trial-by-trial attentional priming. Controls and subjects with prefrontal damage sparing the VMF showed normal reward priming, but VMF-damaged patients did not. This work sheds light on a potential mechanism through which this region influences behavior. We suggest that the VMF is necessary for directing attention to reward-predictive visual features based on feedback, facilitating future learning and decision-making. Copyright © 2015 the authors 0270-6474/15/3512813-11$15.00/0.
The role of early visual cortex in visual short-term memory and visual attention.
Offen, Shani; Schluppeck, Denis; Heeger, David J
2009-06-01
We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.
Characterizing the effects of feature salience and top-down attention in the early visual system.
Poltoratski, Sonia; Ling, Sam; McCormack, Devin; Tong, Frank
2017-07-01
The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. We used high-resolution functional MRI (fMRI) at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or nonsalient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, whereas the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas. NEW & NOTEWORTHY While spatial attention allows for specific, goal-driven enhancement of stimuli, salient items outside of the current focus of attention must also be prioritized. We used 7T fMRI to compare salience and spatial attentional enhancement along the early visual hierarchy. We report additive effects of attention and bottom-up salience in early visual areas, suggesting that salience enhancement is not contingent on the observer's attentional state. Copyright © 2017 the American Physiological Society.
Subliminally presented and stored objects capture spatial attention.
Astle, Duncan E; Nobre, Anna C; Scerif, Gaia
2010-03-10
When objects disappear from view, we can still bring them to mind, at least for brief periods of time, because we can represent those objects in visual short-term memory (VSTM) (Sperling, 1960; Cowan, 2001). A defining characteristic of this representation is that it is topographic, that is, it preserves a spatial organization based on the original visual percept (Vogel and Machizawa, 2004; Astle et al., 2009; Kuo et al., 2009). Recent research has also shown that features or locations of visual items that match those being maintained in conscious VSTM automatically capture our attention (Awh and Jonides, 2001; Olivers et al., 2006; Soto et al., 2008). But do objects leave some trace that can guide spatial attention, even without participants intentionally remembering them? Furthermore, could subliminally presented objects leave a topographically arranged representation that can capture attention? We presented objects either supraliminally or subliminally and then 1 s later re-presented one of those objects in a new location, as a "probe" shape. As participants made an arbitrary perceptual judgment on the probe shape, their covert spatial attention was drawn to the original location of that shape, regardless of whether its initial presentation had been supraliminal or subliminal. We demonstrate this with neural and behavioral measures of memory-driven attentional capture. These findings reveal the existence of a topographically arranged store of "visual" objects, the content of which is beyond our explicit awareness but which nonetheless guides spatial attention.
Evidence for deficits in the temporal attention span of poor readers.
Visser, Troy A W
2014-01-01
While poor reading is often associated with phonological deficits, many studies suggest that visual processing might also be impaired. In particular, recent research has indicated that poor readers show impaired spatial visual attention spans in partial and whole report tasks. Given the similarities between competition-based accounts for reduced visual attention span and similar explanations for impairments in sequential object processing, the present work examined whether poor readers show deficits in their "temporal attention span"--that is, their ability to rapidly and accurately process sequences of consecutive target items. Poor and normal readers monitored a sequential stream of visual items for two (TT condition) or three (TTT condition) consecutive target digits. Target identification was examined using both unconditional and conditional measures of accuracy in order to gauge the overall likelihood of identifying a target and the likelihood of identifying a target given successful identification of previous items. Compared to normal readers, poor readers showed small but consistent deficits in identification across targets whether unconditional or conditional accuracy was used. Additionally, in the TTT condition, final-target conditional accuracy was poorer than unconditional accuracy, particularly for poor readers, suggesting a substantial cost arising from processing the previous two targets that was not present in normal readers. Mirroring the differences found between poor and normal readers in spatial visual attention span, the present findings suggest two principal differences between the temporal attention spans of poor and normal readers. First, the consistent pattern of reduced performance across targets suggests increased competition amongst items within the same span for poor readers. Second, the steeper decline in final target performance amongst poor readers in the TTT condition suggests a reduction in the extent of their temporal attention span.
Rapid acquisition but slow extinction of an attentional bias in space.
Jiang, Yuhong V; Swallow, Khena M; Rosenbaum, Gail M; Herzig, Chelsey
2013-02-01
Substantial research has focused on the allocation of spatial attention based on goals or perceptual salience. In everyday life, however, people also direct attention using their previous experience. Here we investigate the pace at which people incidentally learn to prioritize specific locations. Participants searched for a T among Ls in a visual search task. Unbeknownst to them, the target was more often located in one region of the screen than in other regions. An attentional bias toward the rich region developed over dozens of trials. However, the bias did not rapidly readjust to new contexts. It persisted for at least a week and for hundreds of trials after the target's position became evenly distributed. The persistence of the bias did not reflect a long window over which visual statistics were calculated. Long-term persistence differentiates incidentally learned attentional biases from the more flexible goal-driven attention. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Awh, E; Anllo-Vento, L; Hillyard, S A
2000-09-01
We investigated the hypothesis that the covert focusing of spatial attention mediates the on-line maintenance of location information in spatial working memory. During the delay period of a spatial working-memory task, behaviorally irrelevant probe stimuli were flashed at both memorized and nonmemorized locations. Multichannel recordings of event-related potentials (ERPs) were used to assess visual processing of the probes at the different locations. Consistent with the hypothesis of attention-based rehearsal, early ERP components were enlarged in response to probes that appeared at memorized locations. These visual modulations were similar in latency and topography to those observed after explicit manipulations of spatial selective attention in a parallel experimental condition that employed an identical stimulus display.
Reimer, Christina B; Strobach, Tilo; Schubert, Torsten
2017-12-01
Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features, resulting in a serial search process around the items in the search display (i.e., the set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP) that reflects lateralized visual attention processes. If the response selection processes in Task 1 influenced the visual attention processes in Task 2, the N2pc should be delayed in latency and attenuated in amplitude at short SOA compared to long SOA. The results, however, showed that N2pc latency and amplitude were independent of SOA, indicating that visual attention was deployed concurrently with response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.
Using frequency tagging to quantify attentional deployment in a visual divided attention task.
Toffanin, Paolo; de Jong, Ritske; Johnson, Addie; Martens, Sander
2009-06-01
Frequency tagging is an EEG method based on the quantification of the steady state visual evoked potential (SSVEP) elicited from stimuli which flicker with a distinctive frequency. Because the amplitude of the SSVEP is modulated by attention such that attended stimuli elicit higher SSVEP amplitudes than do ignored stimuli, the method has been used to investigate the neural mechanisms of spatial attention. However, up to now it has not been shown whether the amplitude of the SSVEP is sensitive to gradations of attention and there has been debate about whether attention effects on the SSVEP are dependent on the tagging frequency used. We thus compared attention effects on SSVEP across three attention conditions-focused, divided, and ignored-with six different tagging frequencies. Participants performed a visual detection task (respond to the digit 5 embedded in a stream of characters). Two stimulus streams, one to the left and one to the right of fixation, were displayed simultaneously, each with a background grey square whose hue was sine-modulated with one of the six tagging frequencies. At the beginning of each trial a cue indicated whether targets on the left, right, or both sides should be responded to. Accuracy was higher in the focused- than in the divided-attention condition. SSVEP amplitudes were greatest in the focused-attention condition, intermediate in the divided-attention condition, and smallest in the ignored-attention condition. The effect of attention on SSVEP amplitude did not depend on the tagging frequency used. Frequency tagging appears to be a flexible technique for studying attention.
Attention Increases Spike Count Correlations between Visual Cortical Areas.
Ruff, Douglas A; Cohen, Marlene R
2016-07-13
Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys performed an attention task. We found that attention increased shared variability between neurons in the two areas and that attention increased the effect of microstimulation in V1 on the firing rates of MT neurons. Our results provide support for the hypothesis that attention increases communication between neurons in different brain areas on behaviorally relevant timescales. Copyright © 2016 the authors 0270-6474/16/367523-12$15.00/0.
Attraction of position preference by spatial attention throughout human visual cortex.
Klein, Barrie P; Harvey, Ben M; Dumoulin, Serge O
2014-10-01
Voluntary spatial attention concentrates neural resources at the attended location. Here, we examined the effects of spatial attention on spatial position selectivity in humans. We measured population receptive fields (pRFs) using high-field functional MRI (fMRI) (7T) while subjects performed an attention-demanding task at different locations. We show that spatial attention attracts pRF preferred positions across the entire visual field, not just at the attended location. This global change in pRF preferred positions systematically increases up the visual hierarchy. We model these pRF preferred position changes as an interaction between two components: an attention field and a pRF without the influence of attention. This computational model suggests that increasing effects of attention up the hierarchy result primarily from differences in pRF size and that the attention field is similar across the visual hierarchy. A similar attention field suggests that spatial attention transforms different neural response selectivities throughout the visual hierarchy in a similar manner. Copyright © 2014 Elsevier Inc. All rights reserved.
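The two-component model described here is consistent with a product-of-Gaussians formulation, in which the measured pRF is the product of a stimulus-driven Gaussian pRF and a Gaussian attention field; the product's center lies between the two, and the shift toward the attended location grows with pRF size, matching the reported increase up the hierarchy. The sketch below works out that product in one dimension under this assumption; the example numbers are arbitrary, not values from the study.

```python
import numpy as np

def attracted_prf_center(mu_prf, sigma_prf, mu_attn, sigma_attn):
    """Center and width of the product of a Gaussian pRF and a Gaussian
    attention field (1-D sketch). The shift toward the attended location
    grows with pRF size."""
    w = sigma_attn**2 + sigma_prf**2
    mu_new = (mu_prf * sigma_attn**2 + mu_attn * sigma_prf**2) / w
    sigma_new = np.sqrt((sigma_prf**2 * sigma_attn**2) / w)
    return mu_new, sigma_new

# Hypothetical example: attention field at 5 deg, width 3 deg; pRF centered at 0 deg.
# attracted_prf_center(0, 1.0, 5.0, 3.0)  -> center shifts by ~0.5 deg (small, V1-like pRF)
# attracted_prf_center(0, 4.0, 5.0, 3.0)  -> center shifts by ~3.2 deg (large, higher-area pRF)
```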
Evolution of attention mechanisms for early visual processing
NASA Astrophysics Data System (ADS)
Müller, Thomas; Knoll, Alois
2011-03-01
Early visual processing as a method to speed up computations on visual input data has long been discussed in the computer vision community. The general aim of such approaches is to filter out irrelevant information before it reaches the costly higher-level visual processing algorithms. By inserting this additional filter layer, the overall approach can be sped up without actually changing the visual processing methodology. Inspired by the layered architecture of the human visual processing apparatus, several approaches for early visual processing have recently been proposed. Most promising in this field is the extraction of a saliency map to determine regions of current attention in the visual field. Such saliency can be computed in a bottom-up manner, i.e., the theory claims that static regions of attention emerge from a certain color footprint, and dynamic regions of attention emerge from connected blobs of texture moving in a uniform way across the visual field. Top-down saliency effects are either unconscious, through inherent mechanisms like inhibition of return (i.e., within a period of time the attention paid to a certain region automatically decreases if the properties of that region do not change), or volitional, through cognitive feedback (e.g., if an object moves consistently in the visual field). These bottom-up and top-down saliency effects have been implemented and evaluated in a previous computer vision system for the project JAST. In this paper an extension applying evolutionary processes is proposed. The prior vision system utilized multiple threads to analyze the regions of attention delivered by the early processing mechanism. Here, in addition, multiple saliency units, each with a different parameter set, are used to produce these regions of attention. The idea is to let the population of saliency units create regions of attention, evaluate the results with cognitive feedback, and finally apply the genetic mechanism: mutation and cloning of the best performers and extinction of the worst performers with respect to the computation of regions of attention. A fitness function can be derived by evaluating whether relevant objects are found in the regions created. Various experiments show that the approach significantly speeds up visual processing, especially for robust real-time object recognition, compared with an approach that does not use saliency-based preprocessing. Furthermore, the evolutionary algorithm improves the overall quality of the preprocessing system, as it automatically and autonomously tunes the saliency parameters. The computational overhead produced by periodic clone/delete/mutate operations can be handled well within the real-time constraints of the experimental computer vision system. Nevertheless, limitations apply whenever the visual field contains no significant saliency information for some time while the population still tries to tune its parameters; in this case overfitting prevents generalization, and the evolutionary process may be reset by manual intervention.
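The evolutionary loop described in this abstract can be summarized compactly: a population of saliency units with different parameter sets proposes regions of attention, a fitness score is derived from cognitive feedback (whether relevant objects were found in those regions), the best performers are cloned and mutated, and the worst are removed. The sketch below is a generic illustration of that loop; the parameter dictionaries, saliency routine, and feedback function are placeholders, not the JAST system's implementation.

```python
import random

def evolve_saliency_units(population, compute_regions, object_found_in,
                          n_keep=4, mutation_sd=0.1):
    """One generation of the clone/mutate/extinguish cycle over saliency
    parameter sets. `population` is a list of dicts of numeric saliency
    parameters; `compute_regions(params)` returns candidate regions of
    attention; `object_found_in(region)` is the cognitive feedback
    (all three are placeholder callables)."""
    scored = []
    for params in population:
        regions = compute_regions(params)
        # Fitness: fraction of proposed regions that contained a relevant object.
        fitness = sum(object_found_in(r) for r in regions) / max(len(regions), 1)
        scored.append((fitness, params))
    scored.sort(key=lambda t: t[0], reverse=True)
    survivors = [p for _, p in scored[:n_keep]]           # extinction of the worst performers
    offspring = []
    for parent in survivors:                               # cloning plus Gaussian mutation of the best
        child = {k: v + random.gauss(0.0, mutation_sd) for k, v in parent.items()}
        offspring.append(child)
    return survivors + offspring
```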
Perceptual organization and visual attention.
Kimchi, Ruth
2009-01-01
Perceptual organization--the processes structuring visual information into coherent units--and visual attention--the processes by which some visual information in a scene is selected--are crucial for the perception of our visual environment and to visuomotor behavior. Recent research points to important relations between attentional and organizational processes. Several studies demonstrated that perceptual organization constrains attentional selectivity, and other studies suggest that attention can also constrain perceptual organization. In this chapter I focus on two aspects of the relationship between perceptual organization and attention. The first addresses the question of whether or not perceptual organization can take place without attention. I present findings demonstrating that some forms of grouping and figure-ground segmentation can occur without attention, whereas others require controlled attentional processing, depending on the processes involved and the conditions prevailing for each process. These findings challenge the traditional view, which assumes that perceptual organization is a unitary entity that operates preattentively. The second issue addresses the question of whether perceptual organization can affect the automatic deployment of attention. I present findings showing that the mere organization of some elements in the visual field by Gestalt factors into a coherent perceptual unit (an "object"), with no abrupt onset or any other unique transient, can capture attention automatically in a stimulus-driven manner. Taken together, the findings discussed in this chapter demonstrate the multifaceted, interactive relations between perceptual organization and visual attention.
Distinctive Correspondence Between Separable Visual Attention Functions and Intrinsic Brain Networks
Ruiz-Rizzo, Adriana L.; Neitzel, Julia; Müller, Hermann J.; Sorg, Christian; Finke, Kathrin
2018-01-01
Separable visual attention functions are assumed to rely on distinct but interacting neural mechanisms. Bundesen's “theory of visual attention” (TVA) allows the mathematical estimation of independent parameters that characterize individuals' visual attentional capacity (i.e., visual processing speed and visual short-term memory storage capacity) and selectivity functions (i.e., top-down control and spatial laterality). However, it is unclear whether these parameters distinctively map onto different brain networks obtained from intrinsic functional connectivity, which organizes slowly fluctuating ongoing brain activity. In our study, 31 demographically homogeneous healthy young participants performed whole- and partial-report tasks and underwent resting-state functional magnetic resonance imaging (rs-fMRI). Report accuracy was modeled using TVA to estimate, individually, the four TVA parameters. Networks encompassing cortical areas relevant for visual attention were derived from independent component analysis of rs-fMRI data: visual, executive control, right and left frontoparietal, and ventral and dorsal attention networks. Two TVA parameters were mapped on particular functional networks. First, participants with higher (vs. lower) visual processing speed showed lower functional connectivity within the ventral attention network. Second, participants with more (vs. less) efficient top-down control showed higher functional connectivity within the dorsal attention network and lower functional connectivity within the visual network. Additionally, higher performance was associated with higher functional connectivity between networks: specifically, between the ventral attention and right frontoparietal networks for visual processing speed, and between the visual and executive control networks for top-down control. The higher inter-network functional connectivity was related to lower intra-network connectivity. These results demonstrate that separable visual attention parameters that are assumed to constitute relatively stable traits correspond distinctly to the functional connectivity both within and between particular functional networks. This implies that individual differences in basic attention functions are represented by differences in the coherence of slowly fluctuating brain activity. PMID:29662444
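For reference, the TVA parameters estimated here derive from Bundesen's (1990) rate equation for the speed at which the categorization "object x belongs to category i" is encoded into visual short-term memory:

\[
v(x,i) \;=\; \eta(x,i)\,\beta_i\,\frac{w_x}{\sum_{z \in S} w_z},
\qquad
w_x \;=\; \sum_{j \in R} \eta(x,j)\,\pi_j ,
\]

where \eta(x,i) is the sensory evidence that x belongs to i, \beta_i the decision bias for category i, \pi_j the pertinence of category j, S the set of display elements, and R the set of categories. Visual processing speed C corresponds to the total rate summed across objects and categories, storage capacity K caps the number of retained categorizations, and top-down control \alpha is conventionally indexed by the ratio of distractor to target attentional weights. This is the standard textbook formulation of TVA, not notation reproduced from the study itself.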
Attention improves encoding of task-relevant features in the human visual cortex
Jehee, Janneke F.M.; Brady, Devin K.; Tong, Frank
2011-01-01
When spatial attention is directed towards a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer’s task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature, and not when the grating’s contrast had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks but color-selective responses were enhanced only when color was task-relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location, but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features, and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information. PMID:21632942
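The decoding analysis referred to here is, in general terms, a cross-validated multivariate classifier applied to voxel activation patterns, with the strength of orientation-selective activity summarized as decoding performance per visual area and attention condition. The sketch below shows one generic way to compute such a measure with scikit-learn; the data arrays, classifier choice, and fold count are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def orientation_decoding_accuracy(voxel_patterns, orientation_labels, n_folds=5):
    """Cross-validated decoding of stimulus orientation from voxel patterns.

    voxel_patterns     : array, shape (n_trials, n_voxels) - single-trial response
                         estimates from one visual area (hypothetical data).
    orientation_labels : array, shape (n_trials,) - orientation category per trial.
    """
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, voxel_patterns, orientation_labels, cv=n_folds)
    return scores.mean()

# Hypothetical comparison: decoding accuracy when orientation is task-relevant
# versus when contrast is task-relevant, computed separately for areas V1-V4.
```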
Qin, Shuo; Ray, Nicholas R; Ramakrishnan, Nithya; Nashiro, Kaoru; O'Connell, Margaret A; Basak, Chandramallika
2016-11-01
Overloading the capacity of visual attention can result in mistakenly combining the various features of an object, that is, illusory conjunctions. We hypothesize that if the two hemispheres separately process visual information by splitting attention, connectivity of the corpus callosum, a brain structure integrating the two hemispheres, would predict the degree of illusory conjunctions. In the current study, we assessed two types of illusory conjunctions using a memory-scanning paradigm; the features were presented either across the two opposite hemifields or within the same hemifield. Four objects, each with two visual features, were briefly presented together, followed by probe recognition and a confidence rating of recognition accuracy. MRI scans were also obtained. Results indicated that successful recollection during probe recognition was better for across-hemifield conjunctions than for within-hemifield conjunctions, lending support to the bilateral advantage of the two hemispheres in visual short-term memory. Age-related differences in the mechanisms underlying the bilateral advantage indicated greater reliance on recollection-based processing in younger adults and on familiarity-based processing in older adults. Moreover, the integrity of the posterior corpus callosum was more predictive of opposite-hemifield illusory conjunctions than of within-hemifield illusory conjunctions, even after controlling for age. That is, individuals with lower posterior corpus callosum connectivity had better recognition for objects when their features were recombined from the opposite hemifields than from the same hemifield. This study is the first to investigate the role of the corpus callosum in splitting attention between versus within hemifields. © 2016 Society for Psychophysiological Research.
Visual working memory simultaneously guides facilitation and inhibition during visual search.
Dube, Blaire; Basciano, April; Emrich, Stephen M; Al-Aidroos, Naseem
2016-07-01
During visual search, visual working memory (VWM) supports the guidance of attention in two ways: It stores the identity of the search target, facilitating the selection of matching stimuli in the search array, and it maintains a record of the distractors processed during search so that they can be inhibited. In two experiments, we investigated whether the full contents of VWM can be used to support both of these abilities simultaneously. In Experiment 1, participants completed a preview search task in which (a) a subset of search distractors appeared before the remainder of the search items, affording participants the opportunity to inhibit them, and (b) the search target varied from trial to trial, requiring the search target template to be maintained in VWM. We observed the established signature of VWM-based inhibition-reduced ability to ignore previewed distractors when the number of distractors exceeds VWM's capacity-suggesting that VWM can serve this role while also representing the target template. In Experiment 2, we replicated Experiment 1, but added to the search displays a singleton distractor that sometimes matched the color (a task-irrelevant feature) of the search target, to evaluate capture. We again observed the signature of VWM-based preview inhibition along with attentional capture by (and, thus, facilitation of) singletons matching the target template. These findings indicate that more than one VWM representation can bias attention at a time, and that these representations can separately affect selection through either facilitation or inhibition, placing constraints on existing models of the VWM-based guidance of attention.
Spatial attention improves the quality of population codes in human visual cortex.
Saproo, Sameer; Serences, John T
2010-08-01
Selective attention enables sensory input from behaviorally relevant stimuli to be processed in greater detail, so that these stimuli can more accurately influence thoughts, actions, and future goals. Attention has been shown to modulate the spiking activity of single feature-selective neurons that encode basic stimulus properties (color, orientation, etc.). However, the combined output from many such neurons is required to form stable representations of relevant objects and little empirical work has formally investigated the relationship between attentional modulations on population responses and improvements in encoding precision. Here, we used functional MRI and voxel-based feature tuning functions to show that spatial attention induces a multiplicative scaling in orientation-selective population response profiles in early visual cortex. In turn, this multiplicative scaling correlates with an improvement in encoding precision, as evidenced by a concurrent increase in the mutual information between population responses and the orientation of attended stimuli. These data therefore demonstrate how multiplicative scaling of neural responses provides at least one mechanism by which spatial attention may improve the encoding precision of population codes. Increased encoding precision in early visual areas may then enhance the speed and accuracy of perceptual decisions computed by higher-order neural mechanisms.
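As a toy illustration of the reported link between multiplicative gain and encoding precision, the sketch below scales an orientation tuning curve by an attentional gain factor, adds response noise of fixed variance, and estimates the mutual information between stimulus orientation and response by discretizing both variables. The tuning shape, gain, and noise level are arbitrary choices for the sketch, not values estimated in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(x, y, bins=12):
    """Histogram-based estimate of I(X; Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

def simulate_responses(theta, gain, noise_sd=0.3):
    """Noisy responses of one orientation-tuned channel (von Mises-like tuning)."""
    tuning = np.exp(np.cos(2 * (theta - np.pi / 2)))
    return gain * tuning + rng.normal(0, noise_sd, theta.size)

theta = rng.uniform(0, np.pi, 20000)                 # stimulus orientations across trials
r_unattended = simulate_responses(theta, gain=1.0)
r_attended = simulate_responses(theta, gain=1.5)     # multiplicative attentional scaling
print(mutual_information(theta, r_unattended),       # lower information without gain
      mutual_information(theta, r_attended))         # higher information with gain
```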
Bartsch, Mandy V; Loewe, Kristian; Merkel, Christian; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Tsotsos, John K; Hopf, Jens-Max
2017-10-25
Attention can facilitate the selection of elementary object features such as color, orientation, or motion. This is referred to as feature-based attention and it is commonly attributed to a modulation of the gain and tuning of feature-selective units in visual cortex. Although gain mechanisms are well characterized, little is known about the cortical processes underlying the sharpening of feature selectivity. Here, we show with high-resolution magnetoencephalography in human observers (men and women) that sharpened selectivity for a particular color arises from feedback processing in the human visual cortex hierarchy. To assess color selectivity, we analyze the response to a color probe that varies in color distance from an attended color target. We find that attention causes an initial gain enhancement in anterior ventral extrastriate cortex that is coarsely selective for the target color and transitions within ∼100 ms into a sharper tuned profile in more posterior ventral occipital cortex. We conclude that attention sharpens selectivity over time by attenuating the response at lower levels of the cortical hierarchy to color values neighboring the target in color space. These observations support computational models proposing that attention tunes feature selectivity in visual cortex through backward-propagating attenuation of units less tuned to the target. SIGNIFICANCE STATEMENT Whether searching for your car, a particular item of clothing, or just obeying traffic lights, in everyday life, we must select items based on color. But how does attention allow us to select a specific color? Here, we use high spatiotemporal resolution neuromagnetic recordings to examine how color selectivity emerges in the human brain. We find that color selectivity evolves as a coarse to fine process from higher to lower levels within the visual cortex hierarchy. Our observations support computational models proposing that feature selectivity increases over time by attenuating the responses of less-selective cells in lower-level brain areas. These data emphasize that color perception involves multiple areas across a hierarchy of regions, interacting with each other in a complex, recursive manner. Copyright © 2017 the authors 0270-6474/17/3710346-12$15.00/0.
Occipitoparietal alpha-band responses to the graded allocation of top-down spatial attention.
Dombrowe, Isabel; Hilgetag, Claus C
2014-09-15
The voluntary, top-down allocation of visual spatial attention has been linked to changes in the alpha-band of the electroencephalogram (EEG) signal measured over the occipital and parietal lobes. In the present study, we investigated how occipitoparietal alpha-band activity changes when people allocate their attentional resources in a graded fashion across the visual field. We asked participants either to shift their attention completely into one hemifield, to balance their attention equally across the entire visual field, or to allocate more attention to one half of the visual field than to the other. As expected, we found that alpha-band amplitudes decreased more strongly contralaterally than ipsilaterally to the attended side when attention was shifted completely. Alpha-band amplitudes decreased bilaterally when attention was balanced equally across the visual field. However, when participants allocated more attentional resources to one half of the visual field, this was not reflected in the alpha-band amplitudes, which simply decreased bilaterally. We found that participants' performance was more strongly reflected in the coherence between frontal and occipitoparietal brain regions. We conclude that low alpha-band amplitudes seem to be necessary for stimulus detection. Furthermore, complete shifts of attention are directly reflected in the lateralization of alpha-band amplitudes. In the present study, a gradual allocation of visual attention across the visual field was only indirectly reflected in the alpha-band activity over occipital and parietal cortices. Copyright © 2014 the American Physiological Society.
Eye Gaze versus Arrows as Spatial Cues: Two Qualitatively Different Modes of Attentional Selection
ERIC Educational Resources Information Center
Marotta, Andrea; Lupianez, Juan; Martella, Diana; Casagrande, Maria
2012-01-01
This study aimed to evaluate the type of attentional selection (location- and/or object-based) triggered by two different types of central noninformative cues: eye gaze and arrows. Two rectangular objects were presented in the visual field, and subjects' attention was directed to the end of a rectangle via the observation of noninformative…
Neural Mechanisms of Selective Visual Attention.
Moore, Tirin; Zirnsak, Marc
2017-01-03
Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.
Harrison, Neil R; Woodhouse, Rob
2016-05-01
Previous research has demonstrated that threatening, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.
Hollingworth, Andrew; Hwang, Seongmin
2013-01-01
We examined the conditions under which a feature value in visual working memory (VWM) recruits visual attention to matching stimuli. Previous work has suggested that VWM supports two qualitatively different states of representation: an active state that interacts with perceptual selection and a passive (or accessory) state that does not. An alternative hypothesis is that VWM supports a single form of representation, with the precision of feature memory controlling whether or not the representation interacts with perceptual selection. The results of three experiments supported the dual-state hypothesis. We established conditions under which participants retained a relatively precise representation of a particular colour. If the colour was immediately task relevant, it reliably recruited attention to matching stimuli. However, if the colour was not immediately task relevant, it failed to interact with perceptual selection. Feature maintenance in VWM is not necessarily equivalent to feature-based attentional selection. PMID:24018723
Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases.
Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N
2016-01-01
A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed.
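One way the "low-level competitive interactions plus top-down biases" idea could be sketched, purely for illustration, is a normalization-style competition in which each display item's processed strength is its (biased) input divided by the summed activity of all items, so that a distractor's share of processing shrinks as task-relevant items are added. This toy model is not the model implemented in the paper; all parameters below are made up.

```python
import numpy as np

def distractor_strength(n_task_items, distractor_input=1.0, item_input=1.0,
                        target_bias=0.5, sigma=0.1):
    """Toy competitive-normalization account of perceptual load: the
    distractor's normalized activity falls as display load increases."""
    inputs = np.full(n_task_items, item_input)
    inputs[0] += target_bias                      # top-down bias toward the target item
    total = inputs.sum() + distractor_input + sigma
    return distractor_input / total               # distractor's share of processing

# Hypothetical pattern: interference decreases with load, as in low- vs. high-load displays.
# [round(distractor_strength(n), 3) for n in (1, 2, 4, 6)]  ->  decreasing values
```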
Audience gaze while appreciating a multipart musical performance.
Kawase, Satoshi; Obata, Satoshi
2016-11-01
Visual information has been observed to be crucial for audience members during musical performances. The present study used an eye tracker to investigate audience members' gazes while appreciating an audiovisual musical ensemble performance, drawing on evidence that musical part dominates auditory attention when listening to multipart music containing different melody lines, and on the joint-attention theory of gaze. We presented singing performances by a female duo. The main findings were as follows: (1) the melody part (soprano) attracted more visual attention than the accompaniment part (alto) throughout the piece, (2) joint attention emerged when the singers shifted their gazes toward their co-performer, suggesting that inter-performer gazing interactions playing a spotlight role mediated performer-audience visual interaction, and (3) musical part (melody or accompaniment) strongly influenced the total duration of audience members' gazes, while the spotlight effect of gaze was limited to just after the singers' gaze shifts. Copyright © 2016. Published by Elsevier Inc.
Visual Field Asymmetries in Attention Vary with Self-Reported Attention Deficits
ERIC Educational Resources Information Center
Poynter, William; Ingram, Paul; Minor, Scott
2010-01-01
The purpose of this study was to determine whether an index of self-reported attention deficits predicts the pattern of visual field asymmetries observed in behavioral measures of attention. Studies of "normal" subjects do not present a consistent pattern of asymmetry in attention functions, with some studies showing better left visual field (LVF)…
Liu, Wen-Long; Zhao, Xu; Tan, Jian-Hui; Wang, Juan
2014-09-01
This study explored the attention characteristics of children with different clinical subtypes of attention deficit hyperactivity disorder (ADHD) to provide a basis for clinical intervention. A total of 345 children diagnosed with ADHD were selected and their subtypes were identified. Attention was assessed at diagnosis with the integrated visual and auditory continuous performance test, and the visual and auditory attention characteristics were compared between children with different subtypes. A total of 122 normal children were recruited as a control group and their attention characteristics were compared with those of children with ADHD. The scores of full scale attention quotient (AQ) and full scale response control quotient (RCQ) of children with all three subtypes of ADHD were significantly lower than those of normal children (P<0.01). The score of auditory RCQ was significantly lower than that of visual RCQ in children with the ADHD hyperactive/impulsive subtype (P<0.05). The scores of auditory AQ and speed quotient (SQ) were significantly higher than those of visual AQ and SQ in all three ADHD subtypes (P<0.01), while the score of visual precaution quotient (PQ) was significantly higher than that of auditory PQ (P<0.01). No significant differences in auditory or visual AQ were observed between the three subtypes of ADHD. The attention function of children with ADHD is worse than that of normal children, and the impairment of visual attention function is more severe than that of auditory attention function. The degree of functional impairment of visual or auditory attention did not differ significantly between the three subtypes of ADHD.
NASA Astrophysics Data System (ADS)
Haigang, Sui; Zhina, Song
2016-06-01
Reliable ship detection in optical satellite images has wide applications in both military and civil fields. However, the problem is very difficult against complex backgrounds such as waves, clouds, and small islands. To address these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images that relies on detecting statistical signatures of ship targets in terms of biologically inspired visual features. The model first selects salient candidate regions across large-scale images using a mechanism based on biologically inspired visual features, combined with a visual attention model with local binary patterns (CVLBP). Unlike traditional approaches, the proposed algorithm is fast and focuses directly on suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detail signatures using LBP features, consistent with the sparse distribution of ships in the images. These features are then used to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are obtained, some false alarms remain, such as small waves and small ribbon clouds, so simple shape and texture analyses are applied to distinguish ships from non-ships in the suspicious areas. Experimental results show that the proposed method is insensitive to waves, clouds, illumination, and ship size.
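The chip-classification stage described above (texture signatures plus a learned classifier) can be approximated with standard libraries. The sketch below uses uniform local binary pattern histograms as chip features and an SVM; it is only a generic stand-in for the paper's CVLBP pipeline, and the function names, parameters, and data are illustrative assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(chip, n_points=8, radius=1):
    """Uniform LBP histogram of a grayscale image chip (the texture part of
    a candidate-region signature)."""
    lbp = local_binary_pattern(chip, n_points, radius, method="uniform")
    n_bins = n_points + 2                                   # uniform patterns plus a "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def train_ship_classifier(chips, labels):
    """Fit an SVM on LBP features of labelled chips (1 = ship, 0 = background)."""
    features = np.array([lbp_histogram(c) for c in chips])
    clf = SVC(kernel="rbf", C=1.0)
    return clf.fit(features, labels)

# Hypothetical usage: score new candidate chips proposed by the saliency stage.
# predictions = clf.predict(np.array([lbp_histogram(c) for c in candidate_chips]))
```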
Attention and Visual Motor Integration in Young Children with Uncorrected Hyperopia.
Kulp, Marjean Taylor; Ciner, Elise; Maguire, Maureen; Pistilli, Maxwell; Candy, T Rowan; Ying, Gui-Shuang; Quinn, Graham; Cyert, Lynn; Moore, Bruce
2017-10-01
Among 4- and 5-year-old children, deficits in measures of attention, visual-motor integration (VMI) and visual perception (VP) are associated with moderate, uncorrected hyperopia (3 to 6 diopters [D]) accompanied by reduced near visual function (near visual acuity worse than 20/40 or stereoacuity worse than 240 seconds of arc). To compare attention, visual motor, and visual perceptual skills in uncorrected hyperopes and emmetropes attending preschool or kindergarten and evaluate their associations with visual function. Participants were 4 and 5 years of age with either hyperopia (≥3 to ≤6 D, astigmatism ≤1.5 D, anisometropia ≤1 D) or emmetropia (hyperopia ≤1 D; astigmatism, anisometropia, and myopia each <1 D), without amblyopia or strabismus. Examiners masked to refractive status administered tests of attention (sustained, receptive, and expressive), VMI, and VP. Binocular visual acuity, stereoacuity, and accommodative accuracy were also assessed at near. Analyses were adjusted for age, sex, race/ethnicity, and parent's/caregiver's education. Two hundred forty-four hyperopes (mean, +3.8 ± [SD] 0.8 D) and 248 emmetropes (+0.5 ± 0.5 D) completed testing. Mean sustained attention score was worse in hyperopes compared with emmetropes (mean difference, -4.1; P < .001 for 3 to 6 D). Mean Receptive Attention score was worse in 4 to 6 D hyperopes compared with emmetropes (by -2.6, P = .01). Hyperopes with reduced near visual acuity (20/40 or worse) had worse scores than emmetropes (-6.4, P < .001 for sustained attention; -3.0, P = .004 for Receptive Attention; -0.7, P = .006 for VMI; -1.3, P = .008 for VP). Hyperopes with stereoacuity of 240 seconds of arc or worse scored significantly worse than emmetropes (-6.7, P < .001 for sustained attention; -3.4, P = .03 for Expressive Attention; -2.2, P = .03 for Receptive Attention; -0.7, P = .01 for VMI; -1.7, P < .001 for VP). Overall, hyperopes with better near visual function generally performed similarly to emmetropes. Moderately hyperopic children were found to have deficits in measures of attention. Hyperopic children with reduced near visual function also had lower scores on VMI and VP than emmetropic children.
Deep Visual Attention Prediction
NASA Astrophysics Data System (ADS)
Wang, Wenguan; Shen, Jianbing
2018-05-01
In this work, we aim to predict human eye fixations in view-free scenes based on an end-to-end deep learning architecture. Although convolutional neural networks (CNNs) have substantially improved human attention prediction, CNN-based attention models can still benefit from more efficient use of multi-scale features. We propose a visual attention network that captures hierarchical saliency information, from deep, coarse layers carrying global saliency information to shallow, fine layers carrying local saliency responses. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. The final saliency prediction is achieved via the cooperation of these global and local predictions. The model is trained with deep supervision, in which supervision is fed directly into multiple layers, instead of previous approaches that provide supervision only at the output layer and propagate it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly reduces the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
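A minimal PyTorch sketch of the two ideas highlighted in this abstract follows: side-output saliency predictions read from several encoder depths (a skip-layer structure) and a loss applied to every side output (deep supervision), with the final map obtained by fusing the side outputs. The layer sizes and loss choice here are arbitrary assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipLayerSaliency(nn.Module):
    """Toy skip-layer saliency network with deeply supervised side outputs."""
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # One 1x1 prediction head per depth: shallow/local to deep/global saliency.
        self.heads = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])
        self.fuse = nn.Conv2d(3, 1, 1)    # learned fusion of the upsampled side outputs

    def forward(self, x):
        h, w = x.shape[-2:]
        feats, side_maps = x, []
        for block, head in zip((self.block1, self.block2, self.block3), self.heads):
            feats = block(feats)
            side_maps.append(F.interpolate(head(feats), size=(h, w),
                                           mode="bilinear", align_corners=False))
        fused = self.fuse(torch.cat(side_maps, dim=1))
        return side_maps, fused

def deeply_supervised_loss(side_maps, fused, fixation_map):
    """Supervision is fed to every side output as well as to the fused prediction."""
    loss = F.binary_cross_entropy_with_logits(fused, fixation_map)
    for m in side_maps:
        loss = loss + F.binary_cross_entropy_with_logits(m, fixation_map)
    return loss
```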
An Empirical Study on Using Visual Embellishments in Visualization.
Borgo, R; Abdul-Rahman, A; Mohamed, F; Grant, P W; Reppa, I; Floridi, L; Chen, Min
2012-12-01
In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of using rhetorical illustrations or embellishments in visualization have so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces "divided attention", and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.
Auditory and visual capture during focused visual attention.
Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan
2009-10-01
It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control. PsycINFO Database Record (c) 2009 APA, all rights reserved.
Does perceptual learning require consciousness or attention?
Meuwese, Julia D I; Post, Ruben A G; Scholte, H Steven; Lamme, Victor A F
2013-10-01
It has been proposed that visual attention and consciousness are separate [Koch, C., & Tsuchiya, N. Attention and consciousness: Two distinct brain processes. Trends in Cognitive Sciences, 11, 16-22, 2007] and possibly even orthogonal processes [Lamme, V. A. F. Why visual attention and awareness are different. Trends in Cognitive Sciences, 7, 12-18, 2003]. Attention and consciousness converge when conscious visual percepts are attended and hence become available for conscious report. In such a view, a lack of reportability can have two causes: the absence of attention or the absence of a conscious percept. This raises an important question in the field of perceptual learning. It is known that learning can occur in the absence of reportability [Gutnisky, D. A., Hansen, B. J., Iliescu, B. F., & Dragoi, V. Attention alters visual plasticity during exposure-based learning. Current Biology, 19, 555-560, 2009; Seitz, A. R., Kim, D., & Watanabe, T. Rewards evoke learning of unconsciously processed visual stimuli in adult humans. Neuron, 61, 700-707, 2009; Seitz, A. R., & Watanabe, T. Is subliminal learning really passive? Nature, 422, 36, 2003; Watanabe, T., Náñez, J. E., & Sasaki, Y. Perceptual learning without perception. Nature, 413, 844-848, 2001], but it is unclear which of the two ingredients-consciousness or attention-is not necessary for learning. We presented textured figure-ground stimuli and manipulated reportability either by masking (which only interferes with consciousness) or with an inattention paradigm (which only interferes with attention). During the second session (24 hr later), learning was assessed neurally and behaviorally, via differences in figure-ground ERPs and via a detection task. Behavioral and neural learning effects were found for stimuli presented in the inattention paradigm and not for masked stimuli. Interestingly, the behavioral learning effect only became apparent when performance feedback was given on the task to measure learning, suggesting that the memory trace that is formed during inattention is latent until accessed. The results suggest that learning requires consciousness, and not attention, and further strengthen the idea that consciousness is separate from attention.
Selective and sustained attention in children with spina bifida myelomeningocele.
Caspersen, Ida Dyhr; Habekost, Thomas
2013-01-01
Spina bifida myelomeningocele (SBM) is a neural tube defect that has been related to deficits in several cognitive domains including attention. Attention function in children with SBM has often been studied using tasks that are confounded by complex motor demands or tasks that do not clearly distinguish perceptual from response-related components of attention. We used a verbal-report paradigm based on the Theory of Visual Attention (Bundesen, 1990) and a new continuous performance test, the Dual Attention to Response Task (Dockree et al., 2006), for measuring parameters of selective and sustained attention in 6 children with SBM and 18 healthy control children. The two tasks had minimal motor demands, were functionally specific and were sensitive to minor deficits. As a group, the children with SBM were significantly less efficient at filtering out irrelevant stimuli. Moreover, they exhibited frequent failures of sustained attention and response control in terms of omission errors, premature responses, and prolonged inhibition responses. All 6 children with SBM showed deficits in one or more parameters of attention; for example, three patients had elevated visual perception thresholds, but large individual variation was evident in their performance patterns, which highlights the relevance of an effective case-based assessment method in this patient group. Overall, the study demonstrates the strengths of a new testing approach for evaluating attention function in children with SBM.
The Deployment of Visual Attention
2006-03-01
Motivation and short-term memory in visual search: Attention's accelerator revisited.
Schneider, Daniel; Bonmassar, Claudia; Hickey, Clayton
2018-05-01
A cue indicating the possibility of cash reward will cause participants to perform memory-based visual search more efficiently. A recent study has suggested that this performance benefit might reflect the use of multiple memory systems: when needed, participants may maintain the to-be-remembered object in both long-term and short-term visual memory, with this redundancy benefitting target identification during search (Reinhart, McClenahan & Woodman, 2016). Here we test this compelling hypothesis. We had participants complete a memory-based visual search task involving a reward cue that either preceded presentation of the to-be-remembered target (pre-cue) or followed it (retro-cue). Following earlier work, we tracked memory representation using two components of the event-related potential (ERP): the contralateral delay activity (CDA), reflecting short-term visual memory, and the anterior P170, reflecting long-term storage. We additionally tracked attentional preparation and deployment in the contingent negative variation (CNV) and N2pc, respectively. Results show that only the reward pre-cue impacted our ERP indices of memory. However, both types of cue elicited a robust CNV, reflecting an influence on task preparation, both had equivalent impact on deployment of attention to the target, as indexed in the N2pc, and both had equivalent impact on visual search behavior. Reward prospect thus has an influence on memory-guided visual search, but this does not appear to be necessarily mediated by a change in the visual memory representations indexed by CDA. Our results demonstrate that the impact of motivation on search is not a simple product of improved memory for target templates. Copyright © 2017 Elsevier Ltd. All rights reserved.
Altering spatial priority maps via reward-based learning.
Chelazzi, Leonardo; Eštočinová, Jana; Calletti, Riccardo; Lo Gerfo, Emanuele; Sani, Ilaria; Della Libera, Chiara; Santandrea, Elisa
2014-06-18
Spatial priority maps are real-time representations of the behavioral salience of locations in the visual field, resulting from the combined influence of stimulus driven activity and top-down signals related to the current goals of the individual. They arbitrate which of a number of (potential) targets in the visual scene will win the competition for attentional resources. As a result, deployment of visual attention to a specific spatial location is determined by the current peak of activation (corresponding to the highest behavioral salience) across the map. Here we report a behavioral study performed on healthy human volunteers, where we demonstrate that spatial priority maps can be shaped via reward-based learning, reflecting long-lasting alterations (biases) in the behavioral salience of specific spatial locations. These biases exert an especially strong influence on performance under conditions where multiple potential targets compete for selection, conferring competitive advantage to targets presented in spatial locations associated with greater reward during learning relative to targets presented in locations associated with lesser reward. Such acquired biases of spatial attention are persistent, are nonstrategic in nature, and generalize across stimuli and task contexts. These results suggest that reward-based attentional learning can induce plastic changes in spatial priority maps, endowing these representations with the "intelligent" capacity to learn from experience. Copyright © 2014 the authors 0270-6474/14/348594-11$15.00/0.
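As an illustration of how reward feedback could bias a spatial priority map, the toy model below keeps one weight per location, adds it to stimulus-driven salience, deploys attention to the peak, and nudges the selected location's weight toward the obtained reward with a simple delta rule. This is a didactic sketch under assumed parameters, not the learning procedure or model used in the study.

```python
import numpy as np

class PriorityMap:
    """Spatial priority map with reward-driven location biases (toy model)."""
    def __init__(self, n_locations, learning_rate=0.05):
        self.bias = np.zeros(n_locations)    # long-lasting, location-specific reward biases
        self.lr = learning_rate

    def select(self, bottom_up_salience):
        # Priority = stimulus-driven salience plus learned spatial bias;
        # attention is deployed to the current peak of the map.
        priority = bottom_up_salience + self.bias
        return int(np.argmax(priority))

    def update(self, location, reward):
        # Delta rule: locations that yield more reward gain behavioral salience.
        self.bias[location] += self.lr * (reward - self.bias[location])

# Hypothetical usage: if location 2 pays 1.0 and the others pay 0.25, the map
# develops a persistent advantage for location 2 even with equal bottom-up salience.
```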
Visual Attention and Applications in Multimedia Technologies
Le Callet, Patrick; Niebur, Ernst
2013-01-01
Making technological advances in the field of human-machine interactions requires that the capabilities and limitations of the human perceptual system are taken into account. The focus of this report is an important mechanism of perception, visual selective attention, which is becoming more and more important for multimedia applications. We introduce the concept of visual attention and describe its underlying mechanisms. In particular, we introduce the concepts of overt and covert visual attention, and of bottom-up and top-down processing. Challenges related to modeling visual attention and their validation using ad hoc ground truth are also discussed. Examples of the usage of visual attention models in image and video processing are presented. We emphasize multimedia delivery, retargeting and quality assessment of image and video, medical imaging, and the field of stereoscopic 3D images applications. PMID:24489403
Degraded attentional modulation of cortical neural populations in strabismic amblyopia
Hou, Chuan; Kim, Yee-Joon; Lai, Xin Jie; Verghese, Preeti
2016-01-01
Behavioral studies have reported reduced spatial attention in amblyopia, a developmental disorder of spatial vision. However, the neural populations in the visual cortex linked with these behavioral spatial attention deficits have not been identified. Here, we use functional MRI–informed electroencephalography source imaging to measure the effect of attention on neural population activity in the visual cortex of human adult strabismic amblyopes who were stereoblind. We show that compared with controls, the modulatory effects of selective visual attention on the input from the amblyopic eye are substantially reduced in the primary visual cortex (V1) as well as in extrastriate visual areas hV4 and hMT+. Degraded attentional modulation is also found in the normal-acuity fellow eye in areas hV4 and hMT+ but not in V1. These results provide electrophysiological evidence that abnormal binocular input during a developmental critical period may impact cortical connections between the visual cortex and higher level cortices beyond the known amblyopic losses in V1 and V2, suggesting that a deficit of attentional modulation in the visual cortex is an important component of the functional impairment in amblyopia. Furthermore, we find that degraded attentional modulation in V1 is correlated with the magnitude of interocular suppression and the depth of amblyopia. These results support the view that the visual suppression often seen in strabismic amblyopia might be a form of attentional neglect of the visual input to the amblyopic eye. PMID:26885628
Neuroplus biofeedback improves attention, resilience, and injury prevention in elite soccer players.
Rusciano, Aiace; Corradini, Giuliano; Stoianov, Ivilin
2017-06-01
Performance and injury prevention in elite soccer players are typically investigated from physical-tactical, biomechanical, and metabolic perspectives. However, executive functions, visuospatial abilities, and psychophysiological adaptability or resilience are also fundamental for efficiency and well-being in sports. Based on previous research associating autonomic flexibility with prefrontal cortical control, we designed a novel integrated autonomic biofeedback training method called Neuroplus to improve resilience, visual attention, and injury prevention. Herein, we introduce the method and provide an evaluation of 20 elite soccer players from the Italian Soccer High Division (Serie-A): 10 players trained with Neuroplus and 10 trained with a control treatment. The assessments included psychophysiological stress profiles, a visual search task, and indexes of injury prevention, which were measured pre- and posttreatment. The analysis showed a significant enhancement of physiological adaptability, recovery following stress, visual selective attention, and injury prevention that were specific to the Neuroplus group. Enhancing the interplay between autonomic and cognitive functions through biofeedback may become a key principle for obtaining excellence and well-being in sports. To our knowledge, this is the first evidence that shows improvement in visual selective attention following intense autonomic biofeedback. © 2017 Society for Psychophysiological Research.
Visual attention and flexible normalization pools
Schwartz, Odelia; Coen-Cagli, Ruben
2013-01-01
Attention to a spatial location or feature in a visual scene can modulate the responses of cortical neurons and affect perceptual biases in illusions. We add attention to a cortical model of spatial context based on a well-founded account of natural scene statistics. The cortical model amounts to a generalized form of divisive normalization, in which the surround is in the normalization pool of the center target only if they are considered statistically dependent. Here we propose that attention influences this computation by accentuating the neural unit activations at the attended location, and that the amount of attentional influence of the surround on the center thus depends on whether center and surround are deemed in the same normalization pool. The resulting form of model extends a recent divisive normalization model of attention (Reynolds & Heeger, 2009). We simulate cortical surround orientation experiments with attention and show that the flexible model is suitable for capturing additional data and makes nontrivial testable predictions. PMID:23345413
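For readers unfamiliar with the Reynolds and Heeger (2009) model that this work extends, its basic form writes the attention-modulated response as a stimulus drive multiplied by an attention field and then divided by a pooled suppressive drive:

\[
R(x,\theta) \;=\; \frac{A(x,\theta)\,E(x,\theta)}{\,s(x,\theta) \ast \bigl[A(x,\theta)\,E(x,\theta)\bigr] + \sigma\,},
\]

where x indexes spatial position, \theta the feature (e.g., orientation), E is the excitatory stimulus drive, A the attention field, s the suppressive pooling kernel (the normalization pool), \ast denotes convolution over space and feature, and \sigma is a semisaturation constant. The model described in this abstract makes the composition of that normalization pool flexible, including the surround only when center and surround are inferred to be statistically dependent. The notation above is a standard summary of the earlier model, not reproduced from either paper.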
Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli
Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.
2009-01-01
The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778
Attentional Processes in Young Children with Congenital Visual Impairment
ERIC Educational Resources Information Center
Tadic, Valerie; Pring, Linda; Dale, Naomi
2009-01-01
The study investigated attentional processes of 32 preschool children with congenital visual impairment (VI). Children with profound visual impairment (PVI) and severe visual impairment (SVI) were compared to a group of typically developing sighted children in their ability to respond to adult directed attention in terms of establishing,…
Visual Attention to Antismoking PSAs: Smoking Cues versus Other Attention-Grabbing Features
ERIC Educational Resources Information Center
Sanders-Jackson, Ashley N.; Cappella, Joseph N.; Linebarger, Deborah L.; Piotrowski, Jessica Taylor; O'Keeffe, Moira; Strasser, Andrew A.
2011-01-01
This study examines how addicted adult smokers attend visually to smoking-related public service announcements (PSAs). Smokers' onscreen visual fixation is an indicator of cognitive resources allocated to visual attention. Characteristic of individuals with addictive tendencies, smokers are expected to be appetitively activated by…
Baars, B J
1999-07-01
A common confound between consciousness and attention makes it difficult to think clearly about recent advances in the understanding of the visual brain. Visual consciousness involves phenomenal experience of the visual world, but visual attention is more plausibly treated as a function that selects and maintains the selection of potential conscious contents, often unconsciously. In the same sense, eye movements select conscious visual events, which are not the same as conscious visual experience. According to common sense, visual experience is consciousness, and selective processes are labeled as attention. The distinction is reflected in very different behavioral measures and in very different brain anatomy and physiology. Visual consciousness tends to be associated with the "what" stream of visual feature neurons in the ventral temporal lobe. In contrast, attentional selection and maintenance are mediated by other brain regions, ranging from superior colliculi to thalamus, prefrontal cortex, and anterior cingulate. The author applied the common-sense distinction between attention and consciousness to the theoretical positions of M. I. Posner (1992, 1994) and D. LaBerge (1997, 1998) to show how it helps to clarify the evidence. He concluded that clarity of thought is served by calling a thing by its proper name.
NASA Astrophysics Data System (ADS)
Ahmetoglu, Emine; Aral, Neriman; Butun Ayhan, Aynur
This study was conducted in order to (a) compare the visual perceptions of seven-year-old children diagnosed with attention deficit hyperactivity disorder with those of normally developing children of the same age and development level and (b) determine whether the visual perceptions of children with attention deficit hyperactivity disorder vary with respect to gender, having received preschool education and parents' educational level. A total of 60 children, 30 with attention deficit hyperactivity disorder and 30 with normal development, were assigned to the study. Data about children with attention deficit hyperactivity disorder and their families were collected using a General Information Form, and the visual perception of the children was examined through the Frostig Developmental Test of Visual Perception. The Mann-Whitney U-test and Kruskal-Wallis variance analysis were used to determine whether there was a difference between the visual perceptions of children with normal development and those diagnosed with attention deficit hyperactivity disorder and to discover whether the variables of gender, preschool education and parents' educational status affected the visual perceptions of children with attention deficit hyperactivity disorder. The results showed that there was a statistically meaningful difference between the visual perceptions of the two groups and that the visual perceptions of children with attention deficit hyperactivity disorder were affected meaningfully by gender, preschool education and parents' educational status.
Chen, Chen; Schneps, Matthew H; Masyn, Katherine E; Thomson, Jennifer M
2016-11-01
Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a pathway analysis to examine the direct and indirect path between visual attention span and reading comprehension while controlling for other factors such as phonological awareness, letter identification, short-term memory, IQ and age. Integrating phonemic decoding efficiency skills in the analytic model, this study aimed to disentangle how visual attention span and phonological skills work together in reading comprehension for readers with dyslexia. We found visual attention span to have a significant direct effect on more difficult reading comprehension but not on an easier level. It also had a significant direct effect on pseudo-word identification but not on word identification. In addition, we found that visual attention span indirectly explains reading comprehension through pseudo-word reading and word reading skills. This study supports the hypothesis that at least part of the dyslexic profile can be explained by visual attention abilities. Copyright © 2016 John Wiley & Sons, Ltd.
The role of visual attention in multiple object tracking: evidence from ERPs.
Doran, Matthew M; Hoffman, James E
2010-01-01
We examined the role of visual attention in the multiple object tracking (MOT) task by measuring the amplitude of the N1 component of the event-related potential (ERP) to probe flashes presented on targets, distractors, or empty background areas. We found evidence that visual attention enhances targets and suppresses distractors (Experiments 1 and 3). However, we also found that when tracking load was light (two targets and two distractors), accurate tracking could be carried out without any apparent contribution from the visual attention system (Experiment 2). Our results suggest that attentional selection during MOT is flexibly determined by task demands as well as tracking load and that visual attention may not always be necessary for accurate tracking.
Entrainment to an auditory signal: Is attention involved?
Kunert, Richard; Jongman, Suzanne R
2017-01-01
Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict rhythm entrainment to also influence memory for visual stimuli. In 2 pseudoword memory experiments we find evidence against this prediction. Whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
The effects of divided attention on auditory priming.
Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W
2007-09-01
Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.
Dube, Blaire; Emrich, Stephen M; Al-Aidroos, Naseem
2017-10-01
Across 2 experiments we revisited the filter account of how feature-based attention regulates visual working memory (VWM). Originally drawing from discrete-capacity ("slot") models, the filter account proposes that attention operates like the "bouncer in the brain," preventing distracting information from being encoded so that VWM resources are reserved for relevant information. Given recent challenges to the assumptions of discrete-capacity models, we investigated whether feature-based attention plays a broader role in regulating memory. Both experiments used partial report tasks in which participants memorized the colors of circle and square stimuli, and we provided a feature-based goal by manipulating the likelihood that 1 shape would be probed over the other across a range of probabilities. By decomposing participants' responses using mixture and variable-precision models, we estimated the contributions of guesses, nontarget responses, and imprecise memory representations to their errors. Consistent with the filter account, participants were less likely to guess when the probed memory item matched the feature-based goal. Interestingly, this effect varied with goal strength, even across high probabilities where goal-matching information should always be prioritized, demonstrating strategic control over filter strength. Beyond this effect of attention on which stimuli were encoded, we also observed effects on how they were encoded: Estimates of both memory precision and nontarget errors varied continuously with feature-based attention. The results offer support for an extension to the filter account, where feature-based attention dynamically regulates the distribution of resources within working memory so that the most relevant items are encoded with the greatest precision. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
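For readers unfamiliar with the modeling approach mentioned above, the following short Python sketch shows the general form of a three-component mixture model for continuous-report errors (target, nontarget, and guess components with a shared von Mises precision). The parameterization and toy data are illustrative; they are not the authors' model code.

import numpy as np
from scipy.stats import vonmises

def mixture_loglik(errors, nontarget_errors, p_guess, p_nontarget, kappa):
    """Log-likelihood of a three-component mixture for continuous-report errors.

    errors: response minus target value (radians), shape (n_trials,)
    nontarget_errors: response minus each nontarget value, shape (n_trials, n_nontargets)
    p_guess: probability of a uniform random guess
    p_nontarget: probability of reporting a nontarget item
    kappa: von Mises concentration (memory precision)
    """
    p_target = 1.0 - p_guess - p_nontarget
    target_like = vonmises.pdf(errors, kappa)
    nontarget_like = vonmises.pdf(nontarget_errors, kappa).mean(axis=1)
    guess_like = 1.0 / (2 * np.pi)
    like = p_target * target_like + p_nontarget * nontarget_like + p_guess * guess_like
    return np.log(like).sum()

# Toy data: mostly precise target responses plus a few random guesses
rng = np.random.default_rng(0)
errors = np.concatenate([rng.vonmises(0, 10, 90), rng.uniform(-np.pi, np.pi, 10)])
nontarget_errors = rng.uniform(-np.pi, np.pi, (100, 2))
print(mixture_loglik(errors, nontarget_errors, p_guess=0.1, p_nontarget=0.05, kappa=8.0))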
Modeling human comprehension of data visualizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzen, Laura E.; Haass, Michael Joseph; Divis, Kristin Marie
This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.
Hietanen, Jari K; Kirjavainen, Ilkka; Nummenmaa, Lauri
2014-12-01
The early visual event-related 'N170 response' is sensitive to human body configuration and it is enhanced to nude versus clothed bodies. We tested whether the N170 response as well as later EPN and P3/LPP responses to nude bodies reflect the effect of increased arousal elicited by these stimuli, or top-down allocation of object-based attention to the nude bodies. Participants saw pictures of clothed and nude bodies and faces. In each block, participants were asked to direct their attention towards stimuli from a specified target category while ignoring others. Object-based attention did not modulate the N170 amplitudes towards attended stimuli; instead N170 response was larger to nude bodies compared to stimuli from other categories. Top-down attention and affective arousal had additive effects on the EPN and P3/LPP responses reflecting later processing stages. We conclude that nude human bodies have a privileged status in the visual processing system due to the affective arousal they trigger. Copyright © 2014 Elsevier B.V. All rights reserved.
Social Image Captioning: Exploring Visual Attention and User Attention.
Wang, Leiquan; Chu, Xiaoliang; Zhang, Weishan; Wei, Yiwei; Sun, Weichen; Wu, Chunlei
2018-02-22
Image captioning with a natural language has been an emerging trend. However, the social image, associated with a set of user-contributed tags, has been rarely investigated for a similar task. The user-contributed tags, which could reflect the user attention, have been neglected in conventional image captioning. Most existing image captioning models cannot be applied directly to social image captioning. In this work, a dual attention model is proposed for social image captioning by combining the visual attention and user attention simultaneously. Visual attention is used to compress a large amount of salient visual information, while user attention is applied to adjust the description of the social images with user-contributed tags. Experiments conducted on the Microsoft (MS) COCO dataset demonstrate the superiority of the proposed method of dual attention.
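A schematic of how visual attention and user (tag) attention could be combined at a single decoding step is sketched below in plain numpy. The projection matrices, dimensions, and the simple dot-product scoring are assumptions for illustration only, not the architecture reported in the paper.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dual_attention_step(hidden, region_feats, tag_embs, W_v, W_t):
    """One decoding step combining visual attention and user (tag) attention.

    hidden: decoder state, shape (d,)
    region_feats: CNN region features, shape (n_regions, d_v)
    tag_embs: embeddings of user-contributed tags, shape (n_tags, d_t)
    W_v, W_t: projection matrices mapping features into the decoder space
    """
    vis_scores = region_feats @ W_v @ hidden          # relevance of each image region
    tag_scores = tag_embs @ W_t @ hidden              # relevance of each user tag
    vis_ctx = softmax(vis_scores) @ region_feats      # attended visual context
    tag_ctx = softmax(tag_scores) @ tag_embs          # attended user context
    return vis_ctx, tag_ctx                           # both are fed to the word predictor

rng = np.random.default_rng(0)
d, d_v, d_t = 8, 16, 6
vis_ctx, tag_ctx = dual_attention_step(
    rng.normal(size=d), rng.normal(size=(36, d_v)), rng.normal(size=(5, d_t)),
    rng.normal(size=(d_v, d)), rng.normal(size=(d_t, d)))
print(vis_ctx.shape, tag_ctx.shape)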
The involvement of central attention in visual search is determined by task demands.
Han, Suk Won
2017-04-01
Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signals at the perceptual stage, while it serves to select appropriate responses or consolidate sensory representations into short-term memory at the central stage. This study investigated the independence and interaction between perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as long as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of the concurrent visual search process plays a crucial role in the functional interaction between two different types of attention.
Conscious visual memory with minimal attention.
Pinto, Yair; Vandenbroucke, Annelinde R; Otten, Marte; Sligte, Ilja G; Seth, Anil K; Lamme, Victor A F
2017-02-01
Is conscious visual perception limited to the locations that a person attends? The remarkable phenomenon of change blindness, which shows that people miss nearly all unattended changes in a visual scene, suggests the answer is yes. However, change blindness is found after visual interference (a mask or a new scene), so that subjects have to rely on working memory (WM), which has limited capacity, to detect the change. Before such interference, however, a much larger capacity store, called fragile memory (FM), which is easily overwritten by newly presented visual information, is present. Whether these different stores depend equally on spatial attention is central to the debate on the role of attention in conscious vision. In 2 experiments, we found that minimizing spatial attention almost entirely erases visual WM, as expected. Critically, FM remains largely intact. Moreover, minimally attended FM responses yield accurate metacognition, suggesting that conscious memory persists with limited spatial attention. Together, our findings help resolve the fundamental issue of how attention affects perception: Both visual consciousness and memory can be supported by only minimal attention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Low-level visual attention and its relation to joint attention in autism spectrum disorder.
Jaworski, Jessica L Bean; Eigsti, Inge-Marie
2017-04-01
Visual attention is integral to social interaction and is a critical building block for development in other domains (e.g., language). Furthermore, atypical attention (especially joint attention) is one of the earliest markers of autism spectrum disorder (ASD). The current study assesses low-level visual attention and its relation to social attentional processing in youth with ASD and typically developing (TD) youth, aged 7 to 18 years. The findings indicate difficulty overriding incorrect attentional cues in ASD, particularly with non-social (arrow) cues relative to social (face) cues. The findings also show reduced competition in ASD from cues that remain on-screen. Furthermore, social attention, autism severity, and age were all predictors of competing cue processing. The results suggest that individuals with ASD may be biased towards speeded rather than accurate responding, and further, that reduced engagement with visual information may impede responses to visual attentional cues. Once attention is engaged, individuals with ASD appear to interpret directional cues as meaningful. These findings from a controlled, experimental paradigm were mirrored in results from an ecologically valid measure of social attention. Attentional difficulties may be exacerbated during the complex and dynamic experience of actual social interaction. Implications for intervention are discussed.
Harris, Joseph A; Donohue, Sarah E; Schoenfeld, Mircea A; Hopf, Jens-Max; Heinze, Hans-Jochen; Woldorff, Marty G
2016-08-15
Reward-associated visual features have been shown to capture visual attention, evidenced in faster and more accurate behavioral performance, as well as in neural responses reflecting lateralized shifts of visual attention to those features. Specifically, the contralateral N2pc event-related-potential (ERP) component that reflects attentional shifting exhibits increased amplitude in response to task-relevant targets containing a reward-associated feature. In the present study, we examined the automaticity of such reward-association effects using object-substitution masking (OSM) in conjunction with MEG measures of visual attentional shifts. In OSM, a visual-search array is presented, with the target item to be detected indicated by a surrounding mask (here, four surrounding squares). Delaying the offset of the target-surrounding four-dot mask relative to the offset of the rest of the target/distracter array disrupts the viewer's awareness of the target (masked condition), whereas simultaneous offsets do not (unmasked condition). Here we manipulated whether the color of the OSM target was or was not of a previously reward-associated color. By tracking reward-associated enhancements of behavior and the N2pc in response to masked targets containing a previously rewarded or unrewarded feature, the automaticity of attentional capture by reward could be probed. We found an enhanced N2pc response to targets containing a previously reward-associated color feature. Moreover, this enhancement of the N2pc by reward did not differ between masking conditions, nor did it differ as a function of the apparent visibility of the target within the masked condition. Overall, these results underscore the automaticity of attentional capture by reward-associated features, and demonstrate the ability of feature-based reward associations to shape attentional capture and allocation outside of perceptual awareness. Copyright © 2016 Elsevier Inc. All rights reserved.
Feature-based and spatial attentional selection in visual working memory.
Heuer, Anna; Schubö, Anna
2016-05-01
The contents of visual working memory (VWM) can be modulated by spatial cues presented during the maintenance interval ("retrocues"). Here, we examined whether attentional selection of representations in VWM can also be based on features. In addition, we investigated whether the mechanisms of feature-based and spatial attention in VWM differ with respect to parallel access to noncontiguous locations. In two experiments, we tested the efficacy of valid retrocues relying on different kinds of information. Specifically, participants were presented with a typical spatial retrocue pointing to two locations, a symbolic spatial retrocue (numbers mapping onto two locations), and two feature-based retrocues: a color retrocue (a blob of the same color as two of the items) and a shape retrocue (an outline of the shape of two of the items). The two cued items were presented at either contiguous or noncontiguous locations. Overall retrocueing benefits, as compared to a neutral condition, were observed for all retrocue types. Whereas feature-based retrocues yielded benefits for cued items presented at both contiguous and noncontiguous locations, spatial retrocues were only effective when the cued items had been presented at contiguous locations. These findings demonstrate that attentional selection and updating in VWM can operate on different kinds of information, allowing for a flexible and efficient use of this limited system. The observation that the representations of items presented at noncontiguous locations could only be reliably selected with feature-based retrocues suggests that feature-based and spatial attentional selection in VWM rely on different mechanisms, as has been shown for attentional orienting in the external world.
Feature-based attention to unconscious shapes and colors.
Schmidt, Filipp; Schmidt, Thomas
2010-08-01
Two experiments employed feature-based attention to modulate the impact of completely masked primes on subsequent pointing responses. Participants processed a color cue to select a pair of possible pointing targets out of multiple targets on the basis of their color, and then pointed to the one of those two targets with a prespecified shape. All target pairs were preceded by prime pairs triggering either the correct or the opposite response. The time interval between cue and primes was varied to modulate the time course of feature-based attentional selection. In a second experiment, the roles of color and shape were switched. Pointing trajectories showed large priming effects that were amplified by feature-based attention, indicating that attention modulated the earliest phases of motor output. Priming effects as well as their attentional modulation occurred even though participants remained unable to identify the primes, indicating distinct processes underlying visual awareness, attention, and response control.
Xie, Jun; Xu, Guanghua; Wang, Jing; Li, Min; Han, Chengcheng; Jia, Yaguang
The steady-state visual evoked potential (SSVEP)-based paradigm is a conventional BCI method with the advantages of a high information transfer rate, high tolerance to artifacts and robust performance across users. However, the mental load and fatigue that occur when users stare at flickering stimuli are a critical problem in the implementation of SSVEP-based BCIs. Based on the electroencephalography (EEG) power indices α, θ, and θ + α, the ratio index θ/α, and the response properties of amplitude and SNR, this study quantitatively evaluated mental load and fatigue in both the conventional flickering and the novel motion-reversal visual attention tasks. Results from nine subjects revealed significantly greater alleviation of mental load in the motion-reversal task than in the flickering task. The interaction between the factors "stimulation type" and "fatigue level" also indicated that motion-reversal stimulation is a superior anti-fatigue solution for long-term BCI operation. Taken together, our work provides an objective method favorable for the design of more practically applicable steady-state evoked potential based BCIs.
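As a rough illustration of how such power indices can be computed from a single EEG channel, the sketch below uses Welch's method and integrates the spectrum over conventional theta (4-8 Hz) and alpha (8-13 Hz) bands. The sampling rate, band limits, and synthetic signal are assumptions, not the study's recording parameters.

import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    """Approximate power in a frequency band by summing the PSD over the band."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

fs = 250.0                                   # sampling rate in Hz (illustrative)
rng = np.random.default_rng(0)
eeg = rng.normal(size=int(60 * fs))          # one minute of a single synthetic channel
freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))

theta = band_power(freqs, psd, 4.0, 8.0)
alpha = band_power(freqs, psd, 8.0, 13.0)
indices = {"theta": theta, "alpha": alpha,
           "theta+alpha": theta + alpha, "theta/alpha": theta / alpha}
print(indices)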
Training of attention functions in children with attention deficit hyperactivity disorder.
Tucha, Oliver; Tucha, Lara; Kaumann, Gesa; König, Sebastian; Lange, Katharina M; Stasik, Dorota; Streather, Zoe; Engelschalk, Tobias; Lange, Klaus W
2011-09-01
Pharmacological treatment of children with ADHD has been shown to be successful; however, medication may not normalize attention functions. The present study was based on a neuropsychological model of attention and assessed the effect of an attention training program on attentional functioning of children with ADHD. Thirty-two children with ADHD and 16 healthy children participated in the study. Children with ADHD were randomly assigned to one of the two conditions, i.e., an attention training program which trained aspects of vigilance, selective attention and divided attention, or a visual perception training which trained perceptual skills, such as perception of figure and ground, form constancy and position in space. The training programs were applied in individual sessions, twice a week, for a period of four consecutive weeks. Healthy children did not receive any training. Alertness, vigilance, selective attention, divided attention, and flexibility were examined prior to and following the interventions. Children with ADHD were assessed and trained while on ADHD medications. Data analysis revealed that the attention training used in the present study led to significant improvements of various aspects of attention, including vigilance, divided attention, and flexibility, while the visual perception training had no specific effects. The findings indicate that attention training programs have the potential to facilitate attentional functioning in children with ADHD treated with ADHD drugs.
Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.
Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D
2013-10-01
Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range. Copyright © 2013 Elsevier Inc. All rights reserved.
Rehearsal in serial memory for visual-spatial information: evidence from eye movements.
Tremblay, Sébastien; Saint-Aubin, Jean; Jalbert, Annie
2006-06-01
It is well established that rote rehearsal plays a key role in serial memory for lists of verbal items. Although a great deal of research has informed us about the nature of verbal rehearsal, much less attention has been devoted to rehearsal in serial memory for visual-spatial information. By using the dot task--a visual-spatial analogue of the classical verbal serial recall task--with delayed recall, performance and eyetracking data were recorded in order to establish whether visual-spatial rehearsal could be evidenced by eye movement. The use of eye movement as a form of rehearsal is detectable (Experiment 1), and it seems to contribute to serial memory performance over and above rehearsal based on shifts of spatial attention (Experiments 1 and 2).
Retraining moderately impaired stroke survivors in driving-related visual attention skills.
Akinwuntan, Abiodun E; Devos, Hannes; Verheyden, Geert; Baten, Guido; Kiekens, Carlotte; Feys, Hilde; De Weerdt, Willy
2010-01-01
Visual inattention is a major cause of road accidents and is a problem commonly experienced after stroke. This study investigated the effects of 2 training programs on performance in the Useful Field of View (UFOV), a validated test of driving-related visual attention skills. Data from 69 first-ever, moderately impaired stroke survivors who participated in a randomized controlled trial (RCT) to determine the effects of simulator training on driving after stroke were analyzed. In addition to regular interventions at a rehabilitation center, participants received 15 hours of either simulator-based driving-related training or non-computer-based cognitive training over 5 weeks. Total percentage reduction in UFOV and performance in divided and selective attention and speed of processing subtests were documented at 6 to 9 weeks (pretraining), 11 to 15 weeks (posttraining), and 6 months post stroke (follow-up). Generalized estimating equation (GEE) model revealed neither group effects nor significant interaction effects of group with time in the UFOV total score and the 3 subtests. However, there were significant within-group improvements from pre- through posttraining to follow-up for all the UFOV parameters. Post-hoc GEE analysis revealed that most improvement in both groups occurred from pre- to posttraining. Both training programs significantly improved visual attention skills of moderately impaired stroke survivors after 15 hours of training and retention of benefit lasted up to 6 months after stroke. Neither of the training programs was better than the other.
ERIC Educational Resources Information Center
Humphreys, Glyn W.; Wulff, Melanie; Yoon, Eun Young; Riddoch, M. Jane
2010-01-01
Two experiments are reported that use patients with visual extinction to examine how visual attention is influenced by action information in images. In Experiment 1 patients saw images of objects that were either correctly or incorrectly colocated for action, with the objects held by hands that were congruent or incongruent with those used…
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Martin, Kevin
2017-05-01
This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing rare event detection via the P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP sequence based on the desired sequence length and expected number of targets and insert the distracters into the sequence; then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (azimuth).
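A minimal sketch of the two-step idea (pad the sequence with distracters to reach a desired target-to-common ratio, then reorder so that targets are spread out) might look like the following. The ratio, spacing heuristic, and placeholder image names are illustrative assumptions, not the authors' algorithm.

import random

def pad_and_reorder(targets, nontargets, target_ratio=0.1, seed=0):
    """Pad an RSVP sequence with distracters and reorder it.

    Adds enough distracter images so that targets make up at most `target_ratio`
    of the sequence, then shuffles the common items and spaces the targets
    roughly evenly through the stream (a simple spacing heuristic).
    """
    rng = random.Random(seed)
    needed = max(0, int(len(targets) / target_ratio) - len(targets) - len(nontargets))
    distracters = [f"distracter_{i}" for i in range(needed)]   # placeholder image names
    common = nontargets + distracters
    rng.shuffle(common)
    # Interleave: drop one target at the end of each roughly equal chunk of common items
    chunk = len(common) // max(1, len(targets))
    sequence = []
    for i, t in enumerate(targets):
        sequence.extend(common[i * chunk:(i + 1) * chunk])
        sequence.append(t)
    sequence.extend(common[len(targets) * chunk:])
    return sequence

seq = pad_and_reorder(["target_A", "target_B"], [f"img_{i}" for i in range(10)])
print(len(seq), seq[:6])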
Hyperspectral image visualization based on a human visual model
NASA Astrophysics Data System (ADS)
Zhang, Hongqin; Peng, Honghong; Fairchild, Mark D.; Montag, Ethan D.
2008-02-01
Hyperspectral image data can provide very fine spectral resolution with more than 200 bands, yet presents challenges for visualization techniques for displaying such rich information on a tristimulus monitor. This study developed a visualization technique by taking advantage of both the consistent natural appearance of a true color image and the feature separation of a PCA image based on a biologically inspired visual attention model. The key part is to extract the informative regions in the scene. The model takes into account human contrast sensitivity functions and generates a topographic saliency map for both images. This is accomplished using a set of linear "center-surround" operations simulating visual receptive fields as the difference between fine and coarse scales. A difference map between the saliency map of the true color image and that of the PCA image is derived and used as a mask on the true color image to select a small number of interesting locations where the PCA image has more salient features than available in the visible bands. The resulting representations preserve hue for vegetation, water, road etc., while the selected attentional locations may be analyzed by more advanced algorithms.
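The center-surround saliency and difference-map masking idea can be sketched as follows, using Gaussian blurs at a fine and a coarse scale as a stand-in for the receptive-field operations. The blur scales, the percentile threshold, and the random stand-in images are illustrative assumptions rather than the published model.

import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(img, fine_sigma=2, coarse_sigma=8):
    """Crude saliency map: absolute difference between fine and coarse scales."""
    gray = img.mean(axis=2) if img.ndim == 3 else img
    return np.abs(gaussian_filter(gray, fine_sigma) - gaussian_filter(gray, coarse_sigma))

rng = np.random.default_rng(0)
true_color = rng.random((128, 128, 3))     # stand-in for the true-color rendering
pca_img = rng.random((128, 128, 3))        # stand-in for the PCA rendering

sal_true = center_surround_saliency(true_color)
sal_pca = center_surround_saliency(pca_img)

# Mask: locations where the PCA image is markedly more salient than the true-color image
diff = sal_pca - sal_true
mask = diff > np.percentile(diff, 95)
fused = true_color.copy()
fused[mask] = pca_img[mask]                # overlay PCA information only at selected locations
print(mask.mean())                         # fraction of pixels replaced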
Explicit attention interferes with selective emotion processing in human extrastriate cortex.
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2007-02-22
Brain imaging and event-related potential studies provide strong evidence that emotional stimuli guide selective attention in visual processing. A reflection of the emotional attention capture is the increased Early Posterior Negativity (EPN) for pleasant and unpleasant compared to neutral images (approximately 150-300 ms poststimulus). The present study explored whether this early emotion discrimination reflects an automatic phenomenon or is subject to interference by competing processing demands. Thus, emotional processing was assessed while participants performed a concurrent feature-based attention task varying in processing demands. Participants successfully performed the primary visual attention task as revealed by behavioral performance and selected event-related potential components (Selection Negativity and P3b). Replicating previous results, emotional modulation of the EPN was observed in a task condition with low processing demands. In contrast, pleasant and unpleasant pictures failed to elicit increased EPN amplitudes compared to neutral images in more difficult explicit attention task conditions. Further analyses determined that even the processing of pleasant and unpleasant pictures high in emotional arousal is subject to interference in experimental conditions with high task demand. Taken together, performing demanding feature-based counting tasks interfered with differential emotion processing indexed by the EPN. The present findings demonstrate that taxing processing resources by a competing primary visual attention task markedly attenuated the early discrimination of emotional from neutral picture contents. Thus, these results provide further empirical support for an interference account of the emotion-attention interaction under conditions of competition. Previous studies revealed the interference of selective emotion processing when attentional resources were directed to locations of explicitly task-relevant stimuli. The present data suggest that interference of emotion processing by competing task demands is a more general phenomenon extending to the domain of feature-based attention. Furthermore, the results are inconsistent with the notion of effortlessness, i.e., early emotion discrimination despite concurrent task demands. These findings suggest that the presumed automatic nature of emotion processing should be assessed at the level of specific aspects rather than treating automaticity as an all-or-none phenomenon.
Interaction Between Spatial and Feature Attention in Posterior Parietal Cortex
Ibos, Guilhem; Freedman, David J.
2016-01-01
Lateral intraparietal (LIP) neurons encode a vast array of sensory and cognitive variables. Recently, we proposed that the flexibility of feature representations in LIP reflects the bottom-up integration of sensory signals, modulated by feature-based attention (FBA), from upstream feature-selective cortical neurons. Moreover, LIP activity is also strongly modulated by the position of space-based attention (SBA). However, the mechanisms by which SBA and FBA interact to facilitate the representation of task-relevant spatial and non-spatial features in LIP remain unclear. We recorded from LIP neurons during performance of a task which required monkeys to detect specific conjunctions of color, motion-direction, and stimulus position. Here we show that FBA and SBA potentiate each other's effect in a manner consistent with attention gating the flow of visual information along the cortical visual pathway. Our results suggest that linear bottom-up integrative mechanisms allow LIP neurons to emphasize task-relevant spatial and non-spatial features. PMID:27499082
Feature-based attentional weighting and spreading in visual working memory
Niklaus, Marcel; Nobre, Anna C.; van Ede, Freek
2017-01-01
Attention can be directed at features and feature dimensions to facilitate perception. Here, we investigated whether feature-based attention (FBA) can also dynamically weight feature-specific representations within multi-feature objects held in visual working memory (VWM). Across three experiments, participants retained coloured arrows in working memory and, during the delay, were cued to either the colour or the orientation dimension. We show that directing attention towards a feature dimension (1) improves performance in the cued feature dimension at the expense of the uncued dimension, (2) is more efficient if directed to the same rather than to different dimensions for different objects, and (3) at least for colour, automatically spreads to the colour representation of non-attended objects in VWM. We conclude that FBA continues to operate on VWM representations (with principles similar to those that govern FBA in the perceptual domain) and challenge the classical view that VWM representations are stored solely as integrated objects. PMID:28233830
The Development of Attentional Networks: Cross-Sectional Findings from a Life Span Sample
ERIC Educational Resources Information Center
Waszak, Florian; Li, Shu-Chen; Hommel, Bernhard
2010-01-01
Using a population-based sample of 263 individuals ranging from 6 to 89 years of age, we investigated the gains and losses in the abilities to (a) use exogenous cues to shift attention covertly and (b) ignore conflicting information across the life span. The participants' ability to shift visual attention was tested by a typical Posner-type…
Attentional Allocation of Autism Spectrum Disorder Individuals: Searching for a Face-in-the-Crowd
ERIC Educational Resources Information Center
Moore, David J.; Reidy, John; Heavey, Lisa
2016-01-01
A study is reported which tests the proposition that faces capture the attention of those with autism spectrum disorders less than a typical population. A visual search task based on the Face-in-the-Crowd paradigm was used to examine the attentional allocation of autism spectrum disorder adults for faces. Participants were required to search for…
The Speed of Serial Attention Shifts in Visual Search: Evidence from the N2pc Component.
Grubert, Anna; Eimer, Martin
2016-02-01
Finding target objects among distractors in a visual search display is often assumed to be based on sequential movements of attention between different objects. However, the speed of such serial attention shifts is still under dispute. We employed a search task that encouraged the successive allocation of attention to two target objects in the same search display and measured N2pc components to determine how fast attention moved between these objects. Each display contained one digit in a known color (fixed-color target) and another digit whose color changed unpredictably across trials (variable-color target) together with two gray distractor digits. Participants' task was to find the fixed-color digit and compare its numerical value with that of the variable-color digit. N2pc components to fixed-color targets preceded N2pc components to variable-color digits, demonstrating that these two targets were indeed selected in a fixed serial order. The N2pc to variable-color digits emerged approximately 60 msec after the N2pc to fixed-color digits, which shows that attention can be reallocated very rapidly between different target objects in the visual field. When search display durations were increased, thereby relaxing the temporal demands on serial selection, the two N2pc components to fixed-color and variable-color targets were elicited within 90 msec of each other. Results demonstrate that sequential shifts of attention between different target locations can operate very rapidly, at speeds that are in line with the assumptions of serial selection models of visual search.
Global facilitation of attended features is obligatory and restricts divided attention.
Andersen, Søren K; Hillyard, Steven A; Müller, Matthias M
2013-11-13
In many common situations such as driving an automobile it is advantageous to attend concurrently to events at different locations (e.g., the car in front, the pedestrian to the side). While spatial attention can be divided effectively between separate locations, studies investigating attention to nonspatial features have often reported a "global effect", whereby items having the attended feature may be preferentially processed throughout the entire visual field. These findings suggest that spatial and feature-based attention may at times act in direct opposition: spatially divided foci of attention cannot be truly independent if feature attention is spatially global and thereby affects all foci equally. In two experiments, human observers attended concurrently to one of two overlapping fields of dots of different colors presented in both the left and right visual fields. When the same color or two different colors were attended on the two sides, deviant targets were detected accurately, and visual-cortical potentials elicited by attended dots were enhanced. However, when the attended color on one side matched the ignored color on the opposite side, attentional modulation of cortical potentials was abolished. This loss of feature selectivity could be attributed to enhanced processing of unattended items that shared the color of the attended items in the opposite field. Thus, while it is possible to attend to two different colors at the same time, this ability is fundamentally constrained by spatially global feature enhancement in early visual-cortical areas, which is obligatory and persists even when it explicitly conflicts with task demands.
Ni, Jianguang; Jiang, Huihui; Jin, Yixiang; Chen, Nanhui; Wang, Jianhong; Wang, Zhengbo; Luo, Yuejia; Ma, Yuanye; Hu, Xintian
2011-01-01
Emotional stimuli have evolutionary significance for the survival of organisms; therefore, they are attention-grabbing and are processed preferentially. The neural underpinnings of two principal emotional dimensions in affective space, valence (degree of pleasantness) and arousal (intensity of evoked emotion), have been shown to be dissociable in the olfactory, gustatory and memory systems. However, the separable roles of valence and arousal in scene perception are poorly understood. In this study, we asked how these two emotional dimensions modulate overt visual attention. Twenty-two healthy volunteers freely viewed images from the International Affective Picture System (IAPS) that were graded for affective levels of valence and arousal (high, medium, and low). Subjects' heads were immobilized and eye movements were recorded by camera to track overt shifts of visual attention. Algebraic graph-based approaches were introduced to model scan paths as weighted undirected path graphs, generating global topology metrics that characterize the algebraic connectivity of scan paths. Our data suggest that human subjects show different scanning patterns to stimuli with different affective ratings. Valence salient stimuli (with neutral arousal) elicited faster and larger shifts of attention, while arousal salient stimuli (with neutral valence) elicited local scanning, dense attention allocation and deep processing. Furthermore, our model revealed that the modulatory effect of valence was linearly related to the valence level, whereas the relation between the modulatory effect and the level of arousal was nonlinear. Hence, visual attention seems to be modulated by mechanisms that are separate for valence and arousal. PMID:21494331
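As an illustration of the graph-theoretic quantity involved, the sketch below builds a weighted undirected path graph from a fixation sequence and returns its algebraic connectivity (the second-smallest eigenvalue of the graph Laplacian). The distance-based edge weighting is an assumption for illustration; the study's exact weighting scheme may differ.

import numpy as np

def algebraic_connectivity(fixations):
    """Fiedler value of a weighted undirected path graph over a scan path.

    fixations: array-like of shape (n, 2) with fixation coordinates in temporal order.
    Consecutive fixations are connected; edge weights decay with saccade length
    (an illustrative choice, not necessarily the weighting used in the study).
    """
    fixations = np.asarray(fixations, dtype=float)
    n = len(fixations)
    W = np.zeros((n, n))
    for i in range(n - 1):
        dist = np.linalg.norm(fixations[i + 1] - fixations[i])
        w = np.exp(-dist / 100.0)          # shorter saccades -> stronger edges
        W[i, i + 1] = W[i + 1, i] = w
    L = np.diag(W.sum(axis=1)) - W         # graph Laplacian
    eigvals = np.linalg.eigvalsh(L)        # eigenvalues in ascending order
    return eigvals[1]                      # second-smallest eigenvalue (Fiedler value)

scanpath = [(100, 100), (120, 110), (300, 250), (310, 260), (315, 255)]
print(round(algebraic_connectivity(scanpath), 4))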
A Neural Theory of Visual Attention: Bridging Cognition and Neurophysiology
ERIC Educational Resources Information Center
Bundesen, Claus; Habekost, Thomas; Kyllingsbaek, Soren
2005-01-01
A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally…
ERIC Educational Resources Information Center
Solan, Harold A.; Shelley-Tremblay, John F.; Hansen, Peter C.; Larson, Steven
2007-01-01
The authors examined the relationships between reading comprehension, visual attention, and magnocellular processing in 42 Grade 7 students. The goal was to quantify the sensitivity of visual attention and magnocellular visual processing as concomitants of poor reading comprehension in the absence of either vision therapy or cognitive…
Spatial Working Memory Interferes with Explicit, but Not Probabilistic Cuing of Spatial Attention
ERIC Educational Resources Information Center
Won, Bo-Yeong; Jiang, Yuhong V.
2015-01-01
Recent empirical and theoretical work has depicted a close relationship between visual attention and visual working memory. For example, rehearsal in spatial working memory depends on spatial attention, whereas adding a secondary spatial working memory task impairs attentional deployment in visual search. These findings have led to the proposal…
Fast and robust generation of feature maps for region-based visual attention.
Aziz, Muhammad Zaheer; Mertsching, Bärbel
2008-05-01
Visual attention is one of the important phenomena in biological vision which can be followed to achieve more efficiency, intelligence, and robustness in artificial vision systems. This paper investigates a region-based approach that performs pixel clustering prior to the processes of attention in contrast to late clustering as done by contemporary methods. The foundation steps of feature map construction for the region-based attention model are proposed here. The color contrast map is generated based upon the extended findings from the color theory, the symmetry map is constructed using a novel scanning-based method, and a new algorithm is proposed to compute a size contrast map as a formal feature channel. Eccentricity and orientation are computed using the moments of obtained regions and then saliency is evaluated using the rarity criteria. The efficient design of the proposed algorithms allows incorporating five feature channels while maintaining a processing rate of multiple frames per second. Another salient advantage over the existing techniques is the reusability of the salient regions in the high-level machine vision procedures due to preservation of their shapes and precise locations. The results indicate that the proposed model has the potential to efficiently integrate the phenomenon of attention into the main stream of machine vision and systems with restricted computing resources such as mobile robots can benefit from its advantages.
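The rarity criterion for saliency over pre-segmented regions can be illustrated with a few lines of numpy: regions whose feature values fall into sparsely populated bins receive higher saliency. The feature set, bin count, and scoring are simplified assumptions, not the model's actual feature channels.

import numpy as np

def rarity_saliency(region_features, bins=8):
    """Rarity-based saliency for pre-segmented regions.

    region_features: shape (n_regions, n_features), e.g. columns for mean hue,
    size, eccentricity, and orientation (an illustrative feature set).
    A region scores high when its values fall into sparsely populated bins.
    """
    region_features = np.asarray(region_features, dtype=float)
    n_regions, n_features = region_features.shape
    saliency = np.zeros(n_regions)
    for f in range(n_features):
        counts, edges = np.histogram(region_features[:, f], bins=bins)
        idx = np.clip(np.digitize(region_features[:, f], edges[1:-1]), 0, bins - 1)
        saliency += 1.0 - counts[idx] / n_regions   # rare bins contribute more
    return saliency / n_features

regions = np.array([[0.10, 50, 0.20, 10],    # several similar regions...
                    [0.12, 55, 0.25, 12],
                    [0.11, 52, 0.22, 11],
                    [0.90, 400, 0.80, 80]])  # ...and one odd, rare region
print(rarity_saliency(regions))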
Cognitive and Neural Bases of Skilled Performance
1989-05-12
…cortex reveal strong effects of attention; these results suggest that the visual attentional "filter" may be located at a later stage…
Schindler, Sebastian; Kissler, Johanna
2016-10-01
Human brains spontaneously differentiate between various emotional and neutral stimuli, including written words whose emotional quality is symbolic. In the electroencephalogram (EEG), emotional-neutral processing differences are typically reflected in the early posterior negativity (EPN, 200-300 ms) and the late positive potential (LPP, 400-700 ms). These components are also enlarged by task-driven visual attention, supporting the assumption that emotional content naturally drives attention. Still, the spatio-temporal dynamics of interactions between emotional stimulus content and task-driven attention remain to be specified. Here, we examine this issue in visual word processing. Participants attended to negative, neutral, or positive nouns while high-density EEG was recorded. Emotional content and top-down attention both amplified the EPN component in parallel. On the LPP, by contrast, emotion and attention interacted: Explicit attention to emotional words led to a substantially larger amplitude increase than did explicit attention to neutral words. Source analysis revealed early parallel effects of emotion and attention in bilateral visual cortex and a later interaction of both in right visual cortex. Distinct effects of attention were found in inferior, middle and superior frontal, paracentral, and parietal areas, as well as in the anterior cingulate cortex (ACC). Results specify separate and shared mechanisms of emotion and attention at distinct processing stages. Hum Brain Mapp 37:3575-3587, 2016. © 2016 Wiley Periodicals, Inc.
Krishna, B. Suresh; Treue, Stefan
2016-01-01
Paying attention to a sensory feature improves its perception and impairs that of others. Recent work has shown that a Normalization Model of Attention (NMoA) can account for a wide range of physiological findings and the influence of different attentional manipulations on visual performance. A key prediction of the NMoA is that attention to a visual feature like an orientation or a motion direction will increase the response of neurons preferring the attended feature (response gain) rather than increase the sensory input strength of the attended stimulus (input gain). This effect of feature-based attention on neuronal responses should translate to similar patterns of improvement in behavioral performance, with psychometric functions showing response gain rather than input gain when attention is directed to the task-relevant feature. In contrast, we report here that when human subjects are cued to attend to one of two motion directions in a transparent motion display, attentional effects manifest as a combination of input and response gain. Further, the impact on input gain is greater when attention is directed towards a narrow range of motion directions than when it is directed towards a broad range. These results are captured by an extended NMoA, which either includes a stimulus-independent attentional contribution to normalization or utilizes direction-tuned normalization. The proposed extensions are consistent with the feature-similarity gain model of attention and the attentional modulation in extrastriate area MT, where neuronal responses are enhanced and suppressed by attention to preferred and non-preferred motion directions respectively. PMID:27977679
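The response-gain versus input-gain distinction above can be made concrete with a standard Naka-Rushton contrast-response function, R(c) = Rmax * c^n / (c^n + c50^n): in the sketch below, attention either scales Rmax (response gain) or the effective input drive via c50 (input gain). The parameter values and the single attention factor are simplifying assumptions for illustration, not the extended normalization model proposed in the paper.

    import numpy as np

    def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0, baseline=0.0):
        """Standard contrast-response function R(c) = r_max * c^n / (c^n + c50^n)."""
        c = np.asarray(c, dtype=float)
        return baseline + r_max * c**n / (c**n + c50**n)

    contrast = np.linspace(0.01, 1.0, 50)
    unattended = naka_rushton(contrast)
    response_gain = naka_rushton(contrast, r_max=1.3)    # attention scales the maximum response
    input_gain = naka_rushton(contrast, c50=0.2 / 1.3)   # attention scales the effective input
    mixed = naka_rushton(contrast, r_max=1.15, c50=0.2 / 1.15)  # combination, as reported behaviorally
    # Only response gain raises the high-contrast asymptote; input gain mainly
    # shifts the curve leftward, which is most visible at mid contrasts.
    print(unattended[-1], response_gain[-1], input_gain[-1])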
Belkaid, Marwen; Cuperlier, Nicolas; Gaussier, Philippe
2017-01-01
Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call 'Emotional Metacontrol'. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot performance and fosters the exploratory behavior to avoid deadlocks.
Park, George D; Reed, Catherine L
2015-10-01
Despite attentional prioritization for grasping space near the hands, tool-use appears to transfer attentional bias to the tool's end/functional part. The contributions of haptic and visual inputs to attentional distribution along a tool were investigated as a function of tool-use in near (Experiment 1) and far (Experiment 2) space. Visual attention was assessed with a 50/50, go/no-go, target discrimination task, while a tool was held next to targets appearing near the tool-occupied hand or tool-end. Target response times (RTs) and sensitivity (d-prime) were measured at target locations, before and after functional tool practice, for three conditions: (1) open-tool: tool-end visible (visual + haptic inputs), (2) hidden-tool: tool-end visually obscured (haptic input only), and (3) short-tool: a stick missing the tool's length/end (control condition: hand occupied but no visual/haptic input). In near space, both open- and hidden-tool groups showed a tool-end attentional bias (faster RTs toward the tool-end) before practice; after practice, RTs near the hand improved. In far space, the open-tool group showed no bias before practice; after practice, target RTs near the tool-end improved. However, the hidden-tool group showed a consistent tool-end bias despite practice. The absence of effects in the short-tool group suggested that the hidden-tool group's results were specific to haptic input. In conclusion, (1) allocation of visual attention along a tool due to tool practice differs in near and far space, and (2) visual attention is drawn toward the tool's end even when it is visually obscured, suggesting that haptic input provides sufficient information for directing attention along the tool.
Visual attention for a desktop virtual environment with ambient scent
Toet, Alexander; van Schaik, Martin G.
2013-01-01
In the current study participants explored a desktop virtual environment (VE) representing a suburban neighborhood with signs of public disorder (neglect, vandalism, and crime), while being exposed to either room air (control group), or subliminal levels of tar (unpleasant; typically associated with burned or waste material) or freshly cut grass (pleasant; typically associated with natural or fresh material) ambient odor. They reported all signs of disorder they noticed during their walk, together with their associated emotional response. Based on recent evidence that odors reflexively direct visual attention to (either semantically or affectively) congruent visual objects, we hypothesized that participants would notice more signs of disorder in the presence of ambient tar odor (since this odor may bias attention to unpleasant and negative features), and fewer signs of disorder in the presence of ambient grass odor (since this odor may bias visual attention toward the vegetation in the environment and away from the signs of disorder). Contrary to our expectations, the results provide no indication that the presence of an ambient odor affected the participants' visual attention for signs of disorder or their emotional response. However, the paradigm used in the present study does not allow us to draw any conclusions in this respect. We conclude that a closer affective, semantic, or spatiotemporal link between the contents of a desktop VE and ambient scents may be required to effectively establish diagnostic associations that guide a user's attention. In the absence of these direct links, ambient scent may be more diagnostic for the physical environment of the observer as a whole than for the particular items in that environment (or, in this case, items represented in the VE). PMID:24324453
Video attention deviation estimation using inter-frame visual saliency map analysis
NASA Astrophysics Data System (ADS)
Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng
2012-01-01
A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., following a rolling soccer ball), then the viewer maintains his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determining which video content is busy is an important practical problem: a busy video makes it difficult for an encoder to deploy region-of-interest (ROI)-based bit allocation, and hard for a content provider to insert additional overlays such as advertisements, which would make the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of a video, formally called video attention deviation (VAD): analyzing the spatial visual saliency maps of the video frames across time. We first derive transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady-state probability of the saccade state in the model, which is our estimate of VAD. We demonstrate that the computed steady-state probability for saccade using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence using consecutive motion compensated saliency maps.
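A minimal sketch of the steady-state computation described above, assuming the saliency-map analysis has already yielded a two-state (fixation vs. saccade) transition matrix; the 2x2 structure and the example probabilities are assumptions for illustration.

    import numpy as np

    def steady_state(P):
        """Stationary distribution of a row-stochastic transition matrix P."""
        vals, vecs = np.linalg.eig(P.T)
        # The eigenvector of P^T with eigenvalue 1 gives the stationary distribution.
        v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
        return v / v.sum()

    # Example transition matrix over the states [fixation, saccade], e.g. estimated
    # from how saliency peaks move between consecutive motion-compensated maps.
    P = np.array([[0.85, 0.15],   # fixation -> fixation, fixation -> saccade
                  [0.60, 0.40]])  # saccade  -> fixation, saccade  -> saccade
    pi = steady_state(P)
    vad_estimate = pi[1]          # steady-state probability of the saccade state
    print(round(float(vad_estimate), 3))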
Modulation of spatial attention by goals, statistical learning, and monetary reward.
Jiang, Yuhong V; Sha, Li Z; Remington, Roger W
2015-10-01
This study documented the relative strength of task goals, visual statistical learning, and monetary reward in guiding spatial attention. Using a difficult T-among-L search task, we cued spatial attention to one visual quadrant by (i) instructing people to prioritize it (goal-driven attention), (ii) placing the target frequently there (location probability learning), or (iii) associating that quadrant with greater monetary gain (reward-based attention). Results showed that successful goal-driven attention exerted the strongest influence on search RT. Incidental location probability learning yielded a smaller though still robust effect. Incidental reward learning produced negligible guidance for spatial attention. The 95 % confidence intervals of the three effects were largely nonoverlapping. To understand these results, we simulated the role of location repetition priming in probability cuing and reward learning. Repetition priming underestimated the strength of location probability cuing, suggesting that probability cuing involved long-term statistical learning of how to shift attention. Repetition priming provided a reasonable account for the negligible effect of reward on spatial attention. We propose a multiple-systems view of spatial attention that includes task goals, search habit, and priming as primary drivers of top-down attention.
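A rough simulation in the spirit of the repetition-priming analysis mentioned above, assuming that priming confers a fixed response-time benefit whenever the target quadrant repeats; the trial counts, RT constants, and noise level are illustrative assumptions, not the authors' model.

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, base_rt, priming_benefit = 2000, 1.00, 0.03  # seconds (assumed values)

    # Probability-cued search: the target falls in the "rich" quadrant 50% of the time.
    quadrants = rng.choice(4, size=n_trials, p=[0.5, 1/6, 1/6, 1/6])
    repeat = np.r_[False, quadrants[1:] == quadrants[:-1]]
    rt = base_rt - priming_benefit * repeat + rng.normal(0, 0.05, n_trials)

    # Priming alone predicts only a modest rich-quadrant advantage, because the rich
    # quadrant repeats more often than the sparse ones but not on most trials.
    rich_rt = rt[quadrants == 0].mean()
    sparse_rt = rt[quadrants != 0].mean()
    print(f"rich {rich_rt:.3f}s  sparse {sparse_rt:.3f}s  benefit {sparse_rt - rich_rt:.3f}s")

A simulation of this kind illustrates the logic of the comparison: if the simulated benefit falls well short of the observed probability-cuing effect, repetition priming alone cannot account for it.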
Schubert, Torsten; Finke, Kathrin; Redel, Petra; Kluckow, Steffen; Müller, Hermann; Strobach, Tilo
2015-05-01
Experts with video game experience, in contrast to non-experienced persons, are superior in multiple domains of visual attention. However, it is an open question which basic aspects of attention underlie this superiority. We approached this question using the framework of the Theory of Visual Attention (TVA) with tools that allowed us to assess various parameters that are related to different visual attention aspects (e.g., perception threshold, processing speed, visual short-term memory storage capacity, top-down control, spatial distribution of attention) and that are measurable on the same experimental basis. In Experiment 1, we found advantages of video game experts in perception threshold and visual processing speed; the latter being restricted to the lower positions of the computer display used. The observed advantages were not significantly moderated by general person-related characteristics such as personality traits, sensation seeking, intelligence, social anxiety, or health status. Experiment 2 tested a potential causal link between the expert advantages and video game practice with an intervention protocol. It found no effects of action video gaming on perception threshold, visual short-term memory storage capacity, iconic memory storage, top-down control, and spatial distribution of attention after 15 days of training. However, observations of a selective improvement of processing speed at the lower positions of the computer screen after video game training and of retest effects are suggestive of limited possibilities to improve basic aspects of visual attention (TVA) with practice. Copyright © 2015 Elsevier B.V. All rights reserved.
Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi
2018-05-16
Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In a natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, whether the practice effect during cross-modal selective attention is supra-modal or modality-specific, and moreover whether the practice effect shows the same modality preference as the visual dominance effect in the multisensory environment. To answer these two questions, we adopted a cross-modal selective attention paradigm in conjunction with a hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention adapted behavior more flexibly with practice than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., the supra-modal mechanisms. On the other hand, functional decoupling between the auditory and the visual system was observed with the progress of practice, which varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and ventral visual streams during auditory attention, while it was decoupled only from the ventral visual stream during visual attention. To efficiently suppress the irrelevant visual information with practice, auditory attention needs to additionally decouple the auditory system from the dorsal visual stream. The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.
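As a minimal illustration of how the functional decoupling reported above might be quantified, the sketch below compares inter-regional correlations early versus late in practice; the ROI labels, the simulated time series, and the use of plain Pearson correlation (rather than the specific connectivity analysis used in the study) are assumptions.

    import numpy as np

    def roi_coupling(ts_a, ts_b):
        """Pearson correlation between two ROI time series (z-scored internally)."""
        a = (ts_a - ts_a.mean()) / ts_a.std()
        b = (ts_b - ts_b.mean()) / ts_b.std()
        return float(np.mean(a * b))

    # Simulated BOLD time series for an auditory ROI and a dorsal visual ROI,
    # early vs. late in practice (the shared signal shrinks late = decoupling).
    rng = np.random.default_rng(2)
    shared_early, shared_late = rng.standard_normal(200), rng.standard_normal(200)
    aud_early = shared_early + rng.standard_normal(200)
    vis_early = shared_early + rng.standard_normal(200)
    aud_late = 0.3 * shared_late + rng.standard_normal(200)
    vis_late = 0.3 * shared_late + rng.standard_normal(200)
    print(roi_coupling(aud_early, vis_early), roi_coupling(aud_late, vis_late))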
Romei, Vincenzo; Thut, Gregor; Mok, Robert M; Schyns, Philippe G; Driver, Jon
2012-03-01
Although oscillatory activity in the alpha band was traditionally associated with lack of alertness, more recent work has linked it to specific cognitive functions, including visual attention. The emerging method of rhythmic transcranial magnetic stimulation (TMS) allows causal interventional tests for the online impact on performance of TMS administered in short bursts at a particular frequency. TMS bursts at 10 Hz have recently been shown to have an impact on spatial visual attention, but any role in featural attention remains unclear. Here we used rhythmic TMS at 10 Hz to assess the impact on attending to global or local components of a hierarchical Navon-like stimulus (D. Navon (1977) Forest before trees: The precedence of global features in visual perception. Cognit. Psychol., 9, 353), in a paradigm recently used with TMS at other frequencies (V. Romei, J. Driver, P.G. Schyns & G. Thut. (2011) Rhythmic TMS over parietal cortex links distinct brain frequencies to global versus local visual processing. Curr. Biol., 21, 334-337). In separate groups, left or right posterior parietal sites were stimulated at 10 Hz just before presentation of the hierarchical stimulus. Participants had to identify either the local or global component in separate blocks. Right parietal 10 Hz stimulation (vs. sham) significantly impaired global processing without affecting local processing, while left parietal 10 Hz stimulation vs. sham impaired local processing with a minor trend to enhance global processing. These 10 Hz outcomes differed significantly from stimulation at other frequencies (i.e. 5 or 20 Hz) over the same site in other recent work with the same paradigm. These dissociations confirm differential roles of the two hemispheres in local vs. global processing, and reveal a frequency-specific role for stimulation in the alpha band for regulating feature-based visual attention. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Eye-gaze independent EEG-based brain-computer interfaces for communication.
Riccio, A; Mattia, D; Simione, L; Olivetti, M; Cincotti, F
2012-08-01
The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs and the last six on tactile BCIs, respectively. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users' requirements in a real-life scenario.
Visual Hybrid Development Learning System (VHDLS) framework for children with autism.
Banire, Bilikis; Jomhari, Nazean; Ahmad, Rodina
2015-10-01
Education has a partially remediating effect on the deficits of children with autism. As a result, special techniques are required to gain their attention and interest in learning, as compared to typical children. Several studies have shown that these children are visual learners. In this study, we proposed a Visual Hybrid Development Learning System (VHDLS) framework that is based on an instructional design model, multimedia cognitive learning theory, and learning style, in order to guide software developers in developing learning systems for children with autism. The results from this study showed that the attention of children with autism increased more with the proposed VHDLS framework.
Guidance of visual attention by semantic information in real-world scenes
Wu, Chia-Chien; Wick, Farahnaz Ahmed; Pomplun, Marc
2014-01-01
Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some factors that drive the allocation of visual attention and determine visual selection. This article provides a review of experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding of how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces. PMID:24567724
Attention Effects During Visual Short-Term Memory Maintenance: Protection or Prioritization?
Matsukura, Michi; Luck, Steven J.; Vecera, Shaun P.
2007-01-01
Interactions between visual attention and visual short-term memory (VSTM) play a central role in cognitive processing. For example, attention can assist in selectively encoding items into visual memory. Attention appears to be able to influence items already stored in visual memory as well; cues that appear long after the presentation of an array of objects can affect memory for those objects (Griffin & Nobre, 2003). In five experiments, we distinguished two possible mechanisms for the effects of cues on items currently stored in VSTM. A protection account proposes that attention protects the cued item from becoming degraded during the retention interval. By contrast, a prioritization account suggests that attention increases a cued item’s priority during the comparison process that occurs when memory is tested. The results of the experiments were consistent with the first of these possibilities, suggesting that attention can serve to protect VSTM representations while they are being maintained. PMID:18078232
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811
Wästlund, Erik; Shams, Poja; Otterbring, Tobias
2018-01-01
In visual marketing, the truism that "unseen is unsold" means that products that are not noticed will not be sold. This truism rests on the idea that the consumer choice process is heavily influenced by visual search. However, given that the majority of available products are not seen by consumers, this article examines the role of peripheral vision in guiding attention during the consumer choice process. In two eye-tracking studies, one conducted in a lab facility and the other conducted in a supermarket, the authors investigate the role and limitations of peripheral vision. The results show that peripheral vision is used to direct visual attention when discriminating between target and non-target objects in an eye-tracking laboratory. Target and non-target similarity, as well as visual saliency of non-targets, constitute the boundary conditions for this effect, which generalizes from instruction-based laboratory tasks to preference-based choice tasks in a real supermarket setting. Thus, peripheral vision helps customers to devote a larger share of attention to relevant products during the consumer choice process. Taken together, the results show how the creation of consideration sets (sets of possible choice options) relies on both goal-directed attention and peripheral vision. These results could explain how visually similar packaging positively influences market leaders, while making novel brands almost invisible on supermarket shelves. The findings show that even though unsold products might be unseen, in the sense that they have not been directly observed, they might still have been evaluated and excluded by means of peripheral vision. This article is based on controlled lab experiments as well as a field study conducted in a complex retail environment. Thus, the findings are valid both under controlled and ecologically valid conditions. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria
2016-01-01
Purpose: The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Method: Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological…
Aging and goal-directed emotional attention: distraction reverses emotional biases.
Knight, Marisa; Seymour, Travis L; Gaunt, Joshua T; Baker, Christopher; Nesmith, Kathryn; Mather, Mara
2007-11-01
Previous findings reveal that older adults favor positive over negative stimuli in both memory and attention (for a review, see Mather & Carstensen, 2005). This study used eye tracking to investigate the role of cognitive control in older adults' selective visual attention. Younger and older adults viewed emotional-neutral and emotional-emotional pairs of faces and pictures while their gaze patterns were recorded under full or divided attention conditions. Replicating previous eye-tracking findings, older adults allocated less of their visual attention to negative stimuli in negative-neutral stimulus pairings in the full attention condition than younger adults did. However, as predicted by a cognitive-control-based account of the positivity effect in older adults' information processing tendencies (Mather & Knight, 2005), older adults' tendency to avoid negative stimuli was reversed in the divided attention condition. Compared with younger adults, older adults' limited attentional resources were more likely to be drawn to negative stimuli when they were distracted. These findings indicate that emotional goals can have unintended consequences when cognitive control mechanisms are not fully available.
Attention modulates perception of visual space
Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng; He, Zijiang J.
2017-01-01
Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one’s ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche. PMID:29177198
Attention and normalization circuits in macaque V1
Sanayei, M; Herrero, J L; Distler, C; Thiele, A
2015-01-01
Attention affects neuronal processing and improves behavioural performance. In extrastriate visual cortex these effects have been explained by normalization models, which assume that attention influences the circuit that mediates surround suppression. While normalization models have been able to explain attentional effects, their validity has rarely been tested against alternative models. Here we investigate how attention and surround/mask stimuli affect neuronal firing rates and orientation tuning in macaque V1. Surround/mask stimuli provide an estimate of the extent to which V1 neurons are affected by normalization, which was compared against the effects of spatial top-down attention. For some attention/surround effect comparisons, the strength of attentional modulation was correlated with the strength of surround modulation, suggesting that attention and surround/mask stimulation (i.e. normalization) might use a common mechanism. To explore this in detail, we fitted multiplicative and additive models of attention to our data. In one class of models, attention contributed to normalization mechanisms, whereas in a different class of models it did not. Model selection based on Akaike's and Bayesian information criteria demonstrated that in most cells the effects of attention were best described by models where attention did not contribute to normalization mechanisms. This demonstrates that attentional influences on neuronal responses in primary visual cortex often bypass normalization mechanisms. PMID:25757941
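A minimal sketch of the information-criterion comparison described above, assuming each candidate attention model has already been fitted to a cell's responses by maximum likelihood; the log-likelihoods, parameter counts, and trial number below are placeholders, not data from the study.

    import numpy as np

    def aic(log_lik, n_params):
        return 2 * n_params - 2 * log_lik

    def bic(log_lik, n_params, n_obs):
        return n_params * np.log(n_obs) - 2 * log_lik

    # Placeholder fits for one cell: attention inside vs. outside the normalization circuit.
    fits = {
        "attention_in_normalization":       {"log_lik": -812.4, "k": 6},
        "attention_bypasses_normalization": {"log_lik": -809.9, "k": 6},
    }
    n_obs = 480  # e.g., number of trials for this cell (assumed)
    for name, f in fits.items():
        print(name,
              round(aic(f["log_lik"], f["k"]), 1),
              round(bic(f["log_lik"], f["k"], n_obs), 1))
    # Lower AIC/BIC is preferred; per the abstract, the bypass model won for most cells.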
Automatic Guidance of Visual Attention from Verbal Working Memory
ERIC Educational Resources Information Center
Soto, David; Humphreys, Glyn W.
2007-01-01
Previous studies have shown that visual attention can be captured by stimuli matching the contents of working memory (WM). Here, the authors assessed the nature of the representation that mediates the guidance of visual attention from WM. Observers were presented with either verbal or visual primes (to hold in memory, Experiment 1; to verbalize,…
Visual Spatial Attention to Multiple Locations At Once: The Jury Is Still Out
ERIC Educational Resources Information Center
Jans, Bert; Peters, Judith C.; De Weerd, Peter
2010-01-01
Although in traditional attention research the focus of visual spatial attention has been considered as indivisible, many studies in the last 15 years have claimed the contrary. These studies suggest that humans can direct their attention simultaneously to multiple noncontiguous regions of the visual field upon mere instruction. The notion that…
Television Viewing at Home: Age Trends in Visual Attention and Time with TV.
ERIC Educational Resources Information Center
Anderson, Daniel R.; And Others
1986-01-01
Describes age trends in television viewing time and visual attention of children and adults videotaped in their homes for 10-day periods. Shows that the increase in visual attention to television during the preschool years is consistent with the theory that television program comprehensibility is a major determinant of attention in young children.…
The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.
Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli
2009-11-18
We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.
Smith, Philip L; Sewell, David K
2013-07-01
We generalize the integrated system model of Smith and Ratcliff (2009) to obtain a new theory of attentional selection in brief, multielement visual displays. The theory proposes that attentional selection occurs via competitive interactions among detectors that signal the presence of task-relevant features at particular display locations. The outcome of the competition, together with attention, determines which stimuli are selected into visual short-term memory (VSTM). Decisions about the contents of VSTM are made by a diffusion-process decision stage. The selection process is modeled by coupled systems of shunting equations, which perform gated where-on-what pathway VSTM selection. The theory provides a computational account of key findings from attention tasks with near-threshold stimuli. These are (a) the success of the MAX model of visual search and spatial cuing, (b) the distractor homogeneity effect, (c) the double-target detection deficit, (d) redundancy costs in the post-stimulus probe task, (e) the joint item and information capacity limits of VSTM, and (f) the object-based nature of attentional selection. We argue that these phenomena are all manifestations of an underlying competitive VSTM selection process, which arise as a natural consequence of our theory. PsycINFO Database Record (c) 2013 APA, all rights reserved.
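A minimal sketch of a competitive shunting interaction of the general kind invoked above (not the full integrated-system model): each location's detector is driven by its own input, saturates at a ceiling, and is divisively suppressed as its competitors grow, and the winning detector would be the item gated into VSTM. All parameters are illustrative assumptions.

    import numpy as np

    def shunting_competition(inputs, steps=400, dt=0.01, decay=1.0,
                             ceiling=1.0, inhibition=2.0):
        """Euler-integrate dx_i/dt = -decay*x_i + (ceiling - x_i)*I_i
        - x_i * inhibition * sum_{j != i} x_j for all detectors i."""
        x = np.zeros_like(inputs, dtype=float)
        for _ in range(steps):
            others = x.sum() - x
            dx = -decay * x + (ceiling - x) * inputs - x * inhibition * others
            x = np.clip(x + dt * dx, 0.0, ceiling)
        return x

    # Four display locations; the cued/task-relevant item has the strongest drive.
    drives = np.array([0.4, 0.4, 0.9, 0.4])
    activity = shunting_competition(drives)
    selected = int(np.argmax(activity))  # the item passed to the VSTM/decision stage
    print(np.round(activity, 3), "selected location:", selected)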
Milleville-Pennel, Isabelle; Pothier, Johanna; Hoc, Jean-Michel; Mathé, Jean-François
2010-01-01
The aim was to assess the visual exploration of persons suffering from traumatic brain injury (TBI). It was hypothesized that visual exploration could be modified as a result of attentional or executive function deficits that are often observed following brain injury. This study compared an analysis of eye movements while driving with data from neuropsychological tests. Five participants suffering from TBI and six control participants took part in this study. All had good driving experience. They were invited to drive on a fixed-base driving simulator. Eye fixations were recorded using an eye tracker. Neuropsychological tests were used to assess attention, working memory, rapidity of information processing and executive functions. Participants with TBI showed a reduction in the variety of the visual zones explored and a reduction of the distance of exploration. Moreover, neuropsychological evaluation indicated difficulties in terms of divided attention, anticipation and planning. The two sources of information are complementary: the tests give information about cognitive deficiencies but not about their translation into a dynamic situation; conversely, visual exploration provides information about the dynamic with which information is picked up in the environment but not about the cognitive processes involved.
Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans
2012-08-01
Part I described the topography of visual performance over the life span. Performance decline was explained only partly by deterioration of the optical apparatus. Part II therefore examines the influence of higher visual and cognitive functions. Visual field maps of static perimetry, double-pulse resolution (DPR), reaction times, and contrast thresholds for 95 healthy observers were correlated with measures of visual attention (alertness, divided attention, spatial cueing), visual search, and the size of the attention focus. Correlations with the attentional variables were substantial, particularly for variables of temporal processing. DPR thresholds depended on the size of the attention focus. The extraction of cognitive variables from the correlations between topographical variables and participant age substantially reduced those correlations. There is a systematic top-down influence on the aging of visual functions, particularly of temporal variables, that largely explains performance decline and the change of the topography over the life span.
Vision in Flies: Measuring the Attention Span
Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin
2016-01-01
A visual stimulus at a particular location of the visual field may elicit a behavior while at the same time equally salient stimuli in other parts do not. This property of visual systems is known as selective visual attention (SVA). The animal is said to have a focus of attention (FoA) which it has shifted to a particular location. Visual attention normally involves an attention span at the location to which the FoA has been shifted. Here the attention span is measured in Drosophila. The fly is tethered and hence has its eyes fixed in space. It can shift its FoA internally. This shift is revealed using two simultaneous test stimuli with characteristic responses at their particular locations. In tethered flight a wild type fly keeps its FoA at a certain location for up to 4 s. Flies with a mutation in the radish gene, which has been suggested to be involved in attention-like mechanisms, display a reduced attention span of only 1 s. PMID:26848852
Real-time decoding of the direction of covert visuospatial attention
NASA Astrophysics Data System (ADS)
Andersson, Patrik; Ramsey, Nick F.; Raemaekers, Mathijs; Viergever, Max A.; Pluim, Josien P. W.
2012-08-01
Brain-computer interfaces (BCIs) make it possible to translate a person’s intentions into actions without depending on the muscular system. Brain activity is measured and classified into commands, thereby creating a direct link between the mind and the environment, enabling, e.g., cursor control or navigation of a wheelchair or robot. Most BCI research is conducted with scalp EEG but recent developments move toward intracranial electrodes for paralyzed people. The vast majority of BCI studies focus on the motor system as the appropriate target for recording and decoding movement intentions. However, properties of the visual system may make the visual system an attractive and intuitive alternative. We report on a study investigating feasibility of decoding covert visuospatial attention in real time, exploiting the full potential of a 7 T MRI scanner to obtain the necessary signal quality, capitalizing on earlier fMRI studies indicating that covert visuospatial attention changes activity in the visual areas that respond to stimuli presented in the attended area of the visual field. Healthy volunteers were instructed to shift their attention from the center of the screen to one of four static targets in the periphery, without moving their eyes from the center. During the first part of the fMRI-run, the relevant brain regions were located using incremental statistical analysis. During the second part, the activity in these regions was extracted and classified, and the subject was given visual feedback of the result. Performance was assessed as the number of trials where the real-time classifier correctly identified the direction of attention. On average, 80% of trials were correctly classified (chance level <25%) based on a single image volume, indicating very high decoding performance. While we restricted the experiment to five attention target regions (four peripheral and one central), the number of directions can be higher provided the brain activity patterns can be distinguished. In summary, the visual system promises to be an effective target for BCI control.
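An offline toy version of the decoding idea above: mean activity in previously localized attention-selective ROIs is classified into one of four attended directions with a simple nearest-centroid rule; the feature layout, the simulated signals, and the classifier choice are assumptions, not the study's real-time incremental pipeline.

    import numpy as np

    # Feature vectors: mean signal in four attention-selective ROIs (one per
    # peripheral target direction), one row per single fMRI volume (assumed layout).
    rng = np.random.default_rng(3)
    n_per_class, n_rois = 40, 4
    labels = np.repeat(np.arange(4), n_per_class)       # attended direction 0-3
    signals = rng.normal(0.0, 1.0, (labels.size, n_rois))
    signals[np.arange(labels.size), labels] += 1.5      # the attended ROI responds more

    # Nearest-centroid decoder: train on alternate volumes, test on the rest.
    train = np.arange(labels.size) % 2 == 0
    centroids = np.stack([signals[train & (labels == k)].mean(axis=0) for k in range(4)])
    test_x, test_y = signals[~train], labels[~train]
    dists = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=2)
    accuracy = (dists.argmin(axis=1) == test_y).mean()
    print(f"decoding accuracy: {accuracy:.2f} (chance 0.25)")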
Harjunen, Ville J.; Ahmed, Imtiaj; Jacucci, Giulio; Ravaja, Niklas; Spapé, Michiel M.
2017-01-01
Earlier studies have revealed cross-modal visuo-tactile interactions in endogenous spatial attention. The current research used event-related potentials (ERPs) and virtual reality (VR) to identify how the visual cues of the perceiver's body affect visuo-tactile interaction in endogenous spatial attention and at what point in time the effect takes place. A bimodal oddball task with lateralized tactile and visual stimuli was presented in two VR conditions, one with and one without visible hands, and one VR-free control with hands in view. Participants were required to silently count one type of stimulus and ignore all other stimuli presented in an irrelevant modality or location. The presence of hands was found to modulate early and late components of somatosensory and visual evoked potentials. For sensory-perceptual stages, the presence of virtual or real hands was found to amplify attention-related negativity on the somatosensory N140 and cross-modal interaction in the somatosensory and visual P200. For postperceptual stages, an amplified N200 component was obtained in somatosensory and visual evoked potentials, indicating increased response inhibition in response to non-target stimuli. The somatosensory, but not the visual, N200 effect was enhanced when the virtual hands were present. The findings suggest that bodily presence affects sustained cross-modal spatial attention between vision and touch and that this effect is specifically present in ERPs related to early- and late-sensory processing, as well as response inhibition, but does not affect later attention- and memory-related P3 activity. Finally, the experiments provide commensurable scenarios for the estimation of the signal-to-noise ratio to quantify effects related to the use of a head-mounted display (HMD). However, despite valid a priori reasons for fearing signal interference due to an HMD, we observed no significant drop in the robustness of our ERP measurements. PMID:28275346
Guidance of attention by information held in working memory.
Calleja, Marissa Ortiz; Rich, Anina N
2013-05-01
Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object's image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object's image. The same pattern of results held when the target was invariant (Exps. 2-3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.
Gersch, Timothy M.; Schnitzer, Brian S.; Dosher, Barbara A.; Kowler, Eileen
2012-01-01
Saccadic eye movements and perceptual attention work in a coordinated fashion to allow selection of the objects, features or regions with the greatest momentary need for limited visual processing resources. This study investigates perceptual characteristics of pre-saccadic shifts of attention during a sequence of saccades using the visual manipulations employed to study mechanisms of attention during maintained fixation. The first part of this paper reviews studies of the connections between saccades and attention, and their significance for both saccadic control and perception. The second part presents three experiments that examine the effects of pre-saccadic shifts of attention on vision during sequences of saccades. Perceptual enhancements at the saccadic goal location relative to non-goal locations were found across a range of stimulus contrasts, with either perceptual discrimination or detection tasks, with either single or multiple perceptual targets, and regardless of the presence of external noise. The results show that the preparation of saccades can evoke a variety of attentional effects, including attentionally-mediated changes in the strength of perceptual representations, selection of targets for encoding in visual memory, exclusion of external noise, or changes in the levels of internal visual noise. The visual changes evoked by saccadic planning make it possible for the visual system to effectively use saccadic eye movements to explore the visual environment. PMID:22809798
Delayed visual attention caused by high myopic refractive error.
Winges, Kimberly M; Zarpellon, Ursula; Hou, Chuan; Good, William V
2005-06-01
Delayed visual maturation (DVM) is usually a retrospective diagnosis given to infants who are born with no or poor visually-directed behavior, despite normal acuity on objective testing, but who recover months later. This condition can be organized into several types based on associated neurodevelopmental or ocular findings, but the etiology of DVM is probably complex and involves multiple possible origins. Here we report two infants who presented with delayed visual maturation (attention). They were visually unresponsive at birth but were later found to have high myopic errors. Patient 1 had -4 D right eye, -5 D left eye. Patient 2 had -9 D o.u. Upon spectacle correction at 5 and 4 months, respectively, both infants immediately displayed visually-directed behavior, suggesting that a high refractive error was the cause of inattention in these patients. These findings could add to knowledge surrounding DVM and the diagnosis of apparently blind infants. Findings presented here also indicate the importance of prompt refractive error measurement in such cases.
Robertson, Kayela; Schmitter-Edgecombe, Maureen
2017-01-01
Impairments in attention following traumatic brain injury (TBI) can significantly impact recovery and rehabilitation effectiveness. This study investigated the multi-faceted construct of selective attention following TBI, highlighting the differences on visual nonsearch (focused attention) and search (divided attention) tasks. Participants were 30 individuals with moderate to severe TBI who were tested acutely (i.e., following emergence from post-traumatic amnesia [PTA]) and 30 age- and education-matched controls. Participants were presented with visual displays that contained either two or eight items. In the focused attention, nonsearch condition, the location of the target (if present) was cued with a peripheral arrow prior to presentation of the visual displays. In the divided attention, search condition, no spatial cue was provided prior to presentation of the visual displays. The results revealed intact focused, nonsearch attention abilities in the acute phase of TBI recovery. In contrast, when no spatial cue was provided (divided attention condition), participants with TBI demonstrated slower visual search compared to the control group. The results of this study suggest that capitalizing on intact focused attention abilities by allocating attention during cognitively demanding tasks may help to reduce mental workload and improve rehabilitation effectiveness.
Kirk, Hannah E; Gray, Kylie; Riby, Deborah M; Taffe, John; Cornish, Kim M
2017-11-01
Despite well-documented attention deficits in children with intellectual and developmental disabilities (IDD), distinctions across types of attention problems and their associations with academic attainment have not been fully explored. This study examines visual attention capacities and inattentive/hyperactive behaviours in 77 children aged 4 to 11 years with IDD and elevated behavioural attention difficulties. Children with autism spectrum disorder (ASD; n = 23), Down syndrome (DS; n = 22), and non-specific intellectual disability (NSID; n = 32) completed computerized visual search and vigilance paradigms. In addition, parents and teachers completed rating scales of inattention and hyperactivity. Concurrent associations between attention abilities and early literacy and numeracy skills were also examined. Children completed measures of receptive vocabulary, phonological abilities and cardinality skills. As expected, the results indicated that all groups had relatively comparable levels of inattentive/hyperactive behaviours as rated by parents and teachers. However, the extent of visual attention deficits varied as a result of group; namely, children with DS had poorer visual search and vigilance abilities than children with ASD and NSID. Further, significant associations between visual attention difficulties and poorer literacy and numeracy skills were observed, regardless of group. Collectively, the findings demonstrate that in children with IDD who present with homogenous behavioural attention difficulties, subtle profiles of attentional problems can be delineated at the cognitive level. © 2016 John Wiley & Sons Ltd.
Rhythmic Sampling within and between Objects despite Sustained Attention at a Cued Location
Fiebelkorn, Ian C.; Saalmann, Yuri B.; Kastner, Sabine
2013-01-01
The brain directs its limited processing resources through various selection mechanisms, broadly referred to as attention. The present study investigated the temporal dynamics of two such selection mechanisms: space- and object-based selection. Previous evidence has demonstrated that preferential processing resulting from a spatial cue (i.e., space-based selection) spreads to uncued locations, if those locations are part of the same object (i.e., resulting in object-based selection). But little is known about the relationship between these fundamental selection mechanisms. Here, we used human behavioral data to determine how space- and object-based selection simultaneously evolve under conditions that promote sustained attention at a cued location, varying the cue-to-target interval from 300-1100 ms. We tracked visual-target detection at a cued location (i.e., space-based selection), at an uncued location that was part of the same object (i.e., object-based selection), and at an uncued location that was part of a different object (i.e., in the absence of space- and object-based selection). The data demonstrate that even under static conditions, there is a moment-to-moment reweighting of attentional priorities based on object properties. This reweighting is revealed through rhythmic patterns of visual-target detection both within (at 8 Hz) and between (at 4 Hz) objects. PMID:24316204
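A minimal sketch of how rhythmic structure in detection performance could be probed, assuming hit rates have been binned across the 300-1100 ms cue-to-target intervals; the bin width, the linear detrending, and the rFFT-based spectrum are generic assumptions for illustration, not the study's exact analysis.

    import numpy as np

    def behavioral_spectrum(hit_rates, dt):
        """Amplitude spectrum of a linearly detrended hit-rate time course (dt in seconds)."""
        hr = np.asarray(hit_rates, dtype=float)
        idx = np.arange(hr.size)
        detrended = hr - np.polyval(np.polyfit(idx, hr, 1), idx)
        freqs = np.fft.rfftfreq(hr.size, d=dt)
        amps = np.abs(np.fft.rfft(detrended)) / hr.size
        return freqs, amps

    # Simulated same-object hit rates sampled every 20 ms from 300 to 1100 ms,
    # with an 8 Hz modulation riding on a constant baseline (illustrative only).
    t = np.arange(0.300, 1.100, 0.020)
    hit_rates = 0.7 + 0.05 * np.sin(2 * np.pi * 8 * t) + np.random.normal(0, 0.01, t.size)
    freqs, amps = behavioral_spectrum(hit_rates, dt=0.020)
    peak = freqs[np.argmax(amps[1:]) + 1]
    print(f"largest rhythmic component near {peak:.1f} Hz")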
Focused and shifting attention in children with heavy prenatal alcohol exposure.
Mattson, Sarah N; Calarco, Katherine E; Lang, Aimée R
2006-05-01
Attention deficits are a hallmark of the teratogenic effects of alcohol. However, characterization of these deficits remains inconclusive. Children with heavy prenatal alcohol exposure and nonexposed controls were evaluated using a paradigm consisting of three conditions: visual focus, auditory focus, and auditory-visual shift of attention. For the focus conditions, participants responded manually to visual or auditory targets. For the shift condition, participants alternated responses between visual targets and auditory targets. In the visual focus condition, alcohol-exposed children had lower accuracy and slower reaction times for all intertarget intervals (ITIs), while in the auditory focus condition, alcohol-exposed children were less accurate but displayed slower reaction times only at the longest ITI. Finally, in the shift condition, the alcohol-exposed group was as accurate as controls but had slower reaction times. These results indicate that children with heavy prenatal alcohol exposure have pervasive deficits in visual focused attention and deficits in maintaining auditory attention over time. However, no deficits were noted in the ability to disengage and reengage attention when required to shift attention between visual and auditory stimuli, although reaction times to shift were slower. Copyright (c) 2006 APA, all rights reserved.
The effect of search condition and advertising type on visual attention to Internet advertising.
Kim, Gho; Lee, Jang-Han
2011-05-01
This research was conducted to examine the level of consumers' visual attention to Internet advertising. It was predicted that consumers' search type would influence visual attention to advertising; specifically, that advertising would attract more attention in the exploratory search condition than in the goal-directed search condition. It was also predicted that there would be a difference in visual attention depending on the advertisement type (text vs. pictorial advertising). An eye tracker was used for measurement. Results revealed that both search condition and advertising type influenced advertising effectiveness.
The Differential Effects of Reward on Space- and Object-Based Attentional Allocation
Shomstein, Sarah
2013-01-01
Estimating reward contingencies and allocating attentional resources to a subset of relevant information are the most important contributors to increasing the adaptability of an organism. Although recent evidence suggests that reward- and attention-based guidance recruits overlapping cortical regions and has similar effects on sensory responses, the exact nature of the relationship between the two remains elusive. Here, using event-related fMRI in human participants, we contrasted the effects of reward on space- and object-based selection in the same experimental setting. Reward was either distributed randomly or biased toward a particular object. Behavioral and neuroimaging results show that space- and object-based attention are influenced by reward differentially. Space-based attentional allocation is mandatory, integrating reward information over time, whereas object-based attentional allocation is a default setting that is completely replaced by the reward signal. Nonadditivity of the effects of reward and object-based attention was observed consistently at multiple levels of analysis in early visual areas as well as in control regions. These results provide strong evidence that space- and object-based allocation are two independent attentional mechanisms, and suggest that reward serves to constrain attentional selection. PMID:23804086
Motor (but not auditory) attention affects syntactic choice.
Pokhoday, Mikhail; Scheepers, Christoph; Shtyrov, Yury; Myachykov, Andriy
2018-01-01
Understanding the determinants of syntactic choice in sentence production is a salient topic in psycholinguistics. Existing evidence suggests that syntactic choice results from an interplay between linguistic and non-linguistic factors, and a speaker's attention to the elements of a described event represents one such factor. Whereas multimodal accounts of attention suggest a role for different modalities in this process, existing studies examining attention effects in syntactic choice are primarily based on visual cueing paradigms. Hence, it remains unclear whether attentional effects on syntactic choice are limited to the visual modality or are indeed more general. This issue is addressed by the current study. Native English participants viewed and described line drawings of simple transitive events while their attention was directed to the location of the agent or the patient of the depicted event by means of either an auditory (monaural beep) or a motor (unilateral key press) lateral cue. Our results show an effect of cue location, with participants producing more passive-voice descriptions in the patient-cued conditions. Crucially, this cue location effect emerged in the motor-cue but not (or substantially less so) in the auditory-cue condition, as confirmed by a reliable interaction between cue location (agent vs. patient) and cue type (auditory vs. motor). Our data suggest that attentional effects on the speaker's syntactic choices are modality-specific and limited to the visual and motor, but not the auditory, domain.
Mangun, G R; Buck, L A
1998-03-01
This study investigated the simple reaction time (RT) and event-related potential (ERP) correlates of biasing attention towards a location in the visual field. RTs and ERPs were recorded to stimuli flashed randomly and with equal probability to the left and right visual hemifields in three blocked, covert attention conditions: (i) attention divided equally between the left and right hemifield locations; (ii) attention biased towards the left location; or (iii) attention biased towards the right location. Attention was biased towards the left or right by instructions to the subjects, and responses were required to all stimuli. Relative to the divided attention condition, RTs were significantly faster for targets occurring where more attention was allocated (benefits), and slower for targets occurring where less attention was allocated (costs). The early P1 (100-140 msec) component over the lateral occipital scalp regions showed attentional benefits. There were no amplitude modulations of the occipital N1 (125-180 msec) component with attention. Between 200 and 500 msec latency, a late positive deflection (LPD) showed both attentional costs and benefits. The behavioral findings show that when sufficiently induced to bias attention, human observers demonstrate RT benefits as well as costs. The corresponding P1 benefits suggest that the RT benefits of spatial attention may arise as the result of modulations of visual information processing in the extrastriate visual cortex.
A top-down manner-based DCNN architecture for semantic image segmentation.
Qiao, Kai; Chen, Jian; Wang, Linyuan; Zeng, Lei; Yan, Bin
2017-01-01
Given their powerful feature representations for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of the visual mechanism, we conclude that DCNNs operating in a purely bottom-up manner are not sufficient, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentations are clearly improved. We also quantitatively obtain an improvement of about 2%-3% in intersection over union (IOU) accuracy on the PASCAL VOC 2011 and 2012 test sets.
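To make the idea of injecting superpixel-level, top-down information into a bottom-up DCNN more concrete, here is a minimal sketch (not the authors' architecture): per-pixel class scores from an FCN-style network are averaged within SLIC superpixels so that label decisions respect perceptually coherent regions. The function name, the SLIC settings, and the simple averaging rule are illustrative assumptions.

```python
import numpy as np
from skimage.segmentation import slic

def refine_with_superpixels(image, score_map, n_segments=400):
    """Average per-pixel DCNN class scores within each SLIC superpixel.

    image:     H x W x 3 float array in [0, 1]
    score_map: H x W x C array of class scores from an FCN-style network
    Returns an H x W array of refined segmentation labels.
    """
    segments = slic(image, n_segments=n_segments)
    refined = np.empty_like(score_map)
    for label in np.unique(segments):
        mask = segments == label
        # Enforce label consistency inside each perceptually coherent region.
        refined[mask] = score_map[mask].mean(axis=0)
    return refined.argmax(axis=-1)
```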
Gilchrist, Amanda L; Duarte, Audrey; Verhaeghen, Paul
2016-01-01
Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were presented either with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an uninformative, neutral cue. Although older adults were less accurate overall, both age groups benefited from the presentation of an informative, feature-based cue relative to a neutral cue. Surprisingly, we also observed differences in the effectiveness of shape versus color cues and their effects upon post-cue memory load. These results suggest that older adults can use top-down attention to remove irrelevant items from visual working memory, provided that task-relevant features function as cues.
Evidence-based Assessment of Cognitive Functioning in Pediatric Psychology
Brown, Ronald T.; Cavanagh, Sarah E.; Vess, Sarah F.; Segall, Mathew J.
2008-01-01
Objective: To review the evidence base for measures of cognitive functioning frequently used within the field of pediatric psychology. Methods: From a list of 47 measures identified by the Society of Pediatric Psychology (Division 54) Evidence-Based Assessment Task Force Workgroup, 27 measures were included in the review. Measures were organized, reviewed, and evaluated according to general domains of functioning (e.g., attention/executive functioning, memory). Results: Twenty-two of 27 measures reviewed demonstrated psychometric properties that met “Well-established” criteria as set forth by the Assessment Task Force. Psychometric properties were strongest for measures of general cognitive ability and weakest for measures of visual-motor functioning and attention. Conclusions: We report use of “Well-established” measures of overall cognitive functioning, nonverbal intelligence, academic achievement, language, and memory and learning. For several specific tests in the domains of visual-motor functioning and attention, additional psychometric data are needed for measures to meet criteria as “Well established.” PMID:18194973
Bionic Vision-Based Intelligent Power Line Inspection System
Ma, Yunpeng; He, Feijia; Xu, Jinxin
2017-01-01
Detecting the threats that external obstacles pose to power lines can ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism in this intelligent inspection system is used to detect and track power lines in image sequences according to the shape information of power lines, and the binocular visual model is used to calculate the 3D coordinate information of obstacles and power lines. To improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system is able to accurately and automatically locate the position of obstacles around power lines, that the designed power line inspection system is effective against complex backgrounds, and that no detections were missed under the different test conditions. PMID:28203269
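The binocular pipeline described here (keypoint matching across the two cameras followed by 3D reconstruction of obstacles and power lines) can be sketched roughly as follows. Because SURF is only available in opencv-contrib, ORB is used as a stand-in descriptor, and the projection matrices, parameters, and function names are illustrative assumptions rather than the authors' matching strategy.

```python
import cv2
import numpy as np

def match_and_triangulate(img_left, img_right, P_left, P_right, ratio=0.75):
    """Match keypoints across a stereo pair and recover 3D points.

    ORB stands in for SURF (which needs opencv-contrib); P_left and
    P_right are the 3x4 projection matrices of the calibrated cameras.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        # Lowe-style ratio test keeps only distinctive correspondences.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    pts_left = np.float32([kp1[m.queryIdx].pt for m in good]).T   # 2 x N
    pts_right = np.float32([kp2[m.trainIdx].pt for m in good]).T  # 2 x N
    points_4d = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
    return (points_4d[:3] / points_4d[3]).T  # N x 3 world coordinates
```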
Kamke, Marc R; Van Luyn, Jeanette; Constantinescu, Gabriella; Harris, Jill
2014-01-01
Evidence suggests that deafness-induced changes in visual perception, cognition and attention may compensate for a hearing loss. Such alterations, however, may also negatively influence adaptation to a cochlear implant. This study investigated whether involuntary attentional capture by salient visual stimuli is altered in children who use a cochlear implant. Thirteen experienced implant users (aged 8-16 years) and age-matched normally hearing children were presented with a rapid sequence of simultaneous visual and auditory events. Participants were tasked with detecting numbers presented in a specified color and identifying a change in the tonal frequency whilst ignoring irrelevant visual distractors. Compared to visual distractors that did not possess the target-defining characteristic, target-colored distractors were associated with a decrement in visual performance (response time and accuracy), demonstrating a contingent capture of involuntary attention. Visual distractors did not, however, impair auditory task performance. Importantly, detection performance for the visual and auditory targets did not differ between the groups. These results suggest that proficient cochlear implant users demonstrate normal capture of visuospatial attention by stimuli that match top-down control settings.
Determinants of Global Color-Based Selection in Human Visual Cortex.
Bartsch, Mandy V; Boehler, Carsten N; Stoppel, Christian M; Merkel, Christian; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Hopf, Jens-Max
2015-09-01
Feature attention operates in a spatially global way, with attended feature values being prioritized for selection outside the focus of attention. Accounts of global feature attention have emphasized feature competition as a determining factor. Here, we use magnetoencephalographic recordings in humans to test whether competition is critical for global feature selection to arise. Subjects performed a color/shape discrimination task in one visual field (VF), while irrelevant color probes were presented in the other unattended VF. Global effects of color attention were assessed by analyzing the response to the probe as a function of whether or not the probe's color was a target-defining color. We find that global color selection involves a sequence of modulations in extrastriate cortex, with an initial phase in higher tier areas (lateral occipital complex) followed by a later phase in lower tier retinotopic areas (V3/V4). Importantly, these modulations appeared with and without color competition in the focus of attention. Moreover, early parts of the modulation emerged for a task-relevant color not even present in the focus of attention. All modulations, however, were eliminated during simple onset-detection of the colored target. These results indicate that global color-based attention depends on target discrimination independent of feature competition in the focus of attention. © The Author 2014. Published by Oxford University Press. All rights reserved.
The Interplay between Executive Control and Motor Functioning in Williams Syndrome
ERIC Educational Resources Information Center
Hocking, Darren R.; Thomas, Daniel; Menant, Jasmine C.; Porter, Melanie A.; Smith, Stuart; Lord, Stephen R.; Cornish, Kim M.
2013-01-01
Previous studies suggest that individuals with Williams syndrome (WS), a rare genetically based neurodevelopmental disorder, show specific weaknesses in visual attention and response inhibition within the visuospatial domain. Here we examine the extent to which impairments in attentional control extend to the visuomotor domain using a…
Splitting attention across the two visual fields in visual short-term memory.
Delvenne, Jean-Francois; Holt, Jessica L
2012-02-01
Humans have the ability to attentionally select the most relevant visual information from their extrapersonal world and to retain it in a temporary buffer, known as visual short-term memory (VSTM). Research suggests that at least two non-contiguous items can be selected simultaneously when they are distributed across the two visual hemifields. In two experiments, we show that attention can also be split between the left and right sides of internal representations held in VSTM. Participants were asked to remember several colors, while cues presented during the delay instructed them to orient their attention to a subset of memorized colors. Experiment 1 revealed that orienting attention to one or two colors equally strengthened participants' memory for those colors, but only when they were from separate hemifields. Experiment 2 showed that in the absence of attentional cues the distribution of the items in the visual field per se had no effect on memory. These findings strongly suggest the existence of independent attentional resources in the two hemifields for selecting and/or consolidating information in VSTM. Copyright © 2011 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vierhaus, Marc; Spangler, Sibylle; Borchert, Sonja; Freitag, Claudia; Goertz, Claudia; Graf, Frauke; Gudi, Helene; Kolling, Thorsten; Lamm, Bettina; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun
2012-01-01
This longitudinal study examined the influence of stimulus material on attention and expectation learning in the visual expectation paradigm. Female faces were used as attention-attracting stimuli, and non-meaningful visual stimuli of comparable complexity (Greebles) were used as low attention-attracting stimuli. Expectation learning performance…
The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search
ERIC Educational Resources Information Center
Becker, Stefanie I.
2010-01-01
Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…
Attention biases visual activity in visual short-term memory.
Kuo, Bo-Cheng; Stokes, Mark G; Murray, Alexandra M; Nobre, Anna Christina
2014-07-01
In the current study, we tested whether representations in visual STM (VSTM) can be biased via top-down attentional modulation of visual activity in retinotopically specific locations. We manipulated attention using retrospective cues presented during the retention interval of a VSTM task. Retrospective cues triggered activity in a large-scale network implicated in attentional control and led to retinotopically specific modulation of activity in early visual areas V1-V4. Importantly, shifts of attention during VSTM maintenance were associated with changes in functional connectivity between pFC and retinotopic regions within V4. Our findings provide new insights into top-down control mechanisms that modulate VSTM representations for flexible and goal-directed maintenance of the most relevant memoranda.
Multiperson visual focus of attention from head pose and meeting contextual cues.
Ba, Sileye O; Odobez, Jean-Marc
2011-01-01
This paper introduces a novel contextual model for the recognition of people's visual focus of attention (VFOA) in meetings from audio-visual perceptual cues. More specifically, instead of independently recognizing the VFOA of each meeting participant from his own head pose, we propose to jointly recognize the participants' visual attention in order to introduce context-dependent interaction models that relate to group activity and the social dynamics of communication. Meeting contextual information is represented by the location of people, conversational events identifying floor holding patterns, and a presentation activity variable. By modeling the interactions between the different contexts and their combined and sometimes contradictory impact on the gazing behavior, our model allows us to handle VFOA recognition in difficult task-based meetings involving artifacts, presentations, and moving people. We validated our model through rigorous evaluation on a publicly available and challenging data set of 12 real meetings (5 hours of data). The results demonstrated that the integration of the presentation and conversation dynamical context using our model can lead to significant performance improvements.
Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases
Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N.
2016-01-01
A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed. PMID:26858668
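As a rough illustration of the kind of low-level competition plus top-down bias that such a model builds on (this is a toy sketch, not the published model), the snippet below applies divisive normalization to the bottom-up drive of display items: with many strong competitors (high perceptual load), little response is left over for a task-irrelevant distractor. All parameter values are arbitrary assumptions.

```python
import numpy as np

def competitive_selection(saliency, top_down_gain, sigma=0.1):
    """Toy competition among display items with a top-down attentional bias.

    saliency:      bottom-up drive of each item (1-D array)
    top_down_gain: multiplicative attentional bias per item (1-D array)
    Divisive normalization means that the more strong competitors there
    are (higher perceptual load), the less response remains for a
    task-irrelevant distractor.
    """
    drive = np.asarray(saliency, float) * np.asarray(top_down_gain, float)
    return drive / (sigma + drive.sum())

# Low load: target plus one distractor -> the distractor keeps some response.
print(competitive_selection([1.0, 0.4], [2.0, 1.0]))
# High load: five nontargets crowd out the distractor (last item) almost fully.
print(competitive_selection([1.0, 1.0, 1.0, 1.0, 1.0, 0.4],
                            [2.0, 1.0, 1.0, 1.0, 1.0, 1.0]))
```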
Color-Change Detection Activity in the Primate Superior Colliculus.
Herman, James P; Krauzlis, Richard J
2017-01-01
The primate superior colliculus (SC) is a midbrain structure that participates in the control of spatial attention. Previous studies examining the role of the SC in attention have mostly used luminance-based visual features (e.g., motion, contrast) as the stimuli and saccadic eye movements as the behavioral response, both of which are known to modulate the activity of SC neurons. To explore the limits of the SC's involvement in the control of spatial attention, we recorded SC neuronal activity during a task using color, a visual feature dimension not traditionally associated with the SC, and required monkeys to detect threshold-level changes in the saturation of a cued stimulus by releasing a joystick during maintained fixation. Using this color-based spatial attention task, we found substantial cue-related modulation in all categories of visually responsive neurons in the intermediate layers of the SC. Notably, near-threshold changes in color saturation, both increases and decreases, evoked phasic bursts of activity with magnitudes as large as those evoked by stimulus onset. This change-detection activity had two distinctive features: activity for hits was larger than for misses, and the timing of change-detection activity accounted for 67% of joystick release latency, even though it preceded the release by at least 200 ms. We conclude that during attention tasks, SC activity denotes the behavioral relevance of the stimulus regardless of feature dimension and that phasic event-related SC activity is suitable to guide the selection of manual responses as well as saccadic eye movements.
The guidance of visual search by shape features and shape configurations.
McCants, Cody W; Berggren, Nick; Eimer, Martin
2018-03-01
Representations of target features (attentional templates) guide attentional object selection during visual search. In many search tasks, target objects are defined not by a single feature but by the spatial configuration of their component shapes. We used electrophysiological markers of attentional selection processes to determine whether the guidance of shape configuration search is entirely part-based or sensitive to the spatial relationship between shape features. Participants searched for targets defined by the spatial arrangement of two shape components (e.g., hourglass above circle). N2pc components were triggered not only by targets but also by partially matching distractors with one target shape (e.g., hourglass above hexagon) and by distractors that contained both target shapes in the reverse arrangement (e.g., circle above hourglass), in line with part-based attentional control. Target N2pc components were delayed when a reverse distractor was present on the opposite side of the same display, suggesting that early shape-specific attentional guidance processes could not distinguish between targets and reverse distractors. The control of attention then became sensitive to spatial configuration, which resulted in a stronger attentional bias for target objects relative to reverse and partially matching distractors. Results demonstrate that search for target objects defined by the spatial arrangement of their component shapes is initially controlled in a feature-based fashion but can later be guided by templates for spatial configurations. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Shaping Attention with Reward: Effects of Reward on Space- and Object-Based Selection
Shomstein, Sarah; Johnson, Jacoba
2014-01-01
The contribution of rewarded actions to automatic attentional selection remains obscure. We hypothesized that some forms of automatic orienting, such as object-based selection, can be completely abandoned in lieu of reward maximizing strategy. While presenting identical visual stimuli to the observer, in a set of two experiments, we manipulate what is being rewarded (different object targets or random object locations) and the type of reward received (money or points). It was observed that reward alone guides attentional selection, entirely predicting behavior. These results suggest that guidance of selective attention, while automatic, is flexible and can be adjusted in accordance with external non-sensory reward-based factors. PMID:24121412
Components of working memory and visual selective attention.
Burnham, Bryan R; Sabia, Matthew; Langan, Catherine
2014-02-01
Load theory (Lavie, N., Hirst, A., De Fockert, J. W., & Viding, E. [2004]. Load theory of selective attention and cognitive control. Journal of Experimental Psychology: General, 133, 339-354.) proposes that control of attention depends on the amount and type of load that is imposed by current processing. Specifically, perceptual load should lead to efficient distractor rejection, whereas working memory load (dual-task coordination) should hinder distractor rejection. Studies support load theory's prediction that working memory load will lead to larger distractor effects; however, these studies used secondary tasks that required only verbal working memory and the central executive. The present study examined which other working memory components (visual, spatial, and phonological) influence visual selective attention. Subjects completed an attentional capture task alone (single-task) or while engaged in a working memory task (dual-task). Results showed that along with the central executive, visual and spatial working memory influenced selective attention, but phonological working memory did not. Specifically, attentional capture was larger when visual or spatial working memory was loaded, but phonological working memory load did not affect attentional capture. The results are consistent with load theory and suggest specific components of working memory influence visual selective attention. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Harasawa, Masamitsu; Shioiri, Satoshi
2011-04-01
The effect of the visual hemifield to which spatial attention was oriented on the activity of the posterior parietal and occipital visual cortices was examined using functional near-infrared spectroscopy in order to investigate the neural substrates of voluntary visuospatial attention. Our brain imaging data support the theory put forth in a previous psychophysical study, namely, that the attentional resources for the left and right visual hemifields are distinct. Increasing the attentional load increased brain activity asymmetrically: the increase was greater when attention was directed to the left visual hemifield than to the right. This asymmetry was observed in all the examined brain areas, including the right and left occipital and parietal cortices. These results suggest the existence of asymmetrical inhibitory interactions between the hemispheres and the presence of an extensive inhibitory network. Copyright © 2011 Elsevier Inc. All rights reserved.
Filippopulos, Filipp M; Grafenstein, Jessica; Straube, Andreas; Eggert, Thomas
2015-11-01
In natural life, pain automatically draws attention towards the painful body part, suggesting that it interacts with different attentional mechanisms such as visual attention. Complex regional pain syndrome (CRPS) patients, who typically report chronic, distally located pain in one extremity, may suffer from so-called neglect-like symptoms, which have also been linked to attentional mechanisms. The purpose of the study was to further evaluate how continuous pain conditions influence visual attention. Saccade latencies were recorded in two experiments using a common visual attention paradigm in which orienting saccades to cued or uncued lateral visual targets had to be performed. In the first experiment, saccade latencies of healthy subjects were measured under two conditions: one in which continuous experimental pain stimulation was applied to the index finger to imitate a continuous pain situation, and one without pain stimulation. In the second experiment, saccade latencies of patients suffering from CRPS were compared to controls. The results showed that neither the continuous experimental pain stimulation during the experiment nor the chronic pain in CRPS led to a unilateral increase in saccade latencies or to a unilateral increase in the cue effect on latency. The results show that unilateral, continuously applied pain stimuli or chronic pain have no or only very limited influence on visual attention. Unlike patients with visual neglect, patients with CRPS did not show strong side asymmetries of saccade latencies or of cue effects on saccade latencies. Thus, neglect-like clinical symptoms of CRPS patients do not involve the allocation of visual attention.
Mixing apples with oranges: Visual attention deficits in schizophrenia.
Caprile, Claudia; Cuevas-Esteban, Jorge; Ochoa, Susana; Usall, Judith; Navarra, Jordi
2015-09-01
Patients with schizophrenia usually present cognitive deficits. We investigated possible anomalies in filtering out irrelevant visual information in this psychiatric disorder. Associations between these anomalies and positive and/or negative symptomatology were also addressed. A group of individuals with schizophrenia and a control group of healthy adults performed a Garner task. In Experiment 1, participants had to rapidly classify visual stimuli according to their colour while ignoring their shape; these two perceptual dimensions are reported to be "separable" by visual selective attention. In Experiment 2, participants classified the width of other visual stimuli while trying to ignore their height; these two visual dimensions are considered "integral" and cannot be attended separately. While healthy perceivers were able, in Experiment 1, to respond exclusively to colour, an irrelevant variation in shape increased colour-based reaction times (RTs) in the group of patients. In Experiment 2, RTs when classifying width increased in both groups as a consequence of perceiving a variation in the irrelevant dimension (height); however, this interfering effect was larger in the group of patients with schizophrenia than in the control group. Further analyses revealed that these alterations in filtering out irrelevant visual information correlated with positive symptoms on the PANSS scale. A possible limitation of the study is the relatively small sample. Our findings suggest the presence of attention deficits in filtering out irrelevant visual information in schizophrenia that could be related to positive symptomatology. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G
2017-03-01
We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
Multiple Sensory-Motor Pathways Lead to Coordinated Visual Attention
Yu, Chen; Smith, Linda B.
2016-01-01
Joint attention has been extensively studied in the developmental literature because of overwhelming evidence that the ability to socially coordinate visual attention to an object is essential to healthy developmental outcomes, including language learning. The goal of the present study is to understand the complex system of sensory-motor behaviors that may underlie the establishment of joint attention between parents and toddlers. In an experimental task, parents and toddlers played together with multiple toys. We objectively measured joint attention – and the sensory-motor behaviors that underlie it – using a dual head-mounted eye-tracking system and frame-by-frame coding of manual actions. By tracking the momentary visual fixations and hand actions of each participant, we precisely determined just how often they fixated on the same object at the same time, the visual behaviors that preceded joint attention, and manual behaviors that preceded and co-occurred with joint attention. We found that multiple sequential sensory-motor patterns lead to joint attention. In addition, there are developmental changes in this multi-pathway system evidenced as variations in strength among multiple routes. We propose that coordinated visual attention between parents and toddlers is primarily a sensory-motor behavior. Skill in achieving coordinated visual attention in social settings – like skills in other sensory-motor domains – emerges from multiple pathways to the same functional end. PMID:27016038
Wang, Wei; Ji, Xiangtong; Ni, Jun; Ye, Qian; Zhang, Sicong; Chen, Wenli; Bian, Rong; Yu, Cui; Zhang, Wenting; Shen, Guangyu; Machado, Sergio; Yuan, Tifei; Shan, Chunlei
2015-01-01
To compare the effect of visual spatial training on spatial attention with its effect on motor control, and to correlate improvement in spatial attention with progress in motor control after visual spatial training in subjects with unilateral spatial neglect (USN). Nine patients with USN after right cerebral stroke were randomly divided into a conventional treatment plus visual spatial attention training group and a conventional treatment group. The combined group received conventional rehabilitation therapy (physical and occupational therapy) and visual spatial attention training (optokinetic stimulation and right half-field eye patching); the conventional treatment group received conventional rehabilitation training (physical and occupational therapy) only. All patients were assessed with the behavioural inattention test (BIT), the Fugl-Meyer Assessment of motor function (FMA), the equilibrium coordination test (ECT) and the non-equilibrium coordination test (NCT) before and after 4 weeks of treatment. Total scores in both groups (without/with visual spatial attention training) improved significantly after treatment (BIT: P=0.021/P=0.000, d=1.667/d=2.116, power=0.69/power=0.98, 95%CI[-0.8839,45.88]/95%CI[16.96,92.64]; FMA: P=0.002/P=0.000, d=2.521/d=2.700, power=0.93/power=0.98, 95%CI[5.707,30.79]/95%CI[16.06,53.94]; ECT: P=0.002/P=0.000, d=2.031/d=1.354, power=0.90/power=0.17, 95%CI[3.380,42.61]/95%CI[-1.478,39.08]; NCT: P=0.013/P=0.000, d=1.124/d=1.822, power=0.41/power=0.56, 95%CI[-7.980,37.48]/95%CI[4.798,43.60]). Between the two groups, the group with visual spatial attention training showed significantly greater improvement in BIT (P=0.003, d=3.103, power=1, 95%CI[15.68,48.92]), FMA of the upper extremity (P=0.006, d=2.771, power=1, 95%CI[5.061,20.14]) and NCT (P=0.010, d=2.214, power=0.81-0.90, 95%CI[3.018,15.88]). Correlation analysis shows that the change in BIT scores is positively correlated with the change in FMA total score (r=0.77, P<0.01), FMA of the upper extremity (r=0.81, P<0.01), and NCT (r=0.78, P<0.01). Four weeks of visual spatial training improved spatial attention as well as motor control functions in hemineglect patients. The improvement in motor function is positively correlated with the progress in visual spatial functions after visual spatial attention training.
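For readers unfamiliar with the effect sizes reported above, the sketch below shows one common way to compute a paired-samples Cohen's d and a t-based 95% confidence interval for a pre/post change score. The exact variants used by the authors are not specified in the abstract, so this is an assumption, not their analysis code.

```python
import numpy as np
from scipy import stats

def cohens_d_with_ci(pre, post, alpha=0.05):
    """Paired-samples Cohen's d for a pre/post change, with a t-based
    confidence interval on the mean change (one common convention)."""
    diff = np.asarray(post, float) - np.asarray(pre, float)
    n = diff.size
    d = diff.mean() / diff.std(ddof=1)
    sem = diff.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    ci = (diff.mean() - t_crit * sem, diff.mean() + t_crit * sem)
    return d, ci
```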
Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.
de Jong, Ritske; Toffanin, Paolo; Harbers, Marten
2010-01-01
Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations, where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: an attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
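A minimal sketch of the frequency-tagging logic described here: the steady-state response to a tagged stimulus stream is read out as the spectral amplitude at its tagging frequency. The sampling rate, tagging frequency, and synthetic signal below are placeholders, not values from the study.

```python
import numpy as np

def ssr_amplitude(epoch, fs, tag_freq):
    """Amplitude of the steady-state response at a tagging frequency.

    epoch:    1-D EEG signal from one electrode (volts), one trial
    fs:       sampling rate in Hz
    tag_freq: stimulation frequency of the auditory or visual stream (Hz)
    """
    spectrum = np.abs(np.fft.rfft(epoch)) / len(epoch) * 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]

# Illustrative use: 20 Hz is a placeholder tagging frequency.
fs = 512
t = np.arange(0, 2.0, 1.0 / fs)
fake_eeg = 2e-6 * np.sin(2 * np.pi * 20.0 * t) + 1e-6 * np.random.randn(t.size)
print(ssr_amplitude(fake_eeg, fs, 20.0))
```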
Dividing time: concurrent timing of auditory and visual events by young and elderly adults.
McAuley, J Devin; Miller, Jonathan P; Wang, Mo; Pang, Kevin C H
2010-07-01
This article examines age differences in individuals' ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory targets. Age-related impairments were associated with a decrease in working memory span; moreover, the relationship between working memory and timing performance was largest for visual targets in divided attention conditions.
ERIC Educational Resources Information Center
Mather, Susan M.; Clark, M. Diane
2012-01-01
One of the ongoing challenges teachers of students who are deaf or hard of hearing face is managing the visual split attention implicit in multimedia learning. When a teacher presents various types of visual information at the same time, visual learners have no choice but to divide their attention among those materials and the teacher and…
Evidence for an attentional component of inhibition of return in visual search.
Pierce, Allison M; Crouse, Monique D; Green, Jessica J
2017-11-01
Inhibition of return (IOR) is typically described as an inhibitory bias against returning attention to a recently attended location as a means of promoting efficient visual search. Most studies examining IOR, however, either do not use visual search paradigms or do not effectively isolate attentional processes, making it difficult to conclusively link IOR to a bias in attention. Here, we recorded ERPs during a simple visual search task designed to isolate the attentional component of IOR to examine whether an inhibitory bias of attention is observed and, if so, how it influences visual search behavior. Across successive visual search displays, we found evidence of both a broad, hemisphere-wide inhibitory bias of attention and a focal, target location-specific facilitation. When the target appeared in the same visual hemifield in successive searches, responses were slower and the N2pc component was reduced, reflecting a bias of attention away from the previously attended side of space. When the target occurred at the same location in successive searches, responses were facilitated and the P1 component was enhanced, likely reflecting spatial priming of the target. These two effects are combined in the response times, leading to a reduction in the IOR effect for repeated target locations. Using ERPs, however, these two opposing effects can be isolated in time, demonstrating that the inhibitory biasing of attention still occurs even when response-time slowing is ameliorated by spatial priming. © 2017 Society for Psychophysiological Research.
Top-down alpha oscillatory network interactions during visuospatial attention orienting.
Doesburg, Sam M; Bedo, Nicolas; Ward, Lawrence M
2016-05-15
Neuroimaging and lesion studies indicate that visual attention is controlled by a distributed network of brain areas. The covert control of visuospatial attention has also been associated with retinotopic modulation of alpha-band oscillations within early visual cortex, which are thought to underlie inhibition of ignored areas of visual space. The relation between distributed networks mediating attention control and more focal oscillatory mechanisms, however, remains unclear. The present study evaluated the hypothesis that alpha-band, directed, network interactions within the attention control network are systematically modulated by the locus of visuospatial attention. We localized brain areas involved in visuospatial attention orienting using magnetoencephalographic (MEG) imaging and investigated alpha-band Granger-causal interactions among activated regions using narrow-band transfer entropy. The deployment of attention to one side of visual space was indexed by lateralization of alpha power changes between about 400 ms and 700 ms post-cue onset. The changes in alpha power were associated, in the same time period, with lateralization of anterior-to-posterior information flow in the alpha band from various brain areas involved in attention control, including the anterior cingulate cortex, left middle and inferior frontal gyri, left superior temporal gyrus, right insula, and inferior parietal lobule, to early visual areas. We interpreted these results to indicate that distributed network interactions mediated by alpha oscillations exert top-down influences on early visual cortex to modulate inhibition of processing for ignored areas of visual space. Copyright © 2016. Published by Elsevier Inc.
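The alpha-power lateralization used here to index the deployment of attention can be illustrated with a generic sketch (not the authors' MEG pipeline); sensor choice, band edges, and the index formula are assumptions.

```python
import numpy as np
from scipy.signal import welch

def alpha_lateralization(left_chan, right_chan, fs, band=(8.0, 13.0)):
    """Alpha-band power lateralization between two posterior sensors.

    Positive values indicate more alpha power over the left hemisphere,
    consistent with attention deployed to the right visual field (and
    vice versa). Band power is estimated with a simple Welch periodogram.
    """
    def band_power(signal):
        freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), int(fs) * 2))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].sum() * (freqs[1] - freqs[0])

    left_p, right_p = band_power(left_chan), band_power(right_chan)
    return (left_p - right_p) / (left_p + right_p)
```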
Selective maintenance in visual working memory does not require sustained visual attention.
Hollingworth, Andrew; Maxcey-Richard, Ashleigh M
2013-08-01
In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change-detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention. 2013 APA, all rights reserved
Biasing the brain's attentional set: I. cue driven deployments of intersensory selective attention.
Foxe, John J; Simpson, Gregory V; Ahlfors, Seppo P; Saron, Clifford D
2005-10-01
Brain activity associated with directing attention to one of two possible sensory modalities was examined using high-density mapping of human event-related potentials. The deployment of selective attention was based on visually presented symbolic cue-words instructing subjects, on a trial-by-trial basis, which sensory modality to attend. We measured the spatio-temporal pattern of activation in the approximately 1-second period between the cue-instruction and a subsequent compound auditory-visual imperative stimulus. This allowed us to assess the flow of processing across brain regions involved in deploying and sustaining inter-sensory selective attention, prior to the actual selective processing of the compound audio-visual target stimulus. Activity over frontal and parietal areas showed sensory-specific increases in activation during the early part of the anticipatory period (~230 ms), probably representing the activation of fronto-parietal attentional deployment systems for top-down control of attention. In the later period preceding the arrival of the "to-be-attended" stimulus, sustained differential activity was seen over fronto-central and parieto-occipital regions, suggesting the maintenance of sensory-specific biased attentional states that would allow for subsequent selective processing. Although there was clear sensory biasing in this late sustained period, it was also clear that both sensory systems were being prepared during the cue-target period. These late sensory-specific biasing effects were also accompanied by sustained activations over frontal cortices that showed both common and sensory-specific activation patterns, suggesting that maintenance of the biased state includes top-down inputs from generators in frontal cortices, some of which are sensory-specific regions. These data support extensive interactions between sensory, parietal and frontal regions during processing of cue information, deployment of attention, and maintenance of the focus of attention in anticipation of impending attentionally relevant input.
Yadav, Naveen K; Thiagarajan, Preethi; Ciuffreda, Kenneth J
2014-01-01
The purpose of the experiment was to investigate the effect of oculomotor vision rehabilitation (OVR) on the visual-evoked potential (VEP) and visual attention in the mild traumatic brain injury (mTBI) population. Subjects (n = 7) were adults with a history of mTBI. Each received 9 hours of OVR over a 6-week period. The effects of OVR on VEP amplitude and latency, the attention-related alpha band (8-13 Hz) power (µV²) and the clinical Visual Search and Attention Test (VSAT) were assessed before and after the OVR. After the OVR, the VEP amplitude increased and its variability decreased. There was no change in VEP latency, which was normal. Alpha band power increased, as did the VSAT score, following the OVR. The significant changes in most test parameters suggest that OVR affects the visual system at early visuo-cortical levels, as well as other pathways involved in visual attention.
Bressler, David W.; Fortenbaugh, Francesca C.; Robertson, Lynn C.; Silver, Michael A.
2013-01-01
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. PMID:23562388
Marino, Alexandria C.; Mazer, James A.
2016-01-01
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron’s spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed. PMID:26903820
Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T
2016-01-01
Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of cognitive control operations in visual attention paradigms. However, the synchronization source of these signals as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention, remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlations were localized to cortical regions associated with the default mode network (DMN), and positive correlations to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation was observed in visual cortex, during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down, theta-related cognitive control in cross-modal visual attention. In sensory cortices, on the other hand, there are opposing effects of theta activity during cross-modal auditory attention.
Eye-tracking of visual attention in web-based assessment using the Force Concept Inventory
NASA Astrophysics Data System (ADS)
Han, Jing; Chen, Li; Fu, Zhao; Fritchman, Joseph; Bao, Lei
2017-07-01
This study used eye-tracking technology to investigate students’ visual attention while taking the Force Concept Inventory (FCI) in a web-based interface. Eighty-nine university students were randomly assigned to a pre-test group and a post-test group. Students took the 30-question FCI on a computer equipped with an eye-tracker. There were seven weeks of instruction between the pre- and post-test data collection. Students’ performance on the FCI improved significantly from pre-test to post-test. Meanwhile, the eye-tracking results reveal that the time students spent taking the FCI was not affected by student performance and did not change from pre-test to post-test. Analysis of students’ attention to answer choices shows that on the pre-test students primarily focused on the naïve choices and ignored the expert choices. On the post-test, although students had shifted their primary attention to the expert choices, they still paid a high level of attention to the naïve choices, indicating significant conceptual mixing and competition during problem solving. Outcomes of this study provide new insights into students’ conceptual development in learning physics.
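The attention-to-answer-choices analysis reported here amounts to accumulating fixation time inside areas of interest (AOIs) drawn around each choice. The sketch below shows that bookkeeping; the data format and field names are assumptions, not the study's actual pipeline.

```python
def dwell_time_per_choice(fixations, aois):
    """Total fixation duration on each answer choice.

    fixations: list of dicts like {"x": px, "y": px, "dur": seconds}
    aois:      dict mapping choice label -> (x_min, y_min, x_max, y_max)
    """
    totals = {label: 0.0 for label in aois}
    for f in fixations:
        for label, (x0, y0, x1, y1) in aois.items():
            if x0 <= f["x"] <= x1 and y0 <= f["y"] <= y1:
                totals[label] += f["dur"]
                break  # a fixation is assigned to at most one choice
    return totals
```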
Stuart, Samuel; Lord, Sue; Galna, Brook; Rochester, Lynn
2018-04-01
Gait impairment is a core feature of Parkinson's disease (PD) with implications for falls risk. Visual cues improve gait in PD, but the underlying mechanisms are unclear. Evidence suggests that attention and vision play an important role; however, the relative contribution of each is unclear. Measurement of visual exploration (specifically saccade frequency) during gait allows for real-time measurement of attention and vision. Understanding how visual cues influence visual exploration may allow inferences about the mechanisms underlying the response, which could help in developing effective therapeutics. This study aimed to examine saccade frequency during gait in response to a visual cue in PD and older adults, and to investigate the roles of attention and vision in visual cue response in PD. A mobile eye-tracker measured saccade frequency during gait in 55 people with PD and 32 age-matched controls. Participants walked in a straight line with and without a visual cue (50 cm transverse lines) presented under single-task and dual-task (concurrent digit span recall) conditions. Saccade frequency was reduced when walking in PD compared to controls; however, visual cues ameliorated this saccadic deficit. Visual cues significantly increased saccade frequency in both PD and controls under both single-task and dual-task conditions. Attention, rather than visual function, was central to saccade frequency and the gait response to visual cues in PD. In conclusion, this study highlights the impact of visual cues on visual exploration when walking and the important role of attention in PD. Understanding these complex features will help inform intervention development. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
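Saccade frequency of the kind measured here is commonly derived from gaze samples with a velocity-threshold detector. The sketch below illustrates that approach; the threshold, the simple numerical differentiation, and the input format are assumptions rather than the study's actual algorithm.

```python
import numpy as np

def saccade_frequency(gaze_x, gaze_y, fs, velocity_threshold=100.0):
    """Saccades per second from gaze coordinates (degrees of visual angle).

    Counts threshold crossings of angular velocity; 100 deg/s and the
    sample-wise differencing are illustrative choices, not the study's method.
    """
    vx = np.gradient(gaze_x) * fs   # deg/s
    vy = np.gradient(gaze_y) * fs   # deg/s
    speed = np.hypot(vx, vy)
    above = speed > velocity_threshold
    # A saccade onset is a transition from below to above threshold.
    onsets = np.flatnonzero(~above[:-1] & above[1:])
    duration_s = len(gaze_x) / fs
    return len(onsets) / duration_s
```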
The spread of attention across features of a surface
Ernst, Zachary Raymond; Jazayeri, Mehrdad
2013-01-01
Contrasting theories of visual attention have emphasized selection by spatial location, individual features, and whole objects. We used functional magnetic resonance imaging to ask whether and how attention to one feature of an object spreads to other features of the same object. Subjects viewed two spatially superimposed surfaces of random dots that were segregated by distinct color-motion conjunctions. The color and direction of motion of each surface changed smoothly and in a cyclical fashion. Subjects were required to track one feature (e.g., color) of one of the two surfaces and detect brief moments when the attended feature diverged from its smooth trajectory. To tease apart the effect of attention to individual features on the hemodynamic response, we used a frequency-tagging scheme. In this scheme, the stimulus features (color and direction of motion) are modulated periodically at distinct frequencies so that the contribution of each feature to the hemodynamics can be inferred from the harmonic response at the corresponding frequency. We found that attention to one feature (e.g., color) of one surface increased the response modulation not only to the attended feature but also to the other feature (e.g., motion) of the same surface. This attentional modulation was evident in multiple visual areas and was present as early as V1. The spread of attention to the behaviorally irrelevant features of a surface suggests that attention may automatically select all features of a single object. Thus object-based attention may be supported by an enhancement of feature-specific sensory signals in the visual cortex. PMID:23883860
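The frequency-tagging logic described above can be illustrated with a short sketch: each feature is modulated at its own frequency, and its contribution is read out as the spectral amplitude at that frequency. The tag frequencies, sampling rate, and signal below are assumptions chosen to land on clean FFT bins, not the study's parameters.

```python
# Minimal frequency-tagging readout with assumed tag frequencies and toy data.
import numpy as np

fs = 10.0                          # assumed sampling rate of the response time series (Hz)
t = np.arange(0, 200, 1 / fs)      # 200 s of samples -> 0.005 Hz resolution
f_color, f_motion = 0.10, 0.15     # hypothetical modulation frequencies (Hz)

# Toy response: the "attended" color tag is stronger than the motion tag.
signal = 1.0 * np.sin(2 * np.pi * f_color * t) \
       + 0.4 * np.sin(2 * np.pi * f_motion * t) \
       + 0.3 * np.random.default_rng(1).standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal)) / t.size * 2   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def amp_at(f):
    """Amplitude at the frequency bin closest to f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

print(f"color tag amplitude:  {amp_at(f_color):.2f}")
print(f"motion tag amplitude: {amp_at(f_motion):.2f}")
```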
Kaiser, Daniel; Stein, Timo; Peelen, Marius V.
2014-01-01
In virtually every real-life situation humans are confronted with complex and cluttered visual environments that contain a multitude of objects. Because of the limited capacity of the visual system, objects compete for neural representation and cognitive processing resources. Previous work has shown that such attentional competition is partly object based, such that competition among elements is reduced when these elements perceptually group into an object based on low-level cues. Here, using functional MRI (fMRI) and behavioral measures, we show that the attentional benefit of grouping extends to higher-level grouping based on the relative position of objects as experienced in the real world. An fMRI study designed to measure competitive interactions among objects in human visual cortex revealed reduced neural competition between objects when these were presented in commonly experienced configurations, such as a lamp above a table, relative to the same objects presented in other configurations. In behavioral visual search studies, we then related this reduced neural competition to improved target detection when distracter objects were shown in regular configurations. Control studies showed that low-level grouping could not account for these results. We interpret these findings as reflecting the grouping of objects based on higher-level spatial-relational knowledge acquired through a lifetime of seeing objects in specific configurations. This interobject grouping effectively reduces the number of objects that compete for representation and thereby contributes to the efficiency of real-world perception. PMID:25024190
Haptic guidance of overt visual attention.
List, Alexandra; Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru
2014-11-01
Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held an item of a specific shape in their hands (never viewed and out of sight). In two experiments, we demonstrated that the time for the eyes to reach a target (a measure of overt visual attention) was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention.
Spatial working memory interferes with explicit, but not probabilistic cuing of spatial attention.
Won, Bo-Yeong; Jiang, Yuhong V
2015-05-01
Recent empirical and theoretical work has depicted a close relationship between visual attention and visual working memory. For example, rehearsal in spatial working memory depends on spatial attention, whereas adding a secondary spatial working memory task impairs attentional deployment in visual search. These findings have led to the proposal that working memory is attention directed toward internal representations. Here, we show that the close relationship between these 2 constructs is limited to some but not all forms of spatial attention. In 5 experiments, participants held color arrays, dot locations, or a sequence of dots in working memory. During the memory retention interval, they performed a T-among-L visual search task. Crucially, the probable target location was cued either implicitly through location probability learning or explicitly with a central arrow or verbal instruction. Our results showed that whereas imposing a visual working memory load diminished the effectiveness of explicit cuing, it did not interfere with probability cuing. We conclude that spatial working memory shares similar mechanisms with explicit, goal-driven attention but is dissociated from implicitly learned attention. (c) 2015 APA, all rights reserved.
Collinearity Impairs Local Element Visual Search
ERIC Educational Resources Information Center
Jingling, Li; Tseng, Chia-Huei
2013-01-01
In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…
Contextual Cueing: Implicit Learning and Memory of Visual Context Guides Spatial Attention.
ERIC Educational Resources Information Center
Chun, Marvin M.; Jiang, Yuhong
1998-01-01
Six experiments involving a total of 112 college students demonstrate that a robust memory for visual context exists to guide spatial attention. Results show how implicit learning and memory of visual context can guide spatial attention toward task-relevant aspects of a scene. (SLD)
Visual Memory for Objects Following Foveal Vision Loss
ERIC Educational Resources Information Center
Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B.; Pollmann, Stefan
2015-01-01
Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual…
A common network of functional areas for attention and eye movements
NASA Technical Reports Server (NTRS)
Corbetta, M.; Akbudak, E.; Conturo, T. E.; Snyder, A. Z.; Ollinger, J. M.; Drury, H. A.; Linenweber, M. R.; Petersen, S. E.; Raichle, M. E.; Van Essen, D. C.;
1998-01-01
Functional magnetic resonance imaging (fMRI) and surface-based representations of brain activity were used to compare the functional anatomy of two tasks, one involving covert shifts of attention to peripheral visual stimuli, the other involving both attentional and saccadic shifts to the same stimuli. Overlapping regional networks in parietal, frontal, and temporal lobes were active in both tasks. This anatomical overlap is consistent with the hypothesis that attentional and oculomotor processes are tightly integrated at the neural level.
Neural bases of selective attention in action video game players.
Bavelier, D; Achtman, R L; Mani, M; Föcker, J
2012-05-15
Over the past few years, the very act of playing action video games has been shown to enhance several different aspects of visual selective attention, yet little is known about the neural mechanisms that mediate such attentional benefits. A review of the aspects of attention enhanced in action game players suggests there are changes in the mechanisms that control attention allocation and its efficiency (Hubert-Wallander, Green, & Bavelier, 2010). The present study used brain imaging to test this hypothesis by comparing attentional network recruitment and distractor processing in action gamers versus non-gamers as attentional demands increased. Moving distractors were found to elicit lesser activation of the visual motion-sensitive area (MT/MST) in gamers as compared to non-gamers, suggestive of a better early filtering of irrelevant information in gamers. As expected, a fronto-parietal network of areas showed greater recruitment as attentional demands increased in non-gamers. In contrast, gamers barely engaged this network as attentional demands increased. This reduced activity in the fronto-parietal network that is hypothesized to control the flexible allocation of top-down attention is compatible with the proposal that action game players may allocate attentional resources more automatically, possibly allowing more efficient early filtering of irrelevant information. Copyright © 2011 Elsevier Ltd. All rights reserved.
Object-Based Attention on Social Units: Visual Selection of Hands Performing a Social Interaction.
Yin, Jun; Xu, Haokui; Duan, Jipeng; Shen, Mowei
2018-05-01
Traditionally, objects of attention are characterized either as full-fledged entities or as elements grouped by Gestalt principles. Because humans appear to use social groups as units to explain social activities, we proposed that a socially defined group, according to social interaction information, would also be a possible object of attentional selection. This hypothesis was examined using displays with and without handshaking interactions. Results demonstrated that object-based attention, which was measured by an object-specific attentional advantage (i.e., shorter response times to targets on a single object), was extended to two hands performing a handshake but not to hands that did not perform meaningful social interactions, even when they did perform handshake-like actions. This finding cannot be attributed to the familiarity of the frequent co-occurrence of two handshaking hands. Hence, object-based attention can select a grouped object whose parts are connected within a meaningful social interaction. This finding implies that object-based attention is constrained by top-down information.
Focused and Sustained Attention Is Modified by a Goal-Based Rehabilitation in Parkinsonian Patients.
Ferrazzoli, Davide; Ortelli, Paola; Maestri, Roberto; Bera, Rossana; Gargantini, Roberto; Palamara, Grazia; Zarucchi, Marianna; Giladi, Nir; Frazzitta, Giuseppe
2017-01-01
Rehabilitation for patients with Parkinson's disease (PD) is based on cognitive strategies that exploit attention. Parkinsonians exhibit impairments in divided attention and interference control. Nevertheless, the effectiveness of specific rehabilitation treatments based on attention suggests that other attentional functions are preserved. Data about attention are conflicting in PD, and it is not clear whether rehabilitative treatments that entail attentional strategies affect attention itself. Reaction times (RTs) represent an instrument to explore attention and to investigate whether changes in attentional performance parallel rehabilitation-induced gains. RTs of 103 parkinsonian patients in the "on" state, without cognitive deficits, were compared with those of a population of 34 healthy controls. We studied those attentional networks that subtend the use of cognitive strategies in motor rehabilitation: alertness and focused and sustained attention, which is a component of the executive system. We used visual and auditory RTs to evaluate alertness and multiple-choice RTs (MC RTs) to explore focused and sustained attention. Parkinsonian patients underwent these tasks before and after a 4-week multidisciplinary, intensive and goal-based rehabilitation treatment (MIRT). The Unified Parkinson's Disease Rating Scale (UPDRS) III and Timed Up and Go test (TUG) were assessed at enrollment and at the end of MIRT to evaluate the motor-functional effectiveness of treatment. We did not find differences in RTs between parkinsonian patients and controls. Further, we found that improvements in motor-functional outcome measures after MIRT (p < 0.0001) paralleled a reduction in MC RTs (p = 0.014). No changes were found for visual and auditory RTs. Correlation analysis revealed no association between changes in MC RTs and improvements in UPDRS-III and TUG. These findings indicate that alertness, as well as focused and sustained attention, are preserved in the "on" state. This explains why Parkinsonians benefit from a goal-based rehabilitation that entails the use of attention. The reduction in MC RTs suggests a positive effect of MIRT on the executive component of attention and indicates that this type of rehabilitation provides benefits by exploiting executive functions. This ensues from different training approaches aimed at bypassing the dysfunctional basal ganglia circuit, allowing the voluntary execution of the defective movements. These data suggest that the effectiveness of a motor rehabilitation tailored for PD relies on cognitive engagement.
Visual salience metrics for image inpainting
NASA Astrophysics Data System (ADS)
Ardis, Paul A.; Singhal, Amit
2009-01-01
Quantitative metrics for successful image inpainting currently do not exist, with researchers instead relying upon qualitative human comparisons to evaluate their methodologies and techniques. In an attempt to rectify this situation, we propose two new metrics to capture the notions of noticeability and visual intent in order to evaluate inpainting results. The proposed metrics use a quantitative measure of visual salience based upon a computational model of human visual attention. We demonstrate how these two metrics repeatably correlate with qualitative opinion in a human observer study, correctly identify the optimum uses for exemplar-based inpainting (as specified in the original publication), and match qualitative opinion in published examples.
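The exact definitions of the proposed noticeability and visual-intent metrics are not given in the abstract, so the following is only a hedged illustration of one plausible salience-based quantity: the share of total image salience that falls inside the inpainted region, given a precomputed salience map and a mask of the filled area.

```python
# Hedged illustration only: not the paper's metric definitions.
import numpy as np

def noticeability_proxy(salience_map, inpaint_mask):
    """Fraction of total salience captured by the inpainted region.

    salience_map : 2-D array of non-negative salience values
    inpaint_mask : boolean array of the same shape, True inside the filled region
    """
    total = salience_map.sum()
    if total == 0:
        return 0.0
    return float(salience_map[inpaint_mask].sum() / total)

rng = np.random.default_rng(2)
salience = rng.random((240, 320))              # toy salience map
mask = np.zeros((240, 320), dtype=bool)
mask[100:140, 150:200] = True                  # toy inpainted region
print(f"noticeability proxy: {noticeability_proxy(salience, mask):.3f}")
```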
Giuliano, Ryan J; Karns, Christina M; Neville, Helen J; Hillyard, Steven A
2014-12-01
A growing body of research suggests that the predictive power of working memory (WM) capacity for measures of intellectual aptitude is due to the ability to control attention and select relevant information. Crucially, attentional mechanisms implicated in controlling access to WM are assumed to be domain-general, yet reports of enhanced attentional abilities in individuals with larger WM capacities are primarily within the visual domain. Here, we directly test the link between WM capacity and early attentional gating across sensory domains, hypothesizing that measures of visual WM capacity should predict an individual's capacity to allocate auditory selective attention. To address this question, auditory ERPs were recorded in a linguistic dichotic listening task, and individual differences in ERP modulations by attention were correlated with estimates of WM capacity obtained in a separate visual change detection task. Auditory selective attention enhanced ERP amplitudes at an early latency (ca. 70-90 msec), with larger P1 components elicited by linguistic probes embedded in an attended narrative. Moreover, this effect was associated with greater individual estimates of visual WM capacity. These findings support the view that domain-general attentional control mechanisms underlie the wide variation of WM capacity across individuals.
Measuring advertising effectiveness in Travel 2.0 websites through eye-tracking technology.
Muñoz-Leiva, Francisco; Hernández-Méndez, Janet; Gómez-Carmona, Diego
2018-03-06
The advent of Web 2.0 is changing tourists' behaviors, prompting them to take on a more active role in preparing their travel plans. It is also leading tourism companies to have to adapt their marketing strategies to different online social media. The present study analyzes advertising effectiveness in social media in terms of customers' visual attention and self-reported memory (recall). Data were collected through a within-subjects and between-groups design based on eye-tracking technology, followed by a self-administered questionnaire. Participants were instructed to visit three Travel 2.0 websites (T2W), including a hotel's blog, social network profile (Facebook), and virtual community profile (Tripadvisor). Overall, the results revealed greater advertising effectiveness in the case of the hotel social network; and visual attention measures based on eye-tracking data differed from measures of self-reported recall. Visual attention to the ad banner was paid at a low level of awareness, which explains why the associations with the ad did not activate its subsequent recall. The paper offers a pioneering attempt in the application of eye-tracking technology, and examines the possible impact of visual marketing stimuli on user T2W-related behavior. The practical implications identified in this research, along with its limitations and future research opportunities, are of interest both for further theoretical development and practical application. Copyright © 2018 Elsevier Inc. All rights reserved.
Attention and normalization circuits in macaque V1.
Sanayei, M; Herrero, J L; Distler, C; Thiele, A
2015-04-01
Attention affects neuronal processing and improves behavioural performance. In extrastriate visual cortex these effects have been explained by normalization models, which assume that attention influences the circuit that mediates surround suppression. While normalization models have been able to explain attentional effects, their validity has rarely been tested against alternative models. Here we investigate how attention and surround/mask stimuli affect neuronal firing rates and orientation tuning in macaque V1. Surround/mask stimuli provide an estimate of the extent to which V1 neurons are affected by normalization, which was compared against the effects of spatial top-down attention. For some attention/surround effect comparisons, the strength of attentional modulation was correlated with the strength of surround modulation, suggesting that attention and surround/mask stimulation (i.e. normalization) might use a common mechanism. To explore this in detail, we fitted multiplicative and additive models of attention to our data. In one class of models, attention contributed to normalization mechanisms, whereas in a different class of models it did not. Model selection based on Akaike's and on Bayesian information criteria demonstrated that in most cells the effects of attention were best described by models where attention did not contribute to normalization mechanisms. This demonstrates that attentional influences on neuronal responses in primary visual cortex often bypass normalization mechanisms. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
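A toy version of the model-comparison step described above, assuming Gaussian noise and deliberately simplified one-parameter models: a multiplicative-gain account versus an additive-offset account of attentional modulation, compared by AIC and BIC. The data and models are illustrative, not the authors' fitted models.

```python
# Hedged sketch of AIC/BIC model comparison on toy attended/unattended responses.
import numpy as np

rng = np.random.default_rng(3)
r_unattended = np.linspace(5, 60, 12)                    # toy tuning responses
r_attended = 1.3 * r_unattended + rng.normal(0, 2, 12)   # generated multiplicatively

def information_criteria(residuals, n_params):
    """AIC and BIC under a Gaussian noise assumption (MLE variance)."""
    n = residuals.size
    sigma2 = np.mean(residuals ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    aic = 2 * n_params - 2 * log_lik
    bic = n_params * np.log(n) - 2 * log_lik
    return aic, bic

# Multiplicative model: r_att = g * r_unatt (1 parameter, least-squares gain)
g = np.sum(r_attended * r_unattended) / np.sum(r_unattended ** 2)
aic_mult, bic_mult = information_criteria(r_attended - g * r_unattended, 1)

# Additive model: r_att = r_unatt + b (1 parameter, least-squares offset)
b = np.mean(r_attended - r_unattended)
aic_add, bic_add = information_criteria(r_attended - (r_unattended + b), 1)

print(f"multiplicative AIC={aic_mult:.1f}  additive AIC={aic_add:.1f}")
print(f"multiplicative BIC={bic_mult:.1f}  additive BIC={bic_add:.1f}")
```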
Griffis, Joseph C.; Elkhetali, Abdurahman S.; Burge, Wesley K.; Chen, Richard H.; Visscher, Kristina M.
2015-01-01
Attention facilitates the processing of task-relevant visual information and suppresses interference from task-irrelevant information. Modulations of neural activity in visual cortex depend on attention, and likely result from signals originating in fronto-parietal and cingulo-opercular regions of cortex. Here, we tested the hypothesis that attentional facilitation of visual processing is accomplished in part by changes in how brain networks involved in attentional control interact with sectors of V1 that represent different retinal eccentricities. We measured the strength of background connectivity between fronto-parietal and cingulo-opercular regions and different eccentricity sectors in V1 using functional MRI data that were collected while participants performed tasks involving attention to either a centrally presented visual stimulus or a simultaneously presented auditory stimulus. We found that when the visual stimulus was attended, background connectivity between V1 and the left frontal eye fields (FEF), left intraparietal sulcus (IPS), and right IPS varied strongly across different eccentricity sectors in V1, so that foveal sectors were more strongly connected than peripheral sectors. This retinotopic gradient was weaker when the visual stimulus was ignored, indicating that it was driven by attentional effects. Greater task-driven differences between foveal and peripheral sectors in background connectivity to these regions were associated with better performance on the visual task and faster response times on correct trials. These findings are consistent with the notion that attention drives the configuration of task-specific functional pathways that enable the prioritized processing of task-relevant visual information, and show that the prioritization of visual information by attentional processes may be encoded in the retinotopic gradient of connectivity between V1 and fronto-parietal regions. PMID:26106320
Visual Sonority Modulates Infants' Attraction to Sign Language
ERIC Educational Resources Information Center
Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain
2018-01-01
The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…
Seeing desert as wilderness and as landscape—an exercise in visual thinking approaches
John Opie
1979-01-01
Based on the components and program of VRVA (Visual Resources Values Assessment), a behavioral history of the visitor's perception of the American desert is examined. Emphasis is placed upon contrasts between traditional eastern "garden-park" viewpoints and contemporary desert scenery experiences. Special attention is given to the influence of John...
ERIC Educational Resources Information Center
Lin, Huifen
2011-01-01
The purpose of this study was to investigate the relative effectiveness of different types of visuals (static and animated) and instructional strategies (no strategy, questions, and questions plus feedback) used to complement visualized materials on students' learning of different educational objectives in a computer-based instructional (CBI)…
Irrelevant Singletons in Pop-Out Search: Attentional Capture or Filtering Costs?
ERIC Educational Resources Information Center
Becker, Stefanie I.
2007-01-01
The aim of the present study was to investigate whether costs invoked by the presence of an irrelevant singleton distractor in a visual search task are due to attentional capture by the irrelevant singleton or spatially unrelated filtering costs. Measures of spatial effects were based on distance effects, compatibility effects, and differences…
Feature-based attention: it is all bottom-up priming.
Theeuwes, Jan
2013-10-19
Feature-based attention (FBA) enhances the representation of image characteristics throughout the visual field, a mechanism that is particularly useful when searching for a specific stimulus feature. Even though most theories of visual search implicitly or explicitly assume that FBA is under top-down control, we argue that the role of top-down processing in FBA may be limited. Our review of the literature indicates that all behavioural and neuro-imaging studies investigating FBA suffer from the shortcoming that they cannot rule out an effect of priming. The mere attending to a feature enhances the mandatory processing of that feature across the visual field, an effect that is likely to occur in an automatic, bottom-up way. Studies that have investigated the feasibility of FBA by means of cueing paradigms suggest that the role of top-down processing in FBA is limited (e.g. prepare for red). Instead, the actual processing of the stimulus is needed to cause the mandatory tuning of responses throughout the visual field. We conclude that it is likely that all FBA effects reported previously are the result of bottom-up priming.
Pacing Visual Attention: Temporal Structure Effects
1993-06-01
Only fragments of this report's abstract are recoverable; they indicate that persisting temporal relationships may be an important factor in the external (exogenous) control of visual attention.
Perception and Attention for Visualization
ERIC Educational Resources Information Center
Haroz, Steve
2013-01-01
This work examines how a better understanding of visual perception and attention can impact visualization design. In a collection of studies, I explore how different levels of the visual system can measurably affect a variety of visualization metrics. The results show that expert preference, user performance, and even computational performance are…
Markant, Julie; Worden, Michael S.; Amso, Dima
2015-01-01
Learning through visual exploration often requires orienting of attention to meaningful information in a cluttered world. Previous work has shown that attention modulates visual cortex activity, with enhanced activity for attended targets and suppressed activity for competing inputs, thus enhancing the visual experience. Here we examined the idea that learning may be engaged differentially with variations in the attention orienting mechanisms that drive eye movements during visual search and exploration. We hypothesized that attention orienting mechanisms that engage suppression of a previously attended location would boost memory encoding of the currently attended target objects to a greater extent than those that involve target enhancement alone. To test this hypothesis we capitalized on the classic spatial cueing task and the inhibition of return (IOR) mechanism (Posner, Rafal, & Choate, 1985; Posner, 1980) to demonstrate that object images encoded in the context of concurrent suppression at a previously attended location were encoded more effectively and remembered better than those encoded without concurrent suppression. Furthermore, fMRI analyses revealed that this memory benefit was driven by attention modulation of visual cortex activity, as increased suppression of the previously attended location in visual cortex during target object encoding predicted better subsequent recognition memory performance. These results suggest that not all attention orienting impacts learning and memory equally. PMID:25701278
Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho
2016-01-01
Evidence indicates that individuals with Williams syndrome (WS) exhibit atypical attentional characteristics when viewing faces. However, the dynamics of visual attention captured by faces remain unclear, especially when explicit attentional forces are present. To clarify this, we introduced a visual search paradigm and assessed how the relative strength of visual attention captured by a face and explicit attentional control changes as search progresses. Participants (WS and controls) searched for a target (butterfly) within an array of distractors, which sometimes contained an upright face. We analyzed reaction time and the location of the first fixation, which reflect the attentional profile at the initial stage, as well as fixation durations, which represent aspects of attention at later stages of visual search. The strength of visual attention captured by faces and of explicit attentional control (toward the butterfly) was characterized by the frequency of first fixations on a face or butterfly and by the duration of face or butterfly fixations. Although reaction time was longer in all groups when faces were present, and visual attention was not dominated by faces in any group during the initial stages of the search, when faces were present, attention to faces dominated in the WS group during the later search stages. Furthermore, for the WS group, reaction time correlated with eye-movement measures at different stages of searching, such that longer reaction times were associated with longer face fixations, specifically at the initial stage of searching. Moreover, longer reaction times were associated with longer face fixations at the later stages of searching, while shorter reaction times were associated with longer butterfly fixations. The relative strength of attention captured by faces in people with WS is not observed at the initial stage of searching but becomes dominant as the search progresses. Furthermore, although behavioral responses are associated with some aspects of eye movements, they are not as sensitive as eye-movement measurements themselves at detecting atypical attentional characteristics in people with WS.
The Attentional Drift Diffusion Model of Simple Perceptual Decision-Making.
Tavares, Gabriela; Perona, Pietro; Rangel, Antonio
2017-01-01
Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM) provides a good computational description of how attention affects the underlying decision processes. We find evidence for sizable attentional choice biases and that the aDDM provides a reasonable quantitative description of the relationship between fluctuations in visual attention, choices and reaction times. We also find that exogenous manipulations of attention induce choice biases consistent with the predictions of the model.
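A minimal simulation in the spirit of the attentional drift diffusion model: a relative decision value drifts toward the currently fixated item, with the unfixated item's value discounted by a factor theta, until a decision bound is crossed. The parameter values and fixation sequence below are illustrative assumptions, not estimates from the study.

```python
# Illustrative aDDM-style simulation with assumed parameters and fixations.
import numpy as np

def simulate_addm(v_left, v_right, fixations, d=0.002, theta=0.3,
                  sigma=0.02, dt=1.0, bound=1.0, seed=0):
    """fixations: sequence of ('left'|'right', duration_ms) pairs.
    Returns (choice, reaction time in ms), or (None, None) if no bound is hit."""
    rng = np.random.default_rng(seed)
    rdv, t = 0.0, 0.0                      # relative decision value, elapsed time
    for side, dur in fixations:
        for _ in range(int(dur / dt)):
            if side == "left":
                drift = d * (v_left - theta * v_right)
            else:
                drift = d * (theta * v_left - v_right)
            rdv += drift + sigma * rng.standard_normal()
            t += dt
            if rdv >= bound:
                return "left", t
            if rdv <= -bound:
                return "right", t
    return None, None

choice, rt = simulate_addm(3, 2, [("left", 400), ("right", 600), ("left", 2000)])
print(choice, rt)
```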
Harris, Anthony M; Dux, Paul E; Jones, Caelyn N; Mattingley, Jason B
2017-05-15
Mechanisms of attention assign priority to sensory inputs on the basis of current task goals. Previous studies have shown that lateralized neural oscillations within the alpha (8-14 Hz) range are associated with the voluntary allocation of attention to the contralateral visual field. It is currently unknown, however, whether similar oscillatory signatures instantiate the involuntary capture of spatial attention by goal-relevant stimulus properties. Here we investigated the roles of theta (4-8 Hz), alpha, and beta (14-30 Hz) oscillations in human goal-directed visual attention. Across two experiments, we had participants respond to a brief target of a particular color among heterogeneously colored distractors. Prior to target onset, we cued one location with a lateralized, non-predictive cue that was either target- or non-target-colored. During the behavioral task, we recorded brain activity using electroencephalography (EEG), with the aim of analyzing cue-elicited oscillatory activity. We found that theta oscillations lateralized in response to all cues, and this lateralization was stronger if the cue matched the target color. Alpha oscillations lateralized relatively later, and only in response to target-colored cues, consistent with the capture of spatial attention. Our findings suggest that stimulus-induced changes in theta and alpha amplitude reflect task-based modulation of signals by feature-based and spatial attention, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.
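Lateralization effects like those described above are often summarized with a simple index contrasting band power at electrodes contralateral versus ipsilateral to the cued location. The sketch below computes such an index for the theta and alpha bands from toy epochs; the sampling rate, band edges, and index formula are generic assumptions rather than the authors' exact analysis.

```python
# Generic lateralization-index sketch with assumed sampling rate and toy epochs.
import numpy as np
from scipy.signal import welch

fs = 500.0  # assumed EEG sampling rate (Hz)

def band_power(x, fs, band):
    """Integrated power spectral density within a frequency band."""
    freqs, psd = welch(x, fs=fs, nperseg=int(fs))
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[sel], freqs[sel])

def lateralization_index(contra, ipsi, fs, band):
    """(contra - ipsi) / (contra + ipsi); negative values indicate relatively
    lower power contralateral to the cue (e.g. alpha suppression)."""
    p_c, p_i = band_power(contra, fs, band), band_power(ipsi, fs, band)
    return (p_c - p_i) / (p_c + p_i)

rng = np.random.default_rng(4)
contra_chan = rng.standard_normal(int(fs * 2))   # toy 2-s epoch, contralateral electrode
ipsi_chan = rng.standard_normal(int(fs * 2))     # toy 2-s epoch, ipsilateral electrode
print("theta LI:", round(lateralization_index(contra_chan, ipsi_chan, fs, (4, 8)), 3))
print("alpha LI:", round(lateralization_index(contra_chan, ipsi_chan, fs, (8, 14)), 3))
```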
de Voogd, E L; Wiers, R W; Prins, P J M; de Jong, P J; Boendermaker, W J; Zwitser, R J; Salemink, E
2016-12-01
Based on information processing models of anxiety and depression, we investigated the efficacy of multiple sessions of online attentional bias modification training to reduce attentional bias and symptoms of anxiety and depression, and to increase emotional resilience in youth. Unselected adolescents (N = 340, age: 11-18 years) were randomly allocated to eight sessions of a dot-probe or a visual search-based attentional training, or to one of two corresponding placebo control conditions. Cognitive and emotional measures were assessed pre- and post-training; emotional outcome measures were also assessed at three, six and twelve months follow-up. Only visual search training enhanced attention for positive information, and this effect was stronger for participants who completed more training sessions. Symptoms of anxiety and depression reduced, whereas emotional resilience improved. However, these effects were not especially pronounced in the active conditions. Thus, this large-scale randomized controlled study provided no support for the efficacy of the current online attentional bias modification training as a preventive intervention to reduce symptoms of anxiety or depression or to increase emotional resilience in unselected adolescents. However, the absence of biased attention related to symptomatology at baseline, and the large drop-out rates at follow-up, preclude strong conclusions. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Paneri, Sofia; Gregoriou, Georgia G.
2017-01-01
The ability to select information that is relevant to current behavioral goals is the hallmark of voluntary attention and an essential part of our cognition. Attention tasks are a prime example for studying, at the neuronal level, how task-related information can be selectively processed in the brain while irrelevant information is filtered out. Whereas numerous studies have focused on elucidating the mechanisms of visual attention at the single-neuron and population level in the visual cortices, considerably less work has been devoted to deciphering the distinct contribution of higher-order brain areas, which are known to be critical for the employment of attention. Among these areas, the prefrontal cortex (PFC) has long been considered a source of top-down signals that bias selection in early visual areas in favor of the attended features. Here, we review recent experimental data that support the role of PFC in attention. We examine the existing evidence for functional specialization within PFC and we discuss how long-range interactions between PFC subregions and posterior visual areas may be implemented in the brain and contribute to the attentional modulation of different measures of neural activity in visual cortices. PMID:29033784
Changes in search rate but not in the dynamics of exogenous attention in action videogame players.
Hubert-Wallander, Bjorn; Green, C Shawn; Sugarman, Michael; Bavelier, Daphne
2011-11-01
Many previous studies have shown that the speed of processing in attentionally demanding tasks seems enhanced following habitual action videogame play. However, using one of the diagnostic tasks for efficiency of attentional processing, a visual search task, Castel and collaborators (Castel, Pratt, & Drummond, Acta Psychologica 119:217-230, 2005) reported no difference in visual search rates, instead proposing that action gaming may change response execution time rather than the efficiency of visual selective attention per se. Here we used two hard visual search tasks, one measuring reaction time and the other accuracy, to test whether visual search rate may be changed by action videogame play. We found greater search rates in the gamer group than in the nongamer controls, consistent with increased efficiency in visual selective attention. We then asked how general the change in attentional throughput noted so far in gamers might be by testing whether exogenous attentional cues would lead to a disproportional enhancement in throughput in gamers as compared to nongamers. Interestingly, exogenous cues were found to enhance throughput equivalently between gamers and nongamers, suggesting that not all mechanisms known to enhance throughput are similarly enhanced in action videogamers.
Visual selective attention and reading efficiency are related in children.
Casco, C; Tressoldi, P E; Dellantonio, A
1998-09-01
We investigated the relationship between visual selective attention and linguistic performance. Subjects were classified into four categories according to their accuracy in a letter cancellation task involving selective attention. The task consisted of searching for a target letter among a set of background letters, and accuracy was measured as a function of set size. We found that children with the lowest performance in the cancellation task presented a significantly slower reading rate and a higher number of visual reading errors than children with the highest performance. Results also show that these groups of searchers present significant differences in a lexical search task, whereas their performance did not differ in lexical decision and syllable control tasks. The relationship between letter search and reading, as well as the finding that poor readers-searchers also perform poorly in lexical search tasks involving selective attention, suggests that the relationship between letter search and reading difficulty may reflect a deficit in a visual selective attention mechanism that is involved in all these tasks. A deficit in visual attention can be linked to the problems that disabled readers present in the function of the magnocellular stream, which culminates in posterior parietal cortex, an area that plays an important role in guiding visual attention.
Saccade-synchronized rapid attention shifts in macaque visual cortical area MT.
Yao, Tao; Treue, Stefan; Krishna, B Suresh
2018-03-06
While making saccadic eye movements to scan a visual scene, humans and monkeys are able to keep track of relevant visual stimuli by maintaining spatial attention on them. This ability requires a shift of attentional modulation from the neuronal population representing the relevant stimulus pre-saccadically to the one representing it post-saccadically. For optimal performance, this trans-saccadic attention shift should be rapid and saccade-synchronized. Whether this is so is not known. We trained two rhesus monkeys to make saccades while maintaining covert attention at a fixed spatial location. We show that the trans-saccadic attention shift in the middle temporal (MT) visual cortical area is well synchronized to saccades. Attentional modulation crosses over from the pre-saccadic to the post-saccadic neuronal representation by about 50 ms after a saccade. Taking response latency into account, the trans-saccadic attention shift is well timed to maintain spatial attention on relevant stimuli, so that they can be optimally tracked and processed across saccades.
Shifting Attention within Memory Representations Involves Early Visual Areas
Munneke, Jaap; Belopolsky, Artem V.; Theeuwes, Jan
2012-01-01
Prior studies have shown that spatial attention modulates early visual cortex retinotopically, resulting in enhanced processing of external perceptual representations. However, it is not clear whether the same visual areas are modulated when attention is focused on, and shifted within a working memory representation. In the current fMRI study participants were asked to memorize an array containing four stimuli. After a delay, participants were presented with a verbal cue instructing them to actively maintain the location of one of the stimuli in working memory. Additionally, on a number of trials a second verbal cue instructed participants to switch attention to the location of another stimulus within the memorized representation. Results of the study showed that changes in the BOLD pattern closely followed the locus of attention within the working memory representation. A decrease in BOLD-activity (V1–V3) was observed at ROIs coding a memory location when participants switched away from this location, whereas an increase was observed when participants switched towards this location. Continuous increased activity was obtained at the memorized location when participants did not switch. This study shows that shifting attention within memory representations activates the earliest parts of visual cortex (including V1) in a retinotopic fashion. We conclude that even in the absence of visual stimulation, early visual areas support shifting of attention within memorized representations, similar to when attention is shifted in the outside world. The relationship between visual working memory and visual mental imagery is discussed in light of the current findings. PMID:22558165
ERIC Educational Resources Information Center
Hart, Verna; Ferrell, Kay
Twenty-four congenitally visually handicapped infants, aged 6-24 months, participated in a study to determine (1) those stimuli best able to elicit visual attention, (2) the stability of visual acuity over time, and (3) the effects of binaural sensory aids on both visual attention and visual acuity. Ss were dichotomized into visually handicapped…
Jeromin, Franziska; Nyenhuis, Nele; Barke, Antonia
2016-03-01
Background and aims: Internet Gaming Disorder is included in the Diagnostic and statistical manual of mental disorders (5th edition) as a disorder that merits further research. The diagnostic criteria are based on those for Substance Use Disorder and Gambling Disorder. Excessive gamblers and persons with Substance Use Disorder show attentional biases towards stimuli related to their addictions. We investigated whether excessive Internet gamers show a similar attentional bias, by using two established experimental paradigms. Methods: We measured reaction times of excessive Internet gamers and non-gamers (N = 51, 23.7 ± 2.7 years) by using an addiction Stroop with computer-related and neutral words, as well as a visual probe with computer-related and neutral pictures. Mixed-design analyses of variance with the between-subjects factor group (gamer/non-gamer) and the within-subjects factor stimulus type (computer-related/neutral) were calculated for the reaction times as well as for valence and familiarity ratings of the stimulus material. Results: In the addiction Stroop, an interaction of group × word type was found: only gamers showed longer reaction times to computer-related words compared to neutral words, thus exhibiting an attentional bias. In the visual probe, no differences in reaction time between computer-related and neutral pictures were found in either group, but the gamers were faster overall. Conclusions: An attentional bias towards computer-related stimuli was found in excessive Internet gamers by using an addiction Stroop, but not by using a visual probe. A possible explanation for the discrepancy could lie in the fact that the visual probe may have been too easy for the gamers.
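The bias scores behind such paradigms reduce to simple reaction-time differences. The sketch below shows the standard arithmetic with made-up per-participant means: Stroop interference as RT to computer-related words minus RT to neutral words, and visual-probe bias as RT when the probe replaces a neutral picture minus RT when it replaces a computer-related picture. The numbers are illustrative, not the study's data.

```python
# Standard attentional-bias arithmetic with toy per-participant mean RTs (ms).
import numpy as np

rt_computer_words = np.array([612, 640, 655, 630])
rt_neutral_words = np.array([590, 618, 640, 612])
stroop_interference = rt_computer_words - rt_neutral_words   # positive = bias

rt_probe_at_neutral = np.array([455, 470, 462, 480])
rt_probe_at_computer = np.array([450, 468, 465, 472])
probe_bias = rt_probe_at_neutral - rt_probe_at_computer      # positive = bias

print("mean Stroop interference (ms):", stroop_interference.mean())
print("mean visual-probe bias (ms):", probe_bias.mean())
```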
Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio
2016-05-15
When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of a thought's content on sensory attenuation are still unknown. The present study aims to assess whether the representational format of thoughts, such as visual imagery or inner speech, might differentially affect the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three different conditions: executing a task on the visual probe (externally oriented attention), and two conditions involving inward-turned attention, i.e., generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to the sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When the two representational formats were compared, the visual imagery condition showed stronger attenuation of sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in an internal thought compared with the external task, with visual imagery showing even more alpha power than the inner speech condition. Our results show, for the first time to our knowledge, that visual attentional processing of external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thoughts. Copyright © 2016 Elsevier Inc. All rights reserved.
Finke, Kathrin; Neitzel, Julia; Bäuml, Josef G; Redel, Petra; Müller, Hermann J; Meng, Chun; Jaekel, Julia; Daamen, Marcel; Scheef, Lukas; Busch, Barbara; Baumann, Nicole; Boecker, Henning; Bartmann, Peter; Habekost, Thomas; Wolke, Dieter; Wohlschläger, Afra; Sorg, Christian
2015-02-15
Although pronounced and lasting deficits in selective attention have been observed in preterm-born individuals, it is unknown which specific attentional sub-mechanisms are affected and how they relate to brain networks. We used the computationally specified 'Theory of Visual Attention' together with whole- and partial-report paradigms to compare attentional sub-mechanisms of preterm-born (n=33) and full-term-born (n=32) adults. Resting-state fMRI was used to evaluate both between-group differences and inter-individual variance in changed functional connectivity of intrinsic brain networks relevant for visual attention. In preterm-born adults, we found specific impairments of visual short-term memory (vSTM) storage capacity, while other sub-mechanisms such as processing speed or attentional weighting were unchanged. Furthermore, changed functional connectivity was found in unimodal visual and supramodal attention-related intrinsic networks. Among preterm-born adults, the individual pattern of changed connectivity in occipital and parietal cortices was systematically associated with vSTM, in such a way that the more distinct the connectivity differences, the better the preterm adults' storage capacity. These findings provide the first evidence for selectively changed attentional sub-mechanisms in preterm-born adults and their relation to altered intrinsic brain networks. In particular, the data suggest that cortical changes in intrinsic functional connectivity may compensate for adverse developmental consequences of prematurity on visual short-term storage capacity. Copyright © 2014 Elsevier Inc. All rights reserved.
Markers of preparatory attention predict visual short-term memory performance.
Murray, Alexandra M; Nobre, Anna C; Stokes, Mark G
2011-05-01
Visual short-term memory (VSTM) is limited in capacity. Therefore, it is important to encode only visual information that is most likely to be relevant to behaviour. Here we asked which aspects of selective biasing of VSTM encoding predict subsequent memory-based performance. We measured EEG during a selective VSTM encoding task, in which we varied parametrically the memory load and the precision of recall required to compare a remembered item to a subsequent probe item. On half the trials, a spatial cue indicated that participants only needed to encode items from one hemifield. We observed a typical sequence of markers of anticipatory spatial attention: early attention directing negativity (EDAN), anterior attention directing negativity (ADAN), late directing attention positivity (LDAP); as well as of VSTM maintenance: contralateral delay activity (CDA). We found that individual differences in preparatory brain activity (EDAN/ADAN) predicted cue-related changes in recall accuracy, indexed by memory-probe discrimination sensitivity (d'). Importantly, our parametric manipulation of memory-probe similarity also allowed us to model the behavioural data for each participant, providing estimates for the quality of the memory representation and the probability that an item could be retrieved. We found that selective encoding primarily increased the probability of accurate memory recall; that ERP markers of preparatory attention predicted the cue-related changes in recall probability. Copyright © 2011. Published by Elsevier Ltd.
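The memory-probe discrimination sensitivity (d') used above follows the standard signal detection formula, d' = z(hit rate) - z(false-alarm rate). A short worked example with made-up hit and false-alarm rates (not the study's data):

```python
# Standard signal-detection d' with toy hit and false-alarm rates.
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate, eps=1e-4):
    """d' = z(hit rate) - z(false-alarm rate), with rates clipped away from 0 and 1."""
    hr = min(max(hit_rate, eps), 1 - eps)
    fa = min(max(false_alarm_rate, eps), 1 - eps)
    return norm.ppf(hr) - norm.ppf(fa)

print(round(d_prime(0.85, 0.20), 2))   # cued condition (toy numbers)
print(round(d_prime(0.75, 0.30), 2))   # neutral condition (toy numbers)
```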
Early Visual Cortex Dynamics during Top-Down Modulated Shifts of Feature-Selective Attention.
Müller, Matthias M; Trautmann, Mireille; Keitel, Christian
2016-04-01
Shifting attention from one color to another color or from color to another feature dimension such as shape or orientation is imperative when searching for a certain object in a cluttered scene. Most attention models that emphasize feature-based selection implicitly assume that all shifts in feature-selective attention underlie identical temporal dynamics. Here, we recorded time courses of behavioral data and steady-state visual evoked potentials (SSVEPs), an objective electrophysiological measure of neural dynamics in early visual cortex to investigate temporal dynamics when participants shifted attention from color or orientation toward color or orientation, respectively. SSVEPs were elicited by four random dot kinematograms that flickered at different frequencies. Each random dot kinematogram was composed of dashes that uniquely combined two features from the dimensions color (red or blue) and orientation (slash or backslash). Participants were cued to attend to one feature (such as color or orientation) and respond to coherent motion targets of the to-be-attended feature. We found that shifts toward color occurred earlier after the shifting cue compared with shifts toward orientation, regardless of the original feature (i.e., color or orientation). This was paralleled in SSVEP amplitude modulations as well as in the time course of behavioral data. Overall, our results suggest different neural dynamics during shifts of attention from color and orientation and the respective shifting destinations, namely, either toward color or toward orientation.
Attentional load modulates responses of human primary visual cortex to invisible stimuli.
Bahrami, Bahador; Lavie, Nilli; Rees, Geraint
2007-03-20
Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.
Reinhart, Robert M G; Carlisle, Nancy B; Woodman, Geoffrey F
2014-08-01
Current research suggests that we can watch visual working memory surrender the control of attention early in the process of learning to search for a specific object. This inference is based on the observation that the contralateral delay activity (CDA) rapidly decreases in amplitude across trials when subjects search for the same target object. Here, we tested the alternative explanation that the role of visual working memory does not actually decline across learning, but instead lateralized representations accumulate in both hemispheres across trials and wash out the lateralized CDA. We show that the decline in CDA amplitude occurred even when the target objects were consistently lateralized to a single visual hemifield. Our findings demonstrate that reductions in the amplitude of the CDA during learning are not simply due to the dilution of the CDA from interhemispheric cancellation. Copyright © 2014 Society for Psychophysiological Research.
Prefrontal contributions to visual selective attention.
Squire, Ryan F; Noudoost, Behrad; Schafer, Robert J; Moore, Tirin
2013-07-08
The faculty of attention endows us with the capacity to process important sensory information selectively while disregarding information that is potentially distracting. Much of our understanding of the neural circuitry underlying this fundamental cognitive function comes from neurophysiological studies within the visual modality. Past evidence suggests that a principal function of the prefrontal cortex (PFC) is selective attention and that this function involves the modulation of sensory signals within posterior cortices. In this review, we discuss recent progress in identifying the specific prefrontal circuits controlling visual attention and its neural correlates within the primate visual system. In addition, we examine the persisting challenge of precisely defining how behavior should be affected when attentional function is lost.
Gamito, Pedro; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Rosa, Pedro; Sousa, Tatiana; Maia, Ines; Morais, Diogo; Lopes, Paulo; Brito, Rodrigo
2017-01-01
Ecological validity should be the cornerstone of any assessment of cognitive functioning. For this purpose, we have developed a preliminary study to test the Art Gallery Test (AGT) as an alternative to traditional neuropsychological testing. The AGT involves three visual search subtests displayed in a virtual reality (VR) art gallery, designed to assess visual attention within an ecologically valid setting. To evaluate the relation between the AGT and standard neuropsychological assessment scales, data were collected on a normative sample of healthy adults (n = 30). The measures consisted of concurrent paper-and-pencil neuropsychological measures [Montreal Cognitive Assessment (MoCA), Frontal Assessment Battery (FAB), and Color Trails Test (CTT)] along with the outcomes from the three subtests of the AGT. The results showed significant correlations between the AGT subtests, which involve different visual search strategies, and both global and specific cognitive measures. Comparative visual search was associated with attention and cognitive flexibility (CTT), whereas visual searches involving pictograms correlated with global cognitive function (MoCA).
Featural and temporal attention selectively enhance task-appropriate representations in human V1
Warren, Scott; Yacoub, Essa; Ghose, Geoffrey
2015-01-01
Our perceptions are often shaped by focusing our attention toward specific features or periods of time irrespective of location. We explore the physiological bases of these non-spatial forms of attention by imaging brain activity while subjects perform a challenging change detection task. The task employs a continuously varying visual stimulus that, for any moment in time, selectively activates functionally distinct subpopulations of primary visual cortex (V1) neurons. When subjects are cued to the timing and nature of the change, the mapping of orientation preference across V1 systematically shifts toward the cued stimulus just prior to its appearance. A simple linear model can explain this shift: attentional changes are selectively targeted toward neural subpopulations representing the attended feature at the times the feature is anticipated. Our results suggest that featural attention is mediated by a linear change in the responses of task-appropriate neurons across cortex during appropriate periods of time. PMID:25501983
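The "simple linear model" invoked above can be caricatured as follows: a bank of orientation-tuned responses receives an additive gain targeted at the subpopulation preferring the cued orientation, which shifts the population's apparent orientation preference toward the cue. The tuning shape, gain value, and population-vector readout below are assumptions for illustration, not the authors' fitted model.

```python
# Illustrative linear attention model: additive, feature-targeted gain shifts the
# population's decoded orientation toward the cued orientation (toy parameters).
import numpy as np

prefs = np.arange(0.0, 180.0, 5.0)   # preferred orientations of V1 columns (deg)

def tuning(stim_deg, kappa=4.0):
    """Baseline orientation-tuned responses to a stimulus at stim_deg."""
    return np.exp(kappa * (np.cos(np.deg2rad(2 * (prefs - stim_deg))) - 1))

def attended(stim_deg, cue_deg, gain=0.5, kappa=4.0):
    """Attention adds a boost that is itself tuned to the cued orientation."""
    boost = gain * np.exp(kappa * (np.cos(np.deg2rad(2 * (prefs - cue_deg))) - 1))
    return tuning(stim_deg, kappa) + boost

def decoded_orientation(resp):
    """Population-vector readout on the doubled-angle circle."""
    return np.rad2deg(np.angle(np.sum(resp * np.exp(1j * np.deg2rad(2 * prefs))))) / 2 % 180

stim, cue = 90.0, 45.0
print("unattended decode:", round(decoded_orientation(tuning(stim)), 1))
print("attending 45 deg :", round(decoded_orientation(attended(stim, cue)), 1))
```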
Gao, Xiao; Deng, Xiao; Yang, Jia; Liang, Shuang; Liu, Jie; Chen, Hong
2014-12-01
Visual attentional bias plays an important role in appearance-related social comparisons. However, owing to limitations of the experimental paradigms or analysis methods used in previous studies, the time course of attentional bias to thin and fat body images among women with body dissatisfaction (BD) has remained unclear. Using a free-viewing task combined with eye movement tracking, and based on event-related analyses of the critical first eye movement events as well as epoch-related analyses of gaze durations, the current study investigated different attentional bias components to body shape/part images during a 15-s presentation time in 34 high-BD and 34 non-BD young women. In comparison to the controls, women with BD showed sustained maintenance biases on thin and fat body images during both early automatic and late strategic processing stages. This study highlights a clear need for research on the dynamics of attentional biases related to body image and eating disturbances. Copyright © 2014 Elsevier Ltd. All rights reserved.
Cognitive load effects on early visual perceptual processing.
Liu, Ping; Forte, Jason; Sewell, David; Carter, Olivia
2018-05-01
Contrast-based early visual processing has largely been considered to involve autonomous processes that do not need the support of cognitive resources. However, as spatial attention is known to modulate early visual perceptual processing, we explored whether cognitive load could similarly impact contrast-based perception. We used a dual-task paradigm to assess the impact of a concurrent working memory task on the performance of three different early visual tasks. The results from Experiment 1 suggest that cognitive load can modulate early visual processing. No effects of cognitive load were seen in Experiments 2 or 3. Together, the findings provide evidence that under some circumstances cognitive load effects can penetrate the early stages of visual processing and that higher cognitive function and early perceptual processing may not be as independent as was once thought.
Cuperlier, Nicolas; Gaussier, Philippe
2017-01-01
Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call ‘Emotional Metacontrol’. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot performance and fosters the exploratory behavior to avoid deadlocks. PMID:28934291
Combined contributions of feedforward and feedback inputs to bottom-up attention
Khorsand, Peyman; Moore, Tirin; Soltani, Alireza
2015-01-01
In order to deal with the large amount of information carried by visual inputs entering the brain at any given point in time, the brain swiftly uses the same inputs to enhance processing in one part of the visual field at the expense of others. These processes, collectively called bottom-up attentional selection, are assumed to rely solely on feedforward processing of the external inputs, as implied by the nomenclature. Nevertheless, evidence from recent experimental and modeling studies points to the role of feedback in bottom-up attention. Here, we review behavioral and neural evidence that feedback inputs are important for the formation of signals that could guide attentional selection based on exogenous inputs. Moreover, we review results from a modeling study elucidating mechanisms underlying the emergence of these signals in successive layers of neural populations and how they depend on feedback from higher visual areas. We use these results to interpret and discuss more recent findings that can further unravel feedforward and feedback neural mechanisms underlying bottom-up attention. We argue that while it is descriptively useful to separate feedforward and feedback processes underlying bottom-up attention, these processes cannot be mechanistically separated into two successive stages, as they occur at almost the same time and affect neural activity within the same brain areas using similar neural mechanisms. Therefore, understanding the interaction and integration of feedforward and feedback inputs is crucial for a better understanding of bottom-up attention. PMID:25784883
Geldof, Christiaan J A; van Hus, Janeline W P; Jeukens-Visser, Martine; Nollet, Frans; Kok, Joke H; Oosterlaan, Jaap; van Wassenaer-Leemhuis, Aleid G
2016-01-01
The aim was to extend understanding of the impaired motor functioning of very preterm (VP)/very low birth weight (VLBW) children by investigating its relationship with visual attention, visual functioning, and visual-motor functioning. Motor functioning (Movement Assessment Battery for Children, MABC-2; Manual Dexterity, Aiming & Catching, and Balance components), as well as visual attention (attention network and visual search tests), vision (oculomotor, visual sensory, and perceptive functioning), visual-motor integration (Beery Visual Motor Integration), and neurological status (Touwen examination), were comprehensively assessed in a sample of 106 5.5-year-old VP/VLBW children. Stepwise linear regression analyses were conducted to investigate multivariate associations between deficits in visual attention, oculomotor, visual sensory, perceptive and visual-motor integration functioning, abnormal neurological status, neonatal risk factors, and MABC-2 scores. Abnormal MABC-2 Total or component scores occurred in 23-36% of VP/VLBW children. Visual and visual-motor functioning accounted for 9-11% of the variance in MABC-2 Total, Manual Dexterity, and Balance scores. Only visual perceptive deficits were associated with Aiming & Catching. Abnormal neurological status accounted for an additional 19-30% of the variance in MABC-2 Total, Manual Dexterity, and Balance scores, and 5% of the variance in Aiming & Catching; neonatal risk factors accounted for 3-6% of the variance in MABC-2 Total, Manual Dexterity, and Balance scores. Motor functioning is weakly associated with visual and visual-motor integration deficits and moderately associated with abnormal neurological status, indicating that motor performance reflects long-term vulnerability following very preterm birth and that visual deficits are of minor importance in understanding the motor functioning of VP/VLBW children. Copyright © 2016 Elsevier Ltd. All rights reserved.
Functional size of human visual area V1: a neural correlate of top-down attention.
Verghese, Ashika; Kolbe, Scott C; Anderson, Andrew J; Egan, Gary F; Vidyasagar, Trichur R
2014-06-01
Heavy demands are placed on the brain's attentional capacity when selecting a target item in a cluttered visual scene, or when reading. It is widely accepted that such attentional selection is mediated by top-down signals from higher cortical areas to early visual areas such as the primary visual cortex (V1). Further, it has also been reported that there is considerable variation in the surface area of V1. This variation may impact either the number or the specificity of attentional feedback signals and, thereby, the efficiency of attentional mechanisms. In this study, we investigated whether individual differences between humans performing attention-demanding tasks can be related to the functional area of V1. We found that those with a larger representation in V1 of the central 12° of the visual field, as measured using BOLD signals from fMRI, were able to perform a serial search task at a faster rate. In line with recent suggestions of the vital role of visuo-spatial attention in reading, the speed of reading showed a strong positive correlation with the speed of visual search, although it showed little correlation with the size of V1. The results support the idea that the functional size of the primary visual cortex is an important determinant of the efficiency of selective spatial attention for simple tasks, and that the attentional processing required for complex tasks like reading is to a large extent determined by other brain areas and inter-areal connections. Copyright © 2014 Elsevier Inc. All rights reserved.
A bilateral advantage in controlling access to visual short-term memory.
Holt, Jessica L; Delvenne, Jean-François
2014-01-01
Recent research on visual short-term memory (VSTM) has revealed the existence of a bilateral field advantage (BFA; i.e., better memory when the items are distributed across the two visual fields than when they are presented within the same hemifield) for spatial location and bar orientation, but not for color (Delvenne, 2005; Umemoto, Drew, Ester, & Awh, 2010). Here, we investigated whether a BFA in VSTM is constrained by attentional selective processes. It has indeed been previously suggested that the BFA may be a general feature of selective attention (Alvarez & Cavanagh, 2005; Delvenne, 2005). Therefore, the present study examined whether VSTM for color benefits from bilateral presentation if attentional selective processes are particularly engaged. Participants completed a color change detection task whereby target stimuli were presented either across both hemifields or within one single hemifield. In order to engage attentional selective processes, some trials contained irrelevant stimuli that needed to be ignored. Targets were selected based on spatial locations (Experiment 1) or on a salient feature (Experiment 2). In both cases, the results revealed a BFA only when irrelevant stimuli were presented among the targets. Overall, the findings strongly suggest that attentional selective processes at encoding can constrain whether a BFA is observed in VSTM.
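For reference, the change-detection indices conventionally used in this literature can be computed as below; the trial counts, set size, and log-linear correction are hypothetical and are not taken from the study.

```python
# Standard change-detection indices (illustrative counts, not study data).
from scipy.stats import norm

def dprime(hits, misses, fas, crs):
    """d' with a log-linear correction to avoid infinite z-scores."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (fas + 0.5) / (fas + crs + 1)
    return norm.ppf(hr) - norm.ppf(far)

def cowan_k(hits, misses, fas, crs, set_size):
    """Cowan's K capacity estimate for single-probe change detection."""
    return set_size * (hits / (hits + misses) - fas / (fas + crs))

# Hypothetical counts for one participant, set size 4 in each condition.
print("bilateral  d' =", round(dprime(70, 10, 15, 65), 2),
      " K =", round(cowan_k(70, 10, 15, 65, 4), 2))
print("unilateral d' =", round(dprime(60, 20, 25, 55), 2),
      " K =", round(cowan_k(60, 20, 25, 55, 4), 2))
```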
Visual short-term memory always requires general attention.
Morey, Candice C; Bieler, Malte
2013-02-01
The role of attention in visual memory remains controversial; while some evidence has suggested that memory for binding between features demands no more attention than does memory for the same features, other evidence has indicated cognitive costs or mnemonic benefits for explicitly attending to bindings. We attempted to reconcile these findings by examining how memory for binding, for features, and for features during binding is affected by a concurrent attention-demanding task. We demonstrated that performing a concurrent task impairs memory for as few as two visual objects, regardless of whether each object includes one or more features. We argue that this pattern of results reflects an essential role for domain-general attention in visual memory, regardless of the simplicity of the to-be-remembered stimuli. We then discuss the implications of these findings for theories of visual working memory.
Age-equivalent top-down modulation during cross-modal selective attention.
Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam
2014-12-01
Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.
Brain activity associated with selective attention, divided attention and distraction.
Salo, Emma; Salmela, Viljami; Salmi, Juha; Numminen, Jussi; Alho, Kimmo
2017-06-01
Top-down controlled selective or divided attention to sounds and visual objects, as well as bottom-up triggered attention to auditory and visual distractors, has been widely investigated. However, no study has systematically compared brain activations related to all these types of attention. To this end, we used functional magnetic resonance imaging (fMRI) to measure brain activity in participants performing a tone pitch or a foveal grating orientation discrimination task, or both, distracted by novel sounds not sharing frequencies with the tones or by extrafoveal visual textures. To force attention to be focused on the tones or gratings, or both, task difficulty was kept constantly high with an adaptive staircase method. A whole-brain analysis of variance (ANOVA) revealed fronto-parietal attention networks for both selective auditory and visual attention. A subsequent conjunction analysis indicated partial overlaps of these networks. However, like some previous studies, the present results also suggest segregation of the prefrontal areas involved in the control of auditory and visual attention. The ANOVA also suggested, and another conjunction analysis confirmed, an additional activity enhancement in the left middle frontal gyrus related to divided attention, supporting the role of this area in the top-down integration of dual-task performance. As expected, distractors disrupted task performance. However, contrary to our expectations, activations specifically related to the distractors were found only in the auditory and visual cortices. This suggests gating of the distractors from further processing, perhaps due to strictly focused attention in the current demanding discrimination tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
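The adaptive staircase mentioned above can be sketched as follows; the 2-down/1-up rule, step size, and toy observer are assumptions, since the abstract does not state the exact staircase parameters.

```python
# Minimal 2-down/1-up staircase sketch (assumed rule; converges near 70.7% correct).
import math
import random

def run_staircase(p_correct_at, start=10.0, step=1.0, n_trials=80):
    """`p_correct_at(level)` maps difficulty level (larger = easier) onto the
    probability of a correct response; returns the track of tested levels."""
    level, streak, track = start, 0, []
    for _ in range(n_trials):
        if random.random() < p_correct_at(level):
            streak += 1
            if streak == 2:        # two correct in a row -> make task harder
                level -= step
                streak = 0
        else:                      # one error -> make task easier
            level += step
            streak = 0
        track.append(level)
    return track

# Toy observer: logistic psychometric function centered on level 5.
observer = lambda x: 1.0 / (1.0 + math.exp(-(x - 5.0)))
track = run_staircase(observer)
print("mean level over last 40 trials:", sum(track[-40:]) / 40)
```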
Bressler, David W; Fortenbaugh, Francesca C; Robertson, Lynn C; Silver, Michael A
2013-06-07
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. Copyright © 2013 Elsevier Ltd. All rights reserved.
Visual Scan Adaptation During Repeated Visual Search
2010-01-01
Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry
2017-07-01
Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated if, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, and thereby facilitate the processing of visual stimuli occurring in the same side of space as the limb on which the nociceptive stimulus was applied. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to each hand (one in each side of space), had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally, to one single hand, or bilaterally, to both hands simultaneously. Results show that, as compared to the bilateral condition, participants' judgments were biased to the advantage of the visual stimuli that occurred in the same side of space as the hand on which the unilateral nociceptive stimulus was applied. This effect was present in a time window ranging from 200 to 600 ms, but importantly, biases increased with decreasing time interval. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.
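A conventional way to quantify such a spatial bias in a temporal order judgment task is to fit a psychometric function and read off the point of subjective simultaneity (PSS). The cumulative-Gaussian fit and data below are an illustrative sketch, not the authors' adaptive procedure.

```python
# Psychometric fit for a temporal order judgment task (illustrative data).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, slope):
    """Probability of judging the stimulus on the non-stimulated side as first,
    as a function of SOA (positive SOA = that stimulus physically led)."""
    return norm.cdf((soa - pss) / slope)

# Hypothetical proportions; negative SOA = stimulus near the stimulated hand led.
soa = np.array([-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0])
p_other_first = np.array([0.03, 0.10, 0.25, 0.38, 0.60, 0.82, 0.95])

(pss, slope), _ = curve_fit(psychometric, soa, p_other_first, p0=[0.0, 30.0])
print(f"PSS = {pss:.1f} ms; a shift away from 0 quantifies the spatial bias")
```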
Attention modulates visual size adaptation.
Kreutzer, Sylvia; Fink, Gereon R; Weidner, Ralph
2015-01-01
The current study determined in healthy subjects (n = 16) whether size adaptation occurs at early, i.e., preattentive, levels of processing or whether higher cognitive processes such as attention can modulate the illusion. To investigate this issue, bottom-up stimulation was kept constant across conditions by using a single adaptation display containing both small and large adapter stimuli. Subjects' attention was directed to either the large or small adapter stimulus by means of a luminance detection task. When attention was directed toward the small as compared to the large adapter, the perceived size of the subsequent target was significantly increased. Data suggest that different size adaptation effects can be induced by one and the same stimulus depending on the current allocation of attention. This indicates that size adaptation is subject to attentional modulation. These findings are in line with previous research showing that transient as well as sustained attention modulates visual features, such as contrast sensitivity and spatial frequency, and influences adaptation in other contexts, such as motion adaptation (Alais & Blake, 1999; Lankheet & Verstraten, 1995). Based on a recently suggested model (Pooresmaeili, Arrighi, Biagi, & Morrone, 2013), according to which perceptual adaptation is based on local excitation and inhibition in V1, we conclude that guiding attention can boost these local processes in one or the other direction by increasing the weight of the attended adapter. In sum, perceptual adaptation, although reflected in changes of neural activity at early levels (as shown in the aforementioned study), is nevertheless subject to higher-order modulation.
Attentive Tracking Disrupts Feature Binding in Visual Working Memory
Fougnie, Daryl; Marois, René
2009-01-01
One of the most influential theories in visual cognition proposes that attention is necessary to bind different visual features into coherent object percepts (Treisman & Gelade, 1980). While considerable evidence supports a role for attention in perceptual feature binding, whether attention plays a similar function in visual working memory (VWM) remains controversial. To test the attentional requirements of VWM feature binding, here we gave participants an attention-demanding multiple object tracking task during the retention interval of a VWM task. Results show that the tracking task disrupted memory for color-shape conjunctions above and beyond any impairment to working memory for object features, and that this impairment was larger when the VWM stimuli were presented at different spatial locations. These results demonstrate that the role of visuospatial attention in feature binding is not unique to perception, but extends to the working memory of these perceptual representations as well. PMID:19609460
Attentional modulation of cell-class specific gamma-band synchronization in awake monkey area V4
Vinck, Martin; Womelsdorf, Thilo; Buffalo, Elizabeth A.; Desimone, Robert; Fries, Pascal
2013-01-01
Selective visual attention is subserved by selective neuronal synchronization, entailing precise orchestration among excitatory and inhibitory cells. We tentatively identified these as broad-spiking (BS) and narrow-spiking (NS) cells and analyzed their synchronization to the local field potential in two macaque monkeys performing a selective visual attention task. Across cells, gamma phases scattered widely but were unaffected by stimulation or attention. During stimulation, NS cells lagged BS cells on average by ~60° and gamma synchronized twice as strongly. Attention enhanced and reduced the gamma locking of strongly and weakly activated cells, respectively. During a pre-stimulus attentional cue period, BS cells showed weak gamma synchronization, while NS cells gamma synchronized as strongly as with visual stimulation. These analyses reveal the cell-type-specific dynamics of the gamma cycle in macaque visual cortex and suggest that attention affects neurons differentially depending on cell type and activation level. PMID:24267656
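Spike-LFP gamma locking of the kind analyzed here is often quantified with the pairwise phase consistency (PPC), an estimator that is not biased by spike count. The band limits, filter order, and Hilbert-based phase estimate in the sketch below are assumptions on toy data, not the authors' pipeline.

```python
# Spike-LFP gamma phase locking via pairwise phase consistency (toy data).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_phase_locking(lfp, spike_times, srate, band=(40.0, 90.0)):
    """Band-pass the LFP, take the Hilbert phase, sample it at spike times,
    and return the spike phases plus the pairwise phase consistency (PPC)."""
    b, a = butter(3, [band[0] / (srate / 2), band[1] / (srate / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))
    spike_idx = (np.asarray(spike_times) * srate).astype(int)
    spike_phases = phase[spike_idx]
    z = np.exp(1j * spike_phases)
    n = len(z)
    ppc = (np.abs(z.sum()) ** 2 - n) / (n * (n - 1))  # unbiased by spike count
    return spike_phases, ppc

# Toy example: 1 s of noise "LFP" at 1 kHz and 50 random spike times.
rng = np.random.default_rng(0)
lfp = rng.standard_normal(1000)
spikes = rng.uniform(0.05, 0.95, size=50)
_, ppc = gamma_phase_locking(lfp, spikes, srate=1000)
print("PPC:", round(ppc, 3))
```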
Kamitani, Toshiaki; Kuroiwa, Yoshiyuki
2009-01-01
Recent studies demonstrated an altered P3 component and prolonged reaction times during visual discrimination tasks in multiple system atrophy (MSA). In MSA, however, little is known about the N2 component, which is closely related to the visual discrimination process. We therefore compared the N1, N2, and P3 components of 17 MSA patients with those of 10 normal controls, using a visual selective attention task to color or to shape. While the P3 in MSA was significantly delayed in selective attention to shape, the N2 in MSA was significantly delayed in selective attention to color. N1 was normally preserved both in attention to color and in attention to shape. Our electrophysiological results indicate that the color discrimination process during selective attention is impaired in MSA.
Owsley, Cynthia
2013-09-20
Older adults commonly report difficulties in visual tasks of everyday living that involve visual clutter, secondary task demands, and time sensitive responses. These difficulties often cannot be attributed to visual sensory impairment. Techniques for measuring visual processing speed under divided attention conditions and among visual distractors have been developed and have established construct validity in that those older adults performing poorly in these tests are more likely to exhibit daily visual task performance problems. Research suggests that computer-based training exercises can increase visual processing speed in older adults and that these gains transfer to enhancement of health and functioning and a slowing in functional and health decline as people grow older. Copyright © 2012 Elsevier Ltd. All rights reserved.
’What’ and ’Where’ in Visual Attention: Evidence from the Neglect Syndrome
1992-01-01
Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria
2016-04-01
The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with two types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing the camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than for camera-engaged scenes; however, the differences in their responses to these scenes were not as pronounced as those observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate that people with aphasia visually attend to scenes differently than adults without neurological conditions do. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.