ERIC Educational Resources Information Center
Son, Seung-Hee Claire; Tineo, Maria F.
2016-01-01
This study examined associations among low-income mothers' use of attention-getting utterances during shared book reading, preschoolers' verbal engagement and visual attention to reading, and their early literacy skills (N = 51). Mother-child shared book reading sessions were videotaped and coded for each utterance, including attention talk,…
Shared filtering processes link attentional and visual short-term memory capacity limits.
Bettencourt, Katherine C; Michalka, Samantha W; Somers, David C
2011-09-30
Both visual attention and visual short-term memory (VSTM) have been shown to have capacity limits of 4 ± 1 objects, driving the hypothesis that they share a visual processing buffer. However, these capacity limitations also show strong individual differences, making it unclear to what degree these capacities are related. Moreover, other research has suggested a distinction between attention and VSTM buffers. To explore the degree to which capacity limitations reflect the use of a shared visual processing buffer, we compared individual subjects' capacities on attentional and VSTM tasks completed in the same testing session. We used a multiple object tracking (MOT) task and a VSTM change detection task, with varying levels of distractors, to measure capacity. Significant correlations in capacity were not observed between the MOT and VSTM tasks when distractor filtering demands differed between the tasks. Instead, significant correlations were seen when the tasks shared spatial filtering demands. Moreover, these filtering demands impacted capacity similarly in both attention and VSTM tasks. These observations fail to support the view that visual attention and VSTM capacity limits result from a shared buffer but instead highlight the role of the resource demands of underlying processes in limiting capacity.
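As a concrete illustration of the capacity comparison described above, the sketch below estimates per-subject capacities for the two tasks and correlates them across subjects. Cowan's K and the guessing-corrected MOT estimate are standard but assumed choices, and the data are synthetic; this is not the authors' analysis.

```python
# Illustrative sketch (not the authors' analysis code): estimate per-subject
# capacities for a VSTM change-detection task and an MOT task, then correlate
# them across subjects. Cowan's K and the MOT estimate below are assumptions.
import numpy as np
from scipy.stats import pearsonr

def cowan_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K for single-probe change detection: K = N * (H - FA)."""
    return set_size * (hit_rate - false_alarm_rate)

def mot_capacity(prop_correct, n_targets, n_objects):
    """Crude MOT capacity estimate: targets tracked, corrected for guessing."""
    guess = n_targets / n_objects
    return n_targets * (prop_correct - guess) / (1.0 - guess)

rng = np.random.default_rng(0)
n_subjects = 20
vstm_k = cowan_k(rng.uniform(0.7, 0.95, n_subjects),
                 rng.uniform(0.05, 0.3, n_subjects), set_size=6)
mot_k = mot_capacity(rng.uniform(0.6, 0.95, n_subjects),
                     n_targets=4, n_objects=8)
r, p = pearsonr(vstm_k, mot_k)
print(f"cross-task capacity correlation: r = {r:.2f}, p = {p:.3f}")
```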
Kiyonaga, Anastasia; Egner, Tobias
2014-01-01
It is unclear why and under what circumstances working memory (WM) and attention interact. Here, we apply the logic of the time-based resource-sharing (TBRS) model of WM (e.g., Barrouillet et al., 2004) to explore the mixed findings of a separate, but related, literature that studies the guidance of visual attention by WM contents. Specifically, we hypothesize that the linkage between WM representations and visual attention is governed by a time-shared cognitive resource that alternately refreshes internal (WM) and selects external (visual attention) information. If this were the case, WM content should guide visual attention (involuntarily), but only when there is time for it to be refreshed in an internal focus of attention. To provide an initial test for this hypothesis, we examined whether the amount of unoccupied time during a WM delay could impact the magnitude of attentional capture by WM contents. Participants were presented with a series of visual search trials while they maintained a WM cue for a delayed-recognition test. WM cues could coincide with the search target, a distracter, or neither. We varied both the number of searches to be performed, and the amount of available time to perform them. Slowing of visual search by a WM matching distracter—and facilitation by a matching target—were curtailed when the delay was filled with fast-paced (refreshing-preventing) search trials, as was subsequent memory probe accuracy. WM content may, therefore, only capture visual attention when it can be refreshed, suggesting that internal (WM) and external attention demands reciprocally impact one another because they share a limited resource. The TBRS rationale can thus be applied in a novel context to explain why WM contents capture attention, and under what conditions that effect should be observed. PMID:25221499
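The TBRS logic summarized above can be reduced to a single quantity, the proportion of the retention interval occupied by processing. The sketch below computes that load for a hypothetical slow-paced and fast-paced condition; the durations are placeholders, not the study's parameters.

```python
# Minimal TBRS-style cognitive-load sketch (illustrative values, not the
# study's parameters). Load = time attention is occupied by search / total
# delay time; the prediction is that capture by WM content (and later recall)
# shrinks as load rises, because less free time remains to refresh the WM cue.
def tbrs_cognitive_load(n_searches, search_duration_s, delay_duration_s):
    occupied = n_searches * search_duration_s
    return min(occupied / delay_duration_s, 1.0)

for label, n, dur in [("slow-paced", 2, 1.0), ("fast-paced", 6, 1.2)]:
    load = tbrs_cognitive_load(n, dur, delay_duration_s=8.0)
    free_time = 1.0 - load
    print(f"{label}: load = {load:.2f}, time free to refresh WM = {free_time:.2f}")
```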
ERIC Educational Resources Information Center
Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan
2006-01-01
In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…
Common neural substrates for visual working memory and attention.
Mayer, Jutta S; Bittner, Robert A; Nikolić, Danko; Bledowski, Christoph; Goebel, Rainer; Linden, David E J
2007-06-01
Humans are severely limited in their ability to memorize visual information over short periods of time. Selective attention has been implicated as a limiting factor. Here we used functional magnetic resonance imaging to test the hypothesis that this limitation is due to common neural resources shared by visual working memory (WM) and selective attention. We combined visual search and delayed discrimination of complex objects and independently modulated the demands on selective attention and WM encoding. Participants were presented with a search array and performed easy or difficult visual search in order to encode one or three complex objects into visual WM. Overlapping activation for attention-demanding visual search and WM encoding was observed in distributed posterior and frontal regions. In the right prefrontal cortex and bilateral insula, blood oxygen level-dependent (BOLD) activation increased additively with increased WM load and attentional demand. Conversely, several visual, parietal and premotor areas showed overlapping activation for the two task components and were severely reduced in their WM load response under the condition with high attentional demand. Regions in the left prefrontal cortex were selectively responsive to WM load. Areas selectively responsive to high attentional demand were found within the right prefrontal and bilateral occipital cortex. These results indicate that encoding into visual WM and visual selective attention require access to common neural resources to a high degree. We propose that competition for resources shared by visual attention and WM encoding can limit processing capabilities in distributed posterior brain regions.
Wahn, Basil; König, Peter
2015-01-01
Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.
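A minimal way to express the comparison reported above is a dual-task cost per cue modality: if spatial attentional resources are shared, the cost should be similar across modalities. The accuracies below are hypothetical placeholders, not the reported data.

```python
# Sketch of a simple dual-task cost comparison (hypothetical accuracies, not
# the reported data): if auditory and visual spatial attention draw on shared
# resources, the cost should be similar regardless of the cue modality.
def dual_task_cost(single_acc, dual_acc):
    """Proportional performance drop from single- to dual-task conditions."""
    return (single_acc - dual_acc) / single_acc

conditions = {"visual cue": (0.92, 0.78), "auditory cue": (0.90, 0.77),
              "redundant audiovisual cue": (0.93, 0.79)}
for name, (single, dual) in conditions.items():
    print(f"{name}: dual-task cost = {dual_task_cost(single, dual):.2f}")
```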
Infants' visual and auditory communication when a partner is or is not visually attending.
Liszkowski, Ulf; Albrecht, Konstanze; Carpenter, Malinda; Tomasello, Michael
2008-04-01
In the current study we investigated infants' communication in the visual and auditory modalities as a function of the recipient's visual attention. We elicited pointing at interesting events from thirty-two 12-month-olds and thirty-two 18-month-olds in two conditions: when the recipient either was or was not visually attending to them before and during the point. The main result was that infants initiated more pointing when the recipient's visual attention was on them than when it was not. In addition, when the recipient did not respond by sharing interest in the designated event, infants initiated more repairs (repeated pointing) than when she did, again, especially when the recipient was visually attending to them. Interestingly, accompanying vocalizations were used intentionally and increased in both experimental conditions when the recipient did not share attention and interest. However, there was little evidence that infants used their vocalizations to direct attention to their gestures when the recipient was not attending to them.
ERIC Educational Resources Information Center
Olivers, Christian N. L.
2009-01-01
An important question is whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. Some past research has indicated that they do: Singleton distractors interfered more strongly with a visual search task when they…
Sacrey, Lori-Ann R; Whishaw, Ian Q
2012-06-01
Skilled reaching is a forelimb movement in which a subject reaches for a piece of food that is placed in the mouth for eating. It is a natural movement used by many animal species and is a routine, daily activity for humans. Its prominent features include transport of the hand to a target, shaping the digits in preparation for grasping, grasping, and withdrawal of the hand to place the food in the mouth. Studies on normal human adults show that skilled reaching is mediated by at least two sensory attention processes. Hand transport to the target and hand shaping are temporally coupled with visual fixation on the target. Grasping, withdrawal, and placing the food into the mouth are associated with visual disengagement and somatosensory guidance. Studies on nonhuman animal species illustrate that shared visual and somatosensory attention likely evolved in the primate lineage. Studies on developing infants illustrate that shared attention requires both experience and maturation. Studies on subjects with Parkinson's disease and Huntington's disease illustrate that decomposition of shared attention also features compensatory visual guidance. The evolutionary, developmental, and neural control of skilled reaching suggests that associative learning processes are importantly related to normal adult attention sharing and so can be used in remediation. The economical use of sensory attention in the different phases of skilled reaching ensures efficiency in eating, reduces sensory interference between sensory reference frames, and provides efficient neural control of the advance and withdrawal components of skilled reaching movements. Copyright © 2011 Elsevier B.V. All rights reserved.
Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G
2017-03-01
We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
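For readers who want to see how such thresholds are typically estimated, the sketch below fits a two-interval forced-choice psychometric function to synthetic proportion-correct data for the low- and high-load conditions. The Weibull form and the data points are assumptions, not the paper's procedure or results.

```python
# Illustrative threshold estimation for a 2IFC modulation-detection task
# (synthetic data; the paper's actual fitting procedure may differ). A Weibull
# psychometric function is fit separately for low- and high-load conditions,
# and the modulation depth yielding 75% correct is taken as the threshold.
import numpy as np
from scipy.optimize import curve_fit

def weibull_2ifc(x, alpha, beta):
    # 2IFC: performance rises from 0.5 (chance) toward 1.0
    return 0.5 + 0.5 * (1 - np.exp(-(x / alpha) ** beta))

depths = np.array([0.02, 0.05, 0.1, 0.2, 0.4])          # modulation depth
pc = {"low load":  np.array([0.55, 0.70, 0.88, 0.97, 1.00]),
      "high load": np.array([0.52, 0.60, 0.75, 0.92, 0.99])}

for load, y in pc.items():
    (alpha, beta), _ = curve_fit(weibull_2ifc, depths, y, p0=[0.1, 2.0])
    # invert the fitted function to find the 75%-correct threshold
    thresh = alpha * (-np.log(1 - (0.75 - 0.5) / 0.5)) ** (1 / beta)
    print(f"{load}: threshold modulation depth = {thresh:.3f}")
```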
Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent?
Wahn, Basil; König, Peter
2017-01-01
Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select a limited amount of sensory input to process while other sensory input is neglected. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for each sensory modality or whether attentional resources are shared across sensory modalities. Recent studies have suggested that attentional resource allocation across sensory modalities is in part task-dependent. That is, the recruitment of attentional resources across the sensory modalities depends on whether processing involves object-based attention (e.g., the discrimination of stimulus attributes) or spatial attention (e.g., the localization of stimuli). In the present paper, we review findings in multisensory research related to this view. For the visual and auditory sensory modalities, findings suggest that distinct resources are recruited when humans perform object-based attention tasks, whereas for the visual and tactile sensory modalities, partially shared resources are recruited. If object-based attention tasks are time-critical, shared resources are recruited across the sensory modalities. When humans perform an object-based attention task in combination with a spatial attention task, partly shared resources are recruited across the sensory modalities as well. Conversely, for spatial attention tasks, attentional processing consistently involves shared attentional resources across the sensory modalities. Generally, findings suggest that the attentional system flexibly allocates attentional resources depending on task demands. We propose that such flexibility reflects a large-scale optimization strategy that minimizes the brain's costly resource expenditures and simultaneously maximizes the capability to process currently relevant information.
An amodal shared resource model of language-mediated visual attention
Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk
2013-01-01
Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. PMID:23966967
Heim, Stefan; Pape-Neumann, Julia; van Ermingen-Marbach, Muna; Brinkhaus, Moti; Grande, Marion
2015-07-01
Whereas the neurobiological basis of developmental dyslexia has received substantial attention, only little is known about the processes in the brain during remediation. This holds in particular in light of recent findings on cognitive subtypes of dyslexia which suggest interactions between individual profiles, training methods, and also the task in the scanner. Therefore, we trained three groups of German dyslexic primary school children in the domains of phonology, attention, or visual word recognition. We compared neurofunctional changes after 4 weeks of training in these groups to those in untrained normal readers in a reading task and in a task of visual attention. The overall reading improvement in the dyslexic children was comparable over groups. It was accompanied by substantial increase of the activation level in the visual word form area (VWFA) during a reading task inside the scanner. Moreover, there were activation increases that were unique for each training group in the reading task. In contrast, when children performed the visual attention task, shared training effects were found in the left inferior frontal sulcus and gyrus, which varied in amplitude between the groups. Overall, the data reveal that different remediation programmes matched to individual profiles of dyslexia may improve reading ability and commonly affect the VWFA in dyslexia as a shared part of otherwise distinct networks.
Attention Increases Spike Count Correlations between Visual Cortical Areas.
Ruff, Douglas A; Cohen, Marlene R
2016-07-13
Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys performed an attention task. We found that attention increased shared variability between neurons in the two areas and that attention increased the effect of microstimulation in V1 on the firing rates of MT neurons. Our results provide support for the hypothesis that attention increases communication between neurons in different brain areas on behaviorally relevant timescales. Copyright © 2016 the authors 0270-6474/16/367523-12$15.00/0.
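The correlative measure named in the title reduces to a trial-to-trial spike-count correlation between simultaneously recorded V1 and MT units. The sketch below illustrates it on synthetic counts in which a shared gain fluctuation stands in for the attentionally modulated shared variability; it is not the authors' analysis pipeline.

```python
# Sketch of the basic correlative measure named in the title: trial-to-trial
# spike-count correlation between a V1 unit and an MT unit, compared across
# attention conditions. Counts here are synthetic, generated with a shared
# trial-by-trial gain whose strength stands in for shared variability.
import numpy as np

rng = np.random.default_rng(1)

def simulate_pair(n_trials, shared_sd, mean_rate=20.0):
    shared = rng.normal(0.0, shared_sd, n_trials)            # common fluctuation
    v1 = rng.poisson(np.clip(mean_rate * (1 + shared), 0, None))
    mt = rng.poisson(np.clip(mean_rate * (1 + shared), 0, None))
    return np.corrcoef(v1, mt)[0, 1]

r_unattended = simulate_pair(n_trials=2000, shared_sd=0.05)
r_attended = simulate_pair(n_trials=2000, shared_sd=0.15)    # more shared variability
print(f"spike-count correlation, unattended: {r_unattended:.2f}")
print(f"spike-count correlation, attended:   {r_attended:.2f}")
```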
A Metric to Quantify Shared Visual Attention in Two-Person Teams
NASA Technical Reports Server (NTRS)
Gontar, Patrick; Mulligan, Jeffrey B.
2015-01-01
Critical tasks in high-risk environments are often performed by teams, the members of which must work together efficiently. In some situations, the team members may have to work together to solve a particular problem, while in others it may be better for them to divide the work into separate tasks that can be completed in parallel. We hypothesize that these two team strategies can be differentiated on the basis of shared visual attention, measured by gaze tracking.
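The abstract does not state the metric itself, so the following is only an assumed, minimal proxy: the proportion of synchronized gaze samples in which both team members look at the same area of interest (AOI). The AOI labels are hypothetical cockpit displays.

```python
# The abstract does not give the metric itself, so this is only a minimal
# illustrative proxy: the proportion of synchronized samples in which both
# team members' gaze falls on the same area of interest (AOI).
import numpy as np

def shared_attention_proportion(gaze_a, gaze_b):
    """gaze_a, gaze_b: equal-length sequences of AOI labels (one per sample)."""
    gaze_a, gaze_b = np.asarray(gaze_a), np.asarray(gaze_b)
    return np.mean(gaze_a == gaze_b)

# Hypothetical AOI labels (e.g., primary flight display, navigation display).
pilot = ["PFD", "PFD", "ND", "ND", "FCU", "PFD", "ND", "ND"]
copilot = ["PFD", "ND", "ND", "ND", "FCU", "FCU", "ND", "PFD"]
print(f"shared visual attention: {shared_attention_proportion(pilot, copilot):.2f}")
```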
Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath
2018-05-24
Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
Attention stabilizes the shared gain of V4 populations
Rabinowitz, Neil C; Goris, Robbe L; Cohen, Marlene; Simoncelli, Eero P
2015-01-01
Responses of sensory neurons represent stimulus information, but are also influenced by internal state. For example, when monkeys direct their attention to a visual stimulus, the response gain of specific subsets of neurons in visual cortex changes. Here, we develop a functional model of population activity to investigate the structure of this effect. We fit the model to the spiking activity of bilateral neural populations in area V4, recorded while the animal performed a stimulus discrimination task under spatial attention. The model reveals four separate time-varying shared modulatory signals, the dominant two of which each target task-relevant neurons in one hemisphere. In attention-directed conditions, the associated shared modulatory signal decreases in variance. This finding provides an interpretable and parsimonious explanation for previous observations that attention reduces variability and noise correlations of sensory neurons. Finally, the recovered modulatory signals reflect previous reward, and are predictive of subsequent choice behavior. DOI: http://dx.doi.org/10.7554/eLife.08998.001 PMID:26523390
Division of attention as a function of the number of steps, visual shifts, and memory load
NASA Technical Reports Server (NTRS)
Chechile, R. A.; Butler, K.; Gutowski, W.; Palmer, E. A.
1986-01-01
The effects on divided attention of visual shifts and long-term memory retrieval during a monitoring task are considered. A concurrent vigilance task was standardized under all experimental conditions. The results show that subjects can perform nearly perfectly on all of the time-shared tasks if long-term memory retrieval is not required for monitoring. With the requirement of memory retrieval, however, there was a large decrease in accuracy for all of the time-shared activities. It was concluded that the attentional demand of long-term memory retrieval is appreciable (even for a well-learned motor sequence), and thus memory retrieval results in a sizable reduction in the capability of subjects to divide their attention. A selected bibliography on the divided attention literature is provided.
Common capacity-limited neural mechanisms of selective attention and spatial working memory encoding
Fusser, Fabian; Linden, David E J; Rahm, Benjamin; Hampel, Harald; Haenschel, Corinna; Mayer, Jutta S
2011-01-01
One characteristic feature of visual working memory (WM) is its limited capacity, and selective attention has been implicated as a limiting factor. A possible reason why attention constrains the number of items that can be encoded into WM is that the two processes share limited neural resources. Functional magnetic resonance imaging (fMRI) studies have indeed demonstrated commonalities between the neural substrates of WM and attention. Here we investigated whether such overlapping activations reflect interacting neural mechanisms that could result in capacity limitations. To independently manipulate the demands on attention and WM encoding within one single task, we combined visual search and delayed discrimination of spatial locations. Participants were presented with a search array and performed easy or difficult visual search in order to encode one, three or five positions of target items into WM. Our fMRI data revealed colocalised activation for attention-demanding visual search and WM encoding in distributed posterior and frontal regions. However, further analysis yielded two patterns of results. Activity in prefrontal regions increased additively with increased demands on WM and attention, indicating regional overlap without functional interaction. Conversely, the WM load-dependent activation in visual, parietal and premotor regions was severely reduced during high attentional demand. We interpret this interaction as indicating the sites of shared capacity-limited neural resources. Our findings point to differential contributions of prefrontal and posterior regions to the common neural mechanisms that support spatial WM encoding and attention, providing new imaging evidence for attention-based models of WM encoding. PMID:21781193
Task specificity of attention training: the case of probability cuing
Jiang, Yuhong V.; Swallow, Khena M.; Won, Bo-Yeong; Cistera, Julia D.; Rosenbaum, Gail M.
2014-01-01
Statistical regularities in our environment enhance perception and modulate the allocation of spatial attention. Surprisingly little is known about how learning-induced changes in spatial attention transfer across tasks. In this study, we investigated whether a spatial attentional bias learned in one task transfers to another. Most of the experiments began with a training phase in which a search target was more likely to be located in one quadrant of the screen than in the other quadrants. An attentional bias toward the high-probability quadrant developed during training (probability cuing). In a subsequent testing phase, the target's location distribution became random. In addition, the training and testing phases were based on different tasks. Probability cuing did not transfer between visual search and a foraging-like task. However, it did transfer between various types of visual search tasks that differed in stimuli and difficulty. These data suggest that different visual search tasks share a common and transferrable learned attentional bias. However, this bias is not shared by high-level, decision-making tasks such as foraging. PMID:25113853
Dossett, D; Burns, B
2000-06-01
Developmental changes in kindergarten, 1st-, and 4th-grade children's knowledge about the variables that affect attention sharing and resource allocation were examined. Findings from the 2 experiments showed that kindergartners understood that person and strategy variables affect performance in attention-sharing tasks. However, knowledge of how task variables affect performance was not evident to them and was inconsistent for 1st and 4th graders. Children's knowledge about resource allocation revealed a different pattern and varied according to the dissimilarity of task demands in the attention-sharing task. In Experiment 1, in which the dual attention tasks were similar (i.e., visual detection), kindergarten and 1st-grade children did not differentiate performance in single and dual tasks. Fourth graders demonstrated knowledge that performance on a single task would be better than performance on the dual tasks for only 2 of the variables examined. In Experiment 2, in which the dual attention tasks were dissimilar (i.e., visual and auditory detection), kindergarten and 1st-grade children demonstrated knowledge that performance in the single task would be better than in the dual tasks for 1 of the task variables examined. However, 4th-grade children consistently gave higher ratings for performance on the single than on the dual attention tasks for all variables examined. These findings (a) underscore that children's meta-attention is not unitary and (b) demonstrate that children's knowledge about variables affecting attention sharing and resource allocation has different developmental pathways. Results show that knowledge about attention sharing and about the factors that influence the control of attention develops slowly and undergoes reorganization in middle childhood.
Olivers, Christian N L; Meijer, Frank; Theeuwes, Jan
2006-10-01
In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by an additional memory task. Singleton distractors interfered even more when they were identical or related to the object held in memory, but only when it was difficult to verbalize the memory content. Furthermore, this content-specific interaction occurred for features that were relevant to the memory task but not for irrelevant features of the same object or for once-remembered objects that could be forgotten. Finally, memory-related distractors attracted more eye movements but did not result in longer fixations. The results demonstrate memory-driven attentional capture on the basis of content-specific representations. Copyright 2006 APA.
Distractor-Induced Blindness: A Special Case of Contingent Attentional Capture?
Winther, Gesche N.; Niedeggen, Michael
2017-01-01
The detection of a salient visual target embedded in a rapid serial visual presentation (RSVP) can be severely affected if target-like distractors are presented previously. This phenomenon, known as distractor-induced blindness (DIB), shares the prerequisites of contingent attentional capture (Folk, Remington, & Johnston, 1992). In both, target processing is transiently impaired by the presentation of distractors defined by similar features. In the present study, we investigated whether the speeded response to a target in the DIB paradigm can be described in terms of a contingent attentional capture process. In the first experiments, multiple distractors were embedded in the RSVP stream. Distractors either shared the target’s visual features (Experiment 1A) or differed from them (Experiment 1B). Congruent with hypotheses drawn from contingent attentional capture theory, response times (RTs) were exclusively impaired in conditions with target-like distractors. However, RTs were not impaired if only one single target-like distractor was presented (Experiment 2). If attentional capture directly contributed to DIB, the single distractor should be sufficient to impair target processing. In conclusion, DIB is not due to contingent attentional capture, but may rely on a central suppression process triggered by multiple distractors. PMID:28439320
Sheremata, Summer L; Somers, David C; Shomstein, Sarah
2018-02-07
Visual short-term memory (VSTM) and attention are distinct yet interrelated processes. While both require selection of information across the visual field, memory additionally requires the maintenance of information across time and distraction. VSTM recruits areas within human (male and female) dorsal and ventral parietal cortex that are also implicated in spatial selection; therefore, it is important to determine whether overlapping activation might reflect shared attentional demands. Here, identical stimuli and controlled sustained attention across both tasks were used to ask whether fMRI signal amplitude, functional connectivity, and contralateral visual field bias reflect memory-specific task demands. While attention and VSTM activated similar cortical areas, BOLD amplitude and functional connectivity in parietal cortex differentiated the two tasks. Relative to attention, VSTM increased BOLD amplitude in dorsal parietal cortex and decreased BOLD amplitude in the angular gyrus. Additionally, the tasks differentially modulated parietal functional connectivity. Contrasting VSTM and attention, intraparietal sulcus (IPS) 1-2 were more strongly connected with anterior frontoparietal areas and more weakly connected with posterior regions. This divergence between tasks demonstrates that parietal activation reflects memory-specific functions and consequently modulates functional connectivity across the cortex. In contrast, both tasks demonstrated hemispheric asymmetries for spatial processing, exhibiting a stronger contralateral visual field bias in the left versus the right hemisphere across tasks, suggesting that asymmetries are characteristic of a shared selection process in IPS. These results demonstrate that parietal activity and patterns of functional connectivity distinguish VSTM from more general attention processes, establishing a central role of the parietal cortex in maintaining visual information. SIGNIFICANCE STATEMENT Visual short-term memory (VSTM) and attention are distinct yet interrelated processes. Cognitive mechanisms and neural activity underlying these tasks show a large degree of overlap. To examine whether activity within the posterior parietal cortex (PPC) reflects object maintenance across distraction or sustained attention per se, it is necessary to control for attentional demands inherent in VSTM tasks. We demonstrate that activity in PPC reflects VSTM demands even after controlling for attention; remembering items across distraction modulates relationships between parietal and other areas differently than during periods of sustained attention. Our study fills a gap in the literature by directly comparing and controlling for overlap between visual attention and VSTM tasks. Copyright © 2018 the authors 0270-6474/18/381511-09$15.00/0.
Effects of Psychological Attention on Pronoun Comprehension
Arnold, Jennifer E.; Lao, Shin-Yi C.
2015-01-01
Pronoun comprehension is facilitated for referents that are focused in the discourse context. Discourse focus has been described as a function of attention, especially shared attention, but few studies have explicitly tested this idea. Two experiments used an exogenous capture cue paradigm to demonstrate that listeners’ visual attention at the onset of a story influences their preferences during pronoun resolution later in the story. In both experiments trial-initial attention modulated listeners’ transitory biases while considering referents for the pronoun, whether it was in response to the capture cue or not. These biases even had a small influence on listeners’ final interpretation of the pronoun. These results provide independently-motivated evidence that the listener’s attention influences the on-line processes of pronoun comprehension. Trial-initial attentional shifts were made on the basis of non-shared, private information, demonstrating that attentional effects on pronoun comprehension are not restricted to shared attention among interlocutors. PMID:26191533
ERIC Educational Resources Information Center
Hay, Julia L.; Milders, Maarten M.; Sahraie, Arash; Niedeggen, Michael
2006-01-01
Recent visual marking studies have shown that the carry-over of distractor inhibition can impair the ability of singletons to capture attention if the singleton and distractors share features. The current study extends this finding to first-order motion targets and distractors, clearly separated in time by a visual cue (the letter X). Target…
Spatial working memory interferes with explicit, but not probabilistic cuing of spatial attention.
Won, Bo-Yeong; Jiang, Yuhong V
2015-05-01
Recent empirical and theoretical work has depicted a close relationship between visual attention and visual working memory. For example, rehearsal in spatial working memory depends on spatial attention, whereas adding a secondary spatial working memory task impairs attentional deployment in visual search. These findings have led to the proposal that working memory is attention directed toward internal representations. Here, we show that the close relationship between these 2 constructs is limited to some but not all forms of spatial attention. In 5 experiments, participants held color arrays, dot locations, or a sequence of dots in working memory. During the memory retention interval, they performed a T-among-L visual search task. Crucially, the probable target location was cued either implicitly through location probability learning or explicitly with a central arrow or verbal instruction. Our results showed that whereas imposing a visual working memory load diminished the effectiveness of explicit cuing, it did not interfere with probability cuing. We conclude that spatial working memory shares similar mechanisms with explicit, goal-driven attention but is dissociated from implicitly learned attention. © 2015 APA, all rights reserved.
Vivanti, Giacomo; Fanning, Peter A J; Hocking, Darren R; Sievers, Stephanie; Dissanayake, Cheryl
2017-06-01
There is limited knowledge on shared and syndrome-specific attentional profiles in autism spectrum disorder (ASD) and Williams syndrome (WS). Using eye-tracking, we examined attentional profiles of 35 preschoolers with ASD, 22 preschoolers with WS and 20 typically developing children across social and non-social dimensions of attention. Children with ASD and those with WS presented with overlapping deficits in spontaneous visual engagement with the target of others' attention and in sustained attention. Children with ASD showed syndrome-specific abnormalities in monitoring and following a person's referential gaze, as well as a lack of preferential attention to social stimuli. Children with ASD and WS present with shared as well as syndrome-specific abnormalities across social and non-social dimensions of attention.
Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model
Wagatsuma, Nobuhiko; Potjans, Tobias C.; Diesmann, Markus; Sakai, Ko; Fukai, Tomoki
2013-01-01
Directing attention to the spatial location or the distinguishing feature of a visual object modulates neuronal responses in the visual cortex and the stimulus discriminability of subjects. However, the spatial and feature-based modes of attention differently influence visual processing by changing the tuning properties of neurons. Intriguingly, neurons' tuning curves are modulated similarly across different visual areas under both these modes of attention. Here, we explored the mechanism underlying the effects of these two modes of visual attention on the orientation selectivity of visual cortical neurons. To do this, we developed a layered microcircuit model. This model describes multiple orientation-specific microcircuits sharing their receptive fields and consisting of layers 2/3, 4, 5, and 6. These microcircuits represent a functional grouping of cortical neurons and mutually interact via lateral inhibition and excitatory connections between groups with similar selectivity. The individual microcircuits receive bottom-up visual stimuli and top-down attention in different layers. A crucial assumption of the model is that feature-based attention activates orientation-specific microcircuits for the relevant feature selectively, whereas spatial attention activates all microcircuits homogeneously, irrespective of their orientation selectivity. Consequently, our model simultaneously accounts for the multiplicative scaling of neuronal responses in spatial attention and the additive modulations of orientation tuning curves in feature-based attention, which have been observed widely in various visual cortical areas. Simulations of the model predict contrasting differences between excitatory and inhibitory neurons in the two modes of attentional modulations. Furthermore, the model replicates the modulation of the psychophysical discriminability of visual stimuli in the presence of external noise. Our layered model with a biologically suggested laminar structure describes the basic circuit mechanism underlying the attention-mode specific modulations of neuronal responses and visual perception. PMID:24324628
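The two modulation patterns the model reproduces can be illustrated with a toy tuning curve, independent of the microcircuit itself: spatial attention scales the curve multiplicatively, whereas feature-based attention shifts it additively. Curve shape and modulation strengths below are arbitrary.

```python
# Toy illustration (not the microcircuit model itself) of the two modulation
# patterns the model reproduces: spatial attention scales an orientation
# tuning curve multiplicatively, while feature-based attention shifts it
# additively. Tuning-curve shape and modulation strengths are arbitrary.
import numpy as np

theta = np.linspace(-90, 90, 181)                 # orientation (deg)
baseline = 5 + 30 * np.exp(-(theta / 25.0) ** 2)  # Gaussian tuning curve (spikes/s)

spatial = 1.3 * baseline                          # multiplicative gain change
feature = baseline + 6.0                          # additive offset

for name, curve in [("baseline", baseline), ("spatial attention", spatial),
                    ("feature-based attention", feature)]:
    print(f"{name}: peak = {curve.max():.1f} spk/s, flank at 90 deg = {curve[-1]:.1f} spk/s")
```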
Schindler, Sebastian; Kissler, Johanna
2016-10-01
Human brains spontaneously differentiate between various emotional and neutral stimuli, including written words whose emotional quality is symbolic. In the electroencephalogram (EEG), emotional-neutral processing differences are typically reflected in the early posterior negativity (EPN, 200-300 ms) and the late positive potential (LPP, 400-700 ms). These components are also enlarged by task-driven visual attention, supporting the assumption that emotional content naturally drives attention. Still, the spatio-temporal dynamics of interactions between emotional stimulus content and task-driven attention remain to be specified. Here, we examine this issue in visual word processing. Participants attended to negative, neutral, or positive nouns while high-density EEG was recorded. Emotional content and top-down attention both amplified the EPN component in parallel. On the LPP, by contrast, emotion and attention interacted: Explicit attention to emotional words led to a substantially larger amplitude increase than did explicit attention to neutral words. Source analysis revealed early parallel effects of emotion and attention in bilateral visual cortex and a later interaction of both in right visual cortex. Distinct effects of attention were found in inferior, middle and superior frontal, paracentral, and parietal areas, as well as in the anterior cingulate cortex (ACC). Results specify separate and shared mechanisms of emotion and attention at distinct processing stages. Hum Brain Mapp 37:3575-3587, 2016. © 2016 Wiley Periodicals, Inc.
Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica
2016-01-01
Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high–low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. PMID:25146374
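A minimal sketch of the cross-task decoding logic described above, using synthetic activation patterns and a generic linear classifier (not the authors' pipeline): a classifier trained to separate high from low load in one task is tested on the other task.

```python
# Minimal cross-task decoding sketch in the spirit of the analysis described
# above (synthetic voxel patterns, generic linear classifier; not the authors'
# pipeline): train on high- vs low-load visual-WM patterns, test on verbal-WM
# patterns, and vice versa.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 200
shared_axis = rng.normal(size=n_voxels)           # load-related pattern shared by tasks

def make_task_data(signal_strength):
    labels = np.repeat([0, 1], n_trials // 2)     # 0 = low load, 1 = high load
    X = rng.normal(size=(n_trials, n_voxels))
    X[labels == 1] += signal_strength * shared_axis
    return X, labels

X_visual, y_visual = make_task_data(0.3)
X_verbal, y_verbal = make_task_data(0.3)

clf = LogisticRegression(max_iter=1000).fit(X_visual, y_visual)
print(f"train visual WM -> test verbal WM: {clf.score(X_verbal, y_verbal):.2f}")
clf = LogisticRegression(max_iter=1000).fit(X_verbal, y_verbal)
print(f"train verbal WM -> test visual WM: {clf.score(X_visual, y_visual):.2f}")
```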
Perceptual load influences selective attention across development.
Couperus, Jane W
2011-09-01
Research suggests that visual selective attention develops across childhood. However, there is relatively little understanding of the neurological changes that accompany this development, particularly in the context of adult theories of selective attention, such as N. Lavie's (1995) perceptual load theory of attention. This study examined visual selective attention across development from 7 years of age to adulthood. Specifically, the author examined whether changes in processing as a function of selective attention are similarly influenced by perceptual load across development. Participants were asked to complete a task at either low or high perceptual load while processing of an unattended probe stimulus was examined using event-related potentials. Similar to adults, children and teens showed reduced processing of the unattended stimulus as perceptual load increased at the P1 visual component. However, although there were no qualitative differences in changes in processing, there were quantitative differences, with shorter P1 latencies in teens and adults compared with children, suggesting increases in the speed of processing across development. In addition, younger children did not need as high a perceptual load to achieve the same difference in performance between low and high perceptual load as adults. Thus, this study demonstrates that although there are developmental changes in visual selective attention, the mechanisms by which visual selective attention is achieved in children may share similarities with those of adults.
Action Planning Mediates Guidance of Visual Attention from Working Memory.
Feldmann-Wüstefeld, Tobias; Schubö, Anna
2015-01-01
Visual search is impaired when a salient task-irrelevant stimulus is presented together with the target. Recent research has shown that this attentional capture effect is enhanced when the salient stimulus matches working memory (WM) content, arguing in favor of attention guidance from WM. Visual attention was also shown to be closely coupled with action planning. Preparing a movement renders action-relevant perceptual dimensions more salient and thus increases search efficiency for stimuli sharing that dimension. The present study aimed at revealing common underlying mechanisms for selective attention, WM, and action planning. Participants both prepared a specific movement (grasping or pointing) and memorized a color hue. Before the movement was executed towards an object of the memorized color, a visual search task (additional singleton) was performed. Results showed that distraction from target was more pronounced when the additional singleton had a memorized color. This WM-guided attention deployment was more pronounced when participants prepared a grasping movement. We argue that preparing a grasping movement mediates attention guidance from WM content by enhancing representations of memory content that matches the distractor shape (i.e., circles), thus encouraging attentional capture by circle distractors of the memorized color. We conclude that templates for visual search, action planning, and WM compete for resources and thus cause interferences.
Fagot, J; Kruschke, J K; Dépy, D; Vauclair, J
1998-10-01
We examined attention shifting in baboons and humans during the learning of visual categories. Within a conditional matching-to-sample task, participants of the two species sequentially learned two two-feature categories which shared a common feature. Results showed that humans encoded both features of the initially learned category, but predominantly only the distinctive feature of the subsequently learned category. Although baboons initially encoded both features of the first category, they ultimately retained only the distinctive features of each category. Empirical data from the two species were analyzed with the ADIT connectionist model of Kruschke (1996). ADIT fits the baboon data when the attentional shift rate is zero, and the human data when the attentional shift rate is not zero. These empirical and modeling results suggest species differences in learned attention to visual features.
Pannebakker, Merel M; Jolicœur, Pierre; van Dam, Wessel O; Band, Guido P H; Ridderinkhof, K Richard; Hommel, Bernhard
2011-09-01
Dual tasks and their associated delays have often been used to examine the boundaries of processing in the brain. We used the dual-task procedure and recorded event-related potentials (ERPs) to investigate how mental rotation of a first stimulus (S1) influences the shifting of visual-spatial attention to a second stimulus (S2). Visual-spatial attention was monitored by using the N2pc component of the ERP. In addition, we examined the sustained posterior contralateral negativity (SPCN) believed to index the retention of information in visual short-term memory. We found modulations of both the N2pc and the SPCN, suggesting that engaging mechanisms of mental rotation impairs the deployment of visual-spatial attention and delays the passage of a representation of S2 into visual short-term memory. Both results suggest interactions between mental rotation and visual-spatial attention in capacity-limited processing mechanisms, indicating that response selection is not pivotal in dual-task delays and that all three processes are likely to share a common resource such as executive control. Copyright © 2011 Elsevier Ltd. All rights reserved.
Frontal-parietal synchrony in elderly EEG for visual search.
Phillips, Steven; Takeda, Yuji
2010-01-01
Aging involves selective changes in attentional control. However, its precise effect on visual attention is difficult to discern from behavioural studies alone. In this paper, we employ a recently developed phase-locking measure of synchrony as an indicator of top-down/bottom-up control of attention to assess attentional control in the elderly. Fourteen participants (63-74 years) searched for a target item (coloured, oriented rectangular bar) among a display set of distractors. For the feature search condition, where none of the distractors shared a feature with the target, search time did not increase with display set size (two, or four items). For the conjunctive search condition, where each distractor shared either a colour or orientation feature with the target, search time increased with display size. Phase-locking analysis revealed a significant increase in high gamma-band (36-56 Hz) synchrony indicating greater bottom-up control for feature than conjunctive search. In view of our earlier study on younger (21-32 years) adults (Phillips and Takeda, 2009), these results suggest that older participants are more likely to use bottom-up control of attention, possibly triggered by their greater susceptibility to attentional capture, than younger participants. Copyright (c) 2009 Elsevier B.V. All rights reserved.
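For readers unfamiliar with the synchrony measure referred to above, the block below is a minimal sketch of a standard phase-locking value (PLV) computed between two band-pass filtered channels. The sampling rate, filter order, 36-56 Hz band edges, and synthetic signals are illustrative assumptions, not the study's recording or analysis pipeline.

```python
# Minimal sketch of a phase-locking value (PLV) between two channels, in the
# spirit of the gamma-band (36-56 Hz) synchrony measure described above.
# Sampling rate, band edges, and signals are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs=500.0, band=(36.0, 56.0)):
    """PLV across time between two single-trial signals x and y."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    # PLV is the length of the mean unit vector of the phase differences.
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Example with synthetic data: two noisy 45 Hz signals sharing a common phase.
fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
common = 2 * np.pi * 45 * t
frontal = np.sin(common) + 0.5 * np.random.randn(t.size)
parietal = np.sin(common + 0.3) + 0.5 * np.random.randn(t.size)
print(round(phase_locking_value(frontal, parietal, fs), 2))  # near 1 for locked phases
```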
Visual Attention in Autism Families: "Unaffected" Sibs Share Atypical Frontal Activation
ERIC Educational Resources Information Center
Belmonte, Matthew K.; Gomot, Marie; Baron-Cohen, Simon
2010-01-01
Background: In addition to their more clinically evident abnormalities of social cognition, people with autism spectrum conditions (ASC) manifest perturbations of attention and sensory perception which may offer insights into the underlying neural abnormalities. Similar autistic traits in ASC relatives without a diagnosis suggest a continuity…
Brain activity associated with selective attention, divided attention and distraction.
Salo, Emma; Salmela, Viljami; Salmi, Juha; Numminen, Jussi; Alho, Kimmo
2017-06-01
Top-down controlled selective or divided attention to sounds and visual objects, as well as bottom-up triggered attention to auditory and visual distractors, has been widely investigated. However, no study has systematically compared brain activations related to all these types of attention. To this end, we used functional magnetic resonance imaging (fMRI) to measure brain activity in participants performing a tone pitch or a foveal grating orientation discrimination task, or both, distracted by novel sounds not sharing frequencies with the tones or by extrafoveal visual textures. To force focusing of attention to tones or gratings, or both, task difficulty was kept constantly high with an adaptive staircase method. A whole brain analysis of variance (ANOVA) revealed fronto-parietal attention networks for both selective auditory and visual attention. A subsequent conjunction analysis indicated partial overlaps of these networks. However, like some previous studies, the present results also suggest segregation of prefrontal areas involved in the control of auditory and visual attention. The ANOVA also suggested, and another conjunction analysis confirmed, an additional activity enhancement in the left middle frontal gyrus related to divided attention supporting the role of this area in top-down integration of dual task performance. Distractors expectedly disrupted task performance. However, contrary to our expectations, activations specifically related to the distractors were found only in the auditory and visual cortices. This suggests gating of the distractors from further processing perhaps due to strictly focused attention in the current demanding discrimination tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica
2016-01-01
Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high-low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
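The between-task prediction logic described above can be sketched in a few lines: train a classifier on high- versus low-load activation patterns from one WM task and test it on the other. The sketch below uses scikit-learn on random placeholder "voxel" matrices, so accuracy here will sit near chance; the classifier choice, data shapes, and variable names are assumptions, not the authors' pipeline.

```python
# Sketch of between-task load decoding: train on visual WM load, test on verbal
# WM load, and vice versa. Data are random placeholders standing in for voxel
# patterns, so scores here hover at chance; with real data, above-chance
# transfer would indicate shared load-sensitive neural patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
X_visual = rng.standard_normal((n_trials, n_voxels))   # visual WM patterns
X_verbal = rng.standard_normal((n_trials, n_voxels))   # verbal WM patterns
y_visual = rng.integers(0, 2, n_trials)                # 0 = low load, 1 = high load
y_verbal = rng.integers(0, 2, n_trials)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

clf.fit(X_visual, y_visual)
print("visual -> verbal accuracy:", clf.score(X_verbal, y_verbal))
clf.fit(X_verbal, y_verbal)
print("verbal -> visual accuracy:", clf.score(X_visual, y_visual))
```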
Justice, Laura M; Pullen, Paige C; Pence, Khara
2008-05-01
How much do preschool children look at print within storybooks when adults read to them? This study sought to answer this question as well as to examine the effects of adult verbal and nonverbal references to print on children's visual attention to print during storybook reading. Forty-four preschool-aged children participated in this study designed to determine the amount of visual attention children paid to print in 4 planned variations of storybook reading. Children's visual attention to print was examined when adults commented and questioned about print (verbal print condition) or pointed to and tracked the print (nonverbal print condition), relative to 2 comparison conditions (verbatim reading and verbal picture conditions). Results showed that children rarely look at print, with about 5%-6% of their fixations allocated to print in verbatim and verbal picture reading conditions. However, preschoolers' visual attention to print increases significantly when adults verbally and nonverbally reference print; both reading styles exerted similar effects. The authors conclude that explicit referencing of print is 1 way to increase young children's contacts with print during shared storybook reading. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
Contingent attentional capture occurs by activated target congruence.
Ariga, Atsunori; Yokosawa, Kazuhiko
2008-05-01
Contingent attentional capture occurs when a stimulus property captures an observer's attention, usually related to the observer's top-down attentional set for target-defining properties. In this study, we examined whether contingent attentional capture occurs for a distractor that does not share the target-defining property at a physical level, but does share that property at an abstract level of representation. In a rapid serial visual presentation stream, we defined the target by color (e.g., a green-colored Japanese kanji character). Before the target onset, we presented a distractor that referred to the target-defining color (e.g., a white-colored character meaning "green"). We observed contingent attentional capture by the distractor, which was reflected by a deficit in identifying the subsequent target. This result suggests that because of the attentional set, stimuli were scanned on the basis of the target-defining property at an abstract semantic level of representation.
Context and competition in the capture of visual attention.
Hickey, Clayton; Theeuwes, Jan
2011-10-01
Competition-based models of visual attention propose that perceptual ambiguity is resolved through inhibition, which is stronger when objects share a greater number of neural receptive fields (RFs). According to this theory, the misallocation of attention to a salient distractor--that is, the capture of attention--can be indexed in RF-scaled interference costs. We used this pattern to investigate distractor-related costs in visual search across several manipulations of temporal context. Distractor costs are generally larger under circumstances in which the distractor can be defined by features that have recently characterised the target, suggesting that capture occurs in these trials. However, our results show that search for a target in the presence of a salient distractor also produces RF-scaled costs when the features defining the target and distractor do not vary from trial to trial. Contextual differences in distractor costs appear to reflect something other than capture, perhaps a qualitative difference in the type of attentional mechanism deployed to the distractor.
Künstler, E C S; Finke, K; Günther, A; Klingner, C; Witte, O; Bublak, P
2018-01-01
Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the 'theory of visual attention' (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity. Moreover, goodness-of-fit values and bootstrapping estimates were derived to test whether the TVA-model is validly applicable also under dual task conditions, and whether the robustness of parameter estimates is comparable in single- and dual-task conditions. 24 subjects of middle to higher age performed a continuous tapping task, and a visual processing task (whole report of briefly presented letter arrays) under both single- and dual-task conditions. Results suggest a decline of both visual processing capacity and VSTM storage capacity under dual-task conditions, while the perceptual threshold remained unaffected by a concurrent motor task. In addition, goodness-of-fit values and bootstrapping estimates support the notion that participants processed the visual task in a qualitatively comparable, although quantitatively less efficient way under dual-task conditions. The results support a capacity sharing account of motor-cognitive dual tasking and suggest that even performing a relatively simple motor task relies on central attentional capacity that is necessary for efficient visual information uptake.
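For readers unfamiliar with the TVA parameters mentioned above (processing speed C, VSTM storage capacity K, perceptual threshold t0), the following is a minimal simulation, under standard TVA assumptions, of a whole-report trial: display elements race exponentially toward VSTM, only races that finish within the effective exposure count, and at most K items are stored. The parameter values and equal attentional weights are illustrative; this is not the fitting procedure used in the study.

```python
# Illustrative simulation of TVA-style whole report (not the study's fitting
# code): elements race exponentially toward VSTM at rates v_x = C * w_x / sum(w);
# only the first K finishers within the effective exposure (t - t0) are reported.
import numpy as np

def simulate_whole_report(w, C=40.0, K=3.5, t0=0.015, t=0.1, n_sims=5000, seed=0):
    rng = np.random.default_rng(seed)
    w = np.asarray(w, dtype=float)
    v = C * w / w.sum()                      # processing rate per element (items/s)
    # Fractional capacity: store floor(K) items on some trials, floor(K)+1 on others.
    K_trial = np.floor(K) + (rng.random(n_sims) < (K - np.floor(K)))
    finish = rng.exponential(1.0 / v, size=(n_sims, v.size))   # race finishing times
    scores = []
    for k, times in zip(K_trial, finish):
        encoded = np.sort(times)[: int(k)]                     # first k finishers...
        scores.append(np.sum(encoded <= max(t - t0, 0.0)))     # ...within exposure
    return float(np.mean(scores))

# Expected whole-report score for a 6-letter array, equal weights, 100 ms exposure.
print(simulate_whole_report(np.ones(6)))
```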
Visual short-term memory load strengthens selective attention.
Roper, Zachary J J; Vecera, Shaun P
2014-04-01
Perceptual load theory accounts for many attentional phenomena; however, its mechanism remains elusive because it invokes underspecified attentional resources. Recent dual-task evidence has revealed that a concurrent visual short-term memory (VSTM) load slows visual search and reduces contrast sensitivity, but it is unknown whether a VSTM load also constricts attention in a canonical perceptual load task. If attentional selection draws upon VSTM resources, then distraction effects, which measure attentional "spill-over", will be reduced as competition for resources increases. Observers performed a low perceptual load flanker task during the delay period of a VSTM change detection task. We observed a reduction of the flanker effect in the perceptual load task as a function of increasing concurrent VSTM load. These findings were not due to perceptual-level interactions between the physical displays of the two tasks. Our findings suggest that perceptual representations of distractor stimuli compete with the maintenance of visual representations held in memory. We conclude that access to VSTM determines the degree of attentional selectivity; when VSTM is not completely taxed, it is more likely for task-irrelevant items to be consolidated and, consequently, affect responses. The "resources" hypothesized by load theory are at least partly mnemonic in nature, due to the strong correspondence they share with VSTM capacity.
[Effects of the verbal loading on laterality difference in visual field (author's transl)].
Kawai, M
1980-02-01
In connection with Kinsbourne's attention model, the relation between the level of hemisphere sharing of the loading task and the visual-laterality difference was examined under verbal loading conditions. The subjects were 13 (8 male and 5 female) right-handed college students. The loading tasks in Exp. I were the "same-different" judgment of Japanese hiragana characters and of triliteral hiragana words, and the "true-false" judgment of short statements. In Exp. II, a procedure to eliminate configurational matching of the letters was followed. The results of the two experiments suggest that the visual-laterality effect occurs only when the level of hemisphere sharing of the loading task exceeds a certain lower bound.
Vadnais, Sarah A; Kibby, Michelle Y; Jagger-Rickels, Audreyana C
2018-01-01
We identified statistical predictors of four processing speed (PS) components in a sample of 151 children with and without attention-deficit/hyperactivity disorder (ADHD). Performance on perceptual speed was predicted by visual attention/short-term memory, whereas incidental learning/psychomotor speed was predicted by verbal working memory. Rapid naming was predictive of each PS component assessed, and inhibition predicted all but one task, suggesting a shared need to identify/retrieve stimuli rapidly and inhibit incorrect responding across PS components. Hence, we found both shared and unique predictors of perceptual, cognitive, and output speed, suggesting more specific terminology should be used in future research on PS in ADHD.
Dogs respond appropriately to cues of humans' attentional focus.
Virányi, Zsófia; Topál, József; Gácsi, Márta; Miklósi, Adám; Csányi, Vilmos
2004-05-31
Dogs' ability to recognise cues of human visual attention was studied in different experiments. Study 1 was designed to test the dogs' responsiveness to their owner's tape-recorded verbal commands (Down!) while the Instructor (who was the owner of the dog) was facing either the dog or a human partner or none of them, or was visually separated from the dog. Results show that dogs were more ready to follow the command if the Instructor attended them during instruction compared to situations when the Instructor faced the human partner or was out of sight of the dog. Importantly, however, dogs showed intermediate performance when the Instructor was orienting into 'empty space' during the re-played verbal commands. This suggests that dogs are able to differentiate the focus of human attention. In Study 2 the same dogs were offered the possibility to beg for food from two unfamiliar humans whose visual attention (i.e. facing the dog or turning away) was systematically varied. The dogs' preference for choosing the attentive person shows that dogs are capable of using visual cues of attention to evaluate the human actors' responsiveness to solicit food-sharing. The dogs' ability to understand the communicatory nature of the situations is discussed in terms of their social cognitive skills and unique evolutionary history.
ERIC Educational Resources Information Center
LoBue, Vanessa
2010-01-01
Spiders are among the most common targets of fears and phobias in the world. In visual search tasks, adults detect their presence more rapidly than other kinds of stimuli. Reported here is an investigation of whether young children share this attentional bias for the detection of spiders. In a series of experiments, preschoolers and adults were…
Enhancing reading performance through action video games: the role of visual attention span.
Antzaka, A; Lallier, M; Meyer, S; Diard, J; Carreiras, M; Valdois, S
2017-11-06
Recent studies reported that Action Video Game-AVG training improves not only certain attentional components, but also reading fluency in children with dyslexia. We aimed to investigate the shared attentional components of AVG playing and reading, by studying whether the Visual Attention (VA) span, a component of visual attention that has previously been linked to both reading development and dyslexia, is improved in frequent players of AVGs. Thirty-six French fluent adult readers, matched on chronological age and text reading proficiency, composed two groups: frequent AVG players and non-players. Participants performed behavioural tasks measuring the VA span, and a challenging reading task (reading of briefly presented pseudo-words). AVG players performed better on both tasks and performance on these tasks was correlated. These results further support the transfer of the attentional benefits of playing AVGs to reading, and indicate that the VA span could be a core component mediating this transfer. The correlation between VA span and pseudo-word reading also supports the involvement of VA span even in adult reading. Future studies could combine VA span training with defining features of AVGs, in order to build a new generation of remediation software.
Shalev, Nir; De Wandel, Linde; Dockree, Paul; Demeyere, Nele; Chechlacz, Magdalena
2017-10-03
The Theory of Visual Attention (TVA) provides a mathematical formalisation of the "biased competition" account of visual attention. Applying this model to individual performance in a free recall task allows the estimation of 5 independent attentional parameters: visual short-term memory (VSTM) capacity, speed of information processing, perceptual threshold of visual detection; attentional weights representing spatial distribution of attention (spatial bias), and the top-down selectivity index. While the TVA focuses on selection in space, complementary accounts of attention describe how attention is maintained over time, and how temporal processes interact with selection. A growing body of evidence indicates that different facets of attention interact and share common neural substrates. The aim of the current study was to modulate a spatial attentional bias via transfer effects, based on a mechanistic understanding of the interplay between spatial, selective and temporal aspects of attention. Specifically, we examined here: (i) whether a single administration of a lateralized sustained attention task could prime spatial orienting and lead to transferable changes in attentional weights (assigned to the left vs right hemi-field) and/or other attentional parameters assessed within the framework of TVA (Experiment 1); (ii) whether the effects of such spatial-priming on TVA parameters could be further enhanced by bi-parietal high frequency transcranial random noise stimulation (tRNS) (Experiment 2). Our results demonstrate that spatial attentional bias, as assessed within the TVA framework, was primed by sustaining attention towards the right hemi-field, but this spatial-priming effect did not occur when sustaining attention towards the left. Furthermore, we show that bi-parietal high-frequency tRNS combined with the rightward spatial-priming resulted in an increased attentional selectivity. To conclude, we present a novel, theory-driven method for attentional modulation providing important insights into how the spatial and temporal processes in attention interact with attentional selection. Copyright © 2017 Elsevier Ltd. All rights reserved.
A normalization model suggests that attention changes the weighting of inputs between visual areas.
Ruff, Douglas A; Cohen, Marlene R
2017-05-16
Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1-MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations.
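The divisive-normalization account discussed above can be illustrated with a generic toy computation in which an MT-like response pools weighted V1-like inputs and is divided by a normalization term, with "attention" implemented as a change in the V1-to-MT weights rather than in the responses themselves. The rates, weights, semisaturation constant, and attention factor below are made-up values, not the published model.

```python
# Generic divisive-normalization sketch (illustrative, not the published model):
# an MT-like response pools weighted V1-like inputs and is divided by a
# normalization pool; attention rescales the input weights.
import numpy as np

def mt_response(v1_rates, weights, sigma=10.0):
    """Divisively normalized pooling of V1 inputs."""
    v1_rates = np.asarray(v1_rates, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.dot(weights, v1_rates) / (sigma + v1_rates.sum())

v1 = np.array([20.0, 35.0, 50.0])         # hypothetical V1 firing rates (spikes/s)
w_unattended = np.array([0.4, 0.4, 0.4])
w_attended = 1.3 * w_unattended           # attention increases V1 -> MT weights

print("unattended:", round(mt_response(v1, w_unattended), 2))
print("attended:  ", round(mt_response(v1, w_attended), 2))
```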
Sklar, A E; Sarter, N B
1999-12-01
Observed breakdowns in human-machine communication can be explained, in part, by the nature of current automation feedback, which relies heavily on focal visual attention. Such feedback is not well suited for capturing attention in case of unexpected changes and events or for supporting the parallel processing of large amounts of data in complex domains. As suggested by multiple-resource theory, one possible solution to this problem is to distribute information across various sensory modalities. A simulator study was conducted to compare the effectiveness of visual, tactile, and redundant visual and tactile cues for indicating unexpected changes in the status of an automated cockpit system. Both tactile conditions resulted in higher detection rates for, and faster response times to, uncommanded mode transitions. Tactile feedback did not interfere with, nor was its effectiveness affected by, the performance of concurrent visual tasks. The observed improvement in task-sharing performance indicates that the introduction of tactile feedback is a promising avenue toward better supporting human-machine communication in event-driven, information-rich domains.
Povinelli, Daniel J; Dunphy-Lelii, Sarah; Reaux, James E; Mazza, Michael P
2002-01-01
We present the results of 5 experiments that assessed 7 chimpanzees' understanding of the visual experiences of others. The research was conducted when the animals were adolescents (7-8 years of age) and adults (12 years of age). The experiments examined their ability to recognize the equivalence between visual and tactile modes of gaining the attention of others (Exp. 1), their understanding that the vision of others can be impeded by opaque barriers (Exps. 2 and 5), and their ability to distinguish between postural cues which are and are not specifically relevant to visual attention (Exps. 3 and 4). The results suggest that although chimpanzees are excellent at exploiting the observable contingencies that exist between the facial and bodily postures of other agents on the one hand, and events in the world on the other, these animals may not construe others as possessing psychological states related to 'seeing' or 'attention.' Humans and chimpanzees share homologous suites of psychological systems that detect and process information about both the static and dynamic aspects of social life, but humans alone may possess systems which interpret behavior in terms of abstract, unobservable mental states such as seeing and attention. Copyright 2002 S. Karger AG, Basel
Attentional modulation of neuronal variability in circuit models of cortex
Kanashiro, Tatjana; Ocker, Gabriel Koch; Cohen, Marlene R; Doiron, Brent
2017-01-01
The circuit mechanisms behind shared neural variability (noise correlation) and its dependence on neural state are poorly understood. Visual attention is well-suited to constrain cortical models of response variability because attention both increases firing rates and their stimulus sensitivity, as well as decreases noise correlations. We provide a novel analysis of population recordings in rhesus primate visual area V4 showing that a single biophysical mechanism may underlie these diverse neural correlates of attention. We explore model cortical networks where top-down mediated increases in excitability, distributed across excitatory and inhibitory targets, capture the key neuronal correlates of attention. Our models predict that top-down signals primarily affect inhibitory neurons, whereas excitatory neurons are more sensitive to stimulus specific bottom-up inputs. Accounting for trial variability in models of state dependent modulation of neuronal activity is a critical step in building a mechanistic theory of neuronal cognition. DOI: http://dx.doi.org/10.7554/eLife.23978.001 PMID:28590902
On the Structure of Neuronal Population Activity under Fluctuations in Attentional State
Denfield, George H.; Bethge, Matthias; Tolias, Andreas S.
2016-01-01
Attention is commonly thought to improve behavioral performance by increasing response gain and suppressing shared variability in neuronal populations. However, both the focus and the strength of attention are likely to vary from one experimental trial to the next, thereby inducing response variability unknown to the experimenter. Here we study analytically how fluctuations in attentional state affect the structure of population responses in a simple model of spatial and feature attention. In our model, attention acts on the neural response exclusively by modulating each neuron's gain. Neurons are conditionally independent given the stimulus and the attentional gain, and correlated activity arises only from trial-to-trial fluctuations of the attentional state, which are unknown to the experimenter. We find that this simple model can readily explain many aspects of neural response modulation under attention, such as increased response gain, reduced individual and shared variability, increased correlations with firing rates, limited range correlations, and differential correlations. We therefore suggest that attention may act primarily by increasing response gain of individual neurons without affecting their correlation structure. The experimentally observed reduction in correlations may instead result from reduced variability of the attentional gain when a stimulus is attended. Moreover, we show that attentional gain fluctuations, even if unknown to a downstream readout, do not impair the readout accuracy despite inducing limited-range correlations, whereas fluctuations of the attended feature can in principle limit behavioral performance. SIGNIFICANCE STATEMENT Covert attention is one of the most widely studied examples of top-down modulation of neural activity in the visual system. Recent studies argue that attention improves behavioral performance by shaping of the noise distribution to suppress shared variability rather than by increasing response gain. Our work shows, however, that latent, trial-to-trial fluctuations of the focus and strength of attention lead to shared variability that is highly consistent with known experimental observations. Interestingly, fluctuations in the strength of attention do not affect coding performance. As a consequence, the experimentally observed changes in response variability may not be a mechanism of attention, but rather a side effect of attentional allocation strategies in different behavioral contexts. PMID:26843656
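The core mechanism proposed above (conditionally independent neurons sharing a fluctuating attentional gain) is straightforward to simulate. In the sketch below, Poisson units share a trial-varying multiplicative gain; larger gain fluctuations produce larger spike-count correlations, and shrinking the fluctuations, as hypothesized for attended stimuli, reduces them. The rates, gain variances, and Gaussian gain distribution are illustrative choices, not the paper's exact parameterization.

```python
# Illustrative simulation of the gain-fluctuation idea above: conditionally
# independent Poisson units whose rates are scaled by a shared, trial-varying
# attentional gain. Shared gain variance alone produces spike-count correlations;
# shrinking it (as when a stimulus is attended) reduces them.
import numpy as np

def mean_pairwise_correlation(counts):
    c = np.corrcoef(counts.T)
    return c[np.triu_indices_from(c, k=1)].mean()

def simulate(gain_sd, base_rates, n_trials=5000, seed=1):
    rng = np.random.default_rng(seed)
    gains = rng.normal(1.0, gain_sd, size=(n_trials, 1)).clip(min=0.0)
    counts = rng.poisson(gains * base_rates)   # conditionally independent given gain
    return mean_pairwise_correlation(counts)

rates = np.array([5.0, 10.0, 20.0, 40.0])      # hypothetical mean spike counts
print("large gain fluctuations:", round(simulate(0.30, rates), 3))
print("small gain fluctuations:", round(simulate(0.05, rates), 3))
```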
Global facilitation of attended features is obligatory and restricts divided attention.
Andersen, Søren K; Hillyard, Steven A; Müller, Matthias M
2013-11-13
In many common situations such as driving an automobile it is advantageous to attend concurrently to events at different locations (e.g., the car in front, the pedestrian to the side). While spatial attention can be divided effectively between separate locations, studies investigating attention to nonspatial features have often reported a "global effect", whereby items having the attended feature may be preferentially processed throughout the entire visual field. These findings suggest that spatial and feature-based attention may at times act in direct opposition: spatially divided foci of attention cannot be truly independent if feature attention is spatially global and thereby affects all foci equally. In two experiments, human observers attended concurrently to one of two overlapping fields of dots of different colors presented in both the left and right visual fields. When the same color or two different colors were attended on the two sides, deviant targets were detected accurately, and visual-cortical potentials elicited by attended dots were enhanced. However, when the attended color on one side matched the ignored color on the opposite side, attentional modulation of cortical potentials was abolished. This loss of feature selectivity could be attributed to enhanced processing of unattended items that shared the color of the attended items in the opposite field. Thus, while it is possible to attend to two different colors at the same time, this ability is fundamentally constrained by spatially global feature enhancement in early visual-cortical areas, which is obligatory and persists even when it explicitly conflicts with task demands.
Visual attention mechanisms in happiness versus trustworthiness processing of facial expressions.
Calvo, Manuel G; Krumhuber, Eva G; Fernández-Martín, Andrés
2018-03-01
A happy facial expression makes a person look (more) trustworthy. Do perceptions of happiness and trustworthiness rely on the same face regions and visual attention processes? In an eye-tracking study, eye movements and fixations were recorded while participants judged the un/happiness or the un/trustworthiness of dynamic facial expressions in which the eyes and/or the mouth unfolded from neutral to happy or vice versa. A smiling mouth and happy eyes enhanced perceived happiness and trustworthiness similarly, with a greater contribution of the smile relative to the eyes. This comparable judgement output for happiness and trustworthiness was reached through shared as well as distinct attentional mechanisms: (a) entry times and (b) initial fixation thresholds for each face region were equivalent for both judgements, thereby revealing the same attentional orienting in happiness and trustworthiness processing. However, (c) greater and (d) longer fixation density for the mouth region in the happiness task, and for the eye region in the trustworthiness task, demonstrated different selective attentional engagement. Relatedly, (e) mean fixation duration across face regions was longer in the trustworthiness task, thus showing increased attentional intensity or processing effort.
Pfeiffer, Ulrich J.; Schilbach, Leonhard; Jording, Mathis; Timmermans, Bert; Bente, Gary; Vogeley, Kai
2012-01-01
Social gaze provides a window into the interests and intentions of others and allows us to actively point out our own. It enables us to engage in triadic interactions involving human actors and physical objects and to build an indispensable basis for coordinated action and collaborative efforts. The object-related aspect of gaze in combination with the fact that any motor act of looking encompasses both input and output of the minds involved makes this non-verbal cue system particularly interesting for research in embodied social cognition. Social gaze comprises several core components, such as gaze-following or gaze aversion. Gaze-following can result in situations of either “joint attention” or “shared attention.” The former describes situations in which the gaze-follower is aware of sharing a joint visual focus with the gazer. The latter refers to a situation in which gazer and gaze-follower focus on the same object and both are aware of their reciprocal awareness of this joint focus. Here, a novel interactive eye-tracking paradigm suited for studying triadic interactions was used to explore two aspects of social gaze. Experiments 1a and 1b assessed how the latency of another person’s gaze reactions (i.e., gaze-following or gaze aversion) affected participants’ sense of agency, which was measured by their experience of relatedness of these reactions. Results demonstrate that both timing and congruency of a gaze reaction as well as the other’s action options influence the sense of agency. Experiment 2 explored differences in gaze dynamics when participants were asked to establish either joint or shared attention. Findings indicate that establishing shared attention takes longer and requires a larger number of gaze shifts as compared to joint attention, which more closely seems to resemble simple visual detection. Taken together, novel insights into the sense of agency and the awareness of others in gaze-based interaction are provided. PMID:23227017
ERIC Educational Resources Information Center
Bente, Gary; Ruggenberg, Sabine; Kramer, Nicole C.; Eschenburg, Felix
2008-01-01
This study analyzes the influence of avatars on social presence, interpersonal trust, perceived communication quality, nonverbal behavior, and visual attention in Net-based collaborations using a comparative approach. A real-time communication window including a special avatar interface was integrated into a shared collaborative workspace.…
Implicit Object Naming in Visual Search: Evidence from Phonological Competition
Walenchok, Stephen C.; Hout, Michael C.; Goldinger, Stephen D.
2016-01-01
During visual search, people are distracted by objects that visually resemble search targets; search is impaired when targets and distractors share overlapping features. In this study, we examined whether a nonvisual form of similarity, overlapping object names, can also affect search performance. In three experiments, people searched for images of real-world objects (e.g., a beetle) among items whose names either all shared the same phonological onset (/bi/), or were phonologically varied. Participants either searched for one or three potential targets per trial, with search targets designated either visually or verbally. We examined standard visual search (Experiments 1 and 3) and a self-paced serial search task wherein participants manually rejected each distractor (Experiment 2). We hypothesized that people would maintain visual templates when searching for single targets, but would rely more on object names when searching for multiple items and when targets were verbally cued. This reliance on target names would make performance susceptible to interference from similar-sounding distractors. Experiments 1 and 2 showed the predicted interference effect in conditions with high memory load and verbal cues. In Experiment 3, eye-movement results showed that phonological interference resulted from small increases in dwell time to all distractors. The results suggest that distractor names are implicitly activated during search, slowing attention disengagement when targets and distractors share similar names. PMID:27531018
Liang, Jiali; Wilkinson, Krista
2018-04-18
A striking characteristic of the social communication deficits in individuals with autism is atypical patterns of eye contact during social interactions. We used eye-tracking technology to evaluate how the number of human figures depicted and the presence of sharing activity between the human figures in still photographs influenced visual attention by individuals with autism, typical development, or Down syndrome. We sought to examine visual attention to the contents of visual scene displays, a growing form of augmentative and alternative communication support. Eye-tracking technology recorded point-of-gaze while participants viewed 32 photographs in which either 2 or 3 human figures were depicted. Sharing activities between these human figures are either present or absent. The sampling rate was 60 Hz; that is, the technology gathered 60 samples of gaze behavior per second, per participant. Gaze behaviors, including latency to fixate and time spent fixating, were quantified. The overall gaze behaviors were quite similar across groups, regardless of the social content depicted. However, individuals with autism were significantly slower than the other groups in latency to first view the human figures, especially when there were 3 people depicted in the photographs (as compared with 2 people). When participants' own viewing pace was considered, individuals with autism resembled those with Down syndrome. The current study supports the inclusion of social content with various numbers of human figures and sharing activities between human figures into visual scene displays, regardless of the population served. Study design and reporting practices in eye-tracking literature as it relates to autism and Down syndrome are discussed. https://doi.org/10.23641/asha.6066545.
Shared and distinct factors driving attention and temporal processing across modalities
Berry, Anne S.; Li, Xu; Lin, Ziyong; Lustig, Cindy
2013-01-01
In addition to the classic finding that “sounds are judged longer than lights,” the timing of auditory stimuli is often more precise and accurate than is the timing of visual stimuli. In cognitive models of temporal processing, these modality differences are explained by positing that auditory stimuli more automatically capture and hold attention, more efficiently closing an attentional switch that allows the accumulation of pulses marking the passage of time (Block & Zakay, 1997; Meck, 1991; Penney, 2003). However, attention is a multifaceted construct, and there has been little attempt to determine which aspects of attention may be related to modality effects. We used visual and auditory versions of the Continuous Temporal Expectancy Task (CTET; O'Connell et al., 2009), a timing task previously linked to behavioral and electrophysiological measures of mind-wandering and attention lapses, and tested participants with or without the presence of a video distractor. Performance in the auditory condition was generally superior to that in the visual condition, replicating standard results in the timing literature. The auditory modality was also less affected by declines in sustained attention indexed by declines in performance over time. In contrast, distraction had an equivalent impact on performance in the two modalities. Analysis of individual differences in performance revealed further differences between the two modalities: Poor performance in the auditory condition was primarily related to boredom, whereas poor performance in the visual condition was primarily related to distractibility. These results suggest that: 1) challenges to different aspects of attention reveal both modality-specific and nonspecific effects on temporal processing, and 2) different factors drive individual differences when testing across modalities. PMID:23978664
Visual imagery of famous faces: effects of memory and attention revealed by fMRI.
Ishai, Alumit; Haxby, James V; Ungerleider, Leslie G
2002-12-01
Complex pictorial information can be represented and retrieved from memory as mental visual images. Functional brain imaging studies have shown that visual perception and visual imagery share common neural substrates. The type of memory (short- or long-term) that mediates the generation of mental images, however, has not been addressed previously. The purpose of this study was to investigate the neural correlates underlying imagery generated from short- and long-term memory (STM and LTM). We used famous faces to localize the visual response during perception and to compare the responses during visual imagery generated from STM (subjects memorized specific pictures of celebrities before the imagery task) and imagery from LTM (subjects imagined famous faces without seeing specific pictures during the experimental session). We found that visual perception of famous faces activated the inferior occipital gyri, lateral fusiform gyri, the superior temporal sulcus, and the amygdala. Small subsets of these face-selective regions were activated during imagery. Additionally, visual imagery of famous faces activated a network of regions composed of bilateral calcarine, hippocampus, precuneus, intraparietal sulcus (IPS), and the inferior frontal gyrus (IFG). In all these regions, imagery generated from STM evoked more activation than imagery from LTM. Regardless of memory type, focusing attention on features of the imagined faces (e.g., eyes, lips, or nose) resulted in increased activation in the right IPS and right IFG. Our results suggest differential effects of memory and attention during the generation and maintenance of mental images of faces.
Attention capture without awareness in a non-spatial selection task.
Oriet, Chris; Pandey, Mamata; Kawahara, Jun-Ichiro
2017-02-01
Distractors presented prior to a critical target in a rapid sequence of visually-presented items induce a lag-dependent deficit in target identification, particularly when the distractor shares a task-relevant feature of the target. Presumably, such capture of central attention is important for bringing a target into awareness. The results of the present investigation suggest that greater capture of attention by a distractor is not accompanied by greater awareness of it. Moreover, awareness tends to be limited to superficial characteristics of the target such as colour. The findings are interpreted within the context of a model that assumes sudden increases in arousal trigger selection of information for consolidation in working memory. In this conceptualization, prolonged analysis of distractor items sharing task-relevant features leads to larger target identification deficits (i.e., greater capture) but no increase in awareness. Copyright © 2016 Elsevier Inc. All rights reserved.
Fu, Kun; Jin, Junqi; Cui, Runpeng; Sha, Fei; Zhang, Changshui
2017-12-01
Recent progress on automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image captioning system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience where the attention shifts among the visual regions-such transitions impose a thread of ordering in visual perception. This alignment characterizes the flow of latent meaning, which encodes what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system and contrast to published results on several popular datasets, using both automatic evaluation metrics and human evaluation. We show that either region-based attention or scene-specific contexts improves systems without those components. Furthermore, combining these two modeling ingredients attains the state-of-the-art performance.
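Region-based captioning systems of this kind typically rely on a soft-attention step in which the decoder state weights region features into a context vector that feeds the next word prediction. The sketch below shows only that generic step, with made-up dimensions and simple dot-product scoring; it does not reproduce the published architecture or its scene-specific contexts.

```python
# Generic soft-attention step over image regions (illustrative; not the published
# system): the decoder state scores each region feature, a softmax turns the
# scores into attention weights, and the weighted sum is the context vector used
# to predict the next word.
import numpy as np

def attend(decoder_state, region_features):
    scores = region_features @ decoder_state          # dot-product relevance scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax over regions
    context = weights @ region_features               # attention-weighted context
    return weights, context

rng = np.random.default_rng(0)
regions = rng.standard_normal((6, 8))   # 6 hypothetical region features, dim 8
state = rng.standard_normal(8)          # hypothetical decoder hidden state
weights, context = attend(state, regions)
print(np.round(weights, 3), context.shape)
```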
Shen, Wei; Qu, Qingqing; Tong, Xiuhong
2018-05-01
The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at the full-phonological overlap; in Experiment 2, phonological information was manipulated at the partial-phonological overlap; and in Experiment 3, the phonological competitors were manipulated to share either full overlap or partial overlap with targets directly. Results of the three experiments showed that the phonological competitor effects were observed at both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.
Visual scanning with or without spatial uncertainty and time-sharing performance
NASA Technical Reports Server (NTRS)
Liu, Yili; Wickens, Christopher D.
1989-01-01
An experiment is reported that examines the pattern of task interference between visual scanning as a sequential and selective attention process and other concurrent spatial or verbal processing tasks. A distinction is proposed between visual scanning with or without spatial uncertainty regarding the possible differential effects of these two types of scanning on interference with other concurrent processes. The experiment required the subject to perform a simulated primary tracking task, which was time-shared with a secondary spatial or verbal decision task. The relevant information that was needed to perform the decision tasks was displayed with or without spatial uncertainty. The experiment employed a 2 x 2 x 2 design with types of scanning (with or without spatial uncertainty), expected scanning distance (low/high), and codes of concurrent processing (spatial/verbal) as the three experimental factors. The results provide strong evidence that visual scanning as a spatial exploratory activity produces greater task interference with concurrent spatial tasks than with concurrent verbal tasks. Furthermore, spatial uncertainty in visual scanning is identified as the crucial factor in producing this differential effect.
Attentional limits on the perception and memory of visual information.
Palmer, J
1990-05-01
Attentional limits on perception and memory were measured by the decline in performance with increasing numbers of objects in a display. Multiple objects were presented to Ss who discriminated visual attributes. In a representative condition, 4 lines were briefly presented followed by a single line in 1 of the same locations. Ss were required to judge if the single line in the 2nd display was longer or shorter than the line in the corresponding location of the 1st display. The length difference threshold was calculated as a function of the number of objects. The difference thresholds doubled when the number of objects was increased from 1 to 4. This effect was generalized in several ways, and nonattentional explanations were ruled out. Further analyses showed that the attentional processes must share information from at least 4 objects and can be described by a simple model.
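The reported doubling of thresholds from one to four objects is the pattern predicted by a simple sample-size account in which judgment noise grows with the square root of the number of monitored objects; the calculation below illustrates that reading, which may or may not correspond exactly to the simple model fit in the paper.

```python
# Illustrative sample-size calculation: if judgment noise grows with the square
# root of the number of monitored objects, thresholds at set size n scale as
# threshold(n) = threshold(1) * sqrt(n), which gives exactly a doubling at n = 4.
import math

def predicted_threshold(threshold_1, n_objects):
    return threshold_1 * math.sqrt(n_objects)

for n in (1, 2, 4, 8):
    print(n, round(predicted_threshold(1.0, n), 2))   # 1.0, 1.41, 2.0, 2.83
```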
Modulation of neuronal responses during covert search for visual feature conjunctions.
Buracas, Giedrius T; Albright, Thomas D
2009-09-29
While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions.
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
An integrative, experience-based theory of attentional control.
Wilder, Matthew H; Mozer, Michael C; Wickens, Christopher D
2011-02-09
Although diverse, theories of visual attention generally share the notion that attention is controlled by some combination of three distinct strategies: (1) exogenous cuing from locally contrasting primitive visual features, such as abrupt onsets or color singletons (e.g., L. Itti, C. Koch, & E. Niebur, 1998), (2) endogenous gain modulation of exogenous activations, used to guide attention to task-relevant features (e.g., V. Navalpakkam & L. Itti, 2007; J. Wolfe, 1994, 2007), and (3) endogenous prediction of likely locations of interest, based on task and scene gist (e.g., A. Torralba, A. Oliva, M. Castelhano, & J. Henderson, 2006). However, little work has been done to synthesize these disparate theories. In this work, we propose a unifying conceptualization in which attention is controlled along two dimensions: the degree of task focus and the contextual scale of operation. Previously proposed strategies, and their combinations, can be viewed as instances of this one mechanism. Thus, this theory serves not as a replacement for existing models but as a means of bringing them into a coherent framework. We present an implementation of this theory and demonstrate its applicability to a wide range of attentional phenomena. The model accounts for key results in visual search with synthetic images and makes reasonable predictions for human eye movements in search tasks involving real-world images. In addition, the theory offers an unusual perspective on attention that places a fundamental emphasis on the role of experience and task-related knowledge.
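Models that combine these strategies often reduce, at least conceptually, to a single priority map in which feature-contrast maps are weighted by task-driven gains and modulated by a scene-based location prior. The sketch below shows that generic combination with arbitrary maps, gains, and prior; it is not the implementation evaluated in the paper.

```python
# Generic priority-map combination (illustrative, not the paper's implementation):
# exogenous feature-contrast maps are weighted by endogenous, task-driven gains
# and multiplied by a scene-based prior over likely target locations.
import numpy as np

rng = np.random.default_rng(2)
h, w = 8, 8
salience = {                          # exogenous feature-contrast maps
    "color": rng.random((h, w)),
    "orientation": rng.random((h, w)),
    "onset": rng.random((h, w)),
}
gains = {"color": 2.0, "orientation": 0.5, "onset": 1.0}   # endogenous feature gains
scene_prior = np.ones((h, w))
scene_prior[h // 2:, :] = 3.0         # e.g., gist suggests targets in the lower half

priority = scene_prior * sum(gains[f] * salience[f] for f in salience)
attended_location = np.unravel_index(np.argmax(priority), priority.shape)
print("next attended location (row, col):", attended_location)
```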
A Metric to Quantify Shared Visual Attention in Two-Person Teams
NASA Technical Reports Server (NTRS)
Gontar, Patrick; Mulligan, Jeffrey B.
2015-01-01
1) Introduction: Critical tasks in high-risk environments are often performed by teams, the members of which must work together efficiently. In some situations, the team members may have to work together to solve a particular problem, while in others it may be better for them to divide the work into separate tasks that can be completed in parallel. We hypothesize that these two team strategies can be differentiated on the basis of shared visual attention, measured by gaze tracking. 2) Methods: Gaze recordings were obtained for two-person flight crews flying a high-fidelity simulator (Gontar & Hoermann, 2014). Gaze was categorized with respect to 12 areas of interest (AOIs). We used these data to construct time series of 12-dimensional vectors, with each vector component representing one of the AOIs. At each time step, each vector component was set to 0, except for the one corresponding to the currently fixated AOI, which was set to 1. This time series could then be averaged in time, with the averaging window time (t) as a variable parameter. For example, when we average with a t of one minute, each vector component represents the proportion of time that the corresponding AOI was fixated within the corresponding one-minute interval. We then computed the Pearson product-moment correlation coefficient between the gaze proportion vectors for each of the two crew members, at each point in time, resulting in a signal representing the time-varying correlation between gaze behaviors. We determined criteria for concluding correlated gaze behavior using two methods: first, a permutation test was applied to the subjects' data. When one crew member's gaze proportion vector is correlated with a random time sample from the other crew member's data, a distribution of correlation values is obtained that differs markedly from the distribution obtained from temporally aligned samples. In addition to validating that the gaze tracker was functioning reasonably well, this also allows us to compute probabilities of coordinated behavior for each value of the correlation. As an alternative, we also tabulated distributions of correlation coefficients for synthetic data sets, in which the behavior was modeled as a first-order Markov process, and compared correlation distributions for identical processes with those for disparate processes, allowing us to choose criteria and estimate error rates. 3) Discussion: Our method of gaze correlation is able to measure shared visual attention, and can distinguish between activities involving different instruments. We plan to analyze whether pilots' strategies of sharing visual attention can predict performance. Possible measurements of performance include expert ratings from instructors, fuel consumption, total task time, and failure rate. While developed for two-person crews, our approach can be applied to larger groups, using intra-class correlation coefficients instead of the Pearson product-moment correlation.
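Because this abstract spells the computation out step by step, a minimal sketch may help make the metric concrete. The code below is an illustrative reconstruction, not the authors' implementation: the function names, the 1 Hz sampling rate, and the simulated gaze streams are assumptions; only the pipeline (one-hot AOI coding, windowed averaging, per-time-step Pearson correlation) follows the description above.

```python
import numpy as np

def gaze_proportion_vectors(aoi_indices, n_aois=12, window=60):
    """One-hot encode the fixated AOI at each time step, then average over a
    sliding window so each row gives the proportion of that window spent on
    each AOI."""
    onehot = np.zeros((len(aoi_indices), n_aois))
    onehot[np.arange(len(aoi_indices)), aoi_indices] = 1.0
    kernel = np.ones(window) / window
    return np.column_stack(
        [np.convolve(onehot[:, k], kernel, mode="valid") for k in range(n_aois)]
    )

def shared_attention_correlation(aoi_a, aoi_b, n_aois=12, window=60):
    """Time-varying Pearson correlation between two crew members' windowed
    gaze-proportion vectors (one correlation value per time step)."""
    pa = gaze_proportion_vectors(aoi_a, n_aois, window)
    pb = gaze_proportion_vectors(aoi_b, n_aois, window)
    pa_c = pa - pa.mean(axis=1, keepdims=True)
    pb_c = pb - pb.mean(axis=1, keepdims=True)
    num = (pa_c * pb_c).sum(axis=1)
    den = np.sqrt((pa_c ** 2).sum(axis=1) * (pb_c ** 2).sum(axis=1))
    den[den == 0] = np.nan  # undefined when a proportion vector is flat
    return num / den

# Example with simulated data: two gaze streams sampled at 1 Hz over 10 minutes,
# correlated with a one-minute averaging window.
rng = np.random.default_rng(0)
crew_a = rng.integers(0, 12, size=600)
crew_b = rng.integers(0, 12, size=600)
r_t = shared_attention_correlation(crew_a, crew_b, window=60)
```

A permutation test of the kind described above could then be run by repeatedly time-shifting one crew member's stream and recomputing the same statistic, comparing the aligned correlations against that null distribution.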
Kasai, Tetsuko; Moriya, Hiroki; Hirano, Shingo
2011-07-05
It has been proposed that the most fundamental units of attentional selection are "objects" that are grouped according to Gestalt factors such as similarity or connectedness. Previous studies using event-related potentials (ERPs) have shown that object-based attention is associated with modulations of the visual-evoked N1 component, which reflects an early cortical mechanism that is shared with spatial attention. However, these studies only examined the case of perceptually continuous objects. The present study examined the case of separate objects that are grouped according to feature similarity (color, shape) by indexing lateralized potentials at posterior sites in a sustained-attention task that involved bilateral stimulus arrays. A behavioral object effect was found only for task-relevant shape similarity. Electrophysiological results indicated that attention was guided to the task-irrelevant side of the visual field due to achromatic-color similarity in N1 (155-205 ms post-stimulus) and early N2 (210-260 ms) and due to shape similarity in early N2 and late N2 (280-400 ms) latency ranges. These results are discussed in terms of selection mechanisms and object/group representations. Copyright © 2011 Elsevier B.V. All rights reserved.
From Sharing Time to Showtime! Valuing Diverse Venues for Storytelling in Technology-Rich Classrooms
ERIC Educational Resources Information Center
Ware, Paige D.
2006-01-01
This paper presents two nine-year-old children who used different oral, written, visual, and digital modes as resources to create meaning and to position themselves socially through multimodal stories. Their diverging experiences with technology as a resource for storytelling draw attention to the importance of studying "the ways that old and new…
Montefinese, Maria; Semenza, Carlo
2018-05-17
It is widely accepted that different number-related tasks, including solving simple addition and subtraction, may induce attentional shifts on the so-called mental number line, which represents larger numbers on the right and smaller numbers on the left. Recently, it has been shown that different number-related tasks also employ spatial attention shifts along with general cognitive processes. Here we investigated for the first time whether number line estimation and complex mental arithmetic recruit a common mechanism in healthy adults. Participants' performance in two-digit mental additions and subtractions using visual stimuli was compared with their performance in a mental bisection task using auditory numerical intervals. Results showed significant correlations between participants' performance in number line bisection and that in two-digit mental arithmetic operations, especially in additions, providing a first proof of a shared cognitive mechanism (or multiple shared cognitive mechanisms) between auditory number bisection and complex mental calculation.
Assessment of Attentional Workload while Driving by Eye-fixation-related Potentials
NASA Astrophysics Data System (ADS)
Takeda, Yuji; Yoshitsugu, Noritoshi; Itoh, Kazuya; Kanamori, Nobuhiro
How do drivers cope with the attentional workload of in-vehicle information technology? In the present study, we propose a new psychophysiological measure for assessing drivers' attention: the eye-fixation-related potential (EFRP). The EFRP is an event-related brain potential that can be measured during natural eye movements and that reflects how closely observers examine visual information at the fixated position. In the experiment, the effects of verbal working memory load and spatial working memory load during simulated driving were examined by measuring the number of saccadic eye movements and the EFRP as indices of drivers' attention. The results showed that the spatial working memory load affected both the number of saccadic eye movements and the amplitude of the P100 component of the EFRP, whereas the verbal working memory load affected only the number of saccadic eye movements. This implies that drivers can time-share processing between driving and a verbal working memory task, but that a decline in the accuracy of visual processing while driving is unavoidable when a spatial working memory load is imposed. The present study suggests that the EFRP can provide a new index of drivers' attention beyond the number of saccadic eye movements.
How to be more attractive… when communicating science
NASA Astrophysics Data System (ADS)
Ocko, I.
2015-12-01
Let's face it; we live in a culture that is captivated by attractive things. Beautiful celebrities, sleek smartphones, fancy cars, high fashion, stunning architecture, and more. Research even shows that we pay more attention to people and objects we find attractive. This talk is about taking advantage of this reality by applying it to science communication; luring in an audience and keeping their attention is essential to effective knowledge transfer. When the material is presented in an attractive and engaging format, the audience, lay or even expert, is more interested and thus educated and informed. Visuals, in particular, are powerful communication tools, as they: transmit messages faster; improve comprehension; trigger emotions; increase a learner's attention; stick in long-term memory; motivate learners; and promote widespread sharing of content. Experts suggest that more than half of the U.S. public prefers to learn visually; 90% of information transmitted to the brain is visual; people are much more inclined to spend the time learning something if it is presented in a visual format; and visuals increase retention scores from 10% to 90% after three days of learning the material. One study even suggested that individuals respond markedly better to infographic messages than text-based messages regardless of their learning style or visual literacy. In 2012, Google Search scored the keyword "infographic" with the highest possible trend score of 100. Attractive visuals are an excellent and beneficial complement to presentations, blog posts, news articles, scientific articles, reports, and memos. While various challenges often inhibit scientists from incorporating visuals (time commitment, skillset, software, etc.)—thus leading to missed opportunities—there are many simple strategies that can be used to circumvent common obstacles.
Phonological processing of ignored distractor pictures, an fMRI investigation.
Bles, Mart; Jansma, Bernadette M
2008-02-11
Neuroimaging studies of attention often focus on interactions between stimulus representations and top-down selection mechanisms in visual cortex. Less is known about the neural representation of distractor stimuli beyond visual areas, and the interactions between stimuli in linguistic processing areas. In the present study, participants viewed simultaneously presented line drawings at peripheral locations, while in the MRI scanner. The names of the objects depicted in these pictures were either phonologically related (i.e., shared the same consonant-vowel onset construction), or unrelated. Attention was directed either at the linguistic properties of one of these pictures, or at the fixation point (i.e., away from the pictures). Phonological representations of unattended pictures could be detected in the posterior superior temporal gyrus, the inferior frontal gyrus, and the insula. Under some circumstances, the names of ignored distractor pictures are retrieved by linguistic areas. This implies that selective attention to a specific location does not completely filter out the representations of distractor stimuli at early perceptual stages.
Hay, Julia L; Milders, Maarten M; Sahraie, Arash; Niedeggen, Michael
2006-08-01
Recent visual marking studies have shown that the carry-over of distractor inhibition can impair the ability of singletons to capture attention if the singleton and distractors share features. The current study extends this finding to first-order motion targets and distractors, clearly separated in time by a visual cue (the letter X). Target motion discrimination was significantly impaired, a result attributed to the carry-over of distractor inhibition. Increasing the difficulty of cue detection increased the motion target impairment, as distractor inhibition is thought to increase under demanding (high load) conditions in order to maximize selection efficiency. The apparent conflict with studies reporting reduced distractor inhibition under high load conditions was resolved by distinguishing between the effects of "cognitive" and "perceptual" load. ((c) 2006 APA, all rights reserved).
A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition.
Einhäuser, Wolfgang; Mundhenk, T Nathan; Baldi, Pierre; Koch, Christof; Itti, Laurent
2007-07-20
Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom-up visual processing (attentional selection and/or recognition) or top-down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of "surprise" in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.
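For readers who want the "surprise" measure mentioned above in explicit form: in the related Itti and Baldi work it is usually formalized as Bayesian surprise, the KL divergence between an observer's posterior and prior beliefs over a model class after seeing the data. This restatement is drawn from that literature rather than from this abstract and is offered only as a hedged reminder:

\[
S(D,\mathcal{M}) \;=\; \mathrm{KL}\!\left(P(M \mid D)\,\|\,P(M)\right)
\;=\; \int_{\mathcal{M}} P(M \mid D)\,\log\frac{P(M \mid D)}{P(M)}\,dM .
\]

Reordering the image sequence, as in the second experiment, changes \(P(M \mid D)\) for the frames around the target and thereby raises surprise without altering the target or distractor images themselves.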
Interactions between space-based and feature-based attention.
Leonard, Carly J; Balestreri, Angela; Luck, Steven J
2015-02-01
Although early research suggested that attention to nonspatial features (e.g., red) was confined to stimuli appearing at an attended spatial location, more recent research has emphasized the global nature of feature-based attention. For example, a distractor sharing a target feature may capture attention even if it occurs at a task-irrelevant location. Such findings have been used to argue that feature-based attention operates independently of spatial attention. However, feature-based attention may nonetheless interact with spatial attention, yielding larger feature-based effects at attended locations than at unattended locations. The present study tested this possibility. In 2 experiments, participants viewed a rapid serial visual presentation (RSVP) stream and identified a target letter defined by its color. Target-colored distractors were presented at various task-irrelevant locations during the RSVP stream. We found that feature-driven attentional capture effects were largest when the target-colored distractor was closer to the attended location. These results demonstrate that spatial attention modulates the strength of feature-based attention capture, calling into question the prior evidence that feature-based attention operates in a global manner that is independent of spatial attention.
Kiyonaga, Anastasia; Egner, Tobias
2013-04-01
Working memory (WM) and attention have been studied as separate cognitive constructs, although it has long been acknowledged that attention plays an important role in controlling the activation, maintenance, and manipulation of representations in WM. WM has, conversely, been thought of as a means of maintaining representations to voluntarily guide perceptual selective attention. It has more recently been observed, however, that the contents of WM can capture visual attention, even when such internally maintained representations are irrelevant, and often disruptive, to the immediate external task. Thus, the precise relationship between WM and attention remains unclear, but it appears that they may bidirectionally impact one another, whether or not internal representations are consistent with the external perceptual goals. This reciprocal relationship seems, further, to be constrained by limited cognitive resources to handle demands in either maintenance or selection. We propose here that the close relationship between WM and attention may be best described as a give-and-take interdependence between attention directed toward either actively maintained internal representations (traditionally considered WM) or external perceptual stimuli (traditionally considered selective attention), underpinned by their shared reliance on a common cognitive resource. Put simply, we argue that WM and attention should no longer be considered as separate systems or concepts, but as competing and influencing one another because they rely on the same limited resource. This framework can offer an explanation for the capture of visual attention by irrelevant WM contents, as well as a straightforward account of the underspecified relationship between WM and attention.
Infant joint attention, neural networks and social cognition.
Mundy, Peter; Jarrold, William
2010-01-01
Neural network models of attention can provide a unifying approach to the study of human cognitive and emotional development (Posner & Rothbart, 2007). In this paper we argue that a neural network approach to the infant development of joint attention can inform our understanding of the nature of human social learning, symbolic thought process and social cognition. At its most basic, joint attention involves the capacity to coordinate one's own visual attention with that of another person. We propose that joint attention development involves increments in the capacity to engage in simultaneous or parallel processing of information about one's own attention and the attention of other people. Infant practice with joint attention is both a consequence and an organizer of the development of a distributed and integrated brain network involving frontal and parietal cortical systems. This executive distributed network first serves to regulate the capacity of infants to respond to and direct the overt behavior of other people in order to share experience with others through the social coordination of visual attention. In this paper we describe this parallel and distributed neural network model of joint attention development and discuss two hypotheses that stem from this model. One is that activation of this distributed network during coordinated attention enhances the depth of information processing and encoding beginning in the first year of life. We also propose that with development, joint attention becomes internalized as the capacity to socially coordinate mental attention to internal representations. As this occurs the executive joint attention network makes vital contributions to the development of human symbolic thinking and social cognition. Copyright © 2010 Elsevier Ltd. All rights reserved.
The contributions of visual and central attention to visual working memory.
Souza, Alessandra S; Oberauer, Klaus
2017-10-01
We investigated the role of two kinds of attention, visual and central, for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention, visual or central, was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.
Lapierre, Mark D; Cropper, Simon J; Howe, Piers D L
2017-01-01
To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources.
The strength of attentional biases reduces as visual short-term memory load increases
Shimi, A.
2013-01-01
Despite our visual system receiving irrelevant input that competes with task-relevant signals, we are able to pursue our perceptual goals. Attention enhances our visual processing by biasing the processing of the input that is relevant to the task at hand. The top-down signals enabling these biases are therefore important for regulating lower level sensory mechanisms. In three experiments, we examined whether we apply similar biases to successfully maintain information in visual short-term memory (VSTM). We presented participants with targets alongside distracters and we graded their perceptual similarity to vary the extent to which they competed. Experiments 1 and 2 showed that the more items held in VSTM before the onset of the distracters, the more perceptually distinct the distracters needed to be for participants to retain the target accurately. Experiment 3 extended these behavioral findings by demonstrating that the perceptual similarity between target and distracters exerted a significantly greater effect on occipital alpha amplitudes, depending on the number of items already held in VSTM. The trade-off between VSTM load and target-distracter competition suggests that VSTM and perceptual competition share a partially overlapping mechanism, namely top-down inputs into sensory areas. PMID:23576694
Altered spatial profile of distraction in people with schizophrenia.
Leonard, Carly J; Robinson, Benjamin M; Hahn, Britta; Luck, Steven J; Gold, James M
2017-11-01
Attention is critical for effective processing of incoming information and has long been identified as a potential area of dysfunction in people with schizophrenia (PSZ). In the realm of visual processing, both spatial attention and feature-based attention are involved in biasing selection toward task-relevant stimuli and avoiding distraction. Evidence from multiple paradigms has suggested that PSZ may hyperfocus and have a narrower "spotlight" of spatial attention. In contrast, feature-based attention seems largely preserved, with some suggestion of increased processing of stimuli sharing the target-defining feature. In the current study, we examined the spatial profile of feature-based distraction using a task in which participants searched for a particular color target and attempted to ignore distractors that varied in distance from the target location and either matched or mismatched the target color. PSZ differed from healthy controls in terms of interference from peripheral distractors that shared the target color and were presented 200 ms before a central target. Specifically, PSZ showed an amplified gradient of spatial attention, with increased distraction from near distractors and less interference from far distractors. Moreover, consistent with hyperfocusing, individual differences in this spatial profile were correlated with positive symptoms, such that those with greater positive symptoms showed less distraction by target-colored distractors near the task-relevant location. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren
2016-09-01
We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of the sum of squared sensitivities across items, Σ(d′)², for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
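To make the invariance prediction explicit, here is a brief sketch under the standard sample-size-model assumption (taken from that literature, not stated in this abstract) that an item's sensitivity grows with the square root of the number of noisy samples it receives from a fixed pool of N samples shared by m items:

\[
d'_i \propto \sqrt{n_i}, \qquad \sum_{i=1}^{m} n_i = N
\;\;\Longrightarrow\;\;
\sum_{i=1}^{m} \left(d'_i\right)^2 \;\propto\; \sum_{i=1}^{m} n_i \;=\; N ,
\]

which is constant regardless of how many items share the pool. A larger-than-predicted set-size effect, as reported here for phase discrimination, is what motivates letting one attended item claim a disproportionate share of the samples in the attention-weighted variant.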
Morey, Candice Coker; Cowan, Nelson; Morey, Richard D; Rouder, Jeffery N
2011-02-01
Prominent roles for general attention resources are posited in many models of working memory, but the manner in which these can be allocated differs between models or is not sufficiently specified. We varied the payoffs for correct responses in two temporally-overlapping recognition tasks, a visual array comparison task and a tone sequence comparison task. In the critical conditions, an increase in reward for one task corresponded to a decrease in reward for the concurrent task, but memory load remained constant. Our results show patterns of interference consistent with a trade-off between the tasks, suggesting that a shared resource can be flexibly divided, rather than only fully allotted to either of the tasks. Our findings support a role for a domain-general resource in models of working memory, and furthermore suggest that this resource is flexibly divisible.
Making sense of personal health information: challenges for information visualization.
Faisal, Sarah; Blandford, Ann; Potts, Henry W W
2013-09-01
This article presents a systematic review of the literature on information visualization for making sense of personal health information. Based on this review, five application themes were identified: treatment planning, examination of patients' medical records, representation of pedigrees and family history, communication and shared decision making, and life management and health monitoring. While there are recognized design challenges associated with each of these themes, such as how best to represent data visually and integrate qualitative and quantitative information, other challenges and opportunities have received little attention to date. In this article, we highlight, in particular, the opportunities for supporting people in better understanding their own illnesses and making sense of their health conditions in order to manage them more effectively.
Cross-modal perceptual load: the impact of modality and individual differences.
Sandhu, Rajwant; Dyson, Benjamin James
2016-05-01
Visual distractor processing tends to be more pronounced when the perceptual load (PL) of a task is low compared to when it is high [perceptual load theory (PLT); Lavie in J Exp Psychol Hum Percept Perform 21(3):451-468, 1995]. While PLT is well established in the visual domain, application to cross-modal processing has produced mixed results, and the current study was designed in an attempt to improve previous methodologies. First, we assessed PLT using response competition, a typical metric from the uni-modal domain. Second, we looked at the impact of auditory load on visual distractors, and of visual load on auditory distractors, within the same individual. Third, we compared individual uni- and cross-modal selective attention abilities, by correlating performance with the visual Attentional Network Test (ANT). Fourth, we obtained a measure of the relative processing efficiency between vision and audition, to investigate whether processing ease influences the extent of distractor processing. Although distractor processing was evident during both attend auditory and attend visual conditions, we found that PL did not modulate processing of either visual or auditory distractors. We also found support for a correlation between the uni-modal (visual) ANT and our cross-modal task but only when the distractors were visual. Finally, although auditory processing was more impacted by visual distractors, our measure of processing efficiency only accounted for this asymmetry in the auditory high-load condition. The results are discussed with respect to the continued debate regarding the shared or separate nature of processing resources across modalities.
Feature-based attentional modulation increases with stimulus separation in divided-attention tasks.
Sally, Sharon L; Vidnyánsky, Zoltán; Papathomas, Thomas V
2009-01-01
Attention modifies our visual experience by selecting certain aspects of a scene for further processing. It is therefore important to understand factors that govern the deployment of selective attention over the visual field. Both location and feature-specific mechanisms of attention have been identified and their modulatory effects can interact at a neural level (Treue and Martinez-Trujillo, 1999). The effects of spatial parameters on feature-based attentional modulation were examined for the feature dimensions of orientation, motion and color using three divided-attention tasks. Subjects performed concurrent discriminations of two briefly presented targets (Gabor patches) to the left and right of a central fixation point at eccentricities of ±2.5°, 5°, 10° and 15° in the horizontal plane. Gabors were size-scaled to maintain consistent single-task performance across eccentricities. For all feature dimensions, the data show a linear increase in the attentional effects with target separation. In a control experiment, Gabors were presented on an isoeccentric viewing arc at 10° and 15° at the closest spatial separation (±2.5°) of the main experiment. Under these conditions, feature-based attentional effects were largely eliminated. Our results are consistent with the hypothesis that feature-based attention prioritizes the processing of attended features. Feature-based attentional mechanisms may have helped direct the attentional focus to the appropriate target locations at greater separations, whereas similar assistance may not have been necessary at closer target spacings. The results of the present study specify conditions under which dual-task performance benefits from sharing similar target features and may therefore help elucidate the processes by which feature-based attention operates.
Giraudet, L; Imbert, J-P; Bérenger, M; Tremblay, S; Causse, M
2015-11-01
The Air Traffic Control (ATC) environment is complex and safety-critical. Whilst exchanging information with pilots, controllers must also be alert to visual notifications displayed on the radar screen (e.g., a warning indicating a loss of minimum separation between aircraft). Under the assumption that attentional resources are shared between vision and hearing, the visual interface design may also impact the ability to process these auditory stimuli. Using a simulated ATC task, we compared the behavioral and neural responses to two different visual notification designs--the operational alarm that involves blinking colored "ALRT" displayed around the label of the notified plane ("Color-Blink"), and the more salient alarm involving the same blinking text plus four moving yellow chevrons ("Box-Animation"). Participants performed a concurrent auditory task with the requirement to react to rare pitch tones. The P300 elicited by these tones was taken as an indicator of remaining attentional resources. Participants who were presented with the more salient visual design showed better accuracy than the group with the suboptimal operational design. On a physiological level, auditory P300 amplitude in the former group was greater than that observed in the latter group. One potential explanation is that the enhanced visual design freed up attentional resources which, in turn, improved the cerebral processing of the auditory stimuli. These results suggest that P300 amplitude can be used as a valid estimation of the efficiency of interface designs, and of cognitive load more generally. Copyright © 2015 Elsevier B.V. All rights reserved.
Attentional capture and engagement during the attentional blink: A "camera" metaphor of attention.
Zivony, Alon; Lamy, Dominique
2016-11-01
Identification of a target is impaired when it follows a previous target within 500 ms, suggesting that our attentional system suffers from severe temporal limitations. Although control-disruption theories posit that such impairment, known as the attentional blink (AB), reflects a difficulty in matching incoming information with the current attentional set, disrupted-engagement theories propose that it reflects a delay in later processes leading to transient enhancement of potential targets. Here, we used a variant of the contingent-capture rapid serial visual presentation (RSVP) paradigm (Folk, Ester, & Troemel, 2009) to adjudicate these competing accounts. Our results show that a salient distractor that shares the target color captures attention to the same extent whether it appears within or outside the blink, thereby invalidating the notion that control over the attentional set is compromised during the blink. In addition, our results show that during the blink it is not the attention-capturing object itself but the item immediately following it that is selected, indicating that the AB manifests as a delay between attentional capture and attentional engagement. We therefore conclude that attentional capture and attentional engagement can be dissociated as separate stages of attentional selection. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Kéri, Szabolcs; Kiss, Imre; Kelemen, Oguz; Benedek, György; Janka, Zoltán
2005-10-01
Schizophrenia is associated with impaired visual information processing. The aim of this study was to investigate the relationship between anomalous perceptual experiences, positive and negative symptoms, perceptual organization, rapid categorization of natural images and magnocellular (M) and parvocellular (P) visual pathway functioning. Thirty-five unmedicated patients with schizophrenia and 20 matched healthy control volunteers participated. Anomalous perceptual experiences were assessed with the Bonn Scale for the Assessment of Basic Symptoms (BSABS). General intellectual functions were evaluated with the revised version of the Wechsler Adult Intelligence Scale. The 1-9 version of the Continuous Performance Test (CPT) was used to investigate sustained attention. The following psychophysical tests were used: detection of Gabor patches with collinear and orthogonal flankers (perceptual organization), categorization of briefly presented natural scenes (rapid visual processing), low-contrast and frequency-doubling vernier threshold (M pathway functioning), isoluminant colour vernier threshold and high spatial frequency discrimination (P pathway functioning). The patients with schizophrenia were impaired on tests of perceptual organization, rapid visual processing and M pathway functioning. There was a significant correlation between BSABS scores, negative symptoms, perceptual organization, rapid visual processing and M pathway functioning. Positive symptoms, IQ, CPT and P pathway measures did not correlate with these parameters. The best predictor of the BSABS score was the perceptual organization deficit. These results raise the possibility that multiple facets of visual information processing deficits can be explained by M pathway dysfunctions in schizophrenia, resulting in impaired attentional modulation of perceptual organization and of natural image categorization.
Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses
Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli
2015-01-01
Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858
Lee, Sylvia E; Kibby, Michelle Y; Cohen, Morris J; Stanford, Lisa; Park, Yong; Strickland, Suzanne
2016-01-01
Prior research has shown that attention-deficit/hyperactivity disorder (ADHD) and epilepsy are frequently comorbid and that both disorders are associated with various attention and memory problems. Nonetheless, limited research has been conducted comparing the two disorders in one sample to determine unique versus shared deficits. Hence, we investigated differences in working memory (WM) and short-term and delayed recall between children with ADHD, focal epilepsy of mixed foci, comorbid ADHD/epilepsy and controls. Participants were compared on the Core subtests and the Picture Locations subtest of the Children's Memory Scale (CMS). Results indicated that children with ADHD displayed intact verbal WM and long-term memory (LTM), as well as intact performance on most aspects of short-term memory (STM). They performed worse than controls on Numbers Forward and Picture Locations, suggesting problems with focused attention and simple span for visual-spatial material. Conversely, children with epilepsy displayed poor focused attention and STM regardless of the modality assessed, which affected encoding into LTM. The only loss over time was found for passages (Stories). WM was intact. Children with comorbid ADHD/epilepsy displayed focused attention and STM/LTM problems consistent with both disorders, having the lowest scores across the four groups. Hence, focused attention and visual-spatial span appear to be affected in both disorders, whereas additional STM/encoding problems are specific to epilepsy. Children with comorbid ADHD/epilepsy have deficits consistent with both disorders, with slight additive effects. This study suggests that attention and memory testing should be a regular part of the evaluation of children with epilepsy and ADHD.
Role of Gestalt grouping in selective attention: evidence from the Stroop task.
Lamers, Martijn J M; Roelofs, Ardi
2007-11-01
Selective attention has been intensively studied using the Stroop task. Evidence suggests that Stroop interference in a color-naming task arises partly because of visual attention sharing between color and word: Removing the target color after 150 msec reduces interference (Neumann, 1986). Moreover, removing both the color and the word simultaneously reduces interference less than does removing the color only (La Heij, van der Heijden, & Plooij, 2001). These findings could also be attributed to Gestalt grouping principles, such as common fate. We report three experiments in which the role of Gestalt grouping was further investigated. Experiment 1 replicated the reduced interference, using words and color patches. In Experiment 2, the color patch was not removed but only repositioned (<2 degrees) after 100 msec, which also reduced interference. In Experiment 3, the distractor was repositioned while the target remained stationary, again reducing interference. These results indicate a role for Gestalt grouping in selective attention.
Craston, Patrick; Wyble, Brad; Chennu, Srivas; Bowman, Howard
2009-03-01
Observers often miss a second target (T2) if it follows an identified first target item (T1) within half a second in rapid serial visual presentation (RSVP), a finding termed the attentional blink. If two targets are presented in immediate succession, however, accuracy is excellent (Lag 1 sparing). The resource sharing hypothesis proposes a dynamic distribution of resources over a time span of up to 600 msec during the attentional blink. In contrast, the ST(2) model argues that working memory encoding is serial during the attentional blink and that, due to joint consolidation, Lag 1 is the only case where resources are shared. Experiment 1 investigates the P3 ERP component evoked by targets in RSVP. The results suggest that, in this context, P3 amplitude is an indication of bottom-up strength rather than a measure of cognitive resource allocation. Experiment 2, employing a two-target paradigm, suggests that T1 consolidation is not affected by the presentation of T2 during the attentional blink. However, if targets are presented in immediate succession (Lag 1 sparing), they are jointly encoded into working memory. We use the ST(2) model's neural network implementation, which replicates a range of behavioral results related to the attentional blink, to generate "virtual ERPs" by summing across activation traces. We compare virtual to human ERPs and show how the results suggest a serial nature of working memory encoding as implied by the ST(2) model.
Wästlund, Erik; Shams, Poja; Otterbring, Tobias
2018-01-01
In visual marketing, the truism that "unseen is unsold" means that products that are not noticed will not be sold. This truism rests on the idea that the consumer choice process is heavily influenced by visual search. However, given that the majority of available products are not seen by consumers, this article examines the role of peripheral vision in guiding attention during the consumer choice process. In two eye-tracking studies, one conducted in a lab facility and the other conducted in a supermarket, the authors investigate the role and limitations of peripheral vision. The results show that peripheral vision is used to direct visual attention when discriminating between target and non-target objects in an eye-tracking laboratory. Target and non-target similarity, as well as visual saliency of non-targets, constitute the boundary conditions for this effect, which generalizes from instruction-based laboratory tasks to preference-based choice tasks in a real supermarket setting. Thus, peripheral vision helps customers to devote a larger share of attention to relevant products during the consumer choice process. Taken together, the results show how the creation of consideration sets (sets of possible choice options) relies on both goal-directed attention and peripheral vision. These results could explain how visually similar packaging positively influences market leaders, while making novel brands almost invisible on supermarket shelves. The findings show that even though unsold products might be unseen, in the sense that they have not been directly observed, they might still have been evaluated and excluded by means of peripheral vision. This article is based on controlled lab experiments as well as a field study conducted in a complex retail environment. Thus, the findings are valid both under controlled and ecologically valid conditions. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
The attentive brain: insights from developmental cognitive neuroscience.
Amso, Dima; Scerif, Gaia
2015-10-01
Visual attention functions as a filter to select environmental information for learning and memory, making it the first step in the eventual cascade of thought and action systems. Here, we review studies of typical and atypical visual attention development and explain how they offer insights into the mechanisms of adult visual attention. We detail interactions between visual processing and visual attention, as well as the contribution of visual attention to memory. Finally, we discuss genetic mechanisms underlying attention disorders and how attention may be modified by training.
Attention induced neural response trade-off in retinotopic cortex under load.
Torralbo, Ana; Kelley, Todd A; Rees, Geraint; Lavie, Nilli
2016-09-14
The effects of perceptual load on visual cortex responses to distractors are well established, and various phenomena of 'inattentional blindness' associated with the elimination of visual cortex responses to unattended distractors have been documented in tasks of high load. Here we tested an account for these effects in terms of a load-induced trade-off between target and distractor processing in retinotopic visual cortex. Participants were scanned using fMRI while performing a visual-search task and ignoring distractor checkerboards in the periphery. Retinotopic responses to target and distractors were assessed as a function of search load (comparing search set-sizes two, three and five). We found that increased load not only increased activity in the frontoparietal network, but also had opposite effects on retinotopic responses to target and distractors. Target-related signals in areas V2-V3 increased linearly, while distractor responses decreased linearly, with increasing load. Critically, the slopes were equivalent for both load functions, thus demonstrating a resource trade-off. Load effects were also found in displays with the same number of items in the distractor hemisphere across different set sizes, thus ruling out local intrahemispheric interactions as the cause. Our findings provide new evidence for load theory proposals of attention resource sharing between target and distractor leading to inattentional blindness.
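As a small illustration of why equal and opposite slopes imply a trade-off (a restatement of the logic above, not an additional result): if target and distractor responses vary linearly with search load L as

\[
R_{\mathrm{target}}(L) = a_T + bL, \qquad R_{\mathrm{distractor}}(L) = a_D - bL ,
\]

then their sum, \(R_{\mathrm{target}}(L) + R_{\mathrm{distractor}}(L) = a_T + a_D\), is independent of load, which is the signature of a fixed pool of processing resources being redistributed between target and distractor rather than expanded.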
The Role of Visual Processing Speed in Reading Speed Development
Lobier, Muriel; Dubois, Matthieu; Valdois, Sylviane
2013-01-01
A steady increase in reading speed is the hallmark of normal reading acquisition. However, little is known about the influence of visual attention capacity on children's reading speed. The number of distinct visual elements that can be simultaneously processed at a glance (dubbed the visual attention span) predicts single-word reading speed in both normally reading and dyslexic children. However, the exact processes that account for the relationship between the visual attention span and reading speed remain to be specified. We used the Theory of Visual Attention to estimate visual processing speed and visual short-term memory capacity from a multiple-letter report task in eight- and nine-year-old children. The visual attention span and text reading speed were also assessed. Results showed that visual processing speed and visual short-term memory capacity predicted the visual attention span. Furthermore, visual processing speed predicted reading speed, but visual short-term memory capacity did not. Finally, the visual attention span mediated the effect of visual processing speed on reading speed. These results suggest that visual attention capacity could constrain reading speed in elementary school children. PMID:23593117
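For readers unfamiliar with TVA-style parameter estimation, the sketch below shows, under deliberately simplified assumptions, how a processing-speed parameter C, a storage capacity K, and a perceptual threshold t0 might be fitted to whole-report scores. It is not the maximum-likelihood procedure used in the TVA literature; the equal-rate race, the hard capacity cap, the function name, and the example numbers are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def tva_whole_report(t, C, K, t0, n_items=6):
    """Expected number of letters reported from an n-item display after an
    exposure of t ms, under a simplified TVA-style race: items accrue evidence
    at equal rates summing to C (items/ms), and at most K items fit in VSTM."""
    t_eff = np.clip(t - t0, 0, None)             # effective exposure duration
    p_item = 1 - np.exp(-(C / n_items) * t_eff)  # P(a given item finishes in time)
    return np.minimum(n_items * p_item, K)       # crude cap at storage capacity K

# Hypothetical mean report scores at several exposure durations (ms).
durations = np.array([10.0, 20.0, 50.0, 80.0, 140.0, 200.0])
scores = np.array([0.3, 0.9, 2.1, 2.8, 3.3, 3.5])

(C, K, t0), _ = curve_fit(tva_whole_report, durations, scores,
                          p0=[0.05, 3.5, 10.0], bounds=(0, [1.0, 6.0, 50.0]))
print(f"C = {C:.3f} items/ms, K = {K:.2f} items, t0 = {t0:.1f} ms")
```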
Autism, Attention, and Alpha Oscillations: An Electrophysiological Study of Attentional Capture.
Keehn, Brandon; Westerfield, Marissa; Müller, Ralph-Axel; Townsend, Jeanne
2017-09-01
Autism spectrum disorder (ASD) is associated with deficits in adaptively orienting attention to behaviorally-relevant information. Neural oscillatory activity plays a key role in brain function and provides a high-resolution temporal marker of attention dynamics. Alpha band (8-12 Hz) activity is associated with both selecting task-relevant stimuli and filtering task-irrelevant information. The present study used electroencephalography (EEG) to examine alpha-band oscillatory activity associated with attentional capture in nineteen children with ASD and twenty-one age- and IQ-matched typically developing (TD) children. Participants completed a rapid serial visual presentation paradigm designed to investigate responses to behaviorally-relevant targets and contingent attention capture by task-irrelevant distractors, which either did or did not share a behaviorally-relevant feature. Participants also completed six minutes of eyes-open resting EEG. In contrast to their TD peers, children with ASD did not evidence posterior alpha desynchronization to behaviorally-relevant targets. Additionally, reduced target-related desynchronization and poorer target detection were associated with increased ASD symptomatology. TD children also showed behavioral and electrophysiological evidence of contingent attention capture, whereas children with ASD showed no behavioral facilitation or alpha desynchronization to distractors that shared a task-relevant feature. Lastly, children with ASD had significantly decreased resting alpha power, and for all participants increased resting alpha levels were associated with greater task-related alpha desynchronization. These results suggest that in ASD under-responsivity and impairments in orienting to salient events within their environment are reflected by atypical EEG oscillatory neurodynamics, which may signify atypical arousal levels and/or an excitatory/inhibitory imbalance.
Zhou, Zhi; Arce, Gonzalo R; Di Crescenzo, Giovanni
2006-08-01
Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning and hinder the objectives of visual cryptography. Extended visual cryptography [1] was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on the blue-noise dithering principles, the proposed method utilizes the void and cluster algorithm [2] to encode a secret binary image into n halftone shares (images) carrying significant visual information. The simulation shows that the visual quality of the obtained halftone shares is observably better than that attained by any available visual cryptography method known to date.
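For context, the baseline construction that halftone and extended visual cryptography build on is the classic (2, 2) scheme, sketched below under standard textbook assumptions: each secret pixel expands into a 2x2 block, white pixels receive identical random patterns on both shares, and black pixels receive complementary ones. This is not the blue-noise void-and-cluster method of the abstract; the function name and toy data are illustrative only.

```python
import numpy as np

# Candidate 2x2 subpixel patterns, each with two black (1) and two white (0) cells.
PATTERNS = [
    np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]]),
    np.array([[1, 1], [0, 0]]), np.array([[0, 0], [1, 1]]),
    np.array([[1, 0], [1, 0]]), np.array([[0, 1], [0, 1]]),
]

def encode_2_of_2(secret, rng=None):
    """Split a binary secret image (1 = black) into two random-looking shares.

    Each secret pixel becomes a 2x2 block of subpixels. White pixels put the
    same pattern on both shares; black pixels put complementary patterns, so
    stacking (pixel-wise OR) turns black pixels fully black."""
    rng = np.random.default_rng(rng)
    h, w = secret.shape
    share1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    share2 = np.zeros_like(share1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(len(PATTERNS))]
            share1[2*i:2*i+2, 2*j:2*j+2] = p
            share2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
    return share1, share2

# Stacking transparencies corresponds to a pixel-wise OR of the shares.
secret = (np.random.default_rng(0).random((8, 8)) > 0.5).astype(np.uint8)
s1, s2 = encode_2_of_2(secret, rng=1)
stacked = np.maximum(s1, s2)
```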
When is perception top-down and when is it not? Culture, narrative, and attention.
Senzaki, Sawa; Masuda, Takahiko; Ishii, Keiko
2014-01-01
Previous findings in cultural psychology indicated that East Asians are more likely than North Americans to be attentive to contextual information (e.g., Nisbett & Masuda). However, to what extent and in which conditions culture influences patterns of attention has not been fully examined. As a result, universal patterns of attention may be obscured, and culturally unique patterns may be wrongly assumed to be constant across situations. By carrying out two cross-cultural studies, we demonstrated that (a) both European Canadians and Japanese attended to moving objects similarly when the task was to simply observe the visual information; however, (b) there were cultural variations in patterns of attention when participants actively engaged in the task by constructing narratives of their observation (narrative construction). These findings suggest that cultural effects are most pronounced in narrative construction conditions, where the need to act in accordance with a culturally shared meaning system is elicited. © 2014 Cognitive Science Society, Inc.
NASA Technical Reports Server (NTRS)
Wickens, Christopher; Vieanne, Alex; Clegg, Benjamin; Sebok, Angelia; Janes, Jessica
2015-01-01
Fifty-six participants time-shared a spacecraft environmental control system task with a realistic space robotic arm control task in either a manual or highly automated version. The former could suffer minor failures, whose diagnosis and repair were supported by a decision aid. At the end of the experiment this decision aid unexpectedly failed. We measured visual attention allocation and switching between the two tasks in each of the eight conditions formed by manual-automated arm X expected-unexpected failure X monitoring-failure management. We also used our multi-attribute task switching model, based on task attributes of priority, interest, difficulty, and salience that were self-rated by participants, to predict allocation. An unweighted model based on attributes of difficulty, interest, and salience accounted for 96 percent of the task allocation variance across the 8 different conditions. Task difficulty served as an attractor, with more difficult tasks increasing the tendency to stay on task.
Kopp, Bruno; Wessel, Karl
2010-05-01
In the present study, event-related potentials (ERPs) were recorded to investigate cognitive processes related to the partial transmission of information from stimulus recognition to response preparation. Participants classified two-dimensional visual stimuli with dimensions size and form. One feature combination was designated as the go-target, whereas the other three feature combinations served as no-go distractors. Size discriminability was manipulated across three experimental conditions. N2c and P3a amplitudes were enhanced in response to those distractors that shared the feature from the faster dimension with the target. Moreover, N2c and P3a amplitudes showed a crossover effect: Size distractors evoked more pronounced ERPs under high size discriminability, but form distractors elicited enhanced ERPs under low size discriminability. These results suggest that partial perceptual-motor transmission of information is accompanied by acts of cognitive control and by shifts of attention between the sources of conflicting information. Selection negativity findings imply adaptive allocation of visual feature-based attention across the two stimulus dimensions.
Synchronization of spontaneous eyeblinks while viewing video stories
Nakano, Tamami; Yamamoto, Yoshiharu; Kitajo, Keiichi; Takahashi, Toshimitsu; Kitazawa, Shigeru
2009-01-01
Blinks are generally suppressed during a task that requires visual attention and tend to occur immediately before or after the task when the timing of its onset and offset are explicitly given. During the viewing of video stories, blinks are expected to occur at explicit breaks such as scene changes. However, given that the scene length is unpredictable, there should also be appropriate timing for blinking within a scene to prevent temporal loss of critical visual information. Here, we show that spontaneous blinks were highly synchronized between and within subjects when they viewed the same short video stories, but were not explicitly tied to the scene breaks. Synchronized blinks occurred during scenes that required less attention such as at the conclusion of an action, during the absence of the main character, during a long shot and during repeated presentations of a similar scene. In contrast, blink synchronization was not observed when subjects viewed a background video or when they listened to a story read aloud. The results suggest that humans share a mechanism for controlling the timing of blinks that searches for an implicit timing that is appropriate to minimize the chance of losing critical information while viewing a stream of visual events. PMID:19640888
Keeping your eyes on the prize: anger and visual attention to threats and rewards.
Ford, Brett Q; Tamir, Maya; Brunyé, Tad T; Shirer, William R; Mahoney, Caroline R; Taylor, Holly A
2010-08-01
People's emotional states influence what they focus their attention on in their environment. For example, fear focuses people's attention on threats, whereas excitement may focus their attention on rewards. This study examined the effect of anger on overt visual attention to threats and rewards. Anger is an unpleasant emotion associated with approach motivation. If the effect of emotion on visual attention depends on valence, we would expect anger to focus people's attention on threats. If, however, the effect of emotion on visual attention depends on motivation, we would expect anger to focus people's attention on rewards. Using an eye tracker, we examined the effects of anger, fear, excitement, and a neutral emotional state on participants' overt visual attention to threatening, rewarding, and control images. We found that anger increased visual attention to rewarding information, but not to threatening information. These findings demonstrate that anger increases attention to potential rewards and suggest that the effects of emotions on visual attention are motivationally driven.
Color extended visual cryptography using error diffusion.
Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu
2011-01-01
Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black and white or gray scale VC schemes, however, they are not sufficient to be applied directly to color shares due to different color structures. Some methods for color visual cryptography are not satisfactory in terms of producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of original images throughout the color channels and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
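Error diffusion is the halftoning primitive on which the VIP-synchronization scheme above is layered. The following is a minimal, generic Floyd-Steinberg sketch for a single image channel; it does not implement VIP synchronization or share generation, and the function name is an illustrative assumption.

```python
import numpy as np

def floyd_steinberg(channel):
    """Binarize one grayscale/color channel (values in [0, 1]) by diffusing
    quantization error to neighbouring pixels with Floyd-Steinberg weights."""
    img = channel.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

halftone = floyd_steinberg(np.random.default_rng(0).random((64, 64)))
```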
Neuronal basis of covert spatial attention in the frontal eye field.
Thompson, Kirk G; Biscoe, Keri L; Sato, Takashi R
2005-10-12
The influential "premotor theory of attention" proposes that developing oculomotor commands mediate covert visual spatial attention. A likely source of this attentional bias is the frontal eye field (FEF), an area of the frontal cortex involved in converting visual information into saccade commands. We investigated the link between FEF activity and covert spatial attention by recording from FEF visual and saccade-related neurons in monkeys performing covert visual search tasks without eye movements. Here we show that the source of attention signals in the FEF is enhanced activity of visually responsive neurons. At the time attention is allocated to the visual search target, nonvisually responsive saccade-related movement neurons are inhibited. Therefore, in the FEF, spatial attention signals are independent of explicit saccade command signals. We propose that spatially selective activity in FEF visually responsive neurons corresponds to the mental spotlight of attention via modulation of ongoing visual processing.
Effects of feature-based attention on the motion aftereffect at remote locations.
Boynton, Geoffrey M; Ciaramitaro, Vivian M; Arman, A Cyrus
2006-09-01
Previous studies have shown that attention to a particular stimulus feature, such as direction of motion or color, enhances neuronal responses to unattended stimuli sharing that feature. We studied this effect psychophysically by measuring the strength of the motion aftereffect (MAE) induced by an unattended stimulus when attention was directed to one of two overlapping fields of moving dots in a different spatial location. When attention was directed to the same direction of motion as the unattended stimulus, the unattended stimulus induced a stronger MAE than when attention was directed to the opposite direction. Also, when the unattended location contained uncorrelated motion or no stimulus at all, an MAE was induced in the direction opposite to the attended direction of motion. The strength of the MAE was similar regardless of whether subjects attended to the speed or luminance of the attended dots. These results provide further support for a global feature-based mechanism of attention, and show that the effect spreads across all features of an attended object, and to all locations of visual space.
Sasson, Noah J; Pinkham, Amy E; Weittenhiller, Lauren P; Faso, Daniel J; Simpson, Claire
2016-05-01
Although Schizophrenia (SCZ) and Autism Spectrum Disorder (ASD) share impairments in emotion recognition, the mechanisms underlying these impairments may differ. The current study used the novel "Emotions in Context" task to examine how the interpretation and visual inspection of facial affect is modulated by congruent and incongruent emotional contexts in SCZ and ASD. Both adults with SCZ (n= 44) and those with ASD (n= 21) exhibited reduced affect recognition relative to typically-developing (TD) controls (n= 39) when faces were integrated within broader emotional scenes but not when they were presented in isolation, underscoring the importance of using stimuli that better approximate real-world contexts. Additionally, viewing faces within congruent emotional scenes improved accuracy and visual attention to the face for controls more so than the clinical groups, suggesting that individuals with SCZ and ASD may not benefit from the presence of complementary emotional information as readily as controls. Despite these similarities, important distinctions between SCZ and ASD were found. In every condition, IQ was related to emotion-recognition accuracy for the SCZ group but not for the ASD or TD groups. Further, only the ASD group failed to increase their visual attention to faces in incongruent emotional scenes, suggesting a lower reliance on facial information within ambiguous emotional contexts relative to congruent ones. Collectively, these findings highlight both shared and distinct social cognitive processes in SCZ and ASD that may contribute to their characteristic social disabilities. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Effects of visual attention on chromatic and achromatic detection sensitivities.
Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko
2014-05-01
Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual-task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while concurrently carrying out an attention task in the central visual field. In experiment 1, we confirmed that peripheral spectral sensitivities were reduced more for short and long wavelengths than for middle wavelengths under the central attention task, so that visual attention changed the shape of the spectral sensitivity function. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. In experiment 2, detection thresholds increased to a greater degree in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction in the dual-task condition. In experiment 3, we showed that the peripheral threshold elevations depended on the combination of color directions of the central and peripheral stimuli. Since the chromatic and achromatic responses are processed separately in the early visual pathways, the present results provide additional evidence that visual attention affects responses in the early visual pathways.
Age-related changes in conjunctive visual search in children with and without ASD.
Iarocci, Grace; Armstrong, Kimberly
2014-04-01
Visual-spatial strengths observed among people with autism spectrum disorder (ASD) may be associated with increased efficiency of selective attention mechanisms such as visual search. In a series of studies, researchers examined the visual search of targets that share features with distractors in a visual array and concluded that people with ASD showed enhanced performance on visual search tasks. However, methodological limitations, the small sample sizes, and the lack of developmental analysis have tempered the interpretations of these results. In this study, we specifically addressed age-related changes in visual search. We examined conjunctive visual search in groups of children with (n = 34) and without ASD (n = 35) at 7-9 years of age when visual search performance is beginning to improve, and later, at 10-12 years, when performance has improved. The results were consistent with previous developmental findings; 10- to 12-year-old children were significantly faster visual searchers than their 7- to 9-year-old counterparts. However, we found no evidence of enhanced search performance among the children with ASD at either the younger or older ages. More research is needed to understand the development of visual search in both children with and without ASD. © 2014 International Society for Autism Research, Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.
2005-01-01
Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…
Lin, Hung-Yu; Hsieh, Hsieh-Chun; Lee, Posen; Hong, Fu-Yuan; Chang, Wen-Dien; Liu, Kuo-Cheng
2017-08-01
This study explored auditory and visual attention in children with ADHD. In a randomized, two-period crossover design, 50 children with ADHD and 50 age- and sex-matched typically developing peers were assessed with the Test of Variables of Attention (TOVA). The deficit in visual attention was more serious than that in auditory attention in children with ADHD. In the auditory modality, only the deficit in attentional inconsistency was sufficient to explain most cases of ADHD; in the visual modality, however, most of the children with ADHD showed deficits in sustained attention, response inhibition, and attentional inconsistency. Our results also showed that the deficit in attentional inconsistency is the most important indicator for diagnosing and intervening in ADHD when both auditory and visual modalities are considered. The findings provide strong evidence that the deficits in auditory attention differ from those in visual attention in children with ADHD.
ERP signs of categorical and supra-categorical processing of visual information.
Zani, Alberto; Marsili, Giulia; Senerchia, Annapaola; Orlandi, Andrea; Citron, Francesca M M; Rizzi, Ezia; Proverbio, Alice M
2015-01-01
The aim of the present study was to investigate to what extent shared and distinct brain mechanisms are possibly subserving the processing of visual supra-categorical and categorical knowledge as observed with event-related potentials of the brain. Access time to these knowledge types was also investigated. Picture pairs of animals, objects, and mixed types were presented. Participants were asked to decide whether each pair contained pictures belonging to the same category (either animals or man-made objects) or to different categories by pressing one of two buttons. Response accuracy and reaction times (RTs) were also recorded. Both ERPs and RTs were grand-averaged separately for the same-different supra-categories and the animal-object categories. Behavioral performance was faster for more endomorphic pairs, i.e., animals vs. objects and same vs. different category pairs. For ERPs, a modulation of the earliest C1 and subsequent P1 responses to the same vs. different supra-category pairs, but not to the animal vs. object category pairs, was found. This finding supports the view that early afferent processing in the striate cortex can be boosted as a by-product of attention allocated to the processing of shapes and basic features that are mismatched, but not to their semantic quintessence, during same-different supra-categorical judgment. Most importantly, the fact that this processing accrual occurred independent of a traditional experimental condition requiring selective attention to a stimulus source out of the various sources addressed makes it conceivable that this processing accrual may arise from the attentional demand deriving from the alternate focusing of visual attention within and across stimulus categorical pairs' basic structural features. Additional posterior ERP reflections of the brain more prominently processing animal category and same-category pairs were observed at the N1 and N2 levels, respectively, as well as at a late positive complex level, overall most likely related to different stages of analysis of the greater endomorphy of these shape groups. Conversely, an enhanced fronto-central and fronto-lateral N2 as well as a centro-parietal N400 to man-made objects and different-category pairs were found, possibly indexing processing of these entities' lower endomorphy and isomorphy at the basic features and semantic levels, respectively. Overall, the present ERP results revealed shared and distinct mechanisms of access to supra-categorical and categorical knowledge in the same way in which shared and distinct neural representations underlie the processing of diverse semantic categories. Additionally, they outlined the serial nature of categorical and supra-categorical representations, indicating the sequential steps of access to these separate knowledge types. Copyright © 2014 Elsevier B.V. All rights reserved.
Grossberg, Stephen; Palma, Jesse; Versace, Massimiliano
2015-01-01
Freely behaving organisms need to rapidly calibrate their perceptual, cognitive, and motor decisions based on continuously changing environmental conditions. These plastic changes include sharpening or broadening of cognitive and motor attention and learning to match the behavioral demands that are imposed by changing environmental statistics. This article proposes that a shared circuit design for such flexible decision-making is used in specific cognitive and motor circuits, and that both types of circuits use acetylcholine to modulate choice selectivity. Such task-sensitive control is proposed to control thalamocortical choice of the critical features that are cognitively attended and that are incorporated through learning into prototypes of visual recognition categories. A cholinergically-modulated process of vigilance control determines if a recognition category and its attended features are abstract (low vigilance) or concrete (high vigilance). Homologous neural mechanisms of cholinergic modulation are proposed to focus attention and learn a multimodal map within the deeper layers of superior colliculus. This map enables visual, auditory, and planned movement commands to compete for attention, leading to selection of a winning position that controls where the next saccadic eye movement will go. Such map learning may be viewed as a kind of attentive motor category learning. The article hereby explicates a link between attention, learning, and cholinergic modulation during decision making within both cognitive and motor systems. Homologs between the mammalian superior colliculus and the avian optic tectum lead to predictions about how multimodal map learning may occur in the mammalian and avian brain and how such learning may be modulated by acetylcholine. PMID:26834535
Common mechanisms of spatial attention in memory and perception: a tactile dual-task study.
Katus, Tobias; Andersen, Søren K; Müller, Matthias M
2014-03-01
Orienting attention to locations in mnemonic representations engages processes that functionally and anatomically overlap the neural circuitry guiding prospective shifts of spatial attention. The attention-based rehearsal account predicts that the requirement to withdraw attention from a memorized location impairs memory accuracy. In a dual-task study, we simultaneously presented retro-cues and pre-cues to guide spatial attention in short-term memory (STM) and perception, respectively. The spatial direction of each cue was independent of the other. The locations indicated by the combined cues could be compatible (same hand) or incompatible (opposite hands). Incompatible directional cues decreased lateralized activity in brain potentials evoked by visual cues, indicating interference in the generation of prospective attention shifts. The detection of external stimuli at the prospectively cued location was impaired when the memorized location was part of the perceptually ignored hand. The disruption of attention-based rehearsal by means of incompatible pre-cues reduced memory accuracy and affected encoding of tactile test stimuli at the retrospectively cued hand. These findings highlight the functional significance of spatial attention for spatial STM. The bidirectional interactions between both tasks demonstrate that spatial attention is a shared neural resource of a capacity-limited system that regulates information processing in internal and external stimulus representations.
Project DyAdd: Visual Attention in Adult Dyslexia and ADHD
ERIC Educational Resources Information Center
Laasonen, Marja; Salomaa, Jonna; Cousineau, Denis; Leppamaki, Sami; Tani, Pekka; Hokkanen, Laura; Dye, Matthew
2012-01-01
In this study of the project DyAdd, three aspects of visual attention were investigated in adults (18-55 years) with dyslexia (n = 35) or attention deficit/hyperactivity disorder (ADHD, n = 22), and in healthy controls (n = 35). Temporal characteristics of visual attention were assessed with Attentional Blink (AB), capacity of visual attention…
A probabilistic model of overt visual attention for cognitive robots.
Begum, Momotaz; Karray, Fakhri; Mann, George K I; Gosine, Raymond G
2010-10-01
Visual attention is one of the major requirements for a robot to serve as a cognitive companion for humans. Robotic visual attention is mostly concerned with overt attention, which accompanies the head and eye movements of a robot. In this case, each movement of the camera head triggers a number of events, namely transformation of the camera and image coordinate systems, change of the content of the visual field, and partial appearance of the stimuli. All of these events reduce the probability of meaningful identification of the next focus of attention. These events are specific to overt attention with head movement and, therefore, their effects are not addressed in classical models of covert visual attention. This paper proposes a Bayesian model as a robot-centric solution for the overt visual attention problem. The proposed model, while taking inspiration from the primate visual attention mechanism, guides a robot to direct its camera toward behaviorally relevant and/or visually demanding stimuli. A particle filter implementation of this model addresses the challenges involved in overt attention with head movement. Experimental results demonstrate the performance of the proposed model.
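The abstract does not spell out the implementation, but a bootstrap particle filter of the general kind it mentions can be sketched as follows; the two-dimensional state, the simple translation handling of head movement, the saliency-as-likelihood assumption, and all names are illustrative assumptions rather than the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(particles, weights, saliency_fn, camera_shift, motion_noise=2.0):
    """One bootstrap particle-filter update for the next focus of attention.

    particles: (N, 2) candidate attention locations in image coordinates.
    camera_shift: (2,) translation of the image frame after a head movement
        (hypothetical; a real robot would apply the full camera model).
    saliency_fn: maps an (N, 2) array of locations to non-negative saliency
        values acting as the observation likelihood."""
    # Predict: re-express particles in the new camera frame and add noise.
    particles = particles - camera_shift + rng.normal(0, motion_noise, particles.shape)
    # Update: weight particles by how salient (behaviorally relevant) they are.
    weights = weights * saliency_fn(particles)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy usage: a Gaussian saliency bump at image location (60, 40).
N = 500
particles = rng.uniform(0, 100, size=(N, 2))
weights = np.full(N, 1.0 / N)
saliency = lambda p: np.exp(-np.sum((p - np.array([60.0, 40.0])) ** 2, axis=1) / 200.0)
particles, weights = step(particles, weights, saliency, camera_shift=np.array([5.0, 0.0]))
```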
Guerin, Scott A.; Robbins, Clifford A.; Gilmore, Adrian W.; Schacter, Daniel L.
2012-01-01
The interaction between episodic retrieval and visual attention is relatively unexplored. Given that systems mediating attention and episodic memory appear to be segregated, and perhaps even in competition, it is unclear how visual attention is recruited during episodic retrieval. We investigated the recruitment of visual attention during the suppression of gist-based false recognition, the tendency to falsely recognize items that are similar to previously encountered items. Recruitment of visual attention was associated with activity in the dorsal attention network. The inferior parietal lobule, often implicated in episodic retrieval, tracked veridical retrieval of perceptual detail and showed reduced activity during the engagement of visual attention, consistent with a competitive relationship with the dorsal attention network. These findings suggest that the contribution of the parietal cortex to interactions between visual attention and episodic retrieval entails distinct systems that contribute to different components of the task while also suppressing each other. PMID:22998879
A model for the pilot's use of motion cues in roll-axis tracking tasks
NASA Technical Reports Server (NTRS)
Levison, W. H.; Junker, A. M.
1977-01-01
Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.
Attention and Visuospatial Working Memory Share the Same Processing Resources
Feng, Jing; Pratt, Jay; Spence, Ian
2012-01-01
Attention and visuospatial working memory (VWM) share very similar characteristics; both have the same upper bound of about four items in capacity and they recruit overlapping brain regions. We examined whether both attention and VWM share the same processing resources using a novel dual-task costs approach based on a load-varying dual-task technique. With sufficiently large loads on attention and VWM, considerable interference between the two processes was observed. A further load increase on either process produced reciprocal increases in interference on both processes, indicating that attention and VWM share common resources. More critically, comparison among four experiments on the reciprocal interference effects, as measured by the dual-task costs, demonstrates no significant contribution from additional processing other than the shared processes. These results support the notion that attention and VWM share the same processing resources. PMID:22529826
de la Serna, Elena; Sugranyes, Gisela; Sanchez-Gistau, Vanessa; Rodriguez-Toscano, Elisa; Baeza, Immaculada; Vila, Montserrat; Romero, Soledad; Sanchez-Gutierrez, Teresa; Penzol, Mª José; Moreno, Dolores; Castro-Fornieles, Josefina
2017-05-01
Schizophrenia (SZ) and bipolar disorder (BD) are considered neurobiological disorders which share some clinical, cognitive and neuroimaging characteristics. Studying child and adolescent offspring of patients diagnosed with bipolar disorder (BDoff) or schizophrenia (SZoff) is regarded as a reliable method for investigating early alterations and vulnerability factors for these disorders. This study compares the neuropsychological characteristics of SZoff, BDoff and a community control offspring group (CC) with the aim of examining shared and differential cognitive characteristics among groups. 41 SZoff, 90 BDoff and 107 CC were recruited. They were all assessed with a complete neuropsychological battery which included intelligence quotient, working memory (WM), processing speed, verbal memory and learning, visual memory, executive functions and sustained attention. SZoff and BDoff showed worse performance in some cognitive areas compared with CC. Some of these difficulties (visual memory) were common to both offspring groups, whereas others, such as verbal learning and WM in SZoff or PSI in BDoff, were group-specific. The cognitive difficulties in visual memory shown by both the SZoff and BDoff groups might point to a common endophenotype in the two disorders. Difficulties in other cognitive functions would be specific depending on the family diagnosis. Copyright © 2016 Elsevier B.V. All rights reserved.
Threat captures attention but does not affect learning of contextual regularities.
Yamaguchi, Motonori; Harwood, Sarah L
2017-04-01
Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.
ERIC Educational Resources Information Center
Skorich, Daniel P.; Gash, Tahlia B.; Stalker, Katie L.; Zheng, Lidan; Haslam, S. Alexander
2017-01-01
The social difficulties of autism spectrum disorder (ASD) are typically explained as a disruption in the Shared Attention Mechanism (SAM) sub-component of the theory of mind (ToM) system. In the current paper, we explore the hypothesis that SAM's capacity to construct the self-other-object relations necessary for shared-attention arises from a…
Kraft, Antje; Dyrholm, Mads; Kehrer, Stefanie; Kaufmann, Christian; Bruening, Jovita; Kathmann, Norbert; Bundesen, Claus; Irlbacher, Kerstin; Brandt, Stephan A
2015-01-01
Several studies have demonstrated a bilateral field advantage (BFA) in early visual attentional processing, that is, enhanced visual processing when stimuli are spread across both visual hemifields. The results are reminiscent of a hemispheric resource model of parallel visual attentional processing, suggesting more attentional resources on an early level of visual processing for bilateral displays [e.g. Sereno AB, Kosslyn SM. Discrimination within and between hemifields: a new constraint on theories of attention. Neuropsychologia 1991;29(7):659-75.]. Several studies have shown that the BFA extends beyond early stages of visual attentional processing, demonstrating that visual short term memory (VSTM) capacity is higher when stimuli are distributed bilaterally rather than unilaterally. Here we examine whether hemisphere-specific resources are also evident on later stages of visual attentional processing. Based on the Theory of Visual Attention (TVA) [Bundesen C. A theory of visual attention. Psychol Rev 1990;97(4):523-47.] we used a whole report paradigm that allows investigating visual attention capacity variability in unilateral and bilateral displays during navigated repetitive transcranial magnetic stimulation (rTMS) of the precuneus region. A robust BFA in VSTM storage capacity was apparent after rTMS over the left precuneus and in the control condition without rTMS. In contrast, the BFA diminished with rTMS over the right precuneus. This finding indicates that the right precuneus plays a causal role in VSTM capacity, particularly in bilateral visual displays. Copyright © 2015 Elsevier Inc. All rights reserved.
The Influence of Selective and Divided Attention on Audiovisual Integration in Children.
Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong
2016-01-24
This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and the divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both the divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference was found between the unimodal visual and bimodal audiovisual stimuli in response speed. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.
Recovery of Visual Search following Moderate to Severe Traumatic Brain Injury
Schmitter-Edgecombe, Maureen; Robertson, Kayela
2015-01-01
Introduction: Deficits in attentional abilities can significantly impact rehabilitation and recovery from traumatic brain injury (TBI). This study investigated the nature and recovery of pre-attentive (parallel) and attentive (serial) visual search abilities after TBI. Methods: Participants were 40 individuals with moderate to severe TBI who were tested following emergence from post-traumatic amnesia and approximately 8 months post-injury, as well as 40 age- and education-matched controls. Pre-attentive (automatic) and attentive (controlled) visual search situations were created by manipulating the saliency of the target item amongst distractor items in visual displays. The relationship between pre-attentive and attentive visual search rates and follow-up community integration was also explored. Results: The results revealed intact parallel (automatic) processing skills in the TBI group both post-acutely and at follow-up. In contrast, when attentional demands on visual search were increased by reducing the saliency of the target, the TBI group demonstrated poorer performance compared to the control group both post-acutely and 8 months post-injury. Neither pre-attentive nor attentive visual search slope values correlated with follow-up community integration. Conclusions: These results suggest that utilizing intact pre-attentive visual search skills during rehabilitation may help to reduce high mental workload situations, thereby improving the rehabilitation process. For example, making commonly used objects more salient in the environment should increase reliance on more automatic visual search processes and reduce visual search time for individuals with TBI. PMID:25671675
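The "visual search rates" and "slope values" referred to above are conventionally the slope of mean reaction time regressed on display set size (ms per item), with near-zero slopes indicating parallel (pre-attentive) search and steep slopes indicating serial (attentive) search. A minimal sketch with invented numbers:

```python
import numpy as np

# Hypothetical mean reaction times (ms) for displays of 4, 8, and 16 items.
set_sizes = np.array([4, 8, 16], dtype=float)
rt_preattentive = np.array([520, 525, 523], dtype=float)  # flat: parallel search
rt_attentive = np.array([610, 705, 905], dtype=float)     # steep: serial search

# Slope in ms per additional display item (and intercept), via least squares.
slope_pre, _ = np.polyfit(set_sizes, rt_preattentive, 1)
slope_att, _ = np.polyfit(set_sizes, rt_attentive, 1)
print(f"pre-attentive: {slope_pre:.1f} ms/item, attentive: {slope_att:.1f} ms/item")
```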
Infant Visual Attention and Object Recognition
Reynolds, Greg D.
2015-01-01
This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. PMID:25596333
Auditory and Visual Capture during Focused Visual Attention
ERIC Educational Resources Information Center
Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan
2009-01-01
It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets…
Skorich, Daniel P; Gash, Tahlia B; Stalker, Katie L; Zheng, Lidan; Haslam, S Alexander
2017-05-01
The social difficulties of autism spectrum disorder (ASD) are typically explained as a disruption in the Shared Attention Mechanism (SAM) sub-component of the theory of mind (ToM) system. In the current paper, we explore the hypothesis that SAM's capacity to construct the self-other-object relations necessary for shared-attention arises from a self-categorization process, which is weaker among those with more autistic-like traits. We present participants with self-categorization and shared-attention tasks, and measure their autism-spectrum quotient (AQ). Results reveal a negative relationship between AQ and shared-attention, via self-categorization, suggesting a role for self-categorization in the disruption in SAM seen in ASD. Implications for intervention, and for a ToM model in which weak central coherence plays a role are discussed.
Value-driven attentional capture in the auditory domain.
Anderson, Brian A
2016-01-01
It is now well established that the visual attention system is shaped by reward learning. When visual features are associated with a reward outcome, they acquire high priority and can automatically capture visual attention. To date, evidence for value-driven attentional capture has been limited entirely to the visual system. In the present study, I demonstrate that previously reward-associated sounds also capture attention, interfering more strongly with the performance of a visual task. This finding suggests that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value-driven attention can bias cross-modal stimulus competition.
Harris, Jill; Kamke, Marc R
2014-11-01
Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
The broadcast of shared attention and its impact on political persuasion.
Shteynberg, Garriy; Bramlett, James M; Fles, Elizabeth H; Cameron, Jaclyn
2016-11-01
In democracies where multitudes yield political influence, so does broadcast media that reaches those multitudes. However, broadcast media may not be powerful simply because it reaches a certain audience, but because each of the recipients is aware of that fact. That is, watching broadcast media can evoke a state of shared attention, or the perception of simultaneous coattention with others. Whereas past research has investigated the effects of shared attention with a few socially close others (i.e., friends, acquaintances, minimal ingroup members), we examine the impact of shared attention with a multitude of unfamiliar others in the context of televised broadcasting. In this paper, we explore whether shared attention increases the psychological impact of televised political speeches, and whether fewer numbers of coattending others diminishes this effect. Five studies investigate whether the perception of simultaneous coattention, or shared attention, on a mass broadcasted political speech leads to more extreme judgments. The results indicate that the perception of synchronous coattention (as compared with coattending asynchronously and attending alone) renders persuasive speeches more persuasive, and unpersuasive speeches more unpersuasive. We also find that recall memory for the content of the speech mediates the effect of shared attention on political persuasion. The results are consistent with the notion that shared attention on mass broadcasted information results in deeper processing of the content, rendering judgments more extreme. In all, our findings imply that shared attention is a cognitive capacity that supports large-scale social coordination, where multitudes of people can cognitively prioritize simultaneously coattended information. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Madden, David J.
2007-01-01
Older adults are often slower and less accurate than are younger adults in performing visual-search tasks, suggesting an age-related decline in attentional functioning. Age-related decline in attention, however, is not entirely pervasive. Visual search that is based on the observer’s expectations (i.e., top-down attention) is relatively preserved as a function of adult age. Neuroimaging research suggests that age-related decline occurs in the structure and function of brain regions mediating the visual sensory input, whereas activation of regions in the frontal and parietal lobes is often greater for older adults than for younger adults. This increased activation may represent an age-related increase in the role of top-down attention during visual tasks. To obtain a more complete account of age-related decline and preservation of visual attention, current research is beginning to explore the relation of neuroimaging measures of brain structure and function to behavioral measures of visual attention. PMID:18080001
A Componential Analysis of Visual Attention in Children With ADHD.
McAvinue, Laura P; Vangkilde, Signe; Johnson, Katherine A; Habekost, Thomas; Kyllingsbæk, Søren; Bundesen, Claus; Robertson, Ian H
2015-10-01
Inattentive behaviour is a defining characteristic of ADHD. Researchers have wondered about the nature of the attentional deficit underlying these symptoms. The primary purpose of the current study was to examine this attentional deficit using a novel paradigm based upon the Theory of Visual Attention (TVA). The TVA paradigm enabled a componential analysis of visual attention through the use of a mathematical model to estimate parameters relating to attentional selectivity and capacity. Children's ability to sustain attention was also assessed using the Sustained Attention to Response Task. The sample included a comparison between 25 children with ADHD and 25 control children aged 9-13. Children with ADHD had significantly impaired sustained attention and visual processing speed but intact attentional selectivity, perceptual threshold and visual short-term memory capacity. The results of this study lend support to the notion of differential impairment of attentional functions in children with ADHD. © 2012 SAGE Publications.
Visual attention shifting in autism spectrum disorders.
Richard, Annette E; Lajiness-O'Neill, Renee
2015-01-01
Abnormal visual attention has been frequently observed in autism spectrum disorders (ASD). Abnormal shifting of visual attention is related to abnormal development of social cognition and has been identified as a key neuropsychological finding in ASD. Better characterizing attention shifting in ASD and its relationship with social functioning may help to identify new targets for intervention and improving social communication in these disorders. Thus, the current study investigated deficits in attention shifting in ASD as well as relationships between attention shifting and social communication in ASD and neurotypicals (NT). To investigate deficits in visual attention shifting in ASD, 20 ASD and 20 age- and gender-matched NT completed visual search (VS) and Navon tasks with attention-shifting demands as well as a set-shifting task. VS was a feature search task with targets defined in one of two dimensions; Navon required identification of a target letter presented at the global or local level. Psychomotor and processing speed were entered as covariates. Relationships between visual attention shifting, set shifting, and social functioning were also examined. ASD and NT showed comparable costs of shifting attention. However, psychomotor and processing speed were slower in ASD than in NT, and psychomotor and processing speed were positively correlated with attention-shifting costs on Navon and VS, respectively, for both groups. Attention shifting on VS and Navon were correlated among NT, while attention shifting on Navon was correlated with set shifting among ASD. Attention-shifting costs on Navon were positively correlated with restricted and repetitive behaviors among ASD. Relationships between attention shifting and psychomotor and processing speed, as well as relationships between measures of different aspects of visual attention shifting, suggest inefficient top-down influences over preattentive visual processing in ASD. Inefficient attention shifting may be related to restricted and repetitive behaviors in these disorders.
Interactive Visualization of Healthcare Data Using Tableau.
Ko, Inseok; Chang, Hyejung
2017-10-01
Big data analysis is receiving increasing attention in many industries, including healthcare. Visualization plays an important role not only in intuitively showing the results of data analysis but also in the whole process of collecting, cleaning, analyzing, and sharing data. This paper presents a procedure for the interactive visualization and analysis of healthcare data using Tableau as a business intelligence tool. Starting with installation of the Tableau Desktop Personal version 10.3, this paper describes the process of understanding and visualizing healthcare data using an example. The example data of colon cancer patients were obtained from health insurance claims in the years 2012 and 2013, provided by the Health Insurance Review and Assessment Service. To explore the visualization of healthcare data using Tableau for beginners, this paper describes the creation of a simple view for the average length of stay of colon cancer patients. Since Tableau provides various visualizations and customizations, the level of analysis can be increased with small multiples, view filtering, mark cards, and Tableau charts. Tableau is software that helps users explore and understand their data by creating interactive visualizations. Its advantages are that it can be used in conjunction with almost any database and that it is easy to use: interactive visualizations in the desired format are created by dragging and dropping.
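Tableau builds such a view by dragging fields onto shelves, but the underlying aggregation is an ordinary group-by over the claims records. As a rough, non-Tableau illustration of what an "average length of stay" view computes, here is a pandas sketch; the column names and values are entirely hypothetical, since the actual claims fields are not given in the abstract.

```python
import pandas as pd

# Hypothetical claims extract; the column names and values are illustrative only.
claims = pd.DataFrame({
    "claim_year": [2012, 2012, 2012, 2013, 2013, 2013],
    "admission":  pd.to_datetime(["2012-03-01", "2012-05-10", "2012-07-21",
                                  "2013-02-11", "2013-06-30", "2013-09-05"]),
    "discharge":  pd.to_datetime(["2012-03-09", "2012-05-15", "2012-08-02",
                                  "2013-02-20", "2013-07-08", "2013-09-12"]),
})

# Length of stay in days for each admission.
claims["length_of_stay"] = (claims["discharge"] - claims["admission"]).dt.days

# The equivalent of a simple Tableau view: average length of stay per year.
print(claims.groupby("claim_year")["length_of_stay"].mean())
```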
Gherri, Elena; Eimer, Martin
2011-04-01
The ability to drive safely is disrupted by cell phone conversations, and this has been attributed to a diversion of attention from the visual environment. We employed behavioral and ERP measures to study whether the attentive processing of spoken messages is, in itself, sufficient to produce visual-attentional deficits. Participants searched for visual targets defined by a unique feature (Experiment 1) or feature conjunction (Experiment 2), and simultaneously listened to narrated text passages that had to be recalled later (encoding condition), or heard backward-played speech sounds that could be ignored (control condition). Responses to targets were slower in the encoding condition, and ERPs revealed that the visual processing of search arrays and the attentional selection of target stimuli were less efficient in the encoding relative to the control condition. Results demonstrate that the attentional processing of visual information is impaired when concurrent spoken messages are encoded and maintained, in line with cross-modal links in selective attention, but inconsistent with the view that attentional resources are modality-specific. The distraction of visual attention by active listening could contribute to the adverse effects of cell phone use on driving performance.
Object-based attention underlies the rehearsal of feature binding in visual working memory.
Shen, Mowei; Huang, Xiang; Gao, Zaifeng
2015-04-01
Feature binding is a core concept in many research fields, including the study of working memory (WM). Over the past decade, it has been debated whether maintaining feature bindings in visual WM consumes more visual attention than maintaining the constituent single features. Previous studies have only explored the contribution of domain-general attention or space-based attention in the binding process; no study so far has explored the role of object-based attention in retaining binding in visual WM. We hypothesized that object-based attention underlies the mechanism of rehearsing feature binding in visual WM. Therefore, during the maintenance phase of a visual WM task, we inserted a secondary mental rotation task (Experiments 1-3), a transparent motion task (Experiment 4), or an object-based feature report task (Experiment 5) to consume the object-based attention available for binding. In line with the prediction of the object-based attention hypothesis, Experiments 1-5 revealed a greater impairment for binding than for the constituent single features. However, this selective binding impairment was not observed when inserting a space-based visual search task (Experiment 6). We conclude that object-based attention underlies the rehearsal of binding representation in visual WM. (c) 2015 APA, all rights reserved.
Sigurdardottir, Heida M; Sheinberg, David L
2015-07-01
The lateral intraparietal area (LIP) is thought to play an important role in the guidance of where to look and pay attention. LIP can also respond selectively to differently shaped objects. We sought to understand to what extent short-term and long-term experience with visual orienting determines the responses of LIP to objects of different shapes. We taught monkeys to arbitrarily associate centrally presented objects of various shapes with orienting either toward or away from a preferred spatial location of a neuron. The training could last for less than a single day or for several months. We found that neural responses to objects are affected by such experience, but that the length of the learning period determines how this neural plasticity manifests. Short-term learning affects neural responses to objects, but these effects are only seen relatively late after visual onset; at this time, the responses to newly learned objects resemble those of familiar objects that share their meaning or arbitrary association. Long-term learning affects the earliest bottom-up responses to visual objects. These responses tend to be greater for objects that have been associated with looking toward, rather than away from, LIP neurons' preferred spatial locations. Responses to objects can nonetheless be distinct, although they have been similarly acted on in the past and will lead to the same orienting behavior in the future. Our results therefore indicate that a complete experience-driven override of LIP object responses may be difficult or impossible. We relate these results to behavioral work on visual attention.
Changes in the distribution of sustained attention alter the perceived structure of visual space.
Fortenbaugh, Francesca C; Robertson, Lynn C; Esterman, Michael
2017-02-01
Visual spatial attention is a critical process that allows for the selection and enhanced processing of relevant objects and locations. While studies have shown attentional modulations of perceived location and the representation of distance information across multiple objects, there remains disagreement regarding what influence spatial attention has on the underlying structure of visual space. The present study utilized a method of magnitude estimation in which participants must judge the location of briefly presented targets within the boundaries of their individual visual fields in the absence of any other objects or boundaries. Spatial uncertainty of target locations was used to assess perceived locations across distributed and focused attention conditions without the use of external stimuli, such as visual cues. Across two experiments we tested locations along the cardinal and 45° oblique axes. We demonstrate that focusing attention within a region of space can expand the perceived size of visual space, even in cases where doing so makes performance less accurate. Moreover, the results of the present studies show that when fixation is actively maintained, focusing attention along a visual axis leads to an asymmetrical stretching of visual space that is predominantly focused across the central half of the visual field, consistent with an expansive gradient along the focus of voluntary attention. These results demonstrate that focusing sustained attention peripherally during active fixation leads to an asymmetrical expansion of visual space within the central visual field. Published by Elsevier Ltd.
Higher dietary diversity is related to better visual and auditory sustained attention.
Shiraseb, Farideh; Siassi, Fereydoun; Qorbani, Mostafa; Sotoudeh, Gity; Rostami, Reza; Narmaki, Elham; Yavari, Parvaneh; Aghasi, Mohadeseh; Shaibu, Osman Mohammed
2016-04-01
Attention is a complex cognitive function that is necessary for learning, for following social norms of behaviour and for effective performance of responsibilities and duties. It is especially important in sensitive occupations requiring sustained attention. Improvement of dietary diversity (DD) is recognised as an important factor in health promotion, but its association with sustained attention is unknown. The aim of this study was to determine the association between auditory and visual sustained attention and DD. A cross-sectional study was carried out on 400 women aged 20-50 years who attended sports clubs at Tehran Municipality. Sustained attention was evaluated on the basis of the Integrated Visual and Auditory Continuous Performance Test using Integrated Visual and Auditory software. A single 24-h dietary recall questionnaire was used for DD assessment. Dietary diversity scores (DDS) were determined using the FAO guidelines. The mean visual and auditory sustained attention scores were 40·2 (sd 35·2) and 42·5 (sd 38), respectively. The mean DDS was 4·7 (sd 1·5). After adjusting for age, education years, physical activity, energy intake and BMI, mean visual and auditory sustained attention showed a significant increase as the quartiles of DDS increased (P=0·001). In addition, the mean subscales of attention, including auditory consistency and vigilance, visual persistence, visual and auditory focus, speed, comprehension and full attention, increased significantly with increasing DDS (P<0·05). In conclusion, higher DDS is associated with better visual and auditory sustained attention.
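The dietary diversity score itself is a simple count of distinct food groups consumed during the recall period. The sketch below illustrates that computation under the assumption of one common nine-group FAO classification; the exact grouping and scoring rules used in the study may differ.

```python
# Illustrative dietary diversity score (DDS) from a single 24-h recall.
# The nine groups below follow one common FAO grouping for women's dietary
# diversity; the exact grouping and cut-offs used in the study may differ.
FOOD_GROUPS = {
    "starchy staples", "dark green leafy vegetables",
    "other vitamin A-rich fruits and vegetables", "other fruits and vegetables",
    "organ meat", "meat and fish", "eggs",
    "legumes, nuts and seeds", "milk and milk products",
}

def dietary_diversity_score(consumed_groups):
    """DDS = number of distinct food groups consumed in the recall period."""
    return len(FOOD_GROUPS & set(consumed_groups))

recall = ["starchy staples", "meat and fish", "eggs",
          "other fruits and vegetables", "starchy staples"]  # duplicates ignored
print(dietary_diversity_score(recall))  # -> 4
```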
Visual search and attention: an overview.
Davis, Elizabeth T; Palmer, John
2004-01-01
This special feature issue is devoted to attention and visual search. Attention is a central topic in psychology and visual search is both a versatile paradigm for the study of visual attention and a topic of study in itself. Visual search depends on sensory, perceptual, and cognitive processes. As a result, the search paradigm has been used to investigate a diverse range of phenomena. Manipulating the search task can vary the demands on attention. In turn, attention modulates visual search by selecting and limiting the information available at various levels of processing. Focusing on the intersection of attention and search provides a relatively structured window into the wide world of attentional phenomena. In particular, the effects of divided attention are illustrated by the effects of set size (the number of stimuli in a display) and the effects of selective attention are illustrated by cueing subsets of stimuli within the display. These two phenomena provide the starting point for the articles in this special issue. The articles are organized into four general topics to help structure the issues of attention and search.
Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study
ERIC Educational Resources Information Center
Bulf, Hermann; Valenza, Eloisa
2013-01-01
Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…
Infant visual attention and object recognition.
Reynolds, Greg D
2015-05-15
This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. Copyright © 2015 Elsevier B.V. All rights reserved.
Keehn, Brandon; Nair, Aarti; Lincoln, Alan J; Townsend, Jeanne; Müller, Ralph-Axel
2016-02-01
For individuals with autism spectrum disorder (ASD), salient behaviorally-relevant information often fails to capture attention, while subtle behaviorally-irrelevant details commonly induce a state of distraction. The present study used functional magnetic resonance imaging (fMRI) to investigate the neurocognitive networks underlying attentional capture in sixteen high-functioning children and adolescents with ASD and twenty-one typically developing (TD) individuals. Participants completed a rapid serial visual presentation paradigm designed to investigate activation of attentional networks to behaviorally-relevant targets and contingent attention capture by task-irrelevant distractors. In individuals with ASD, target stimuli failed to trigger bottom-up activation of the ventral attentional network and the cerebellum. Additionally, the ASD group showed no differences in behavior or occipital activation associated with contingent attentional capture. Rather, results suggest that to-be-ignored distractors that shared either task-relevant or irrelevant features captured attention in ASD. Results indicate that individuals with ASD may be under-reactive to behaviorally-relevant stimuli, unable to filter irrelevant information, and that both top-down and bottom-up attention networks function atypically in ASD. Lastly, deficits in target-related processing were associated with autism symptomatology, providing further support for the hypothesis that non-social attentional processes and their neurofunctional underpinnings may play a significant role in the development of sociocommunicative impairments in ASD. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Spatial Scaling of the Profile of Selective Attention in the Visual Field.
Gannon, Matthew A; Knapp, Ashley A; Adams, Thomas G; Long, Stephanie M; Parks, Nathan A
2016-01-01
Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provide insights into the attentional mechanisms associated with such spatial scaling.
The role of early visual cortex in visual short-term memory and visual attention.
Offen, Shani; Schluppeck, Denis; Heeger, David J
2009-06-01
We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.
Characterizing the effects of feature salience and top-down attention in the early visual system.
Poltoratski, Sonia; Ling, Sam; McCormack, Devin; Tong, Frank
2017-07-01
The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. We used high-resolution functional MRI (fMRI) at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or nonsalient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, whereas the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas. NEW & NOTEWORTHY While spatial attention allows for specific, goal-driven enhancement of stimuli, salient items outside of the current focus of attention must also be prioritized. We used 7T fMRI to compare salience and spatial attentional enhancement along the early visual hierarchy. We report additive effects of attention and bottom-up salience in early visual areas, suggesting that salience enhancement is not contingent on the observer's attentional state. Copyright © 2017 the American Physiological Society.
Behavior Selection of Mobile Robot Based on Integration of Multimodal Information
NASA Astrophysics Data System (ADS)
Chen, Bin; Kaneko, Masahide
Recently, biologically inspired robots have been developed to acquire the capacity for directing visual attention to salient stimuli generated by the audiovisual environment. To realize this behavior, a general method is to calculate saliency maps to represent how much the external information attracts the robot's visual attention, where the audiovisual information and the robot's motion status should be involved. In this paper, we present a visual attention model that considers three modalities, namely audio information, visual information and the robot's motor status, whereas previous research has not considered all of them. Firstly, we introduce a 2-D density map, on which the value denotes how much the robot pays attention to each spatial location. We then model the attention density using a Bayesian network that incorporates the robot's motion status. Secondly, the information from both the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons. The robot directs its attention to the locations where the integrate-and-fire neurons fire. Finally, the visual attention model is applied so that the robot selects visual information from the environment and reacts to the selected content. Experimental results show that it is possible for robots to acquire visual information related to their behaviors by using the attention model that takes motion status into account. The robot can select behaviors that adapt to the dynamic environment and can switch to another task according to the recognition results of visual attention.
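As a loose illustration of the final integration stage described above (and not the paper's actual Bayesian-network model), the sketch below drives a grid of leaky integrate-and-fire units with the product of an attention density map and combined audio-visual saliency, and attends the first location whose unit crosses threshold. All maps, grid sizes, and parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

H, W = 20, 30                         # coarse spatial grid over the scene
threshold, leak = 5.0, 0.9            # integrate-and-fire parameters (made up)

# Hypothetical bottom-up saliency from vision and audio, plus an attention
# density map (uniform here, as if the robot were idle).
visual_saliency = rng.random((H, W))
audio_saliency = rng.random((H, W))
attention_density = np.ones((H, W))

membrane = np.zeros((H, W))           # membrane potentials of the units

for step in range(50):
    drive = attention_density * (visual_saliency + audio_saliency)
    membrane = leak * membrane + drive            # leaky integration
    fired = membrane >= threshold
    if fired.any():
        y, x = np.unravel_index(np.argmax(membrane), membrane.shape)
        print(f"step {step}: attend location (row={y}, col={x})")
        membrane[fired] = 0.0                     # reset the fired units
        break
```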
Reimer, Christina B; Strobach, Tilo; Schubert, Torsten
2017-12-01
Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features, resulting in a serial search process across the items in the search display (i.e., the set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP) component that reflects lateralized visual attention processes. If the response selection processes in Task 1 influenced the visual attention processes in Task 2, the N2pc latency should be delayed and its amplitude attenuated at the short SOA compared with the long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was deployed concurrently with response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.
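For readers unfamiliar with the N2pc, it is conventionally quantified as the contralateral-minus-ipsilateral voltage difference at posterior electrodes (commonly PO7/PO8) in roughly the 200-300 ms window after search-display onset. The sketch below shows that computation on simulated single-trial data; the electrode names, time window, and all values are illustrative rather than taken from this study.

```python
import numpy as np

# Simulated single-trial epochs for electrodes PO7 (left) and PO8 (right);
# all values are noise and serve only to show the arithmetic.
fs = 500                                    # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / fs)        # epoch from -100 to 500 ms
n_trials = 200
rng = np.random.default_rng(2)
erp_po7 = rng.normal(0, 1, (n_trials, times.size))
erp_po8 = rng.normal(0, 1, (n_trials, times.size))
target_side = rng.choice(["left", "right"], n_trials)

# Contralateral = electrode opposite the target side; ipsilateral = same side.
contra = np.where(target_side[:, None] == "left", erp_po8, erp_po7)
ipsi = np.where(target_side[:, None] == "left", erp_po7, erp_po8)

window = (times >= 0.2) & (times <= 0.3)    # typical N2pc window, 200-300 ms
n2pc_amplitude = (contra - ipsi)[:, window].mean()
print(f"mean N2pc amplitude: {n2pc_amplitude:.3f} (arbitrary units)")
```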
Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka
2008-01-01
In this paper, we focus on alpha band modulation during visual spatial attention in the absence of visual stimuli. Visual spatial attention is expected to provide a new channel for a non-invasive independent brain-computer interface (BCI), but little work has been done on this interfacing method. The flickering stimuli used in previous work reduce independence and are difficult to use in practice. Therefore we investigated whether visual spatial attention could be detected without such stimuli. Further, common spatial patterns (CSP) were applied for the first time to brain states during visual spatial attention. The performance evaluation was based on three brain states: attention to the left, right and center directions. Thirty-channel scalp electroencephalographic (EEG) signals over the occipital cortex were recorded from five subjects. Without CSP, the analyses yielded an average classification accuracy of 66.44% (range 55.42 to 72.27%) in discriminating the left and right attention classes. With CSP, the average classification accuracy was 75.39% (range 63.75 to 86.13%). These results suggest that CSP is useful in the context of visual spatial attention and that alpha band modulation during visual spatial attention without flickering stimuli could provide a new channel for independent BCI, alongside motor imagery.
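Common spatial patterns are spatial filters obtained from a generalized eigendecomposition of the two classes' average channel covariance matrices; the standard features are log-variances of the filtered signals. The following is a compact sketch of that textbook procedure on toy data, not the authors' exact pipeline; trial counts, channel counts, and all signals are invented.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Common spatial patterns for two classes of EEG trials.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns spatial filters of shape (n_channels, 2 * n_pairs).
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w.
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)                  # smallest ... largest
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return eigvecs[:, picks]

def log_variance_features(trials, filters):
    """Standard CSP features: log of the normalized variance of filtered signals."""
    projected = np.einsum("ck,tcs->tks", filters, trials)
    var = projected.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Toy 30-channel epochs for 'attend left' vs 'attend right' (pure noise here).
rng = np.random.default_rng(3)
left = rng.normal(size=(40, 30, 250))
right = rng.normal(size=(40, 30, 250))
W = csp_filters(left, right)
print(log_variance_features(left, W).shape)      # (40, 6)
```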
Attraction of position preference by spatial attention throughout human visual cortex.
Klein, Barrie P; Harvey, Ben M; Dumoulin, Serge O
2014-10-01
Voluntary spatial attention concentrates neural resources at the attended location. Here, we examined the effects of spatial attention on spatial position selectivity in humans. We measured population receptive fields (pRFs) using high-field functional MRI (fMRI) (7T) while subjects performed an attention-demanding task at different locations. We show that spatial attention attracts pRF preferred positions across the entire visual field, not just at the attended location. This global change in pRF preferred positions systematically increases up the visual hierarchy. We model these pRF preferred position changes as an interaction between two components: an attention field and a pRF without the influence of attention. This computational model suggests that increasing effects of attention up the hierarchy result primarily from differences in pRF size and that the attention field is similar across the visual hierarchy. A similar attention field suggests that spatial attention transforms different neural response selectivities throughout the visual hierarchy in a similar manner. Copyright © 2014 Elsevier Inc. All rights reserved.
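One common formalization of this kind of interaction, used here purely as an illustration and not necessarily the exact model fitted in the study, treats the measured pRF as the product of a Gaussian stimulus-driven pRF and a Gaussian attention field. The product's center is a precision-weighted average of the two component centers, so larger pRFs are pulled further toward the attended location, which is consistent with attraction increasing up the hierarchy under a fixed attention field. The numbers below are arbitrary.

```python
import numpy as np

def attended_prf_center(mu_prf, sigma_prf, mu_att, sigma_att):
    """Center of the product of a Gaussian pRF and a Gaussian attention field.

    The product of two Gaussians is Gaussian, and its mean is the
    precision-weighted average of the two component means.
    """
    w_prf = 1.0 / sigma_prf**2
    w_att = 1.0 / sigma_att**2
    return (w_prf * mu_prf + w_att * mu_att) / (w_prf + w_att)

mu_att, sigma_att = 6.0, 3.0                 # attended location (deg), field width
for sigma_prf in (0.5, 1.0, 2.0, 4.0):       # pRF sizes grow up the hierarchy
    shifted = attended_prf_center(mu_prf=0.0, sigma_prf=sigma_prf,
                                  mu_att=mu_att, sigma_att=sigma_att)
    print(f"pRF size {sigma_prf:.1f} deg -> center pulled to {shifted:.2f} deg")
```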
Perceptual organization and visual attention.
Kimchi, Ruth
2009-01-01
Perceptual organization--the processes structuring visual information into coherent units--and visual attention--the processes by which some visual information in a scene is selected--are crucial for the perception of our visual environment and to visuomotor behavior. Recent research points to important relations between attentional and organizational processes. Several studies demonstrated that perceptual organization constrains attentional selectivity, and other studies suggest that attention can also constrain perceptual organization. In this chapter I focus on two aspects of the relationship between perceptual organization and attention. The first addresses the question of whether or not perceptual organization can take place without attention. I present findings demonstrating that some forms of grouping and figure-ground segmentation can occur without attention, whereas others require controlled attentional processing, depending on the processes involved and the conditions prevailing for each process. These findings challenge the traditional view, which assumes that perceptual organization is a unitary entity that operates preattentively. The second issue addresses the question of whether perceptual organization can affect the automatic deployment of attention. I present findings showing that the mere organization of some elements in the visual field by Gestalt factors into a coherent perceptual unit (an "object"), with no abrupt onset or any other unique transient, can capture attention automatically in a stimulus-driven manner. Taken together, the findings discussed in this chapter demonstrate the multifaceted, interactive relations between perceptual organization and visual attention.
Distinctive Correspondence Between Separable Visual Attention Functions and Intrinsic Brain Networks
Ruiz-Rizzo, Adriana L.; Neitzel, Julia; Müller, Hermann J.; Sorg, Christian; Finke, Kathrin
2018-01-01
Separable visual attention functions are assumed to rely on distinct but interacting neural mechanisms. Bundesen's “theory of visual attention” (TVA) allows the mathematical estimation of independent parameters that characterize individuals' visual attentional capacity (i.e., visual processing speed and visual short-term memory storage capacity) and selectivity functions (i.e., top-down control and spatial laterality). However, it is unclear whether these parameters distinctively map onto different brain networks obtained from intrinsic functional connectivity, which organizes slowly fluctuating ongoing brain activity. In our study, 31 demographically homogeneous healthy young participants performed whole- and partial-report tasks and underwent resting-state functional magnetic resonance imaging (rs-fMRI). Report accuracy was modeled using TVA to estimate, individually, the four TVA parameters. Networks encompassing cortical areas relevant for visual attention were derived from independent component analysis of rs-fMRI data: visual, executive control, right and left frontoparietal, and ventral and dorsal attention networks. Two TVA parameters were mapped on particular functional networks. First, participants with higher (vs. lower) visual processing speed showed lower functional connectivity within the ventral attention network. Second, participants with more (vs. less) efficient top-down control showed higher functional connectivity within the dorsal attention network and lower functional connectivity within the visual network. Additionally, higher performance was associated with higher functional connectivity between networks: specifically, between the ventral attention and right frontoparietal networks for visual processing speed, and between the visual and executive control networks for top-down control. The higher inter-network functional connectivity was related to lower intra-network connectivity. These results demonstrate that separable visual attention parameters that are assumed to constitute relatively stable traits correspond distinctly to the functional connectivity both within and between particular functional networks. This implies that individual differences in basic attention functions are represented by differences in the coherence of slowly fluctuating brain activity. PMID:29662444
Robotic Attention Processing And Its Application To Visual Guidance
NASA Astrophysics Data System (ADS)
Barth, Matthew; Inoue, Hirochika
1988-03-01
This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system, which was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using these attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both the direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing for robotic attention processing.
The remains of the trial: goal-determined inter-trial suppression of selective attention.
Lleras, Alejandro; Levinthal, Brian R; Kawahara, Jun
2009-01-01
When an observer is searching through the environment for a target, what are the consequences of not finding a target in a given environment? We examine this issue in detail and propose that the visual system systematically tags environmental information during a search, in an effort to improve performance in future search events. Information that led to search successes is positively tagged, so as to favor future deployments of attention toward that type of information, whereas information that led to search failures is negatively tagged, so as to discourage future deployments of attention toward such failed information. To study this, we use an oddball-search task, where participants search for one item that differs from all the other stimuli in the display along one feature or belongs to a different visual category. We find that when participants perform oddball-search tasks, the absence of a target delays identification of future targets containing the feature or category that was shared by all distractors in the target-absent trial. We interpret this effect as reflecting an implicit assessment of performance: target-absent trials can be viewed as processing "failures" insofar as they do not provide the visual system with the information needed to complete the task. Here, we study the goal-oriented nature of this bias in three ways. First, we show that the direction of the bias is determined by the experimental task. Second, we show that the effect is independent of the mode of presentation of stimuli: it happens with both serial and simultaneous stimulus presentation. Third, we show that, when using categorically defined oddballs as the search stimuli (find the face among houses or vice versa), the bias generalizes to unseen members of the "failed" category. Together, these findings support the idea that these inter-trial attentional biases arise from high-level, task-constrained, implicit assessments of performance, involving categorical associations between classes of stimuli and behavioral outcomes (success/failure), which are independent of attentional modality (temporal vs. spatial attention).
Occipitoparietal alpha-band responses to the graded allocation of top-down spatial attention.
Dombrowe, Isabel; Hilgetag, Claus C
2014-09-15
The voluntary, top-down allocation of visual spatial attention has been linked to changes in the alpha-band of the electroencephalogram (EEG) signal measured over occipital and parietal lobes. In the present study, we investigated how occipitoparietal alpha-band activity changes when people allocate their attentional resources in a graded fashion across the visual field. We asked participants to either completely shift their attention into one hemifield, to balance their attention equally across the entire visual field, or to attribute more attention to one-half of the visual field than to the other. As expected, we found that alpha-band amplitudes decreased more strongly contralaterally than ipsilaterally to the attended side when attention was shifted completely. Alpha-band amplitudes decreased bilaterally when attention was balanced equally across the visual field. However, when participants allocated more attentional resources to one-half of the visual field, this was not reflected in the alpha-band amplitudes, which simply decreased bilaterally. We found that the performance of the participants was more strongly reflected in the coherence between frontal and occipitoparietal brain regions. We conclude that low alpha-band amplitudes seem to be necessary for stimulus detection. Furthermore, complete shifts of attention are directly reflected in the lateralization of alpha-band amplitudes. In the present study, a gradual allocation of visual attention across the visual field was only indirectly reflected in the alpha-band activity over occipital and parietal cortices. Copyright © 2014 the American Physiological Society.
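Hemispheric lateralization of alpha-band activity is typically summarized with an index contrasting power (or amplitude) contralateral versus ipsilateral to the attended side. The sketch below computes such an index from two toy occipitoparietal signals using Welch's method; the channel assignment, frequency band, and signal parameters are illustrative assumptions, not details of this study.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs, band=(8, 12)):
    """Mean alpha-band power of a single-channel EEG segment (Welch's method)."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Toy occipitoparietal signals: noise plus a 10 Hz component, one per hemisphere.
fs, dur = 250, 4
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(4)
left_hemi = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
right_hemi = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

# For a complete shift of attention to the right, the left hemisphere is
# contralateral to the attended side and the right hemisphere is ipsilateral.
contra, ipsi = alpha_power(left_hemi, fs), alpha_power(right_hemi, fs)
lateralization_index = (contra - ipsi) / (contra + ipsi)
print(f"alpha lateralization index: {lateralization_index:.2f}")
```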
Neural Mechanisms of Selective Visual Attention.
Moore, Tirin; Zirnsak, Marc
2017-01-03
Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.
Harrison, Neil R; Woodhouse, Rob
2016-05-01
Previous research has demonstrated that threatening pictures, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue than at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues than for targets following neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.
Visual Field Asymmetries in Attention Vary with Self-Reported Attention Deficits
ERIC Educational Resources Information Center
Poynter, William; Ingram, Paul; Minor, Scott
2010-01-01
The purpose of this study was to determine whether an index of self-reported attention deficits predicts the pattern of visual field asymmetries observed in behavioral measures of attention. Studies of "normal" subjects do not present a consistent pattern of asymmetry in attention functions, with some studies showing better left visual field (LVF)…
Liu, Wen-Long; Zhao, Xu; Tan, Jian-Hui; Wang, Juan
2014-09-01
This study aimed to explore the attention characteristics of children with different clinical subtypes of attention deficit hyperactivity disorder (ADHD) and to provide a basis for clinical intervention. A total of 345 children diagnosed with ADHD were selected and their subtypes were identified. Attention was assessed at diagnosis with the Integrated Visual and Auditory Continuous Performance Test, and the visual and auditory attention characteristics were compared between children with different subtypes. A total of 122 normal children were recruited into the control group and their attention characteristics were compared with those of children with ADHD. The scores of full scale attention quotient (AQ) and full scale response control quotient (RCQ) of children with all three subtypes of ADHD were significantly lower than those of normal children (P<0.01). The score of auditory RCQ was significantly lower than that of visual RCQ in children with the ADHD hyperactive/impulsive subtype (P<0.05). The scores of auditory AQ and speed quotient (SQ) were significantly higher than those of visual AQ and SQ in the three subtypes of ADHD children (P<0.01), while the score of visual precaution quotient (PQ) was significantly higher than that of auditory PQ (P<0.01). No significant differences in auditory or visual AQ were observed among the three subtypes of ADHD. The attention function of children with ADHD is worse than that of normal children, and the impairment of visual attention function is more severe than that of auditory attention function. The degree of functional impairment of visual or auditory attention shows no significant differences among the three subtypes of ADHD.
Attention and Visual Motor Integration in Young Children with Uncorrected Hyperopia.
Kulp, Marjean Taylor; Ciner, Elise; Maguire, Maureen; Pistilli, Maxwell; Candy, T Rowan; Ying, Gui-Shuang; Quinn, Graham; Cyert, Lynn; Moore, Bruce
2017-10-01
Among 4- and 5-year-old children, deficits in measures of attention, visual-motor integration (VMI) and visual perception (VP) are associated with moderate, uncorrected hyperopia (3 to 6 diopters [D]) accompanied by reduced near visual function (near visual acuity worse than 20/40 or stereoacuity worse than 240 seconds of arc). To compare attention, visual motor, and visual perceptual skills in uncorrected hyperopes and emmetropes attending preschool or kindergarten and evaluate their associations with visual function. Participants were 4 and 5 years of age with either hyperopia (≥3 to ≤6 D, astigmatism ≤1.5 D, anisometropia ≤1 D) or emmetropia (hyperopia ≤1 D; astigmatism, anisometropia, and myopia each <1 D), without amblyopia or strabismus. Examiners masked to refractive status administered tests of attention (sustained, receptive, and expressive), VMI, and VP. Binocular visual acuity, stereoacuity, and accommodative accuracy were also assessed at near. Analyses were adjusted for age, sex, race/ethnicity, and parent's/caregiver's education. Two hundred forty-four hyperopes (mean, +3.8 ± [SD] 0.8 D) and 248 emmetropes (+0.5 ± 0.5 D) completed testing. Mean sustained attention score was worse in hyperopes compared with emmetropes (mean difference, -4.1; P < .001 for 3 to 6 D). Mean Receptive Attention score was worse in 4 to 6 D hyperopes compared with emmetropes (by -2.6, P = .01). Hyperopes with reduced near visual acuity (20/40 or worse) had worse scores than emmetropes (-6.4, P < .001 for sustained attention; -3.0, P = .004 for Receptive Attention; -0.7, P = .006 for VMI; -1.3, P = .008 for VP). Hyperopes with stereoacuity of 240 seconds of arc or worse scored significantly worse than emmetropes (-6.7, P < .001 for sustained attention; -3.4, P = .03 for Expressive Attention; -2.2, P = .03 for Receptive Attention; -0.7, P = .01 for VMI; -1.7, P < .001 for VP). Overall, hyperopes with better near visual function generally performed similarly to emmetropes. Moderately hyperopic children were found to have deficits in measures of attention. Hyperopic children with reduced near visual function also had lower scores on VMI and VP than emmetropic children.
Processing reafferent and exafferent visual information for action and perception.
Reichenbach, Alexandra; Diedrichsen, Jörn
2015-01-01
A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were either made by watching the visual scene without moving or made simultaneously to the reaching tasks, such that the perceptual processing stream could also profit from the specialized processing of reafferent information in the latter case. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception.
Auditory and visual capture during focused visual attention.
Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan
2009-10-01
It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control. PsycINFO Database Record (c) 2009 APA, all rights reserved.
Dube, William V.; Wilkinson, Krista M.
2014-01-01
This paper examines the phenomenon of “stimulus overselectivity” or “overselective attention” as it may impact AAC training and use in individuals with intellectual disabilities. Stimulus overselectivity is defined as an atypical limitation in the number of stimuli or stimulus features within an image that are attended to and subsequently learned. Within AAC, the term “stimulus” could refer to symbols or line drawings on speech generating devices, drawings or pictures on low-technology systems, and/or the elements within visual scene displays. In this context, overselective attention may result in unusual or uneven error patterns such as confusion between two symbols that share a single feature or difficulties with transitioning between different types of hardware. We review some of the ways that overselective attention has been studied behaviorally. We then examine how eye tracking technology allows a glimpse into some of the behavioral characteristics of overselective attention. We describe an intervention approach, differential observing responses, that may reduce or eliminate overselectivity, and we consider this type of intervention as it relates to issues of relevance for AAC. PMID:24773053
Perceptual integration of motion and form information: evidence of parallel-continuous processing.
von Mühlenen, A; Müller, H J
2000-04-01
In three visual search experiments, the processes involved in the efficient detection of motion-form conjunction targets were investigated. Experiment 1 was designed to estimate the relative contributions of stationary and moving nontargets to the search rate. Search rates were primarily determined by the number of moving nontargets; stationary nontargets sharing the target form also exerted a significant effect, but this was only about half as strong as that of moving nontargets; stationary nontargets not sharing the target form had little influence. In Experiments 2 and 3, the effects of display factors influencing the visual (form) quality of moving items (movement speed and item size) were examined. Increasing the speed of the moving items (> 1.5 degrees/sec) facilitated target detection when the task required segregation of the moving from the stationary items. When no segregation was necessary, increasing the movement speed impaired performance: With large display items, motion speed had little effect on target detection, but with small items, search efficiency declined when items moved faster than 1.5 degrees/sec. This pattern indicates that moving nontargets exert a strong effect on the search rate (Experiment 1) because of the loss of visual quality for moving items above a certain movement speed. A parallel-continuous processing account of motion-form conjunction search is proposed, which combines aspects of Guided Search (Wolfe, 1994) and attentional engagement theory (Duncan & Humphreys, 1989).
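The search rates referred to above are the slopes of reaction time as a function of display set size. Purely as an illustration (the RT values below are invented, not the experiment's data), such a slope can be estimated with a simple linear fit:

```python
import numpy as np

# Hypothetical mean reaction times (ms) at four display set sizes.
set_sizes = np.array([4, 8, 12, 16])
mean_rt_ms = np.array([520, 575, 640, 690])

# The search rate is the slope of RT over set size (ms per item).
slope, intercept = np.polyfit(set_sizes, mean_rt_ms, deg=1)
print(f"search rate: {slope:.1f} ms/item, base time: {intercept:.0f} ms")
```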
Rolke, Bettina; Festl, Freya; Seibold, Verena C
2016-11-01
We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.
Davidesco, Ido; Harel, Michal; Ramot, Michal; Kramer, Uri; Kipervasser, Svetlana; Andelman, Fani; Neufeld, Miri Y; Goelman, Gadi; Fried, Itzhak; Malach, Rafael
2013-01-16
One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation of both spatial and object-based attention. They were presented with composite stimuli, consisting of a small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one of the objects. We found a consistent increase in broadband high-frequency (30-90 Hz) power, but not in visual evoked potentials, associated with spatial attention starting with V1/V2 and continuing throughout the visual hierarchy. The magnitude of the attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed both effects of spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties.
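Broadband high-frequency power of the kind analyzed here is commonly estimated by band-pass filtering each electrode's signal and squaring the Hilbert envelope. The sketch below applies that generic recipe to a simulated trace; the filter settings, sampling rate, and data are illustrative assumptions rather than details reported in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def broadband_power(signal, fs, low=30.0, high=90.0, order=4):
    """Time course of broadband high-frequency (30-90 Hz) power.

    Band-pass filter the signal, then take the squared magnitude of the
    analytic (Hilbert) signal as an estimate of instantaneous power.
    """
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, signal)
    return np.abs(hilbert(filtered)) ** 2

# Toy electrode trace: noise with a brief 60 Hz burst standing in for a
# stimulus-driven high-frequency response.
fs = 1000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(5)
trace = rng.normal(0, 1, t.size)
trace[500:700] += 3 * np.sin(2 * np.pi * 60 * t[500:700])

power = broadband_power(trace, fs)
print(f"mean power 0.5-0.7 s: {power[500:700].mean():.2f} "
      f"vs baseline: {power[:400].mean():.2f}")
```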
Visual Attention and Applications in Multimedia Technologies
Le Callet, Patrick; Niebur, Ernst
2013-01-01
Making technological advances in the field of human-machine interactions requires that the capabilities and limitations of the human perceptual system are taken into account. The focus of this report is an important mechanism of perception, visual selective attention, which is becoming more and more important for multimedia applications. We introduce the concept of visual attention and describe its underlying mechanisms. In particular, we introduce the concepts of overt and covert visual attention, and of bottom-up and top-down processing. Challenges related to modeling visual attention and their validation using ad hoc ground truth are also discussed. Examples of the usage of visual attention models in image and video processing are presented. We emphasize multimedia delivery, retargeting and quality assessment of image and video, medical imaging, and the field of stereoscopic 3D image applications. PMID:24489403
Degraded attentional modulation of cortical neural populations in strabismic amblyopia
Hou, Chuan; Kim, Yee-Joon; Lai, Xin Jie; Verghese, Preeti
2016-01-01
Behavioral studies have reported reduced spatial attention in amblyopia, a developmental disorder of spatial vision. However, the neural populations in the visual cortex linked with these behavioral spatial attention deficits have not been identified. Here, we use functional MRI–informed electroencephalography source imaging to measure the effect of attention on neural population activity in the visual cortex of human adult strabismic amblyopes who were stereoblind. We show that compared with controls, the modulatory effects of selective visual attention on the input from the amblyopic eye are substantially reduced in the primary visual cortex (V1) as well as in extrastriate visual areas hV4 and hMT+. Degraded attentional modulation is also found in the normal-acuity fellow eye in areas hV4 and hMT+ but not in V1. These results provide electrophysiological evidence that abnormal binocular input during a developmental critical period may impact cortical connections between the visual cortex and higher level cortices beyond the known amblyopic losses in V1 and V2, suggesting that a deficit of attentional modulation in the visual cortex is an important component of the functional impairment in amblyopia. Furthermore, we find that degraded attentional modulation in V1 is correlated with the magnitude of interocular suppression and the depth of amblyopia. These results support the view that the visual suppression often seen in strabismic amblyopia might be a form of attentional neglect of the visual input to the amblyopic eye. PMID:26885628
Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli
Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.
2009-01-01
The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778
Attentional Processes in Young Children with Congenital Visual Impairment
ERIC Educational Resources Information Center
Tadic, Valerie; Pring, Linda; Dale, Naomi
2009-01-01
The study investigated attentional processes of 32 preschool children with congenital visual impairment (VI). Children with profound visual impairment (PVI) and severe visual impairment (SVI) were compared to a group of typically developing sighted children in their ability to respond to adult directed attention in terms of establishing,…
Visual Attention to Antismoking PSAs: Smoking Cues versus Other Attention-Grabbing Features
ERIC Educational Resources Information Center
Sanders-Jackson, Ashley N.; Cappella, Joseph N.; Linebarger, Deborah L.; Piotrowski, Jessica Taylor; O'Keeffe, Moira; Strasser, Andrew A.
2011-01-01
This study examines how addicted adult smokers attend visually to smoking-related public service announcements (PSAs). Smokers' onscreen visual fixation is an indicator of the cognitive resources allocated to visual attention. Characteristic of individuals with addictive tendencies, smokers are expected to be appetitively activated by…
Baars, B J
1999-07-01
A common confound between consciousness and attention makes it difficult to think clearly about recent advances in the understanding of the visual brain. Visual consciousness involves phenomenal experience of the visual world, but visual attention is more plausibly treated as a function that selects and maintains the selection of potential conscious contents, often unconsciously. In the same sense, eye movements select conscious visual events, which are not the same as conscious visual experience. According to common sense, visual experience is consciousness, and selective processes are labeled as attention. The distinction is reflected in very different behavioral measures and in very different brain anatomy and physiology. Visual consciousness tends to be associated with the "what" stream of visual feature neurons in the ventral temporal lobe. In contrast, attentional selection and maintenance are mediated by other brain regions, ranging from superior colliculi to thalamus, prefrontal cortex, and anterior cingulate. The author applied the common-sense distinction between attention and consciousness to the theoretical positions of M. I. Posner (1992, 1994) and D. LaBerge (1997, 1998) to show how it helps to clarify the evidence. He concluded that clarity of thought is served by calling a thing by its proper name.
NASA Astrophysics Data System (ADS)
Ahmetoglu, Emine; Aral, Neriman; Butun Ayhan, Aynur
This study was conducted in order to (a) compare the visual perceptions of seven-year-old children diagnosed with attention deficit hyperactivity disorder with those of normally developing children of the same age and development level and (b) determine whether the visual perceptions of children with attention deficit hyperactivity disorder vary with respect to gender, having received preschool education, and parents' educational level. A total of 60 children, 30 with attention deficit hyperactivity disorder and 30 with normal development, were assigned to the study. Data about children with attention deficit hyperactivity disorder and their families were collected using a General Information Form, and the children's visual perception was examined with the Frostig Developmental Test of Visual Perception. The Mann-Whitney U-test and Kruskal-Wallis variance analysis were used to determine whether there was a difference between the visual perceptions of children with normal development and those diagnosed with attention deficit hyperactivity disorder, and to discover whether the variables of gender, preschool education, and parents' educational status affected the visual perceptions of children with attention deficit hyperactivity disorder. The results showed that there was a statistically significant difference between the visual perceptions of the two groups and that the visual perceptions of children with attention deficit hyperactivity disorder were significantly affected by gender, preschool education, and parents' educational status.
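The group comparisons described above rely on standard nonparametric tests. The snippet below is a minimal sketch of how a Mann-Whitney U test and a Kruskal-Wallis test are run in SciPy; the scores and group splits are simulated placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical visual perception scores; NOT the study's data.
adhd_scores = rng.normal(loc=45, scale=8, size=30)
typical_scores = rng.normal(loc=55, scale=8, size=30)

# Two-group comparison (ADHD vs. typically developing children)
u_stat, p_u = stats.mannwhitneyu(adhd_scores, typical_scores, alternative="two-sided")

# Comparison across more than two levels of a factor, e.g. parents' education
low, medium, high = adhd_scores[:10], adhd_scores[10:20], adhd_scores[20:]
h_stat, p_kw = stats.kruskal(low, medium, high)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.3f}")
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.3f}")
```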
Overview of Human-Centric Space Situational Awareness (SSA) Science and Technology (S&T)
NASA Astrophysics Data System (ADS)
Ianni, J.; Aleva, D.; Ellis, S.
2012-09-01
A number of organizations within the government, industry, and academia are researching ways to help humans understand and react to events in space. The problem is both helped and complicated by the fact that there are numerous data sources that need to be planned (i.e., tasked), collected, processed, analyzed, and disseminated. A large part of the research is in support of the Joint Space Operational Center (JSpOC), National Air and Space Intelligence Center (NASIC), and similar organizations. Much recent research has specifically targeted the JSpOC Mission System (JMS), which has provided a unifying software architecture. This paper will first outline areas of science and technology (S&T) related to human-centric space situational awareness (SSA) and space command and control (C2), including: 1. Object visualization - especially data fused from disparate sources, as well as satellite catalog visualizations that convey the physical relationships between space objects. 2. Data visualization - improving data trend analysis as in visual analytics and interactive visualization; e.g., satellite anomaly trends over time, space weather visualization, dynamic visualizations. 3. Workflow support - human-computer interfaces that encapsulate multiple computer services (i.e., algorithms, programs, applications) into a 4. Command and control - e.g., tools that support course of action (COA) development and selection, tasking for satellites and sensors, etc. 5. Collaboration - improving individuals' or teams' ability to work with others; e.g., video teleconferencing, shared virtual spaces, file sharing, virtual white-boards, chat, and knowledge search. 6. Hardware/facilities - e.g., optimal layouts for operations centers, ergonomic workstations, immersive displays, interaction technologies, and mobile computing. Second, we will provide a survey of organizations working in these areas and suggest where more attention may be needed. Although no detailed master plan exists for human-centric SSA and C2, we see little redundancy among the groups supporting SSA human factors at this point.
Simultaneous selection by object-based attention in visual and frontal cortex
Pooresmaeili, Arezoo; Poort, Jasper; Roelfsema, Pieter R.
2014-01-01
Models of visual attention hold that top-down signals from frontal cortex influence information processing in visual cortex. It is unknown whether situations exist in which visual cortex actively participates in attentional selection. To investigate this question, we simultaneously recorded neuronal activity in the frontal eye fields (FEF) and primary visual cortex (V1) during a curve-tracing task in which attention shifts are object-based. We found that accurate performance was associated with similar latencies of attentional selection in both areas and that the latency in both areas increased if the task was made more difficult. The amplitude of the attentional signals in V1 saturated early during a trial, whereas these selection signals kept increasing for a longer time in FEF, until the moment of an eye movement, as if FEF integrated attentional signals present in early visual cortex. In erroneous trials, we observed an interareal latency difference because FEF selected the wrong curve before V1 and imposed its erroneous decision onto visual cortex. The neuronal activity in visual and frontal cortices was correlated across trials, and this trial-to-trial coupling was strongest for the attended curve. These results imply that selective attention relies on reciprocal interactions within a large network of areas that includes V1 and FEF. PMID:24711379
Chen, Chen; Schneps, Matthew H; Masyn, Katherine E; Thomson, Jennifer M
2016-11-01
Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a pathway analysis to examine the direct and indirect paths between visual attention span and reading comprehension while controlling for other factors such as phonological awareness, letter identification, short-term memory, IQ, and age. Integrating phonemic decoding efficiency skills in the analytic model, this study aimed to disentangle how visual attention span and phonological skills work together in reading comprehension for readers with dyslexia. We found visual attention span to have a significant direct effect on the more difficult level of reading comprehension but not on the easier level. It also had a significant direct effect on pseudo-word identification but not on word identification. In addition, we found that visual attention span indirectly explains reading comprehension through pseudo-word reading and word reading skills. This study supports the hypothesis that at least part of the dyslexic profile can be explained by visual attention abilities. Copyright © 2016 John Wiley & Sons, Ltd.
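For readers unfamiliar with pathway analysis, the sketch below illustrates the product-of-coefficients logic behind a direct and an indirect (mediated) effect using simple OLS regressions on simulated data. The variable names and effect sizes are invented for illustration and do not reproduce the study's model, which controlled for several additional covariates.

```python
# Hedged sketch of direct vs. indirect (mediated) effects; simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 105
vas = rng.standard_normal(n)                       # stand-in: visual attention span
pseudoword = 0.6 * vas + rng.standard_normal(n)    # stand-in: pseudo-word reading
comprehension = 0.3 * vas + 0.5 * pseudoword + rng.standard_normal(n)

# Path a: visual attention span -> pseudo-word reading
a = sm.OLS(pseudoword, sm.add_constant(vas)).fit().params[1]
# Paths b (mediator) and c' (direct), estimated jointly
X = sm.add_constant(np.column_stack([vas, pseudoword]))
fit = sm.OLS(comprehension, X).fit()
c_direct, b = fit.params[1], fit.params[2]

print(f"direct effect c' = {c_direct:.2f}")
print(f"indirect effect a*b = {a * b:.2f}")
```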
Characteristics of Print in Books for Preschool Children
Treiman, Rebecca; Rosales, Nicole; Kessler, Brett
2015-01-01
Children begin to learn about the characteristics of print well before formal literacy instruction begins. Reading to children can expose them to print and help them learn about its characteristics. This may be especially true if the print is visually salient, for studies suggest that prereaders pay more attention to such print than to print that is visually less salient. To shed light on the characteristics of the print that US children see in books, especially those characteristics that may contribute to visual salience, we report a quantitative analysis of 73 books that were chosen to be representative of those seen by preschoolers. We found that print that is visually salient due to color, variation, and other features tends to be more common on the covers of books than in the interiors. It also tends to be more common in recently published books than in older books. Even in recent books, however, the print is much less visually salient than the accompanying pictures. Many studies have examined the behavior of adults and children during shared reading, but little research has examined the characteristics of books themselves. Our results provide quantitative information about this topic for one set of characteristics in books for young US children. PMID:27239231
Hartzler, Andrea L; Chaudhuri, Shomir; Fey, Brett C; Flum, David R; Lavallee, Danielle
2015-01-01
The collection of patient-reported outcomes (PROs) draws attention to issues of importance to patients-physical function and quality of life. The integration of PRO data into clinical decisions and discussions with patients requires thoughtful design of user-friendly interfaces that consider user experience and present data in personalized ways to enhance patient care. Whereas most prior work on PROs focuses on capturing data from patients, little research details how to design effective user interfaces that facilitate use of this data in clinical practice. We share lessons learned from engaging health care professionals to inform design of visual dashboards, an emerging type of health information technology (HIT). We employed human-centered design (HCD) methods to create visual displays of PROs to support patient care and quality improvement. HCD aims to optimize the design of interactive systems through iterative input from representative users who are likely to use the system in the future. Through three major steps, we engaged health care professionals in targeted, iterative design activities to inform the development of a PRO Dashboard that visually displays patient-reported pain and disability outcomes following spine surgery. Design activities to engage health care administrators, providers, and staff guided our work from design concept to specifications for dashboard implementation. Stakeholder feedback from these health care professionals shaped user interface design features, including predefined overviews that illustrate at-a-glance trends and quarterly snapshots, granular data filters that enable users to dive into detailed PRO analytics, and user-defined views to share and reuse. Feedback also revealed important considerations for quality indicators and privacy-preserving sharing and use of PROs. Our work illustrates a range of engagement methods guided by human-centered principles and design recommendations for optimizing PRO Dashboards for patient care and quality improvement. Engaging health care professionals as stakeholders is a critical step toward the design of user-friendly HIT that is accepted, usable, and has the potential to enhance quality of care and patient outcomes.
The role of visual attention in multiple object tracking: evidence from ERPs.
Doran, Matthew M; Hoffman, James E
2010-01-01
We examined the role of visual attention in the multiple object tracking (MOT) task by measuring the amplitude of the N1 component of the event-related potential (ERP) to probe flashes presented on targets, distractors, or empty background areas. We found evidence that visual attention enhances targets and suppresses distractors (Experiments 1 and 3). However, we also found that when tracking load was light (two targets and two distractors), accurate tracking could be carried out without any apparent contribution from the visual attention system (Experiment 2). Our results suggest that attentional selection during MOT is flexibly determined by task demands as well as tracking load and that visual attention may not always be necessary for accurate tracking.
Perceptual grouping and attention in visual search for features and for objects.
Treisman, A
1982-04-01
This article explores the effects of perceptual grouping on search for targets defined by separate features or by conjunction of features. Treisman and Gelade proposed a feature-integration theory of attention, which claims that in the absence of prior knowledge, the separable features of objects are correctly combined only when focused attention is directed to each item in turn. If items are preattentively grouped, however, attention may be directed to groups rather than to single items whenever no recombination of features within a group could generate an illusory target. This prediction is confirmed: In search for conjunctions, subjects appear to scan serially between groups rather than items. The scanning rate shows little effect of the spatial density of distractors, suggesting that it reflects serial fixations of attention rather than eye movements. Search for features, on the other hand, appears to be independent of perceptual grouping, suggesting that features are detected preattentively. A conjunction target can be camouflaged at the preattentive level by placing it at the boundary between two adjacent groups, each of which shares one of its features. This suggests that preattentive grouping creates separate feature maps within each separable dimension rather than one global configuration.
Social Image Captioning: Exploring Visual Attention and User Attention.
Wang, Leiquan; Chu, Xiaoliang; Zhang, Weishan; Wei, Yiwei; Sun, Weichen; Wu, Chunlei
2018-02-22
Image captioning with natural language has been an emerging trend. However, the social image, associated with a set of user-contributed tags, has rarely been investigated for a similar task. The user-contributed tags, which can reflect user attention, have been neglected in conventional image captioning. Most existing image captioning models cannot be applied directly to social image captioning. In this work, a dual attention model is proposed for social image captioning by combining visual attention and user attention simultaneously. Visual attention is used to compress a large amount of salient visual information, while user attention is applied to adjust the description of the social images with user-contributed tags. Experiments conducted on the Microsoft (MS) COCO dataset demonstrate the superiority of the proposed dual attention method.
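The sketch below illustrates the general idea of a dual attention module that attends over visual region features and over user-contributed tag embeddings, then fuses the two context vectors. The layer sizes, scoring functions, and fusion-by-concatenation are assumptions made for illustration and are not the architecture reported in the paper.

```python
# Hedged sketch of dual attention over visual regions and tag embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttention(nn.Module):
    """Toy dual attention: attend over visual regions and over tag embeddings,
    then fuse the two context vectors. Sizes and fusion are assumptions."""
    def __init__(self, vis_dim=512, tag_dim=300, hid_dim=256):
        super().__init__()
        self.vis_score = nn.Linear(vis_dim + hid_dim, 1)
        self.tag_score = nn.Linear(tag_dim + hid_dim, 1)
        self.fuse = nn.Linear(vis_dim + tag_dim, hid_dim)

    def _attend(self, feats, state, scorer):
        # feats: (batch, n, dim); state: (batch, hid_dim)
        expanded = state.unsqueeze(1).expand(-1, feats.size(1), -1)
        scores = scorer(torch.cat([feats, expanded], dim=-1)).squeeze(-1)
        weights = F.softmax(scores, dim=-1)                  # (batch, n)
        return torch.bmm(weights.unsqueeze(1), feats).squeeze(1)

    def forward(self, vis_feats, tag_embs, decoder_state):
        vis_ctx = self._attend(vis_feats, decoder_state, self.vis_score)
        tag_ctx = self._attend(tag_embs, decoder_state, self.tag_score)
        return torch.tanh(self.fuse(torch.cat([vis_ctx, tag_ctx], dim=-1)))

# Toy usage: 2 images, 36 visual regions, 5 user tags
vis = torch.randn(2, 36, 512)
tags = torch.randn(2, 5, 300)
state = torch.randn(2, 256)
print(DualAttention()(vis, tags, state).shape)  # torch.Size([2, 256])
```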
The involvement of central attention in visual search is determined by task demands.
Han, Suk Won
2017-04-01
Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signals at the perceptual stage, while it serves to select appropriate responses or consolidate sensory representations into short-term memory at the central stage. This study investigated the independence of, and interaction between, perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as long as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of the concurrent visual search process plays a crucial role in the functional interaction between the two different types of attention.
Conscious visual memory with minimal attention.
Pinto, Yair; Vandenbroucke, Annelinde R; Otten, Marte; Sligte, Ilja G; Seth, Anil K; Lamme, Victor A F
2017-02-01
Is conscious visual perception limited to the locations that a person attends? The remarkable phenomenon of change blindness, which shows that people miss nearly all unattended changes in a visual scene, suggests the answer is yes. However, change blindness is found after visual interference (a mask or a new scene), so that subjects have to rely on working memory (WM), which has limited capacity, to detect the change. Before such interference, however, a much larger capacity store, called fragile memory (FM), which is easily overwritten by newly presented visual information, is present. Whether these different stores depend equally on spatial attention is central to the debate on the role of attention in conscious vision. In 2 experiments, we found that minimizing spatial attention almost entirely erases visual WM, as expected. Critically, FM remains largely intact. Moreover, minimally attended FM responses yield accurate metacognition, suggesting that conscious memory persists with limited spatial attention. Together, our findings help resolve the fundamental issue of how attention affects perception: Both visual consciousness and memory can be supported by only minimal attention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Low-level visual attention and its relation to joint attention in autism spectrum disorder.
Jaworski, Jessica L Bean; Eigsti, Inge-Marie
2017-04-01
Visual attention is integral to social interaction and is a critical building block for development in other domains (e.g., language). Furthermore, atypical attention (especially joint attention) is one of the earliest markers of autism spectrum disorder (ASD). The current study assesses low-level visual attention and its relation to social attentional processing in youth with ASD and typically developing (TD) youth, aged 7 to 18 years. The findings indicate difficulty overriding incorrect attentional cues in ASD, particularly with non-social (arrow) cues relative to social (face) cues. The findings also show reduced competition in ASD from cues that remain on-screen. Furthermore, social attention, autism severity, and age were all predictors of competing cue processing. The results suggest that individuals with ASD may be biased towards speeded rather than accurate responding, and further, that reduced engagement with visual information may impede responses to visual attentional cues. Once attention is engaged, individuals with ASD appear to interpret directional cues as meaningful. These findings from a controlled, experimental paradigm were mirrored in results from an ecologically valid measure of social attention. Attentional difficulties may be exacerbated during the complex and dynamic experience of actual social interaction. Implications for intervention are discussed.
Spatial attention, feature-based attention and saccades: Three sides of one coin?
Mazer, James A.
2013-01-01
The last three decades have seen a steady growth of neuroscience research aimed at understanding the functions and sources of top-down attentional modulation in the brain. This growth reflects the recognition that attention may be a necessary component of sensory systems supporting natural behaviors in natural environments. Complexity and clutter are two of the most recognizable hallmarks of natural environments, which can simultaneously contain vitally important and completely irrelevant stimuli. Attention serves as an adaptive filter, allowing each sensory modality preferential processing routes for important stimuli while suppressing responses to distracters, thus optimizing use of limited neural resources. In other words, "attention" is the family of mechanisms by which organisms are able to effectively and selectively allocate limited neural resources to achieve specific behavioral goals. This review provides some historical context for considering attentional frameworks and modern neurophysiological attention research, focusing on visual attention. A taxonomy of common attentional effects and neural mechanisms is provided, along with consideration of the specific relationship between attention and saccade planning. We examine the validity of premotor theories of attention, which posit that attention and saccade planning are one and the same. While there is strong evidence that attention and oculomotor planning are similar, with shared neural substrates, there is also evidence that these two functions are not synonymous. Finally, we examine neurophysiological explanations for dysfunction in Attention Deficit Hyperactivity Disorder (ADHD) and the hypothesis that social impairment in Autism Spectrum Disorders (ASD) is partially attributable to perturbations of attentional control circuitry. PMID:21529782
Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.
Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D
2013-10-01
Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range. Copyright © 2013 Elsevier Inc. All rights reserved.
Cognitive Control Network Contributions to Memory-Guided Visual Attention
Rosen, Maya L.; Stern, Chantal E.; Michalka, Samantha W.; Devaney, Kathryn J.; Somers, David C.
2016-01-01
Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. PMID:25750253
A Neural Theory of Visual Attention: Bridging Cognition and Neurophysiology
ERIC Educational Resources Information Center
Bundesen, Claus; Habekost, Thomas; Kyllingsbaek, Soren
2005-01-01
A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally…
ERIC Educational Resources Information Center
Solan, Harold A.; Shelley-Tremblay, John F.; Hansen, Peter C.; Larson, Steven
2007-01-01
The authors examined the relationships between reading comprehension, visual attention, and magnocellular processing in 42 Grade 7 students. The goal was to quantify the sensitivity of visual attention and magnocellular visual processing as concomitants of poor reading comprehension in the absence of either vision therapy or cognitive…
Spatial Working Memory Interferes with Explicit, but Not Probabilistic Cuing of Spatial Attention
ERIC Educational Resources Information Center
Won, Bo-Yeong; Jiang, Yuhong V.
2015-01-01
Recent empirical and theoretical work has depicted a close relationship between visual attention and visual working memory. For example, rehearsal in spatial working memory depends on spatial attention, whereas adding a secondary spatial working memory task impairs attentional deployment in visual search. These findings have led to the proposal…
A dual-task investigation of automaticity in visual word processing
NASA Technical Reports Server (NTRS)
McCann, R. S.; Remington, R. W.; Van Selst, M.
2000-01-01
An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.
Park, George D; Reed, Catherine L
2015-10-01
Despite attentional prioritization for grasping space near the hands, tool use appears to transfer attentional bias to the tool's end/functional part. The contributions of haptic and visual inputs to attentional distribution along a tool were investigated as a function of tool use in near (Experiment 1) and far (Experiment 2) space. Visual attention was assessed with a 50/50 go/no-go target discrimination task, while a tool was held next to targets appearing near the tool-occupied hand or the tool-end. Target response times (RTs) and sensitivity (d-prime) were measured at the target locations, before and after functional tool practice, for three conditions: (1) open-tool: tool-end visible (visual + haptic inputs), (2) hidden-tool: tool-end visually obscured (haptic input only), and (3) short-tool: stick missing the tool's length/end (control condition: hand occupied but no visual/haptic input). In near space, both the open- and hidden-tool groups showed a tool-end attentional bias (faster RTs toward the tool-end) before practice; after practice, RTs near the hand improved. In far space, the open-tool group showed no bias before practice; after practice, target RTs near the tool-end improved. However, the hidden-tool group showed a consistent tool-end bias despite practice. The absence of corresponding effects in the short-tool control group suggested that the hidden-tool results were specific to haptic input. In conclusion, (1) the allocation of visual attention along a tool due to tool practice differs in near and far space, and (2) visual attention is drawn toward the tool's end even when it is visually obscured, suggesting that haptic input provides sufficient information for directing attention along the tool.
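Since sensitivity in this kind of go/no-go discrimination is summarized as d-prime, the sketch below shows one standard way to compute it from raw response counts, with a simple log-linear correction that keeps hit and false-alarm rates away from 0 and 1. The counts are invented for illustration and are not data from the experiment.

```python
# Hedged sketch: d' from raw counts; illustrative numbers, not the study's data.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' from raw counts, with a log-linear correction for extreme rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(round(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38), 2))
```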
Schubert, Torsten; Finke, Kathrin; Redel, Petra; Kluckow, Steffen; Müller, Hermann; Strobach, Tilo
2015-05-01
Experts with video game experience, in contrast to non-experienced persons, are superior in multiple domains of visual attention. However, it is an open question which basic aspects of attention underlie this superiority. We approached this question within the framework of the Theory of Visual Attention (TVA), using tools that allowed us to assess various parameters related to different aspects of visual attention (e.g., perception threshold, processing speed, visual short-term memory storage capacity, top-down control, spatial distribution of attention) and that are measurable on the same experimental basis. In Experiment 1, we found advantages of video game experts in perception threshold and visual processing speed, the latter being restricted to the lower positions of the computer display used. The observed advantages were not significantly moderated by general person-related characteristics such as personality traits, sensation seeking, intelligence, social anxiety, or health status. Experiment 2 tested a potential causal link between the expert advantages and video game practice with an intervention protocol. It found no effects of action video gaming on perception threshold, visual short-term memory storage capacity, iconic memory storage, top-down control, or spatial distribution of attention after 15 days of training. However, the observation of a selective improvement of processing speed at the lower positions of the computer screen after video game training, together with retest effects, suggests limited possibilities for improving basic aspects of visual attention (TVA) with practice. Copyright © 2015 Elsevier B.V. All rights reserved.
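In TVA-based assessments, the perception threshold (often written t0) and the visual processing speed are commonly summarized by an exponential growth of report accuracy with effective exposure duration. The sketch below fits such a curve to invented accuracy data; the functional form is the conventional single-item TVA approximation, and the numbers are placeholders rather than values from either experiment.

```python
# Hedged sketch: fitting an exponential encoding curve (threshold t0, rate v)
# to made-up accuracy-by-exposure data. Not the study's estimation procedure.
import numpy as np
from scipy.optimize import curve_fit

def tva_encoding(exposure_ms, t0, v):
    """P(correct report) as a function of effective exposure beyond t0 (ms)."""
    effective = np.clip(exposure_ms - t0, 0.0, None)
    return 1.0 - np.exp(-v * effective)

exposures = np.array([10, 20, 40, 80, 160, 320], dtype=float)
accuracy = np.array([0.02, 0.15, 0.45, 0.75, 0.92, 0.98])   # invented data

(t0_hat, v_hat), _ = curve_fit(tva_encoding, exposures, accuracy,
                               p0=[15.0, 0.02], bounds=([0, 1e-4], [100, 1.0]))
print(f"estimated threshold t0 = {t0_hat:.1f} ms, rate v = {v_hat:.3f} per ms")
```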
Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi
2018-05-16
Practice and experience gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In a natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, what the supra-modal and modality-specific practice effects are during cross-modal selective attention, and whether the practice effect shows the same modality preference as the visual dominance effect in the multisensory environment. To answer these two questions, we adopted a cross-modal selective attention paradigm in conjunction with a hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention adapted behavior more flexibly with practice than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., supra-modal mechanisms. On the other hand, functional decoupling between the auditory and the visual system was observed with the progress of practice, which varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and the ventral visual stream during auditory attention, whereas it was decoupled only from the ventral visual stream during visual attention. To efficiently suppress irrelevant visual information with practice, auditory attention therefore needs to additionally decouple the auditory system from the dorsal visual stream. The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.
Guidance of visual attention by semantic information in real-world scenes
Wu, Chia-Chien; Wick, Farahnaz Ahmed; Pomplun, Marc
2014-01-01
Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some of the factors that drive the allocation of visual attention and determine visual selection. This article provides a review of experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding of how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces. PMID:24567724
Attentional bias to food-related visual cues: is there a role in obesity?
Doolan, K J; Breslin, G; Hanna, D; Gallagher, A M
2015-02-01
The incentive sensitisation model of obesity suggests that modification of the dopaminergic associated reward systems in the brain may result in increased awareness of food-related visual cues present in the current food environment. Having a heightened awareness of these visual food cues may impact on food choices and eating behaviours with those being most aware of or demonstrating greater attention to food-related stimuli potentially being at greater risk of overeating and subsequent weight gain. To date, research related to attentional responses to visual food cues has been both limited and conflicting. Such inconsistent findings may in part be explained by the use of different methodological approaches to measure attentional bias and the impact of other factors such as hunger levels, energy density of visual food cues and individual eating style traits that may influence visual attention to food-related cues outside of weight status alone. This review examines the various methodologies employed to measure attentional bias with a particular focus on the role that attentional processing of food-related visual cues may have in obesity. Based on the findings of this review, it appears that it may be too early to clarify the role visual attention to food-related cues may have in obesity. Results however highlight the importance of considering the most appropriate methodology to use when measuring attentional bias and the characteristics of the study populations targeted while interpreting results to date and in designing future studies.
Attention Effects During Visual Short-Term Memory Maintenance: Protection or Prioritization?
Matsukura, Michi; Luck, Steven J.; Vecera, Shaun P.
2007-01-01
Interactions between visual attention and visual short-term memory (VSTM) play a central role in cognitive processing. For example, attention can assist in selectively encoding items into visual memory. Attention appears to be able to influence items already stored in visual memory as well; cues that appear long after the presentation of an array of objects can affect memory for those objects (Griffin & Nobre, 2003). In five experiments, we distinguished two possible mechanisms for the effects of cues on items currently stored in VSTM. A protection account proposes that attention protects the cued item from becoming degraded during the retention interval. By contrast, a prioritization account suggests that attention increases a cued item’s priority during the comparison process that occurs when memory is tested. The results of the experiments were consistent with the first of these possibilities, suggesting that attention can serve to protect VSTM representations while they are being maintained. PMID:18078232
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811
ERIC Educational Resources Information Center
Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria
2016-01-01
Purpose: The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Method: Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological…
Attention modulates perception of visual space
Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng; He, Zijiang J.
2017-01-01
Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one’s ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche. PMID:29177198
Automatic Guidance of Visual Attention from Verbal Working Memory
ERIC Educational Resources Information Center
Soto, David; Humphreys, Glyn W.
2007-01-01
Previous studies have shown that visual attention can be captured by stimuli matching the contents of working memory (WM). Here, the authors assessed the nature of the representation that mediates the guidance of visual attention from WM. Observers were presented with either verbal or visual primes (to hold in memory, Experiment 1; to verbalize,…
Visual Spatial Attention to Multiple Locations At Once: The Jury Is Still Out
ERIC Educational Resources Information Center
Jans, Bert; Peters, Judith C.; De Weerd, Peter
2010-01-01
Although in traditional attention research the focus of visual spatial attention has been considered as indivisible, many studies in the last 15 years have claimed the contrary. These studies suggest that humans can direct their attention simultaneously to multiple noncontiguous regions of the visual field upon mere instruction. The notion that…
Television Viewing at Home: Age Trends in Visual Attention and Time with TV.
ERIC Educational Resources Information Center
Anderson, Daniel R.; And Others
1986-01-01
Describes age trends in television viewing time and visual attention of children and adults videotaped in their homes for 10-day periods. Shows that the increase in visual attention to television during the preschool years is consistent with the theory that television program comprehensibility is a major determinant of attention in young children.…
Enhancing cognition with video games: a multiple game training study.
Oei, Adam C; Patterson, Michael D
2013-01-01
Previous evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action game and other non-action types that required different cognitive demands. We instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day, five days a week, over four weeks (20 hours). Games included action, spatial memory, match-3, hidden-object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden object games improved visual search performance, while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training. Cognitive improvements were not limited to action game training alone, and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects.
Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans
2012-08-01
Part I described the topography of visual performance over the life span; performance decline was explained only partly by deterioration of the optical apparatus. Part II therefore examines the influence of higher visual and cognitive functions. Visual field maps of static perimetry, double-pulse resolution (DPR), reaction times, and contrast thresholds for 95 healthy observers were correlated with measures of visual attention (alertness, divided attention, spatial cueing), visual search, and the size of the attention focus. Correlations with the attentional variables were substantial, particularly for variables of temporal processing. DPR thresholds depended on the size of the attention focus. Partialling the cognitive variables out of the correlations between the topographical variables and participant age substantially reduced those correlations. There is a systematic top-down influence on the aging of visual functions, particularly of temporal variables, that largely explains the performance decline and the change in topography over the life span.
Vision in Flies: Measuring the Attention Span
Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin
2016-01-01
A visual stimulus at a particular location of the visual field may elicit a behavior while at the same time equally salient stimuli in other parts do not. This property of visual systems is known as selective visual attention (SVA). The animal is said to have a focus of attention (FoA) which it has shifted to a particular location. Visual attention normally involves an attention span at the location to which the FoA has been shifted. Here the attention span is measured in Drosophila. The fly is tethered and hence has its eyes fixed in space, but it can shift its FoA internally. This shift is revealed using two simultaneous test stimuli with characteristic responses at their particular locations. In tethered flight, a wild-type fly keeps its FoA at a certain location for up to 4 s. Flies with a mutation in the radish gene, which has been suggested to be involved in attention-like mechanisms, display a reduced attention span of only 1 s. PMID:26848852
Self-face Captures, Holds, and Biases Attention.
Wójcik, Michał J; Nowicka, Maria M; Kotlewska, Ilona; Nowicka, Anna
2017-01-01
Implicit self-recognition may already take place in the pre-attentive stages of perception. After a salient stimulus has captured attention, it is passed on to the attentive stage, where it can affect decision making and responding. Numerous studies show that the presence of self-referential information affects almost every cognitive level. These effects may share a common and fundamental basis in an attentional mechanism, conceptualized as attentional bias: the exaggerated deployment of attentional resources to a salient stimulus. A gold standard in attentional bias research is the dot-probe paradigm. In this task, a prominent stimulus (cue) and a neutral stimulus are presented in different spatial locations, followed by the presentation of a target. In the current study we aimed to investigate whether the self-face captures, holds, and biases attention when presented as a task-irrelevant stimulus. In two dot-probe experiments coupled with the event-related potential (ERP) technique, we analyzed the relevant ERP components: the N2pc and the SPCN, which reflect attentional shifts and the maintenance of attention, respectively. An inter-stimulus interval separating face cues and probes (800 ms) was introduced only in the first experiment. In line with our predictions, in Experiment 1 the self-face elicited both the N2pc and the SPCN components. In Experiment 2, in addition to the N2pc, an attentional bias was observed. Our results indicate that unintentional self-face processing disables the top-down control setting that filters out distractors, thus leading to the engagement of attentional resources and visual short-term memory.
Guidance of attention by information held in working memory.
Calleja, Marissa Ortiz; Rich, Anina N
2013-05-01
Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object's image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object's image. The same pattern of results held when the target was invariant (Exps. 2-3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.
Intensive video gaming improves encoding speed to visual short-term memory in young male adults.
Wilms, Inge L; Petersen, Anders; Vangkilde, Signe
2013-01-01
The purpose of this study was to measure the effect of action video gaming on central elements of visual attention using Bundesen's (1990) Theory of Visual Attention. To examine the cognitive impact of action video gaming, we tested basic functions of visual attention in 42 young male adults. Participants were divided into three groups depending on the amount of time spent playing action video games: non-players (<2h/month, N=12), casual players (4-8h/month, N=10), and experienced players (>15h/month, N=20). All participants were tested in three tasks which tap central functions of visual attention and short-term memory: a test based on the Theory of Visual Attention (TVA), an enumeration test and finally the Attentional Network Test (ANT). The results show that action video gaming does not seem to impact the capacity of visual short-term memory. However, playing action video games does seem to improve the encoding speed of visual information into visual short-term memory and the improvement does seem to depend on the time devoted to gaming. This suggests that intense action video gaming improves basic attentional functioning and that this improvement generalizes into other activities. The implications of these findings for cognitive rehabilitation training are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
Cholinergic enhancement of visual attention and neural oscillations in the human brain.
Bauer, Markus; Kluge, Christian; Bach, Dominik; Bradbury, David; Heinze, Hans Jochen; Dolan, Raymond J; Driver, Jon
2012-03-06
Cognitive processes such as visual perception and selective attention induce specific patterns of brain oscillations. The neurochemical bases of these spectral changes in neural activity are largely unknown, but neuromodulators are thought to regulate processing. The cholinergic system is linked to attentional function in vivo, whereas separate in vitro studies show that cholinergic agonists induce high-frequency oscillations in slice preparations. This has led to theoretical proposals that cholinergic enhancement of visual attention might operate via gamma oscillations in visual cortex, although low-frequency alpha/beta modulation may also play a key role. Here we used MEG to record cortical oscillations in the context of administration of a cholinergic agonist (physostigmine) during a spatial visual attention task in humans. This cholinergic agonist enhanced spatial attention effects on low-frequency alpha/beta oscillations in visual cortex, an effect correlating with a drug-induced speeding of performance. By contrast, the cholinergic agonist did not alter high-frequency gamma oscillations in visual cortex. Thus, our findings show that cholinergic neuromodulation enhances attentional selection via an impact on oscillatory synchrony in visual cortex, for low rather than high frequencies. We discuss this dissociation between high- and low-frequency oscillations in relation to proposals that lower-frequency oscillations are generated by feedback pathways within visual cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.
Gersch, Timothy M.; Schnitzer, Brian S.; Dosher, Barbara A.; Kowler, Eileen
2012-01-01
Saccadic eye movements and perceptual attention work in a coordinated fashion to allow selection of the objects, features or regions with the greatest momentary need for limited visual processing resources. This study investigates perceptual characteristics of pre-saccadic shifts of attention during a sequence of saccades using the visual manipulations employed to study mechanisms of attention during maintained fixation. The first part of this paper reviews studies of the connections between saccades and attention, and their significance for both saccadic control and perception. The second part presents three experiments that examine the effects of pre-saccadic shifts of attention on vision during sequences of saccades. Perceptual enhancements at the saccadic goal location relative to non-goal locations were found across a range of stimulus contrasts, with either perceptual discrimination or detection tasks, with either single or multiple perceptual targets, and regardless of the presence of external noise. The results show that the preparation of saccades can evoke a variety of attentional effects, including attentionally-mediated changes in the strength of perceptual representations, selection of targets for encoding in visual memory, exclusion of external noise, or changes in the levels of internal visual noise. The visual changes evoked by saccadic planning make it possible for the visual system to effectively use saccadic eye movements to explore the visual environment. PMID:22809798
Joint attention enhances visual working memory.
Gregory, Samantha E A; Jackson, Margaret C
2017-02-01
Joint attention-the mutual focus of 2 individuals on an item-speeds detection and discrimination of target information. However, what happens to that information beyond the initial perceptual episode? To fully comprehend and engage with our immediate environment also requires working memory (WM), which integrates information from second to second to create a coherent and fluid picture of our world. Yet, no research exists at present that examines how joint attention directly impacts WM. To investigate this, we created a unique paradigm that combines gaze cues with a traditional visual WM task. A central, direct gaze 'cue' face looked left or right, followed 500 ms later by 4, 6, or 8 colored squares presented on one side of the face for encoding. Crucially, the cue face either looked at the squares (valid cue) or looked away from them (invalid cue). A no shift (direct gaze) condition served as a baseline. After a blank 1,000 ms maintenance interval, participants stated whether a single test square color was present or not in the preceding display. WM accuracy was significantly greater for colors encoded in the valid versus invalid and direct conditions. Further experiments showed that an arrow cue and a low-level motion cue-both shown to reliably orient attention-did not reliably modulate WM, indicating that social cues are more powerful. This study provides the first direct evidence that sharing the focus of another individual establishes a point of reference from which information is advantageously encoded into WM. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Robertson, Kayela; Schmitter-Edgecombe, Maureen
2017-01-01
Impairments in attention following traumatic brain injury (TBI) can significantly impact recovery and rehabilitation effectiveness. This study investigated the multi-faceted construct of selective attention following TBI, highlighting differences between visual nonsearch (focused attention) and search (divided attention) tasks. Participants were 30 individuals with moderate to severe TBI who were tested acutely (i.e., following emergence from post-traumatic amnesia) and 30 age- and education-matched controls. Participants were presented with visual displays that contained either two or eight items. In the focused attention (nonsearch) condition, the location of the target (if present) was cued with a peripheral arrow prior to presentation of the visual displays. In the divided attention (search) condition, no spatial cue was provided prior to presentation of the visual displays. The results revealed intact focused (nonsearch) attention abilities in the acute phase of TBI recovery. In contrast, when no spatial cue was provided (divided attention condition), participants with TBI demonstrated slower visual search compared to the control group. The results of this study suggest that capitalizing on intact focused attention abilities by allocating attention during cognitively demanding tasks may help to reduce mental workload and improve rehabilitation effectiveness.
Kirk, Hannah E; Gray, Kylie; Riby, Deborah M; Taffe, John; Cornish, Kim M
2017-11-01
Despite well-documented attention deficits in children with intellectual and developmental disabilities (IDD), distinctions across types of attention problems and their association with academic attainment have not been fully explored. This study examines visual attention capacities and inattentive/hyperactive behaviours in 77 children aged 4 to 11 years with IDD and elevated behavioural attention difficulties. Children with autism spectrum disorder (ASD; n = 23), Down syndrome (DS; n = 22), and non-specific intellectual disability (NSID; n = 32) completed computerized visual search and vigilance paradigms. In addition, parents and teachers completed rating scales of inattention and hyperactivity. Concurrent associations between attention abilities and early literacy and numeracy skills were also examined. Children completed measures of receptive vocabulary, phonological abilities and cardinality skills. As expected, the results indicated that all groups had relatively comparable levels of inattentive/hyperactive behaviours as rated by parents and teachers. However, the extent of visual attention deficits varied by group: children with DS had poorer visual search and vigilance abilities than children with ASD and NSID. Further, significant associations between visual attention difficulties and poorer literacy and numeracy skills were observed, regardless of group. Collectively, the findings demonstrate that in children with IDD who present with homogeneous behavioural attention difficulties, subtle profiles of attentional problems can be delineated at the cognitive level. © 2016 John Wiley & Sons Ltd.
Koda, Hiroki; Sato, Anna; Kato, Akemi
2013-09-01
Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations in human participants have shown visual attentional prioritisation of newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on the dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in the subject monkeys, our results, unlike those for humans, showed no evidence of attentional prioritisation of newborn faces by monkeys. Our demonstration shows the validity of the dot-probe task for visual attention studies in monkeys and proposes a novel approach to bridging the gap between human and nonhuman primate social cognition research. The findings suggest that attentional capture by newborn faces is not common to macaques, but it is unclear whether nursing experience influences their perception and recognition of infantile appraisal stimuli. Additional comparative studies are needed to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.
Focused and shifting attention in children with heavy prenatal alcohol exposure.
Mattson, Sarah N; Calarco, Katherine E; Lang, Aimée R
2006-05-01
Attention deficits are a hallmark of the teratogenic effects of alcohol. However, characterization of these deficits remains inconclusive. Children with heavy prenatal alcohol exposure and nonexposed controls were evaluated using a paradigm consisting of three conditions: visual focus, auditory focus, and auditory-visual shift of attention. For the focus conditions, participants responded manually to visual or auditory targets. For the shift condition, participants alternated responses between visual targets and auditory targets. In the visual focus condition, alcohol-exposed children had lower accuracy and slower reaction times at all intertarget intervals (ITIs), while in the auditory focus condition, alcohol-exposed children were less accurate but displayed slower reaction times only at the longest ITI. Finally, in the shift condition, the alcohol-exposed group was accurate but had slowed reaction times. These results indicate that children with heavy prenatal alcohol exposure have pervasive deficits in visual focused attention and deficits in maintaining auditory attention over time. However, no deficits were noted in the ability to disengage and reengage attention when required to shift attention between visual and auditory stimuli, although reaction times to shift were slower. Copyright (c) 2006 APA, all rights reserved.
Visual Attention Model Based on Statistical Properties of Neuron Responses
Duan, Haibin; Wang, Xiaohua
2015-01-01
Visual attention is a mechanism of the visual system that can select relevant objects from a specific scene. Interactions among neurons in multiple cortical areas are considered to be involved in attentional allocation. However, the characteristics of the encoded features and neuron responses in those attention-related cortices are indefinite. Therefore, further investigations carried out in this study aim at demonstrating that unusual regions arousing more attention generally cause particular neuron responses. We suppose that visual saliency is obtained on the basis of neuron responses to contexts in natural scenes. A bottom-up visual attention model is proposed based on the self-information of neuron responses to test and verify the hypothesis. Four different color spaces are adopted and a novel entropy-based combination scheme is designed to make full use of color information. Valuable regions are highlighted while redundant backgrounds are suppressed in the saliency maps obtained by the proposed model. Comparative results reveal that the proposed model outperforms several state-of-the-art models. This study provides insights into neuron-response-based saliency detection and may help to clarify the neural mechanisms of early visual cortices in bottom-up visual attention. PMID:25747859
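To make the saliency computation above concrete, here is a minimal Python sketch, not the authors' implementation: it treats saliency at each location as the self-information of that location's feature response, with the probability estimated from the image's own response histogram, and combines channels with an entropy-based weighting that merely stands in for the paper's scheme. The function names, bin count, and toy image are assumptions.

```python
# Illustrative self-information saliency: rare feature responses get high
# saliency because S(x) = -log p(response at x), with p estimated from the
# image's own response histogram.
import numpy as np

def self_information_saliency(feature_map, n_bins=64):
    """Map each pixel's feature value to -log of its empirical probability."""
    flat = feature_map.ravel()
    hist, edges = np.histogram(flat, bins=n_bins)
    p = hist / hist.sum()                       # empirical probability per bin
    bin_idx = np.clip(np.digitize(flat, edges[1:-1]), 0, n_bins - 1)
    saliency = -np.log(p[bin_idx] + 1e-12)      # self-information per pixel
    return saliency.reshape(feature_map.shape)

def combine_channels(channel_maps):
    """Entropy-weighted combination (an assumption standing in for the paper's
    scheme): channels with more concentrated saliency get larger weights."""
    weights = []
    for m in channel_maps:
        p = m.ravel() / (m.sum() + 1e-12)
        entropy = -np.sum(p * np.log(p + 1e-12))
        weights.append(1.0 / (entropy + 1e-12))
    weights = np.array(weights) / np.sum(weights)
    return sum(w * m for w, m in zip(weights, channel_maps))

# Usage: toy grayscale "scene" with a rare bright patch that should pop out.
rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.05, size=(64, 64))
img[28:36, 28:36] = 1.0
sal = combine_channels([self_information_saliency(img)])
print(sal[32, 32] > sal[5, 5])  # True: the odd patch is more salient
```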
The effect of search condition and advertising type on visual attention to Internet advertising.
Kim, Gho; Lee, Jang-Han
2011-05-01
This research was conducted to examine the level of consumers' visual attention to Internet advertising. It was predicted that consumers' search type would influence visual attention to advertising. Specifically, it was predicted that more attention to advertising would be attracted in the exploratory search condition than in the goal-directed search condition. It was also predicted that there would be a difference in visual attention depending on the advertisement type (advertising type: text vs. pictorial advertising). An eye tracker was used for measurement. Results revealed that search condition and advertising type influenced advertising effectiveness.
Mangun, G R; Buck, L A
1998-03-01
This study investigated the simple reaction time (RT) and event-related potential (ERP) correlates of biasing attention towards a location in the visual field. RTs and ERPs were recorded to stimuli flashed randomly and with equal probability to the left and right visual hemifields in the three blocked, covert attention conditions: (i) attention divided equally to left and right hemifield locations; (ii) attention biased towards the left location; or (iii) attention biased towards the right location. Attention was biased towards left or right by instructions to the subjects, and responses were required to all stimuli. Relative to the divided attention condition, RTs were significantly faster for targets occurring where more attention was allocated (benefits), and slower to targets where less attention was allocated (costs). The early P1 (100-140 msec) component over the lateral occipital scalp regions showed attentional benefits. There were no amplitude modulations of the occipital N1 (125-180 msec) component with attention. Between 200 and 500 msec latency, a late positive deflection (LPD) showed both attentional costs and benefits. The behavioral findings show that when sufficiently induced to bias attention, human observers demonstrate RT benefits as well as costs. The corresponding P1 benefits suggest that the RT benefits of spatial attention may arise as the result of modulations of visual information processing in the extrastriate visual cortex.
Global motion compensated visual attention-based video watermarking
NASA Astrophysics Data System (ADS)
Oakes, Matthew; Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions, resulting in poor visual quality in the host media. This paper proposes a video watermarking algorithm that offers a fine balance between imperceptibility and robustness using a motion-compensated, wavelet-based visual attention model (VAM). The proposed VAM includes spatial as well as temporal cues for visual saliency. The spatial modeling uses the spatial wavelet coefficients, while the temporal modeling accounts for both local and global motion to arrive at a spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm in which a two-level watermarking weighting parameter map is generated from the VAM saliency maps and data are embedded into the host image according to the visual attentiveness of each region. By avoiding higher-strength watermarking in visually attentive regions, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms state-of-the-art video visual attention methods in both saliency detection performance and computational complexity. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use the VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and robustness similar to that of high-strength watermarking.
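The core idea of saliency-weighted embedding can be illustrated with a short, hedged sketch; this is not the paper's wavelet-domain algorithm. It derives a two-level embedding-strength map from a saliency map (weak embedding where attention is likely, strong elsewhere) and additively embeds a binary watermark. The percentile threshold, strength values, and function names are assumptions.

```python
# Illustrative sketch: saliency-driven two-level embedding strength plus
# simple additive (spread-spectrum-style) watermark embedding.
import numpy as np

def two_level_strength_map(saliency, alpha_low=0.5, alpha_high=2.0, pct=75):
    """Binarize saliency at a percentile threshold; salient pixels get the low
    embedding strength, non-salient pixels the high strength (assumed values)."""
    thresh = np.percentile(saliency, pct)
    return np.where(saliency >= thresh, alpha_low, alpha_high)

def embed_watermark(host, watermark_bits, strength_map):
    """Additive embedding: +strength for bit 1, -strength for bit 0."""
    signs = np.where(watermark_bits > 0, 1.0, -1.0)
    return host + strength_map * signs

# Usage with toy data standing in for host coefficients and a saliency map.
rng = np.random.default_rng(1)
host = rng.normal(128, 20, size=(32, 32))
saliency = rng.random((32, 32))
bits = rng.integers(0, 2, size=(32, 32))
marked = embed_watermark(host, bits, two_level_strength_map(saliency))
print(np.abs(marked - host).max())  # distortion never exceeds alpha_high
```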
Kamke, Marc R; Van Luyn, Jeanette; Constantinescu, Gabriella; Harris, Jill
2014-01-01
Evidence suggests that deafness-induced changes in visual perception, cognition and attention may compensate for a hearing loss. Such alterations, however, may also negatively influence adaptation to a cochlear implant. This study investigated whether involuntary attentional capture by salient visual stimuli is altered in children who use a cochlear implant. Thirteen experienced implant users (aged 8-16 years) and age-matched normally hearing children were presented with a rapid sequence of simultaneous visual and auditory events. Participants were tasked with detecting numbers presented in a specified color and identifying a change in the tonal frequency whilst ignoring irrelevant visual distractors. Compared to visual distractors that did not possess the target-defining characteristic, target-colored distractors were associated with a decrement in visual performance (response time and accuracy), demonstrating a contingent capture of involuntary attention. Visual distractors did not, however, impair auditory task performance. Importantly, detection performance for the visual and auditory targets did not differ between the groups. These results suggest that proficient cochlear implant users demonstrate normal capture of visuospatial attention by stimuli that match top-down control settings.
Splitting attention across the two visual fields in visual short-term memory.
Delvenne, Jean-Francois; Holt, Jessica L
2012-02-01
Humans have the ability to attentionally select the most relevant visual information from their extrapersonal world and to retain it in a temporary buffer, known as visual short-term memory (VSTM). Research suggests that at least two non-contiguous items can be selected simultaneously when they are distributed across the two visual hemifields. In two experiments, we show that attention can also be split between the left and right sides of internal representations held in VSTM. Participants were asked to remember several colors, while cues presented during the delay instructed them to orient their attention to a subset of the memorized colors. Experiment 1 revealed that orienting attention to one or two colors equally strengthened participants' memory for those colors, but only when they were from separate hemifields. Experiment 2 showed that in the absence of attentional cues the distribution of the items in the visual field per se had no effect on memory. These findings strongly suggest the existence of independent attentional resources in the two hemifields for selecting and/or consolidating information in VSTM. Copyright © 2011 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Bogon, Johanna; Finke, Kathrin; Schulte-Körne, Gerd; Müller, Hermann J.; Schneider, Werner X.; Stenneken, Prisca
2014-01-01
People with developmental dyslexia (DD) have been shown to be impaired in tasks that require the processing of multiple visual elements in parallel. It has been suggested that this deficit originates from disturbed visual attentional functions. The parameter-based assessment of visual attention based on Bundesen's (1990) theory of visual…
ERIC Educational Resources Information Center
Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vierhaus, Marc; Spangler, Sibylle; Borchert, Sonja; Freitag, Claudia; Goertz, Claudia; Graf, Frauke; Gudi, Helene; Kolling, Thorsten; Lamm, Bettina; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun
2012-01-01
This longitudinal study examined the influence of stimulus material on attention and expectation learning in the visual expectation paradigm. Female faces were used as attention-attracting stimuli, and non-meaningful visual stimuli of comparable complexity (Greebles) were used as low attention-attracting stimuli. Expectation learning performance…
The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search
ERIC Educational Resources Information Center
Becker, Stefanie I.
2010-01-01
Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…
Attention biases visual activity in visual short-term memory.
Kuo, Bo-Cheng; Stokes, Mark G; Murray, Alexandra M; Nobre, Anna Christina
2014-07-01
In the current study, we tested whether representations in visual STM (VSTM) can be biased via top-down attentional modulation of visual activity in retinotopically specific locations. We manipulated attention using retrospective cues presented during the retention interval of a VSTM task. Retrospective cues triggered activity in a large-scale network implicated in attentional control and led to retinotopically specific modulation of activity in early visual areas V1-V4. Importantly, shifts of attention during VSTM maintenance were associated with changes in functional connectivity between pFC and retinotopic regions within V4. Our findings provide new insights into top-down control mechanisms that modulate VSTM representations for flexible and goal-directed maintenance of the most relevant memoranda.
Association of blood antioxidants status with visual and auditory sustained attention.
Shiraseb, Farideh; Siassi, Fereydoun; Sotoudeh, Gity; Qorbani, Mostafa; Rostami, Reza; Sadeghi-Firoozabadi, Vahid; Narmaki, Elham
2015-01-01
A low antioxidant status has been shown to result in oxidative stress and cognitive impairment. Because antioxidants can protect the nervous system, it is expected that a better blood antioxidant status might be related to sustained attention. However, the relationship between blood antioxidant status and visual and auditory sustained attention has not been investigated. The aim of this study was to evaluate the association of fruit and vegetable intake and blood antioxidant status with visual and auditory sustained attention in women. This cross-sectional study was performed on 400 healthy women (20-50 years) who attended the sports clubs of Tehran Municipality. Sustained attention was evaluated with the Integrated Visual and Auditory (IVA) Continuous Performance Test. A 24-hour food recall questionnaire was used to estimate fruit and vegetable intake. Serum total antioxidant capacity (TAC), and erythrocyte superoxide dismutase (SOD) and glutathione peroxidase (GPx) activities were measured in 90 participants. After adjusting for energy intake, age, body mass index (BMI), years of education and physical activity, higher reported fruit and vegetable intake was associated with better visual and auditory sustained attention (P < 0.001). A high intake of some subgroups of fruits and vegetables (i.e., berries, cruciferous vegetables, green leafy vegetables, and other vegetables) was also associated with better sustained attention (P < 0.02). Serum TAC and erythrocyte SOD and GPx activities increased with increasing tertiles of visual and auditory sustained attention after adjusting for age, years of education, physical activity, energy, BMI, and caffeine intake (P < 0.05). Improved visual and auditory sustained attention is associated with a better blood antioxidant status. Therefore, improvement of the antioxidant status through an appropriate dietary intake can possibly enhance sustained attention.
Sneve, Markus H; Sreenivasan, Kartik K; Alnæs, Dag; Endestad, Tor; Magnussen, Svein
2015-01-01
Retention of features in visual short-term memory (VSTM) involves maintenance of sensory traces in early visual cortex. However, the mechanism through which this is accomplished is not known. Here, we formulate specific hypotheses derived from studies on feature-based attention to test the prediction that visual cortex is recruited by attentional mechanisms during VSTM of low-level features. Functional magnetic resonance imaging (fMRI) of human visual areas revealed that neural populations coding for task-irrelevant feature information are suppressed during maintenance of detailed spatial frequency memory representations. The narrow spectral extent of this suppression agrees well with known effects of feature-based attention. Additionally, analyses of effective connectivity during maintenance between retinotopic areas in visual cortex show that the observed highlighting of task-relevant parts of the feature spectrum originates in V4, a visual area strongly connected with higher-level control regions and known to convey top-down influence to earlier visual areas during attentional tasks. In line with this property of V4 during attentional operations, we demonstrate that modulations of earlier visual areas during memory maintenance have behavioral consequences, and that these modulations are a result of influences from V4. Copyright © 2014 Elsevier Ltd. All rights reserved.
Components of working memory and visual selective attention.
Burnham, Bryan R; Sabia, Matthew; Langan, Catherine
2014-02-01
Load theory (Lavie, N., Hirst, A., De Fockert, J. W., & Viding, E. [2004]. Load theory of selective attention and cognitive control. Journal of Experimental Psychology: General, 133, 339-354.) proposes that control of attention depends on the amount and type of load that is imposed by current processing. Specifically, perceptual load should lead to efficient distractor rejection, whereas working memory load (dual-task coordination) should hinder distractor rejection. Studies support load theory's prediction that working memory load will lead to larger distractor effects; however, these studies used secondary tasks that required only verbal working memory and the central executive. The present study examined which other working memory components (visual, spatial, and phonological) influence visual selective attention. Subjects completed an attentional capture task alone (single-task) or while engaged in a working memory task (dual-task). Results showed that along with the central executive, visual and spatial working memory influenced selective attention, but phonological working memory did not. Specifically, attentional capture was larger when visual or spatial working memory was loaded, but phonological working memory load did not affect attentional capture. The results are consistent with load theory and suggest specific components of working memory influence visual selective attention. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Harasawa, Masamitsu; Shioiri, Satoshi
2011-04-01
The effect of the visual hemifield to which spatial attention was oriented on the activity of the posterior parietal and occipital visual cortices was examined using functional near-infrared spectroscopy in order to investigate the neural substrates of voluntary visuospatial attention. Our brain imaging data support the theory put forth in a previous psychophysical study, namely, that the attentional resources for the left and right visual hemifields are distinct. Increasing the attentional load increased brain activity asymmetrically: the increase was greater when attention was directed to the left visual hemifield than to the right visual hemifield. This asymmetry was observed in all the examined brain areas, including the right and left occipital and parietal cortices. These results suggest the existence of asymmetrical inhibitory interactions between the hemispheres and the presence of an extensive inhibitory network. Copyright © 2011 Elsevier Inc. All rights reserved.
Research progress on Drosophila visual cognition in China.
Guo, AiKe; Zhang, Ke; Peng, YueQin; Xi, Wang
2010-03-01
Visual cognition, as one of the fundamental aspects of cognitive neuroscience, is generally associated with higher-order brain functions in animals and humans. Drosophila, as a model organism, shares certain features of visual cognition in common with mammals at the genetic, molecular, cellular, and even higher behavioral levels. From learning and memory to decision making, Drosophila covers a broad spectrum of higher cognitive behaviors beyond what we had expected. Armed with powerful tools of genetic manipulation in Drosophila, an increasing number of studies have been conducted in order to elucidate the neural circuit mechanisms underlying these cognitive behaviors from a genes-brain-behavior perspective. The goal of this review is to integrate the most important studies on visual cognition in Drosophila carried out in mainland China during the last decade into a body of knowledge encompassing both the basic neural operations and the circuitry of higher brain function in Drosophila. Here, we consider a series of higher cognitive behaviors beyond learning and memory, such as visual pattern recognition, feature and context generalization, different feature memory traces, salience-based decision making, attention-like behavior, and cross-modal learning and memory. We discuss a possible general gain-gating mechanism implemented by the dopamine-mushroom body circuit in the fly's visual cognition. We hope that our brief review of this area will inspire further study of visual cognition in flies, and beyond.
Filippopulos, Filipp M; Grafenstein, Jessica; Straube, Andreas; Eggert, Thomas
2015-11-01
In natural life, pain automatically draws attention towards the painful body part, suggesting that it interacts with different attentional mechanisms such as visual attention. Complex regional pain syndrome (CRPS) patients, who typically report chronic, distally located pain in one extremity, may suffer from so-called neglect-like symptoms, which have also been linked to attentional mechanisms. The purpose of the study was to further evaluate how continuous pain conditions influence visual attention. Saccade latencies were recorded in two experiments using a common visual attention paradigm in which orienting saccades to cued or uncued lateral visual targets had to be performed. In the first experiment, saccade latencies of healthy subjects were measured under two conditions: one in which continuous experimental pain stimulation was applied to the index finger to imitate a continuous pain situation, and one without pain stimulation. In the second experiment, saccade latencies of patients suffering from CRPS were compared to those of controls. The results showed that neither the continuous experimental pain stimulation during the experiment nor the chronic pain in CRPS led to a unilateral increase of saccade latencies or to a unilateral increase of the cue effect on latency. The results show that unilateral, continuously applied pain stimuli or chronic pain have no, or only very limited, influence on visual attention. Unlike patients with visual neglect, patients with CRPS did not show strong side asymmetries of saccade latencies or of cue effects on saccade latencies. Thus, the neglect-like clinical symptoms of CRPS patients do not involve the allocation of visual attention.
ERIC Educational Resources Information Center
Rubia, Katya; Halari, Rozmin; Smith, Anna B.; Mohammad, Majeed; Scott, Stephen; Brammer, Michael J.
2009-01-01
Background: Inhibitory and attention deficits have been suggested to be shared problems of disruptive behaviour disorders. Patients with attention deficit hyperactivity disorder (ADHD) and patients with conduct disorder (CD) show deficits in tasks of attention allocation and interference inhibition. However, functional magnetic resonance imaging…
Multiple Sensory-Motor Pathways Lead to Coordinated Visual Attention
Yu, Chen; Smith, Linda B.
2016-01-01
Joint attention has been extensively studied in the developmental literature because of overwhelming evidence that the ability to socially coordinate visual attention to an object is essential to healthy developmental outcomes, including language learning. The goal of the present study is to understand the complex system of sensory-motor behaviors that may underlie the establishment of joint attention between parents and toddlers. In an experimental task, parents and toddlers played together with multiple toys. We objectively measured joint attention – and the sensory-motor behaviors that underlie it – using a dual head-mounted eye-tracking system and frame-by-frame coding of manual actions. By tracking the momentary visual fixations and hand actions of each participant, we precisely determined just how often they fixated on the same object at the same time, the visual behaviors that preceded joint attention, and manual behaviors that preceded and co-occurred with joint attention. We found that multiple sequential sensory-motor patterns lead to joint attention. In addition, there are developmental changes in this multi-pathway system evidenced as variations in strength among multiple routes. We propose that coordinated visual attention between parents and toddlers is primarily a sensory-motor behavior. Skill in achieving coordinated visual attention in social settings – like skills in other sensory-motor domains – emerges from multiple pathways to the same functional end. PMID:27016038
Annie Yoon, Seungyeon; Kelso, Gwendolyn A; Lock, Anna; Lyons-Ruth, Karlen
2014-01-01
The normative development of infant shared attention has been studied extensively, but few studies have examined the impact of disorganized attachment and disturbed maternal caregiving on mother-infant shared attention. The authors examined both maternal initiations of joint attention and infants' responses to those initiations during the reunion episodes of the Strange Situation Procedure at 12 and 18 months of infant age. The mothers' initiations of joint attention and three forms of infant response, including shunning, simple joint attention, and sharing attention, were examined in relation to infant disorganized attachment and maternal disrupted communication. Mothers who were disrupted in communication with their infants at 18 months initiated fewer bids for joint attention at 12 months, and, at 18 months, mothers of infants classified as disorganized initiated fewer bids. However, the infants' responses were unrelated to either infant disorganized attachment or maternal disrupted communication. At both ages, disorganized infants and infants of disrupted mothers were as likely to respond to maternal bids as were their lower-risk counterparts. Our results suggest that a disposition to share experiences with others is robust in infancy, even among infants with adverse attachment experiences, but this infant disposition may depend on adult initiation of bids to be realized.
Wang, Wei; Ji, Xiangtong; Ni, Jun; Ye, Qian; Zhang, Sicong; Chen, Wenli; Bian, Rong; Yu, Cui; Zhang, Wenting; Shen, Guangyu; Machado, Sergio; Yuan, Tifei; Shan, Chunlei
2015-01-01
To compare the effect of visual spatial training on spatial attention with its effect on motor control, and to correlate improvement in spatial attention with motor control progress after visual spatial training in subjects with unilateral spatial neglect (USN). Nine patients with USN after right-hemisphere stroke were randomly divided into a conventional treatment plus visual spatial attention training group and a conventional treatment group. The conventional treatment plus visual spatial attention group received conventional rehabilitation therapy (physical and occupational therapy) and visual spatial attention training (optokinetic stimulation and right half-field eye patching). The conventional treatment group was treated with conventional rehabilitation training (physical and occupational therapy) only. All patients were assessed with the behavioral inattention test (BIT), the Fugl-Meyer Assessment of motor function (FMA), the equilibrium coordination test (ECT) and the non-equilibrium coordination test (NCT) before and after 4 weeks of treatment. Total scores in both groups (without visual spatial attention/with visual spatial attention) improved significantly after treatment (BIT: P=0.021/P=0.000, d=1.667/d=2.116, power=0.69/power=0.98, 95%CI[-0.8839,45.88]/95%CI[16.96,92.64]; FMA: P=0.002/P=0.000, d=2.521/d=2.700, power=0.93/power=0.98, 95%CI[5.707,30.79]/95%CI[16.06,53.94]; ECT: P=0.002/P=0.000, d=2.031/d=1.354, power=0.90/power=0.17, 95%CI[3.380,42.61]/95%CI[-1.478,39.08]; NCT: P=0.013/P=0.000, d=1.124/d=1.822, power=0.41/power=0.56, 95%CI[-7.980,37.48]/95%CI[4.798,43.60]). Compared with the conventional treatment group, the group with visual spatial attention training showed significantly greater improvement in BIT (P=0.003, d=3.103, power=1, 95%CI[15.68,48.92]), FMA of the upper extremity (P=0.006, d=2.771, power=1, 95%CI[5.061,20.14]) and NCT (P=0.010, d=2.214, power=0.81-0.90, 95%CI[3.018,15.88]). Correlation analysis shows that the change in BIT scores is positively correlated with the change in FMA total score (r=0.77, P<0.01), FMA of the upper extremity (r=0.81, P<0.01), and NCT (r=0.78, P<0.01). Four weeks of visual spatial training can improve spatial attention as well as motor control functions in hemineglect patients. The improvement in motor function is positively correlated with the progress in visual spatial functions after visual spatial attention training.
Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.
de Jong, Ritske; Toffanin, Paolo; Harbers, Marten
2010-01-01
Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations, where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: an attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
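As a rough illustration of how a steady-state response is read out in a frequency-tagging design (assumed tag frequencies, sampling rate, and epoch length; not this study's exact analysis), the Python sketch below Fourier-transforms an epoch and takes the amplitude at each modality's tag frequency.

```python
# Illustrative SSR readout: single-sided FFT amplitude at the tag frequency.
import numpy as np

def ssr_amplitude(epoch, fs, tag_freq):
    """Amplitude of the EEG epoch at the tagging frequency (nearest FFT bin)."""
    spectrum = np.fft.rfft(epoch) / len(epoch) * 2     # single-sided amplitude
    freqs = np.fft.rfftfreq(len(epoch), d=1 / fs)
    return np.abs(spectrum[np.argmin(np.abs(freqs - tag_freq))])

# Usage: synthetic occipital signal containing an assumed 15 Hz visual tag
# and an assumed 40 Hz auditory tag buried in noise.
fs, dur = 500, 4.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(4)
eeg = (1.0 * np.sin(2 * np.pi * 15 * t)
       + 0.3 * np.sin(2 * np.pi * 40 * t)
       + rng.normal(0, 1.0, t.size))
print(ssr_amplitude(eeg, fs, 15), ssr_amplitude(eeg, fs, 40))
```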
Measuring and Modeling Shared Visual Attention
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Gontar, Patrick
2016-01-01
Multi-person teams are sometimes responsible for critical tasks, such as flying an airliner. Here we present a method using gaze tracking data to assess shared visual attention, a term we use to describe the situation where team members are attending to a common set of elements in the environment. Gaze data are quantized with respect to a set of N areas of interest (AOIs); these are then used to construct a time series of N dimensional vectors, with each vector component representing one of the AOIs, all set to 0 except for the component corresponding to the currently fixated AOI, which is set to 1. The resulting sequence of vectors can be averaged in time, with the result that each vector component represents the proportion of time that the corresponding AOI was fixated within the given time interval. We present two methods for comparing sequences of this sort, one based on computing the time-varying correlation of the averaged vectors, and another based on a chi-square test testing the hypothesis that the observed gaze proportions are drawn from identical probability distributions. We have evaluated the method using synthetic data sets, in which the behavior was modeled as a series of "activities," each of which was modeled as a first-order Markov process. By tabulating distributions for pairs of identical and disparate activities, we are able to perform a receiver operating characteristic (ROC) analysis, allowing us to choose appropriate criteria and estimate error rates. We have applied the methods to data from airline crews, collected in a high-fidelity flight simulator (Haslbeck, Gontar & Schubert, 2014). We conclude by considering the problem of automatic (blind) discovery of activities, using methods developed for text analysis.
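The AOI-vector analysis described above can be sketched in a few lines of Python. The variable names, window length, and the +1 smoothing of the contingency table are assumptions, and the synthetic gaze streams only stand in for real crew data; the sketch shows the one-hot coding, the windowed averaging into dwell proportions, and the two comparisons (correlation and chi-square).

```python
# Illustrative shared-visual-attention comparison over AOI dwell proportions.
import numpy as np
from scipy.stats import chi2_contingency

def aoi_one_hot(aoi_sequence, n_aois):
    """Turn a sequence of fixated AOI indices into one-hot vectors (T x N)."""
    vectors = np.zeros((len(aoi_sequence), n_aois))
    vectors[np.arange(len(aoi_sequence)), aoi_sequence] = 1.0
    return vectors

def windowed_proportions(one_hot, window):
    """Average one-hot vectors in non-overlapping windows -> dwell proportions."""
    n_windows = one_hot.shape[0] // window
    trimmed = one_hot[:n_windows * window]
    return trimmed.reshape(n_windows, window, -1).mean(axis=1)

def compare_windows(props_a, props_b, window):
    """Per-window correlation of proportions and chi-square test on dwell counts."""
    results = []
    for pa, pb in zip(props_a, props_b):
        r = np.corrcoef(pa, pb)[0, 1]
        counts = np.vstack([pa, pb]) * window          # back to sample counts
        chi2, p, _, _ = chi2_contingency(counts + 1)   # +1 avoids empty cells
        results.append((r, p))
    return results

# Usage with synthetic gaze: two observers scanning 4 AOIs.
rng = np.random.default_rng(2)
gaze_a = rng.integers(0, 4, size=600)
gaze_b = rng.integers(0, 4, size=600)
props_a = windowed_proportions(aoi_one_hot(gaze_a, 4), window=100)
props_b = windowed_proportions(aoi_one_hot(gaze_b, 4), window=100)
print(compare_windows(props_a, props_b, window=100)[0])
```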
Grossberg, Stephen; Vladusich, Tony
2010-01-01
How does an infant learn through visual experience to imitate actions of adult teachers, despite the fact that the infant and adult view one another and the world from different perspectives? To accomplish this, an infant needs to learn how to share joint attention with adult teachers and to follow their gaze towards valued goal objects. The infant also needs to be capable of view-invariant object learning and recognition whereby it can carry out goal-directed behaviors, such as the use of tools, using different object views than the ones that its teachers use. Such capabilities are often attributed to "mirror neurons". This attribution does not, however, explain the brain processes whereby these competences arise. This article describes the CRIB (Circular Reactions for Imitative Behavior) neural model of how the brain achieves these goals through inter-personal circular reactions. Inter-personal circular reactions generalize the intra-personal circular reactions of Piaget, which clarify how infants learn from their own babbled arm movements and reactive eye movements how to carry out volitional reaches, with or without tools, towards valued goal objects. The article proposes how intra-personal circular reactions create a foundation for inter-personal circular reactions when infants and other learners interact with external teachers in space. Both types of circular reactions involve learned coordinate transformations between body-centered arm movement commands and retinotopic visual feedback, and coordination of processes within and between the What and Where cortical processing streams. Specific breakdowns of model processes generate formal symptoms similar to clinical symptoms of autism. Copyright © 2010 Elsevier Ltd. All rights reserved.
Dividing time: concurrent timing of auditory and visual events by young and elderly adults.
McAuley, J Devin; Miller, Jonathan P; Wang, Mo; Pang, Kevin C H
2010-07-01
This article examines age differences in individuals' ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in the focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory targets. Age-related impairments were associated with a decrease in working memory span; moreover, the relationship between working memory and timing performance was largest for visual targets in divided attention conditions.
ERIC Educational Resources Information Center
Mather, Susan M.; Clark, M. Diane
2012-01-01
One of the ongoing challenges teachers of students who are deaf or hard of hearing face is managing the visual split attention implicit in multimedia learning. When a teacher presents various types of visual information at the same time, visual learners have no choice but to divide their attention among those materials and the teacher and…
Evidence for an attentional component of inhibition of return in visual search.
Pierce, Allison M; Crouse, Monique D; Green, Jessica J
2017-11-01
Inhibition of return (IOR) is typically described as an inhibitory bias against returning attention to a recently attended location as a means of promoting efficient visual search. Most studies examining IOR, however, either do not use visual search paradigms or do not effectively isolate attentional processes, making it difficult to conclusively link IOR to a bias in attention. Here, we recorded ERPs during a simple visual search task designed to isolate the attentional component of IOR to examine whether an inhibitory bias of attention is observed and, if so, how it influences visual search behavior. Across successive visual search displays, we found evidence of both a broad, hemisphere-wide inhibitory bias of attention along with a focal, target location-specific facilitation. When the target appeared in the same visual hemifield in successive searches, responses were slower and the N2pc component was reduced, reflecting a bias of attention away from the previously attended side of space. When the target occurred at the same location in successive searches, responses were facilitated and the P1 component was enhanced, likely reflecting spatial priming of the target. These two effects are combined in the response times, leading to a reduction in the IOR effect for repeated target locations. Using ERPs, however, these two opposing effects can be isolated in time, demonstrating that the inhibitory biasing of attention still occurs even when response-time slowing is ameliorated by spatial priming. © 2017 Society for Psychophysiological Research.
Top-down alpha oscillatory network interactions during visuospatial attention orienting.
Doesburg, Sam M; Bedo, Nicolas; Ward, Lawrence M
2016-05-15
Neuroimaging and lesion studies indicate that visual attention is controlled by a distributed network of brain areas. The covert control of visuospatial attention has also been associated with retinotopic modulation of alpha-band oscillations within early visual cortex, which are thought to underlie inhibition of ignored areas of visual space. The relation between distributed networks mediating attention control and more focal oscillatory mechanisms, however, remains unclear. The present study evaluated the hypothesis that alpha-band, directed, network interactions within the attention control network are systematically modulated by the locus of visuospatial attention. We localized brain areas involved in visuospatial attention orienting using magnetoencephalographic (MEG) imaging and investigated alpha-band Granger-causal interactions among activated regions using narrow-band transfer entropy. The deployment of attention to one side of visual space was indexed by lateralization of alpha power changes between about 400 and 700 ms post-cue onset. The changes in alpha power were associated, in the same time period, with lateralization of anterior-to-posterior information flow in the alpha band from various brain areas involved in attention control, including the anterior cingulate cortex, left middle and inferior frontal gyri, left superior temporal gyrus, right insula, and inferior parietal lobule, to early visual areas. We interpret these results as indicating that distributed network interactions mediated by alpha oscillations exert top-down influences on early visual cortex to modulate inhibition of processing for ignored areas of visual space. Copyright © 2016. Published by Elsevier Inc.
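The alpha-power lateralization index used to track the locus of attention in studies like this one can be sketched as follows (assumed filter order, band limits, window, and channel pairing; not this study's MEG pipeline): band-pass each channel around the alpha band, take the Hilbert envelope as instantaneous power, average over the post-cue window, and contrast the hemispheres.

```python
# Illustrative alpha-power lateralization index for a pair of channels.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power(signal, fs, band=(8.0, 12.0)):
    """Instantaneous alpha-band power via band-pass filter + Hilbert envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    analytic = hilbert(filtfilt(b, a, signal))
    return np.abs(analytic) ** 2

def lateralization_index(left_chan, right_chan, fs, t_start=0.4, t_end=0.7):
    """(left - right) / (left + right) alpha power in the post-cue window."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    pl = alpha_power(left_chan, fs)[i0:i1].mean()
    pr = alpha_power(right_chan, fs)[i0:i1].mean()
    return (pl - pr) / (pl + pr)

# Usage with synthetic 1-s epochs sampled at 250 Hz: stronger alpha on the
# left channel should give a positive index.
fs = 250
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(3)
left = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
right = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
print(lateralization_index(left, right, fs))  # positive
```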
Selective maintenance in visual working memory does not require sustained visual attention.
Hollingworth, Andrew; Maxcey-Richard, Ashleigh M
2013-08-01
In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change-detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention. 2013 APA, all rights reserved
Yadav, Naveen K; Thiagarajan, Preethi; Ciuffreda, Kenneth J
2014-01-01
The purpose of the experiment was to investigate the effect of oculomotor vision rehabilitation (OVR) on the visual-evoked potential (VEP) and visual attention in the mild traumatic brain injury (mTBI) population. Subjects (n = 7) were adults with a history of mTBI. Each received 9 hours of OVR over a 6-week period. The effects of OVR on VEP amplitude and latency, the attention-related alpha band (8-13 Hz) power (µV²), and the clinical Visual Search and Attention Test (VSAT) were assessed before and after the OVR. After the OVR, the VEP amplitude increased and its variability decreased. There was no change in VEP latency, which was normal. Alpha band power increased, as did the VSAT score, following the OVR. The significant changes in most test parameters suggest that OVR affects the visual system at early visuo-cortical levels, as well as other pathways involved in visual attention.
Bressler, David W.; Fortenbaugh, Francesca C.; Robertson, Lynn C.; Silver, Michael A.
2013-01-01
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. PMID:23562388
Marino, Alexandria C.; Mazer, James A.
2016-01-01
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron’s spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed. PMID:26903820
Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T
2016-01-01
Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention, remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Theta amplitude correlated negatively with BOLD activity in cortical regions associated with the default mode network (DMN) and positively in ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, as well as modality-associated processes within fronto-parietal networks related to top-down, theta-mediated cognitive control in cross-modal visual attention. In sensory cortices, on the other hand, there are opposing effects of theta activity during cross-modal auditory attention.
Visual attention spreads broadly but selects information locally.
Shioiri, Satoshi; Honjyo, Hajime; Kashiwase, Yoshiyuki; Matsumiya, Kazumichi; Kuriki, Ichiro
2016-10-19
Visual attention spreads over a range around the focus, as the spotlight metaphor describes. The spatial spread of attentional enhancement and local selection/inhibition are crucial factors determining the profile of spatial attention. Enhancement and ignoring/suppression are opposing effects of attention and appear to be mutually exclusive, yet no unified view of these factors has been provided despite its necessity for understanding the functions of spatial attention. This report provides electroencephalographic and behavioral evidence for attentional spread at an early stage and selection/inhibition at a later stage of visual processing. The steady-state visual evoked potential showed broad spatial tuning, whereas the P3 component of the event-related potential showed local selection or inhibition of the adjacent areas. Based on these results, we propose a two-stage model of spatial attention with broad spread at an early stage and local selection at a later stage.
Stuart, Samuel; Lord, Sue; Galna, Brook; Rochester, Lynn
2018-04-01
Gait impairment is a core feature of Parkinson's disease (PD) with implications for falls risk. Visual cues improve gait in PD, but the underlying mechanisms are unclear. Evidence suggests that attention and vision play an important role; however, the relative contribution from each is unclear. Measurement of visual exploration (specifically saccade frequency) during gait allows for real-time measurement of attention and vision. Understanding how visual cues influence visual exploration may allow inferences about the underlying mechanisms of cue response, which could help in developing effective therapeutics. This study aimed to examine saccade frequency during gait in response to a visual cue in PD and older adults, and to investigate the roles of attention and vision in visual cue response in PD. A mobile eye-tracker measured saccade frequency during gait in 55 people with PD and 32 age-matched controls. Participants walked in a straight line with and without a visual cue (50 cm transverse lines) presented under single-task and dual-task (concurrent digit span recall) conditions. Saccade frequency was reduced when walking in PD compared to controls; however, visual cues ameliorated this saccadic deficit. Visual cues significantly increased saccade frequency in both PD and controls under both single-task and dual-task conditions. Attention rather than visual function was central to saccade frequency and gait response to visual cues in PD. In conclusion, this study highlights the impact of visual cues on visual exploration when walking and the important role of attention in PD. Understanding these complex features will help inform intervention development. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Haptic guidance of overt visual attention.
List, Alexandra; Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru
2014-11-01
Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held (never viewed and out of sight) an item of a specific shape in their hands. In two experiments, we demonstrated that the time for the eyes to reach a target-a measure of overt visual attention-was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention.
Collinearity Impairs Local Element Visual Search
ERIC Educational Resources Information Center
Jingling, Li; Tseng, Chia-Huei
2013-01-01
In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…
Contextual Cueing: Implicit Learning and Memory of Visual Context Guides Spatial Attention.
ERIC Educational Resources Information Center
Chun, Marvin M.; Jiang, Yuhong
1998-01-01
Six experiments involving a total of 112 college students demonstrate that a robust memory for visual context exists to guide spatial attention. Results show how implicit learning and memory of visual context can guide spatial attention toward task-relevant aspects of a scene. (SLD)
Visual Memory for Objects Following Foveal Vision Loss
ERIC Educational Resources Information Center
Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B.; Pollmann, Stefan
2015-01-01
Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual…
Landa, Rebecca J.; Haworth, Joshua L.; Nebel, Mary Beth
2016-01-01
Children with autism spectrum disorder (ASD) demonstrate a host of motor impairments that may share a common developmental basis with ASD core symptoms. School-age children with ASD exhibit particular difficulty with hand-eye coordination and appear to be less sensitive to visual feedback during motor learning. Sensorimotor deficits are observable as early as 6 months of age in children who later develop ASD; yet the interplay of early motor, visual and social skill development in ASD is not well understood. Integration of visual input with motor output is vital for the formation of internal models of action. Such integration is necessary not only to master a wide range of motor skills, but also to imitate and interpret the actions of others. Thus, closer examination of the early development of visual-motor deficits is of critical importance to ASD. In the present study of infants at high risk (HR) and low risk (LR) for ASD, we examined visual-motor coupling, or action anticipation, during a dynamic, interactive ball-rolling activity. We hypothesized that, compared to LR infants, HR infants would display decreased anticipatory response (perception-guided predictive action) to the approaching ball. We also examined visual attention before and during ball rolling to determine whether attention engagement contributed to differences in anticipation. Results showed that LR and HR infants demonstrated context appropriate looking behavior, both before and during the ball’s trajectory toward them. However, HR infants were less likely to exhibit context appropriate anticipatory motor response to the approaching ball (moving their arm/hand to intercept the ball) than LR infants. This finding did not appear to be driven by differences in motor skill between risk groups at 6 months of age and was extended to show an atypical predictive relationship between anticipatory behavior at 6 months and preference for looking at faces compared to objects at age 14 months in the HR group. PMID:27252667
Image Mapping and Visual Attention on the Sensory Ego-Sphere
NASA Technical Reports Server (NTRS)
Fleming, Katherine Achim; Peters, Richard Alan, II
2012-01-01
The Sensory Ego-Sphere (SES) is a short-term memory for a robot in the form of an egocentric, tessellated, spherical, sensory-motor map of the robot's locale. Visual attention enables fast alignment of overlapping images without warping or position optimization, since an attentional point (AP) on the composite typically corresponds to one on each of the collocated regions in the images. Such alignment speeds analysis of the multiple images of the area. Compositing and attention were performed in two ways and compared: (1) APs were computed directly on the composite and not on the full-resolution images until the time of retrieval; and (2) the attentional operator was applied to all incoming imagery. It was found that although the second method was slower, it produced consistent and, thereby, more useful APs. The SES is an integral part of a control system that will enable a robot to learn new behaviors based on its previous experiences, and that will enable it to recombine its known behaviors in such a way as to solve related, but novel, task problems with apparent creativity. The approach is to combine sensory-motor data association and dimensionality reduction to learn navigation and manipulation tasks as sequences of basic behaviors that can be implemented with a small set of closed-loop controllers. Over time, the aggregate of behaviors and their transition probabilities form a stochastic network. Then given a task, the robot finds a path in the network that leads from its current state to the goal. The SES provides a short-term memory for the cognitive functions of the robot, association of sensory and motor data via spatio-temporal coincidence, direction of the attention of the robot, navigation through spatial localization with respect to known or discovered landmarks, and structured data sharing between the robot and human team members, the individuals in multi-robot teams, or with a C3 center.
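To make the tessellated-sphere idea concrete, the following is a minimal, hypothetical sketch of such a short-term memory in Python; the class name, method names, and trivial octahedral tessellation are illustrative assumptions, not the actual SES implementation. Sensory events are posted to, and retrieved from, the tessellation vertex nearest their egocentric direction.

    import time

    class SensoryEgoSphere:
        """Illustrative egocentric short-term memory: time-stamped sensory
        data are stored at the tessellation vertex nearest their direction."""

        def __init__(self, vertices):
            # vertices: unit vectors (x, y, z) from a sphere tessellation
            self.vertices = vertices
            self.memory = {i: [] for i in range(len(vertices))}

        def _nearest_vertex(self, direction):
            # highest dot product = smallest angular distance on the unit sphere
            return max(range(len(self.vertices)),
                       key=lambda i: sum(a * b for a, b in zip(self.vertices[i], direction)))

        def post(self, direction, datum):
            """Store a time-stamped sensory/motor datum at the nearest vertex."""
            self.memory[self._nearest_vertex(direction)].append((time.time(), datum))

        def retrieve(self, direction):
            """Return data registered near a given egocentric direction."""
            return self.memory[self._nearest_vertex(direction)]

    # Usage with a trivial 6-vertex (octahedral) tessellation:
    octahedron = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    ses = SensoryEgoSphere(octahedron)
    ses.post((0.9, 0.1, 0.0), {"camera": "image_patch_17"})
    print(ses.retrieve((1.0, 0.0, 0.0)))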
Attention, Awareness, and the Perception of Auditory Scenes
Snyder, Joel S.; Gregg, Melissa K.; Weintraub, David M.; Alain, Claude
2011-01-01
Auditory perception and cognition entail both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. Recently, studies have also shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. Research has also led to a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and should also allow scientists to precisely distinguish the effects of different higher-level influences. PMID:22347201
Modality-specificity of Selective Attention Networks.
Stewart, Hannah J; Amitay, Sygal
2015-01-01
To establish the modality specificity and generality of selective attention networks, forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled "general attention." The third component was labeled "auditory attention," as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled "spatial orienting" and "spatial conflict," respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task, all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). These results do not support our a priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific.
Giuliano, Ryan J; Karns, Christina M; Neville, Helen J; Hillyard, Steven A
2014-12-01
A growing body of research suggests that the predictive power of working memory (WM) capacity for measures of intellectual aptitude is due to the ability to control attention and select relevant information. Crucially, attentional mechanisms implicated in controlling access to WM are assumed to be domain-general, yet reports of enhanced attentional abilities in individuals with larger WM capacities are primarily within the visual domain. Here, we directly test the link between WM capacity and early attentional gating across sensory domains, hypothesizing that measures of visual WM capacity should predict an individual's capacity to allocate auditory selective attention. To address this question, auditory ERPs were recorded in a linguistic dichotic listening task, and individual differences in ERP modulations by attention were correlated with estimates of WM capacity obtained in a separate visual change detection task. Auditory selective attention enhanced ERP amplitudes at an early latency (ca. 70-90 msec), with larger P1 components elicited by linguistic probes embedded in an attended narrative. Moreover, this effect was associated with greater individual estimates of visual WM capacity. These findings support the view that domain-general attentional control mechanisms underlie the wide variation of WM capacity across individuals.
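For context, visual WM capacity in change detection tasks is often summarized with Cowan's K. The sketch below is illustrative only; the abstract does not state which estimator this study actually used.

    def cowan_k(hit_rate, false_alarm_rate, set_size):
        """Cowan's K: a common capacity estimate from single-probe change
        detection (illustrative; the study's actual estimator may differ)."""
        return set_size * (hit_rate - false_alarm_rate)

    # Example: 85% hits and 20% false alarms with 4 items gives K = 2.6
    print(cowan_k(0.85, 0.20, 4))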
Griffis, Joseph C.; Elkhetali, Abdurahman S.; Burge, Wesley K.; Chen, Richard H.; Visscher, Kristina M.
2015-01-01
Attention facilitates the processing of task-relevant visual information and suppresses interference from task-irrelevant information. Modulations of neural activity in visual cortex depend on attention, and likely result from signals originating in fronto-parietal and cingulo-opercular regions of cortex. Here, we tested the hypothesis that attentional facilitation of visual processing is accomplished in part by changes in how brain networks involved in attentional control interact with sectors of V1 that represent different retinal eccentricities. We measured the strength of background connectivity between fronto-parietal and cingulo-opercular regions with different eccentricity sectors in V1 using functional MRI data that were collected while participants performed tasks involving attention to either a centrally presented visual stimulus or a simultaneously presented auditory stimulus. We found that when the visual stimulus was attended, background connectivity between V1 and the left frontal eye fields (FEF), left intraparietal sulcus (IPS), and right IPS varied strongly across different eccentricity sectors in V1, such that foveal sectors were more strongly connected than peripheral sectors. This retinotopic gradient was weaker when the visual stimulus was ignored, indicating that it was driven by attentional effects. Greater task-driven differences between foveal and peripheral sectors in background connectivity to these regions were associated with better performance on the visual task and faster response times on correct trials. These findings are consistent with the notion that attention drives the configuration of task-specific functional pathways that enable the prioritized processing of task-relevant visual information, and show that the prioritization of visual information by attentional processes may be encoded in the retinotopic gradient of connectivity between V1 and fronto-parietal regions. PMID:26106320
Pacing Visual Attention: Temporal Structure Effects
1993-06-01
Dissertation, June 1989 - June 1993. The findings indicated that persisting temporal relationships may be an important factor in the external (exogenous) control of visual attention, at least to some extent.
Perception and Attention for Visualization
ERIC Educational Resources Information Center
Haroz, Steve
2013-01-01
This work examines how a better understanding of visual perception and attention can impact visualization design. In a collection of studies, I explore how different levels of the visual system can measurably affect a variety of visualization metrics. The results show that expert preference, user performance, and even computational performance are…
Markant, Julie; Worden, Michael S.; Amso, Dima
2015-01-01
Learning through visual exploration often requires orienting of attention to meaningful information in a cluttered world. Previous work has shown that attention modulates visual cortex activity, with enhanced activity for attended targets and suppressed activity for competing inputs, thus enhancing the visual experience. Here we examined the idea that learning may be engaged differentially with variations in the attention orienting mechanisms that drive eye movements during visual search and exploration. We hypothesized that attention orienting mechanisms that engage suppression of a previously attended location will boost memory encoding of the currently attended target objects to a greater extent than those that involve target enhancement alone. To test this hypothesis we capitalized on the classic spatial cueing task and the inhibition of return (IOR) mechanism (Posner, Rafal, & Choate, 1985; Posner, 1980) to demonstrate that object images encoded in the context of concurrent suppression at a previously attended location were encoded more effectively and remembered better than those encoded without concurrent suppression. Furthermore, fMRI analyses revealed that this memory benefit was driven by attention modulation of visual cortex activity, as increased suppression of the previously attended location in visual cortex during target object encoding predicted better subsequent recognition memory performance. These results suggest that not all attention orienting impacts learning and memory equally. PMID:25701278
Botly, Leigh C P; De Rosa, Eve
2012-10-01
The visual search task established the feature integration theory of attention in humans and measures visuospatial attentional contributions to feature binding. We recently demonstrated that the neuromodulator acetylcholine (ACh), from the nucleus basalis magnocellularis (NBM), supports the attentional processes required for feature binding using a rat digging-based task. Additional research has demonstrated cholinergic contributions from the NBM to visuospatial attention in rats. Here, we combined these lines of evidence and employed visual search in rats to examine whether cortical cholinergic input supports visuospatial attention specifically for feature binding. We trained 18 male Long-Evans rats to perform visual search using touch screen-equipped operant chambers. Sessions comprised Feature Search (no feature binding required) and Conjunctive Search (feature binding required) trials using multiple stimulus set sizes. Following acquisition of visual search, 8 rats received bilateral NBM lesions using 192 IgG-saporin to selectively reduce cholinergic afferentation of the neocortex, which we hypothesized would selectively disrupt the visuospatial attentional processes needed for efficient conjunctive visual search. As expected, relative to sham-lesioned rats, ACh-NBM-lesioned rats took significantly longer to locate the target stimulus on Conjunctive Search, but not Feature Search trials, thus demonstrating that cholinergic contributions to visuospatial attention are important for feature binding in rats.
Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho
2016-01-01
Evidence indicates that individuals with Williams syndrome (WS) exhibit atypical attentional characteristics when viewing faces. However, the dynamics of visual attention captured by faces remain unclear, especially when explicit attentional forces are present. To clarify this, we introduced a visual search paradigm and assessed how the relative strength of visual attention captured by a face and by explicit attentional control changes as search progresses. Participants (WS and controls) searched for a target (butterfly) within an array of distractors, which sometimes contained an upright face. We analyzed reaction time and the location of the first fixation, which reflect the attentional profile at the initial stage, as well as fixation durations, which represent aspects of attention at later stages of visual search. The strength of visual attention captured by faces and of explicit attentional control (toward the butterfly) was characterized by the frequency of first fixations on a face or butterfly and by the duration of face or butterfly fixations. Reaction time was longer in all groups when faces were present, and visual attention was not dominated by faces in any group during the initial stages of the search; however, when faces were present, attention to faces dominated in the WS group during the later search stages. Furthermore, for the WS group, reaction time correlated with eye-movement measures at different stages of searching, such that longer reaction times were associated with longer face fixations, specifically at the initial stage of searching. Moreover, longer reaction times were associated with longer face fixations at the later stages of searching, while shorter reaction times were associated with longer butterfly fixations. The relative strength of attention captured by faces in people with WS is not observed at the initial stage of searching but becomes dominant as the search progresses. Furthermore, although behavioral responses are associated with some aspects of eye movements, they are not as sensitive as eye-movement measurements themselves at detecting atypical attentional characteristics in people with WS.
The Attentional Drift Diffusion Model of Simple Perceptual Decision-Making.
Tavares, Gabriela; Perona, Pietro; Rangel, Antonio
2017-01-01
Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM) provides a good computational description of how attention affects the underlying decision processes. We find evidence for sizable attentional choice biases and that the aDDM provides a reasonable quantitative description of the relationship between fluctuations in visual attention, choices and reaction times. We also find that exogenous manipulations of attention induce choice biases consistent with the predictions of the model.
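For reference, the aDDM's accumulation rule is usually written as follows; this is a sketch of the standard formulation, and the exact parameterization fitted in this study may differ. While the left item is fixated, the relative decision value (RDV) evolves as

    RDV_t = RDV_{t-1} + d \left( r_{\mathrm{left}} - \theta \, r_{\mathrm{right}} \right) + \varepsilon_t

and symmetrically, with the discount applied instead to the left item, while the right item is fixated. A response is made when |RDV_t| first crosses a decision barrier. Here d scales the drift, r_left and r_right are the values (here, perceptual evidence) of the two stimuli, 0 < theta <= 1 is the attentional discount on the unattended item, and epsilon_t is zero-mean Gaussian noise.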
Cognitive Control Network Contributions to Memory-Guided Visual Attention.
Rosen, Maya L; Stern, Chantal E; Michalka, Samantha W; Devaney, Kathryn J; Somers, David C
2016-05-01
Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN (posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus) exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Paneri, Sofia; Gregoriou, Georgia G.
2017-01-01
The ability to select information that is relevant to current behavioral goals is the hallmark of voluntary attention and an essential part of our cognition. Attention tasks are a prime example for studying, at the neuronal level, how task-related information can be selectively processed in the brain while irrelevant information is filtered out. Whereas numerous studies have focused on elucidating the mechanisms of visual attention at the single-neuron and population level in the visual cortices, considerably less work has been devoted to deciphering the distinct contribution of higher-order brain areas, which are known to be critical for the employment of attention. Among these areas, the prefrontal cortex (PFC) has long been considered a source of top-down signals that bias selection in early visual areas in favor of the attended features. Here, we review recent experimental data that support the role of PFC in attention. We examine the existing evidence for functional specialization within PFC and we discuss how long-range interactions between PFC subregions and posterior visual areas may be implemented in the brain and contribute to the attentional modulation of different measures of neural activity in visual cortices. PMID:29033784
Changes in search rate but not in the dynamics of exogenous attention in action videogame players.
Hubert-Wallander, Bjorn; Green, C Shawn; Sugarman, Michael; Bavelier, Daphne
2011-11-01
Many previous studies have shown that the speed of processing in attentionally demanding tasks seems enhanced following habitual action videogame play. However, using one of the diagnostic tasks for efficiency of attentional processing, a visual search task, Castel and collaborators (Castel, Pratt, & Drummond, Acta Psychologica 119:217-230, 2005) reported no difference in visual search rates, instead proposing that action gaming may change response execution time rather than the efficiency of visual selective attention per se. Here we used two hard visual search tasks, one measuring reaction time and the other accuracy, to test whether visual search rate may be changed by action videogame play. We found greater search rates in the gamer group than in the nongamer controls, consistent with increased efficiency in visual selective attention. We then asked how general the change in attentional throughput noted so far in gamers might be by testing whether exogenous attentional cues would lead to a disproportional enhancement in throughput in gamers as compared to nongamers. Interestingly, exogenous cues were found to enhance throughput equivalently between gamers and nongamers, suggesting that not all mechanisms known to enhance throughput are similarly enhanced in action videogamers.
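As an illustration of what "search rate" refers to here, it is typically taken as the slope of the response-time-by-set-size function; the numbers and fitting procedure below are hypothetical, not the study's actual data or analysis.

    import numpy as np

    # Hypothetical mean correct RTs (ms) at each display set size for one observer
    set_sizes = np.array([4, 8, 16, 32])
    mean_rt = np.array([720.0, 905.0, 1230.0, 1910.0])

    # Search rate = slope of the RT x set-size function (ms per item);
    # shallower slopes indicate more efficient visual search.
    slope_ms_per_item, intercept_ms = np.polyfit(set_sizes, mean_rt, deg=1)
    print(f"search rate ~ {slope_ms_per_item:.1f} ms/item, intercept ~ {intercept_ms:.0f} ms")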
Visual selective attention and reading efficiency are related in children.
Casco, C; Tressoldi, P E; Dellantonio, A
1998-09-01
We investigated the relationship between visual selective attention and linguistic performance. Subjects were classified into four categories according to their accuracy in a letter cancellation task involving selective attention. The task consisted of searching for a target letter in a set of background letters, and accuracy was measured as a function of set size. We found that children with the lowest performance in the cancellation task present a significantly slower reading rate and a higher number of visual reading errors than children with the highest performance. Results also show that these groups of searchers present significant differences in a lexical search task, whereas their performance did not differ in lexical decision and syllable control tasks. The relationship between letter search and reading, as well as the finding that poor reader-searchers also perform poorly on lexical search tasks involving selective attention, suggests that the relationship between letter search and reading difficulty may reflect a deficit in a visual selective attention mechanism that is involved in all these tasks. A deficit in visual attention can be linked to the problems that disabled readers present in the function of the magnocellular stream, which culminates in posterior parietal cortex, an area that plays an important role in guiding visual attention.
Eccentricity effects in vision and attention.
Staugaard, Camilla Funch; Petersen, Anders; Vangkilde, Signe
2016-11-01
Stimulus eccentricity affects visual processing in multiple ways. Performance on a visual task is often better when target stimuli are presented near or at the fovea compared to the retinal periphery. For instance, reaction times and error rates are often reported to increase with increasing eccentricity. Such findings have been interpreted as purely visual, reflecting neurophysiological differences between central and peripheral vision, as well as attentional, reflecting a central bias in the allocation of attentional resources. Other findings indicate that in some cases, information from the periphery is preferentially processed. Specifically, it has been suggested that visual processing speed increases with increasing stimulus eccentricity, and that this positive correlation is reduced, but not eliminated, when the amount of cortex activated by a stimulus is kept constant by magnifying peripheral stimuli (Carrasco et al., 2003). In this study, we investigated effects of eccentricity on visual attentional capacity with and without magnification, using computational modeling based on Bundesen's (1990) theory of visual attention. Our results suggest a general decrease in attentional capacity with increasing stimulus eccentricity, irrespective of magnification. We discuss these results in relation to the physiology of the visual system, the use of different paradigms for investigating visual perception across the visual field, and the use of different stimulus materials (e.g. Gabor patches vs. letters). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
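For readers unfamiliar with magnification, one common M-scaling rule (an illustrative form; the exact scaling used by Carrasco et al., 2003 and in this study may differ) enlarges stimuli linearly with eccentricity E so that they activate roughly equal extents of cortex:

    S(E) = S_0 \left( 1 + \frac{E}{E_2} \right)

where S_0 is the foveal stimulus size and E_2 is the eccentricity at which the size must double to keep the activated cortical extent approximately constant.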
Behavioral and Brain Measures of Phasic Alerting Effects on Visual Attention.
Wiegand, Iris; Petersen, Anders; Finke, Kathrin; Bundesen, Claus; Lansner, Jon; Habekost, Thomas
2017-01-01
In the present study, we investigated effects of phasic alerting on visual attention in a partial report task, in which half of the displays were preceded by an auditory warning cue. Based on the computational Theory of Visual Attention (TVA), we estimated parameters of spatial and non-spatial aspects of visual attention and measured event-related lateralizations (ERLs) over visual processing areas. We found that the TVA parameter sensory effectiveness a, which is thought to reflect visual processing capacity, significantly increased with phasic alerting. By contrast, the distribution of visual processing resources according to task relevance and spatial position, as quantified in the parameters top-down control α and spatial bias w_index, was not modulated by phasic alerting. On the electrophysiological level, the latencies of ERLs in response to the task displays were reduced following the warning cue. These results suggest that phasic alerting facilitates visual processing in a general, unselective manner and that this effect originates in early stages of visual information processing.
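For context, TVA's core rate equation, on which such parameter estimates are based, states that the rate at which the categorization "object x belongs to category i" is encoded into visual short-term memory is

    v(x, i) = \eta(x, i) \, \beta_i \, \frac{w_x}{\sum_{z \in S} w_z}

where eta(x, i) is the sensory evidence, beta_i the perceptual decision bias, and the attentional weights w determine the relative share of processing capacity allocated to object x among all objects S in the display. This is the standard formulation from Bundesen's TVA; the specific fitting procedure used in this study may differ.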
Saccade-synchronized rapid attention shifts in macaque visual cortical area MT.
Yao, Tao; Treue, Stefan; Krishna, B Suresh
2018-03-06
While making saccadic eye movements to scan a visual scene, humans and monkeys are able to keep track of relevant visual stimuli by maintaining spatial attention on them. This ability requires a shift of attentional modulation from the neuronal population representing the relevant stimulus pre-saccadically to the one representing it post-saccadically. For optimal performance, this trans-saccadic attention shift should be rapid and saccade-synchronized. Whether this is so is not known. We trained two rhesus monkeys to make saccades while maintaining covert attention at a fixed spatial location. We show that the trans-saccadic attention shift in the cortical visual middle temporal (MT) area is well synchronized to saccades. Attentional modulation crosses over from the pre-saccadic to the post-saccadic neuronal representation by about 50 ms after a saccade. Taking response latency into account, the trans-saccadic attention shift is well timed to maintain spatial attention on relevant stimuli, so that they can be optimally tracked and processed across saccades.
Stimulus-driven changes in the direction of neural priming during visual word recognition.
Pas, Maciej; Nakamura, Kimihiro; Sawamoto, Nobukatsu; Aso, Toshihiko; Fukuyama, Hidenao
2016-01-15
Visual object recognition is generally known to be facilitated when targets are preceded by the same or relevant stimuli. For written words, however, the beneficial effect of priming can be reversed when primes and targets share initial syllables (e.g., "boca" and "bono"). Using fMRI, the present study explored neuroanatomical correlates of this negative syllabic priming. In each trial, participants made a semantic judgment about a centrally presented target, which was preceded by a masked prime flashed either to the left or right visual field. We observed that the inhibitory priming during reading was associated with a left-lateralized effect of repetition enhancement in the inferior frontal gyrus (IFG), rather than repetition suppression in the ventral visual region previously associated with facilitatory behavioral priming. We further performed a second fMRI experiment using a classical whole-word repetition priming paradigm with the same hemifield procedure and task instruction, and obtained well-known effects of repetition suppression in the left occipito-temporal cortex. These results therefore suggest that the left IFG constitutes a fast word processing system distinct from the posterior visual word-form system and that the directions of repetition effects can change with intrinsic properties of stimuli even when participants' cognitive and attentional states are kept constant. Copyright © 2015 Elsevier Inc. All rights reserved.
Feature-selective attention: evidence for a decline in old age.
Quigley, Cliodhna; Andersen, Søren K; Schulze, Lars; Grunwald, Martin; Müller, Matthias M
2010-04-19
Although attention in older adults is an active research area, feature-selective aspects have not yet been explicitly studied. Here we report the results of an exploratory study involving directed changes in feature-selective attention. The stimuli used were two random dot kinematograms (RDKs) of different colours, superimposed and centrally presented. A colour cue with random onset after the beginning of each trial instructed young and older subjects to attend to one of the RDKs and detect short intervals of coherent motion while ignoring analogous motion events in the non-cued RDK. Behavioural data show that older adults could detect motion, but discriminated target from distracter motion less reliably than young adults. The method of frequency tagging allowed us to separate the EEG responses to the attended and ignored stimuli and directly compare steady-state visual evoked potential (SSVEP) amplitudes elicited by each stimulus before and after cue onset. We found that younger adults show a clear attentional enhancement of SSVEP amplitude in the post-cue interval, while older adults' SSVEP responses to attended and ignored stimuli do not differ. Thus, in situations where attentional selection cannot be spatially resolved, older adults show a deficit in selection that is not shared by young adults. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
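As a rough illustration of the frequency-tagging logic, the response to each RDK can be read out at its own tagging frequency in the EEG spectrum. The sketch below uses simulated data and a plain FFT; the actual preprocessing, tagging frequencies, and spectral estimation used in this study may differ.

    import numpy as np

    def ssvep_amplitude(eeg, srate, tag_hz):
        """Amplitude of the steady-state response at a tagging frequency,
        taken from the FFT of a single-channel epoch (illustrative only)."""
        spectrum = 2.0 * np.abs(np.fft.rfft(eeg)) / len(eeg)
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / srate)
        return spectrum[np.argmin(np.abs(freqs - tag_hz))]

    # Simulated 2 s epoch: two superimposed stimuli tagged at 10 Hz and 12 Hz, plus noise
    srate = 500.0
    t = np.arange(0, 2.0, 1.0 / srate)
    epoch = 2.0 * np.sin(2 * np.pi * 10 * t) + 1.0 * np.sin(2 * np.pi * 12 * t) \
            + 0.5 * np.random.randn(t.size)
    print(ssvep_amplitude(epoch, srate, 10.0), ssvep_amplitude(epoch, srate, 12.0))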
Holographic data visualization: using synthetic full-parallax holography to share information
NASA Astrophysics Data System (ADS)
Dalenius, Tove N.; Rees, Simon; Richardson, Martin
2017-03-01
This investigation explores representing information through data visualization using the medium of holography. It is an exploration from the perspective of a creative practitioner deploying a transdisciplinary approach. The task of visualizing and making use of data and "big data" has been the focus of a large number of research projects during the opening of this century. As the amount of data that can be gathered has increased over a short time, our ability to comprehend and extract meaning from the numbers has come into focus. This project looks at the possibility of employing three-dimensional imaging using holography to visualize data and additional information. To explore the viability of the concept, this project set out to transform the visualization of calculated energy and fluid flow data to a holographic medium. A Computational Fluid Dynamics (CFD) model of flow around a vehicle and a model of solar irradiation on a building were chosen to investigate the process. As no pre-existing software is available to directly transform the data into a compatible format, the team worked collaboratively and transdisciplinarily to achieve an accurate conversion from the format of the calculation and visualization tools to a configuration suitable for synthetic holography production. The project also investigates ideas for layout and design suitable for holographic visualization of energy data. Two completed holograms will be presented. Future possibilities for developing the concept of Holographic Data Visualization are briefly deliberated upon.
Shifting Attention within Memory Representations Involves Early Visual Areas
Munneke, Jaap; Belopolsky, Artem V.; Theeuwes, Jan
2012-01-01
Prior studies have shown that spatial attention modulates early visual cortex retinotopically, resulting in enhanced processing of external perceptual representations. However, it is not clear whether the same visual areas are modulated when attention is focused on, and shifted within a working memory representation. In the current fMRI study participants were asked to memorize an array containing four stimuli. After a delay, participants were presented with a verbal cue instructing them to actively maintain the location of one of the stimuli in working memory. Additionally, on a number of trials a second verbal cue instructed participants to switch attention to the location of another stimulus within the memorized representation. Results of the study showed that changes in the BOLD pattern closely followed the locus of attention within the working memory representation. A decrease in BOLD-activity (V1–V3) was observed at ROIs coding a memory location when participants switched away from this location, whereas an increase was observed when participants switched towards this location. Continuous increased activity was obtained at the memorized location when participants did not switch. This study shows that shifting attention within memory representations activates the earliest parts of visual cortex (including V1) in a retinotopic fashion. We conclude that even in the absence of visual stimulation, early visual areas support shifting of attention within memorized representations, similar to when attention is shifted in the outside world. The relationship between visual working memory and visual mental imagery is discussed in light of the current findings. PMID:22558165
ERIC Educational Resources Information Center
Hart, Verna; Ferrell, Kay
Twenty-four congenitally visually handicapped infants, aged 6-24 months, participated in a study to determine (1) those stimuli best able to elicit visual attention, (2) the stability of visual acuity over time, and (3) the effects of binaural sensory aids on both visual attention and visual acuity. Ss were dichotomized into visually handicapped…
Cheng, Yufang; Huang, Ruowen
2012-01-01
The focus of this study is using a data glove to practice joint attention skills in a virtual reality environment for people with pervasive developmental disorder (PDD). The virtual reality environment provides a safe setting for people with PDD; in particular, when they make errors during practice, there are no distressing or dangerous consequences to deal with. Joint attention is a critical skill among the disorder characteristics of children with PDD, and its absence is a deficit that frequently affects their social relationships in daily life. Therefore, this study designed the Joint Attention Skills Learning (JASL) system with a data glove tool to help children with PDD practice joint attention behavior skills. The JASL specifically targets the skills of pointing, showing, sharing things, and behavioral interaction with other children with PDD. The system is designed as a playroom scene and presented from a first-person perspective for users. The functions include pointing and showing, moving virtual objects, 3D animation, text, spoken sounds, and feedback. The study employed a single-subject multiple-probe design across subjects, with analysis by visual inspection. The experimental phase took 3 months to complete. Surprisingly, the results reveal that the participants further extended their improved joint attention skills into daily life after using the JASL system. The significant potential of this particular treatment of joint attention for each participant is discussed in detail in this paper. Copyright © 2012 Elsevier Ltd. All rights reserved.
Joint attention and oromotor abilities in young children with and without autism spectrum disorder.
Dalton, Jennifer C; Crais, Elizabeth R; Velleman, Shelley L
2017-09-01
This study examined the relationship between joint attention ability and oromotor imitation skill in three groups of young children with and without Autism Spectrum Disorder, using both nonverbal oral and verbal motor imitation tasks. Research questions addressed a) differences among joint attention and oromotor imitation abilities; b) the relationship between independently measured joint attention and oromotor imitation, both nonverbal oral and verbal motor; c) the relationships between joint attention and verbal motor imitation during interpersonal interaction; and d) the relationship between the sensory input demands (auditory, visual, and tactile) and oromotor imitation, both nonverbal oral and verbal motor. A descriptive, nonexperimental design was used to compare joint attention and oromotor skills of 10 preschool-aged children with ASD with those of two control groups: 6 typically developing (TD) children and 6 children with suspected Childhood Apraxia of Speech (sCAS) or apraxic-like symptoms. All children had at least a 3.0 mean length of utterance. Children with ASD had poorer joint attention skills overall than children with sCAS or typically developing children. Typically developing children demonstrated higher verbal motor imitation skills overall compared to children with sCAS. Correlational analyses revealed that nonverbal oral imitation and verbal motor imitation were positively related to joint attention abilities only in the children with ASD. Strong positive relationships between joint attention in a naturalistic context (e.g., shared story experience) and oromotor imitation skills, both nonverbal oral and verbal motor, were found only for children with ASD. These data suggest there is a strong positive relationship between joint attention skills and the ability to sequence nonverbal oral and verbal motor movements in children with ASD. The combined sensory input approach involving auditory, visual, and tactile modalities contributed to significantly higher nonverbal oral and verbal motor imitation performance for all groups of children. Verbal children with ASD in this study had difficulties with both the social and cognitive demands of oromotor imitation within a natural environment that demanded cross-modal processing of incoming stimuli within an interpersonal interaction. Further, joint attention and oral praxis may serve as components of an important coupling mechanism in the development of spoken communication and later-developing social-cognitive skills. Copyright © 2017 Elsevier Inc. All rights reserved.
Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio
2016-05-15
When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of thought content on sensory attenuation are still unknown. The present study aims to assess whether the representational format of thoughts, such as visual imagery or inner speech, might differentially affect the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three different conditions: executing a task on the visual probe (externally oriented attention), and two conditions involving inward-turned attention, i.e., generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to the sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When the two representational formats were compared, the visual imagery condition showed stronger attenuation of sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in an internal thought compared with the external task, with visual imagery showing even more alpha power than the inner speech condition. Our results show, for the first time to our knowledge, that visual attentional processing of external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thoughts. Copyright © 2016 Elsevier Inc. All rights reserved.
Finke, Kathrin; Neitzel, Julia; Bäuml, Josef G; Redel, Petra; Müller, Hermann J; Meng, Chun; Jaekel, Julia; Daamen, Marcel; Scheef, Lukas; Busch, Barbara; Baumann, Nicole; Boecker, Henning; Bartmann, Peter; Habekost, Thomas; Wolke, Dieter; Wohlschläger, Afra; Sorg, Christian
2015-02-15
Although pronounced and lasting deficits in selective attention have been observed in preterm-born individuals, it is unknown which specific attentional sub-mechanisms are affected and how they relate to brain networks. We used the computationally specified 'Theory of Visual Attention' together with whole- and partial-report paradigms to compare attentional sub-mechanisms of preterm (n=33) and full-term (n=32) born adults. Resting-state fMRI was used to evaluate both between-group differences and inter-individual variance in changed functional connectivity of intrinsic brain networks relevant for visual attention. In preterm-born adults, we found specific impairments of visual short-term memory (vSTM) storage capacity, while other sub-mechanisms such as processing speed or attentional weighting were unchanged. Furthermore, changed functional connectivity was found in unimodal visual and supramodal attention-related intrinsic networks. Among preterm-born adults, the individual pattern of changed connectivity in occipital and parietal cortices was systematically associated with vSTM, in such a way that the more distinct the connectivity differences, the better the preterm adults' storage capacity. These findings provide first evidence for selectively changed attentional sub-mechanisms in preterm-born adults and their relation to altered intrinsic brain networks. In particular, the data suggest that cortical changes in intrinsic functional connectivity may compensate for adverse developmental consequences of prematurity on visual short-term storage capacity. Copyright © 2014 Elsevier Inc. All rights reserved.
Spatial Attention Reduces Burstiness in Macaque Visual Cortical Area MST.
Xue, Cheng; Kaping, Daniel; Ray, Sonia Baloni; Krishna, B Suresh; Treue, Stefan
2017-01-01
Visual attention modulates the firing rate of neurons in many primate cortical areas. In V4, a cortical area in the ventral visual pathway, spatial attention has also been shown to reduce the tendency of neurons to fire closely separated spikes (burstiness). A recent model proposes that a single mechanism accounts for both the firing rate enhancement and the burstiness reduction in V4, but this has not been empirically tested. It is also unclear if the burstiness reduction by spatial attention is found in other visual areas and for other attentional types. We therefore recorded from single neurons in the medial superior temporal area (MST), a key motion-processing area along the dorsal visual pathway, of two rhesus monkeys while they performed a task engaging both spatial and feature-based attention. We show that in MST, spatial attention is associated with a clear reduction in burstiness that is independent of the concurrent enhancement of firing rate. In contrast, feature-based attention enhances firing rate but is not associated with a significant reduction in burstiness. These results establish burstiness reduction as a widespread effect of spatial attention. They also suggest that in contrast to the recently proposed model, the effects of spatial attention on burstiness and firing rate emerge from different mechanisms. © The Author 2016. Published by Oxford University Press.
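As an illustration of what a burstiness measure can look like, one simple index is the fraction of inter-spike intervals shorter than a few milliseconds. This is a hypothetical sketch only; the metric actually used in this study is defined in the paper and may differ.

    import numpy as np

    def burst_fraction(spike_times_ms, isi_threshold_ms=5.0):
        """Fraction of inter-spike intervals shorter than a threshold:
        one simple burstiness index (illustrative; not the paper's metric)."""
        isis = np.diff(np.sort(np.asarray(spike_times_ms, dtype=float)))
        return float(np.mean(isis < isi_threshold_ms)) if isis.size else 0.0

    # Hypothetical spike train with one closely spaced doublet at 103/105 ms
    print(burst_fraction([10.0, 40.0, 103.0, 105.0, 160.0]))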
Attentional load modulates responses of human primary visual cortex to invisible stimuli.
Bahrami, Bahador; Lavie, Nilli; Rees, Geraint
2007-03-20
Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.
Are videogame training gains specific or general?
Oei, Adam C; Patterson, Michael D
2014-01-01
Many recent studies using healthy adults document enhancements in perception and cognition from playing commercial action videogames (AVGs). Playing action games (e.g., Call of Duty, Medal of Honor) is associated with improved bottom-up, lower-level information processing, such as visual-perceptual and attentional processes. One proposal posits a general improvement in the ability to interpret and gather statistical information to predict future actions, which then leads to better performance across different perceptual/attentional tasks. Another proposal claims that the tasks are trained separately because the AVGs and laboratory tasks contain similar demands. We review studies of action and non-AVGs to show support for the latter proposal. To explain transfer in AVGs, we argue that the perceptual and attention tasks share common demands with the trained videogames (e.g., multiple object tracking (MOT), rapid attentional switches, and peripheral vision). In non-AVGs, several studies also demonstrate specific, limited transfer. One instance of specific transfer is the enhancement of mental rotation after training in games with a spatial emphasis (e.g., Tetris). In contrast, the evidence for transfer is equivocal where the game and task do not share common demands (e.g., executive functioning). Thus, the "common demands" hypothesis of transfer characterizes transfer effects not only in AVGs but also in non-action games. Furthermore, such a theory provides specific predictions, which can help in the selection of games to train human cognition as well as in the design of videogames purposed for human cognitive and perceptual enhancement. Finally, this hypothesis is consistent with the cognitive training literature, where most post-training gains are for tasks similar to the training rather than general, non-specific improvements.
Prefrontal contributions to visual selective attention.
Squire, Ryan F; Noudoost, Behrad; Schafer, Robert J; Moore, Tirin
2013-07-08
The faculty of attention endows us with the capacity to process important sensory information selectively while disregarding information that is potentially distracting. Much of our understanding of the neural circuitry underlying this fundamental cognitive function comes from neurophysiological studies within the visual modality. Past evidence suggests that a principal function of the prefrontal cortex (PFC) is selective attention and that this function involves the modulation of sensory signals within posterior cortices. In this review, we discuss recent progress in identifying the specific prefrontal circuits controlling visual attention and its neural correlates within the primate visual system. In addition, we examine the persisting challenge of precisely defining how behavior should be affected when attentional function is lost.
Geldof, Christiaan J A; van Hus, Janeline W P; Jeukens-Visser, Martine; Nollet, Frans; Kok, Joke H; Oosterlaan, Jaap; van Wassenaer-Leemhuis, Aleid G
2016-01-01
To extend understanding of impaired motor functioning of very preterm (VP)/very low birth weight (VLBW) children by investigating its relationship with visual attention, visual and visual-motor functioning. Motor functioning (Movement Assessment Battery for Children, MABC-2; Manual Dexterity, Aiming & Catching, and Balance component), as well as visual attention (attention network and visual search tests), vision (oculomotor, visual sensory and perceptive functioning), visual-motor integration (Beery Visual Motor Integration), and neurological status (Touwen examination) were comprehensively assessed in a sample of 106 5.5-year-old VP/VLBW children. Stepwise linear regression analyses were conducted to investigate multivariate associations between deficits in visual attention, oculomotor, visual sensory, perceptive and visual-motor integration functioning, abnormal neurological status, neonatal risk factors, and MABC-2 scores. Abnormal MABC-2 Total or component scores occurred in 23-36% of VP/VLBW children. Visual and visual-motor functioning accounted for 9-11% of variance in MABC-2 Total, Manual Dexterity and Balance scores. Visual perceptive deficits only were associated with Aiming & Catching. Abnormal neurological status accounted for an additional 19-30% of variance in MABC-2 Total, Manual Dexterity and Balance scores, and 5% of variance in Aiming & Catching, and neonatal risk factors for 3-6% of variance in MABC-2 Total, Manual Dexterity and Balance scores. Motor functioning is weakly associated with visual and visual-motor integration deficits and moderately associated with abnormal neurological status, indicating that motor performance reflects long term vulnerability following very preterm birth, and that visual deficits are of minor importance in understanding motor functioning of VP/VLBW children. Copyright © 2016 Elsevier Ltd. All rights reserved.
Functional size of human visual area V1: a neural correlate of top-down attention.
Verghese, Ashika; Kolbe, Scott C; Anderson, Andrew J; Egan, Gary F; Vidyasagar, Trichur R
2014-06-01
Heavy demands are placed on the brain's attentional capacity when selecting a target item in a cluttered visual scene, or when reading. It is widely accepted that such attentional selection is mediated by top-down signals from higher cortical areas to early visual areas such as the primary visual cortex (V1). Further, it has also been reported that there is considerable variation in the surface area of V1. This variation may impact either the number or the specificity of attentional feedback signals and, thereby, the efficiency of attentional mechanisms. In this study, we investigated whether individual differences between humans performing attention-demanding tasks can be related to the functional area of V1. We found that those with a larger representation in V1 of the central 12° of the visual field, as measured using BOLD signals from fMRI, were able to perform a serial search task at a faster rate. In line with recent suggestions of the vital role of visuo-spatial attention in reading, the speed of reading showed a strong positive correlation with the speed of visual search, although it showed little correlation with the size of V1. The results support the idea that the functional size of the primary visual cortex is an important determinant of the efficiency of selective spatial attention for simple tasks, and that the attentional processing required for complex tasks like reading is to a large extent determined by other brain areas and inter-areal connections. Copyright © 2014 Elsevier Inc. All rights reserved.
Visual short-term memory always requires general attention.
Morey, Candice C; Bieler, Malte
2013-02-01
The role of attention in visual memory remains controversial; while some evidence has suggested that memory for binding between features demands no more attention than does memory for the same features, other evidence has indicated cognitive costs or mnemonic benefits for explicitly attending to bindings. We attempted to reconcile these findings by examining how memory for binding, for features, and for features during binding is affected by a concurrent attention-demanding task. We demonstrated that performing a concurrent task impairs memory for as few as two visual objects, regardless of whether each object includes one or more features. We argue that this pattern of results reflects an essential role for domain-general attention in visual memory, regardless of the simplicity of the to-be-remembered stimuli. We then discuss the implications of these findings for theories of visual working memory.
Age-equivalent top-down modulation during cross-modal selective attention.
Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam
2014-12-01
Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.
The visual attention span deficit in Chinese children with reading fluency difficulty.
Zhao, Jing; Liu, Menglian; Liu, Hanlong; Huang, Chen
2018-02-01
With reading development, some children fail to learn to read fluently. However, reading fluency difficulty (RFD) has not been fully investigated. The present study explored the underlying mechanism of RFD from the aspect of visual attention span. Fourteen Chinese children with RFD and fourteen age-matched normal readers participated. The visual 1-back task was adopted to examine visual attention span. Reaction time and accuracy were recorded, and relevant d-prime (d') scores were computed. Results showed that children with RFD exhibited lower accuracy and lower d' values than the controls did in the visual 1-back task, revealing a visual attention span deficit. Further analyses on d' values revealed that the attention distribution seemed to exhibit an inverted U-shaped pattern without lateralization for normal readers, but a W-shaped pattern with a rightward bias for children with RFD, which was discussed based on between-group variation in reading strategies. Results of the correlation analyses showed that visual attention span was associated with reading fluency at the sentence level for normal readers, but was related to reading fluency at the single-character level for children with RFD. The different patterns in correlations between groups revealed that visual attention span might be affected by the variation in reading strategies. The current findings extend previous data from alphabetic languages to Chinese, a logographic language with a particularly deep orthography, and have implications for reading-dysfluency remediation. Copyright © 2017 Elsevier Ltd. All rights reserved.
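The d' scores referenced above follow the standard signal-detection definition: sensitivity is the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal Python sketch follows; the clipping of extreme rates is an assumption rather than the authors' stated correction:

    from scipy.stats import norm

    def d_prime(hit_rate, false_alarm_rate, eps=1e-3):
        """d' = z(hit rate) - z(false-alarm rate).

        Rates of exactly 0 or 1 are clipped so the inverse-normal transform
        stays finite; the clipping rule is an illustrative assumption.
        """
        h = min(max(hit_rate, eps), 1 - eps)
        fa = min(max(false_alarm_rate, eps), 1 - eps)
        return norm.ppf(h) - norm.ppf(fa)

    # Example: d_prime(0.82, 0.15) is roughly 1.95.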
Bressler, David W; Fortenbaugh, Francesca C; Robertson, Lynn C; Silver, Michael A
2013-06-07
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. Copyright © 2013 Elsevier Ltd. All rights reserved.
Visual Scan Adaptation During Repeated Visual Search
2010-01-01
[Only fragments of this report's reference list were captured: Junge, J. A. (2004). Searching for stimulus-driven shifts of attention. Psychonomic Bulletin & Review, 11, 876–881; Furst, C. J. (1971…); …search strategies cannot override attentional capture. Psychonomic Bulletin & Review, 11, 65–70; Wolfe, J. M. (1994). Guided search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202–238; Wolfe, J. M. (1998a). Visual search. In H. Pashler (Ed.), Attention (pp. 13–73). East…]
Attentive Tracking Disrupts Feature Binding in Visual Working Memory
Fougnie, Daryl; Marois, René
2009-01-01
One of the most influential theories in visual cognition proposes that attention is necessary to bind different visual features into coherent object percepts (Treisman & Gelade, 1980). While considerable evidence supports a role for attention in perceptual feature binding, whether attention plays a similar function in visual working memory (VWM) remains controversial. To test the attentional requirements of VWM feature binding, here we gave participants an attention-demanding multiple object tracking task during the retention interval of a VWM task. Results show that the tracking task disrupted memory for color-shape conjunctions above and beyond any impairment to working memory for object features, and that this impairment was larger when the VWM stimuli were presented at different spatial locations. These results demonstrate that the role of visuospatial attention in feature binding is not unique to perception, but extends to the working memory of these perceptual representations as well. PMID:19609460
Attentional modulation of cell-class specific gamma-band synchronization in awake monkey area V4
Vinck, Martin; Womelsdorf, Thilo; Buffalo, Elizabeth A.; Desimone, Robert; Fries, Pascal
2013-01-01
Selective visual attention is subserved by selective neuronal synchronization, entailing precise orchestration among excitatory and inhibitory cells. We tentatively identified these as broad (BS) and narrow spiking (NS) cells and analyzed their synchronization to the local field potential in two macaque monkeys performing a selective visual attention task. Across cells, gamma phases scattered widely but were unaffected by stimulation or attention. During stimulation, NS cells lagged BS cells on average by ~60° and gamma synchronized twice as strongly. Attention enhanced and reduced the gamma locking of strongly and weakly activated cells, respectively. During a pre-stimulus attentional cue period, BS cells showed weak gamma synchronization, while NS cells gamma synchronized as strongly as with visual stimulation. These analyses reveal the cell-type specific dynamics of the gamma cycle in macaque visual cortex and suggest that attention affects neurons differentially depending on cell type and activation level. PMID:24267656
Kamitani, Toshiaki; Kuroiwa, Yoshiyuki
2009-01-01
Recent studies have demonstrated an altered P3 component and prolonged reaction times during visual discrimination tasks in multiple system atrophy (MSA). In MSA, however, little is known about the N2 component, which is known to be closely related to the visual discrimination process. We therefore compared the N2 component, as well as the N1 and P3 components, of 17 MSA patients with those of 10 normal controls, using a visual selective attention task to color or to shape. While the P3 in MSA was significantly delayed during selective attention to shape, the N2 in MSA was significantly delayed during selective attention to color. N1 was normally preserved both in attention to color and in attention to shape. Our electrophysiological results indicate that the color discrimination process during selective attention is impaired in MSA.
'What' and 'Where' in Visual Attention: Evidence from the Neglect Syndrome
1992-01-01
[Only fragments of this report's text and reference list were captured: …representations of the visual world, visual attention, and object representations; Bauer, R. M., & Rubens, A. B. (1985). Agnosia. In K. M. Heilman & E…; …visual information. Journal of Experimental Psychology: General, 1-1, 501–517; Farah, M. J. (1990). Visual Agnosia: Disorders of Object Recognition and…]
Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria
2016-04-01
The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than camera-engaged scenes; however, the differences in their responses to these scenes were not as pronounced as those observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.
The Attention Cascade Model and Attentional Blink
ERIC Educational Resources Information Center
Shih, Shui-I
2008-01-01
An attention cascade model is proposed to account for attentional blinks in rapid serial visual presentation (RSVP) of stimuli. Data were collected using single characters in a single RSVP stream at 10 Hz [Shih, S., & Reeves, A. (2007). "Attentional capture in rapid serial visual presentation." "Spatial Vision", 20(4), 301-315], and single words,…
ERIC Educational Resources Information Center
Kirk, Hannah E.; Gray, Kylie; Riby, Deborah M.; Taffe, John; Cornish, Kim M.
2017-01-01
Despite well-documented attention deficits in children with intellectual and developmental disabilities (IDD), distinctions across types of attention problems and their association with academic attainment have not been fully explored. This study examines visual attention capacities and inattentive/hyperactive behaviours in 77 children aged 4 to…
The Attentional Field Revealed by Single-Voxel Modeling of fMRI Time Courses
DeYoe, Edgar A.
2015-01-01
The spatial topography of visual attention is a distinguishing and critical feature of many theoretical models of visuospatial attention. Previous fMRI-based measurements of the topography of attention have typically been too crude to adequately test the predictions of different competing models. This study demonstrates a new technique to make detailed measurements of the topography of visuospatial attention from single-voxel, fMRI time courses. Briefly, this technique involves first estimating a voxel's population receptive field (pRF) and then “drifting” attention through the pRF such that the modulation of the voxel's fMRI time course reflects the spatial topography of attention. The topography of the attentional field (AF) is then estimated using a time-course modeling procedure. Notably, we are able to make these measurements in many visual areas including smaller, higher order areas, thus enabling a more comprehensive comparison of attentional mechanisms throughout the full hierarchy of human visual cortex. Using this technique, we show that the AF scales with eccentricity and varies across visual areas. We also show that voxels in multiple visual areas exhibit suppressive attentional effects that are well modeled by an AF having an enhancing Gaussian center with a suppressive surround. These findings provide extensive, quantitative neurophysiological data for use in modeling the psychological effects of visuospatial attention. PMID:25810532
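To make the modeling logic concrete, the approach described above can be caricatured as a voxel whose Gaussian population receptive field (pRF) is gain-modulated by an attentional field (AF) with an enhancing center and a suppressive surround; drifting attention across positions then traces the AF out in the voxel's time course. The sketch below is a simplified one-dimensional illustration with assumed parameter values, not the authors' estimation pipeline:

    import numpy as np

    def gaussian(x, center, sigma):
        return np.exp(-0.5 * ((x - center) / sigma) ** 2)

    def voxel_response(stim_pos, prf_center, prf_sigma, attn_pos,
                       af_sigma_center=2.0, af_sigma_surround=6.0,
                       af_gain_center=1.0, af_gain_surround=0.5):
        """Response of one voxel when attention is directed to attn_pos.

        The attentional field is modeled as a Gaussian center minus a broader
        suppressive Gaussian surround; all parameter values are illustrative.
        """
        prf = gaussian(stim_pos, prf_center, prf_sigma)
        af = (1.0
              + af_gain_center * gaussian(prf_center, attn_pos, af_sigma_center)
              - af_gain_surround * gaussian(prf_center, attn_pos, af_sigma_surround))
        return af * prf

    # "Drifting" attention through the pRF while the stimulus stays fixed:
    attended_positions = np.linspace(-10.0, 10.0, 41)
    timecourse = [voxel_response(0.0, 0.0, 1.5, a) for a in attended_positions]

Fitting a parameterized AF to such time courses, voxel by voxel and area by area, is the general idea behind estimating how the AF scales with eccentricity and differs across visual areas.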
Bottom-up and top-down attentional contributions to the size congruity effect.
Sobel, Kenith V; Puri, Amrita M; Faulkenberry, Thomas J
2016-07-01
The size congruity effect refers to the interaction between the numerical and physical (i.e., font) sizes of digits in a numerical (or physical) magnitude selection task. Although various accounts of the size congruity effect have attributed this interaction to either an early representational stage or a late decision stage, only Risko, Maloney, and Fugelsang (Attention, Perception, & Psychophysics, 75, 1137-1147, 2013) have asserted a central role for attention. In the present study, we used a visual search paradigm to further study the role of attention in the size congruity effect. In Experiments 1 and 2, we showed that manipulating top-down attention (via the task instructions) had a significant impact on the size congruity effect. The interaction between numerical and physical size was larger for numerical size comparison (Exp. 1) than for physical size comparison (Exp. 2). In the remaining experiments, we boosted the feature salience by using a unique target color (Exp. 3) or by increasing the display density by using three-digit numerals (Exps. 4 and 5). As expected, a color singleton target abolished the size congruity effect. Searching for three-digit targets based on numerical size (Exp. 4) resulted in a large size congruity effect, but search based on physical size (Exp. 5) abolished the effect. Our results reveal a substantial role for top-down attention in the size congruity effect, which we interpreted as support for a shared-decision account.
Modality-specificity of Selective Attention Networks
Stewart, Hannah J.; Amitay, Sygal
2015-01-01
Objective: To establish the modality specificity and generality of selective attention networks. Method: Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. Results: The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled “general attention.” The third component was labeled “auditory attention,” as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled “spatial orienting” and “spatial conflict,” respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and the TAiL attend-location task, all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). Conclusions: These results do not support our a priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting that spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific. PMID:26635709
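For readers unfamiliar with the analysis, an exploratory factor analysis of a participants-by-measures score matrix can be sketched in a few lines of Python. The variable names, the synthetic data, and the choice of a varimax-rotated four-factor solution are assumptions for illustration, not the authors' pipeline:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.preprocessing import StandardScaler

    # Hypothetical stand-in for 48 participants x 10 attention measures.
    rng = np.random.default_rng(0)
    scores = rng.normal(size=(48, 10))

    z = StandardScaler().fit_transform(scores)
    fa = FactorAnalysis(n_components=4, rotation="varimax")  # rotation needs scikit-learn >= 0.24
    fa.fit(z)
    loadings = fa.components_.T   # measures x factors loading matrix
    print(np.round(loadings, 2))

Inspecting which measures load on which factor is what supports labels such as "general attention" or "spatial orienting" in studies of this kind.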
Color impact in visual attention deployment considering emotional images
NASA Astrophysics Data System (ADS)
Chamaret, C.
2012-03-01
Color is a predominant factor in the human visual attention system. Even if it is not sufficient for a global or complete understanding of a scene, it may affect the deployment of visual attention. We propose to study the impact of color, as well as the emotional content of pictures, on the deployment of visual attention. An eye-tracking campaign was conducted in which twenty people viewed half of the database pictures in full color and the other half in greyscale. The eye fixations on color and greyscale images were highly correlated, raising the question of whether such cues should be integrated into the design of visual attention models. Indeed, the predictions of two state-of-the-art computational models show similar results for the two color categories. Similarly, analyses of saccade amplitude and fixation duration versus viewing time did not reveal any significant differences between the two categories. In addition, the spatial coordinates of eye fixations provide an interesting indicator for investigating differences in visual attention deployment over time and fixation number. The second factor, related to emotion categories, shows evidence of inter-category differences between color and grey eye fixations for passive and positive emotion. The particular aspect associated with this category induces a specific behavior, based rather on high frequencies, in which the color components influence the deployment of visual attention.
Visual attention and stability
Mathôt, Sebastiaan; Theeuwes, Jan
2011-01-01
In the present review, we address the relationship between attention and visual stability. Even though with each eye, head and body movement the retinal image changes dramatically, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as complete as introspection would lead us to believe. We attend to only a few items at a time and stability is maintained only for those items. There appear to be two distinct mechanisms underlying visual stability. The first is a passive mechanism: the visual system assumes the world to be stable, unless there is a clear discrepancy between the pre- and post-saccadic image of the region surrounding the saccade target. This is related to the pre-saccadic shift of attention, which allows for an accurate preview of the saccade target. The second is an active mechanism: information about attended objects is remapped within retinotopic maps to compensate for eye movements. The locus of attention itself, which is also characterized by localized retinotopic activity, is remapped as well. We conclude that visual attention is crucial in our perception of a stable world. PMID:21242140
Visualizing Trumps Vision in Training Attention.
Reinhart, Robert M G; McClenahan, Laura J; Woodman, Geoffrey F
2015-07-01
Mental imagery can have powerful training effects on behavior, but how this occurs is not well understood. Here we show that even a single instance of mental imagery can improve attentional selection of a target more effectively than actually practicing visual search. By recording subjects' brain activity, we found that these imagery-induced training effects were due to perceptual attention being more effectively focused on targets following imagined training. Next, we examined the downside of this potent training by changing the target after several trials of training attention with imagery and found that imagined search resulted in more potent interference than actual practice following these target changes. Finally, we found that proactive interference from task-irrelevant elements in the visual displays appears to underlie the superiority of imagined training relative to actual practice. Our findings demonstrate that visual attention mechanisms can be effectively trained to select target objects in the absence of visual input, and this results in more effective control of attention than practicing the task itself. © The Author(s) 2015.
Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis
2014-01-01
Human subjects performed in several behavioral conditions requiring, or not requiring, selective attention to visual stimuli. Specifically, the attentional task was to recognize strings of digits that had been presented visually. A nonlinear version of the stimulus-frequency otoacoustic emission (SFOAE), called the nSFOAE, was collected during the visual presentation of the digits. The segment of the physiological response discussed here occurred during brief silent periods immediately following the SFOAE-evoking stimuli. For all subjects tested, the physiological-noise magnitudes were substantially weaker (less noisy) during the tasks requiring the most visual attention. Effect sizes for the differences were >2.0. Our interpretation is that cortico-olivo influences adjusted the magnitude of efferent activation during the SFOAE-evoking stimulation depending upon the attention task in effect, and then that magnitude of efferent activation persisted throughout the silent period where it also modulated the physiological noise present. Because the results were highly similar to those obtained when the behavioral conditions involved auditory attention, similar mechanisms appear to operate both across modalities and within modalities. Supplementary measurements revealed that the efferent activation was spectrally global, as it was for auditory attention. PMID:24732070
Infants’ Early Visual Attention and Social Engagement as Developmental Precursors to Joint Attention
Salley, Brenda; Sheinkopf, Stephen J.; Neal-Beevers, A. Rebecca; Tenenbaum, Elena J.; Miller-Loncar, Cynthia L.; Tronick, Ed; Lagasse, Linda L.; Shankaran, Seetha; Bada, Henrietta; Bauer, Charles; Whitaker, Toni; Hammond, Jane; Lester, Barry M.
2016-01-01
This study examined infants’ early visual attention (at 1 month of age) and social engagement (4 months) as predictors of their later joint attention (12 and 18 months). The sample (n=325), drawn from the Maternal Lifestyle Study, a longitudinal multicenter project conducted at four centers of the NICHD Neonatal Research Network, included high-risk (cocaine exposed) and matched non-cocaine exposed infants. Hierarchical regressions revealed that infants’ attention orienting at 1 month significantly predicted more frequent initiating joint attention at 12 (but not 18) months of age. Social engagement at 4 months predicted initiating joint attention at 18 months. Results provide the first empirical evidence for the role of visual attention and social engagement behaviors as developmental precursors for later joint attention outcome. PMID:27786527
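Hierarchical regression of the kind described above enters predictors in steps and asks how much variance each step adds. A minimal sketch with statsmodels follows; the variable names, covariates, and synthetic data are assumptions for illustration, not the study's model specification:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical stand-in data (column names are assumptions).
    rng = np.random.default_rng(1)
    n = 325
    df = pd.DataFrame({
        "ija_12mo": rng.normal(size=n),        # initiating joint attention at 12 months
        "orienting_1mo": rng.normal(size=n),   # visual attention orienting at 1 month
        "cocaine_exposed": rng.integers(0, 2, n),
        "site": rng.integers(1, 5, n),
    })

    # Step 1: covariates only; Step 2: add the 1-month orienting predictor.
    m1 = smf.ols("ija_12mo ~ cocaine_exposed + C(site)", data=df).fit()
    m2 = smf.ols("ija_12mo ~ cocaine_exposed + C(site) + orienting_1mo", data=df).fit()

    r2_change = m2.rsquared - m1.rsquared
    f_stat, p_value, df_diff = m2.compare_f_test(m1)  # significance of the added step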
Nicotinic Receptor Gene CHRNA4 Interacts with Processing Load in Attention
Espeseth, Thomas; Sneve, Markus Handal; Rootwelt, Helge; Laeng, Bruno
2010-01-01
Background: Pharmacological studies suggest that cholinergic neurotransmission mediates increases in attentional effort in response to high processing load during attention demanding tasks [1]. Methodology/Principal Findings: In the present study we tested whether individual variation in CHRNA4, a gene coding for a subcomponent in α4β2 nicotinic receptors in the human brain, interacted with processing load in multiple-object tracking (MOT) and visual search (VS). We hypothesized that the impact of genotype would increase with greater processing load in the MOT task. Similarly, we predicted that genotype would influence performance under high but not low load in the VS task. Two hundred and two healthy persons (age range = 39–77, Mean = 57.5, SD = 9.4) performed the MOT task in which twelve identical circular objects moved about the display in an independent and unpredictable manner. Two to six objects were designated as targets and the remaining objects were distracters. The same observers also performed a visual search for a target letter (i.e. X or Z) presented together with five non-targets while ignoring centrally presented distracters (i.e. X, Z, or L). Targets differed from non-targets by a unique feature in the low load condition, whereas they shared features in the high load condition. CHRNA4 genotype interacted with processing load in both tasks. Homozygotes for the T allele (N = 62) had better tracking capacity in the MOT task and identified targets faster in the high load trials of the VS task. Conclusion: The results support the hypothesis that the cholinergic system modulates attentional effort, and that common genetic variation can be used to study the molecular biology of cognition. PMID:21203548
Audition and Visual Attention: The Developmental Trajectory in Deaf and Hearing Populations.
ERIC Educational Resources Information Center
Smith, Linda B.; Quittner, Alexandra L.; Osberger, Mary Joe; Miyamoto, Richard
1998-01-01
Two experiments examined visual attention in 5- to 13-year olds who were hearing or deaf with or without cochlear implants. Findings indicated that visual selective attention changes occurred around 8 years for all groups, with deaf children without cochlear implants performing less well than others. Differences between deaf children with and…
The Impact of Visual-Spatial Attention on Reading and Spelling in Chinese Children
ERIC Educational Resources Information Center
Liu, Duo; Chen, Xi; Wang, Ying
2016-01-01
The present study investigated the associations of visual-spatial attention with word reading fluency and spelling in 92 third grade Hong Kong Chinese children. Word reading fluency was measured with a timed reading task whereas spelling was measured with a dictation task. Results showed that visual-spatial attention was a unique predictor of…
ERIC Educational Resources Information Center
Kaefer, Tanya; Pinkham, Ashley M.; Neuman, Susan B.
2017-01-01
Research (Evans & Saint-Aubin, 2005) suggests systematic patterns in how young children visually attend to storybooks. However, these studies have not addressed whether visual attention is predictive of children's storybook comprehension. In the current study, we used eye-tracking methodology to examine two-year-olds' visual attention while…
Recoding between Two Types of STM Representation Revealed by the Dynamics of Memory Search
ERIC Educational Resources Information Center
Leszczynski, Marcin; Myers, Nicholas E.; Akyurek, Elkan G.; Schubo, Anna
2012-01-01
Visual STM (VSTM) is thought to be related to visual attention in several ways. Attention controls access to VSTM during memory encoding and plays a role in the maintenance of stored information by strengthening memorized content. We investigated the involvement of visual attention in recall from VSTM. In two experiments, we measured…
The Spatial Resolution of Visual Attention.
ERIC Educational Resources Information Center
Intriligator, James; Cavanaugh, Patrick
2001-01-01
Used two tasks to evaluate the grain of visual attention, the minimum spacing at which attention can select individual items. Results for eight adults on a tracking task and five adults on an individuation task show that selection has a coarser grain than visual resolution and suggest that the parietal area is the most likely locus of the…
Rinne, Teemu; Muers, Ross S; Salo, Emma; Slater, Heather; Petkov, Christopher I
2017-06-01
The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio-visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio-visual selective attention modulates the primate brain, identify sources for "lost" attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. © The Author 2017. Published by Oxford University Press.
Buchholz, Judy; Aimola Davies, Anne
2005-02-01
Performance on a covert visual attention task is compared between a group of adults with developmental dyslexia (specifically phonological difficulties) and a group of age- and IQ-matched controls. The group with dyslexia were generally slower to detect validly-cued targets. Costs of shifting attention toward the periphery when the target was invalidly cued were significantly higher for the group with dyslexia, while costs associated with shifts toward the fovea tended to be lower. Higher costs were also shown by the group with dyslexia for up-down shifts of attention in the periphery. A visual field processing difference was found, in that the group with dyslexia showed higher costs associated with shifting attention between objects in the LVF. These findings indicate that these adults with dyslexia have difficulty in both the space-based and the object-based components of covert visual attention, and more specifically with stimuli located in the periphery.
Green, Jessica J; Boehler, Carsten N; Roberts, Kenneth C; Chen, Ling-Chia; Krebs, Ruth M; Song, Allen W; Woldorff, Marty G
2017-08-16
Visual spatial attention has been studied in humans with both electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) individually. However, due to the intrinsic limitations of each of these methods used alone, our understanding of the systems-level mechanisms underlying attentional control remains limited. Here, we examined trial-to-trial covariations of concurrently recorded EEG and fMRI in a cued visual spatial attention task in humans, which allowed delineation of both the generators and modulators of the cue-triggered event-related oscillatory brain activity underlying attentional control function. The fMRI activity in visual cortical regions contralateral to the cued direction of attention covaried positively with occipital gamma-band EEG, consistent with activation of cortical regions representing attended locations in space. In contrast, fMRI activity in ipsilateral visual cortical regions covaried inversely with occipital alpha-band oscillations, consistent with attention-related suppression of the irrelevant hemispace. Moreover, the pulvinar nucleus of the thalamus covaried with both of these spatially specific, attention-related, oscillatory EEG modulations. Because the pulvinar's neuroanatomical geometry makes it unlikely to be a direct generator of the scalp-recorded EEG, these covariational patterns appear to reflect the pulvinar's role as a regulatory control structure, sending spatially specific signals to modulate visual cortex excitability proactively. Together, these combined EEG/fMRI results illuminate the dynamically interacting cortical and subcortical processes underlying spatial attention, providing important insight not realizable using either method alone. SIGNIFICANCE STATEMENT Noninvasive recordings of changes in the brain's blood flow using functional magnetic resonance imaging and electrical activity using electroencephalography in humans have individually shown that shifting attention to a location in space produces spatially specific changes in visual cortex activity in anticipation of a stimulus. The mechanisms controlling these attention-related modulations of sensory cortex, however, are poorly understood. Here, we recorded these two complementary measures of brain activity simultaneously and examined their trial-to-trial covariations to gain insight into these attentional control mechanisms. This multi-methodological approach revealed the attention-related coordination of visual cortex modulation by the subcortical pulvinar nucleus of the thalamus while also disentangling the mechanisms underlying the attentional enhancement of relevant stimulus input and those underlying the concurrent suppression of irrelevant input. Copyright © 2017 the authors 0270-6474/17/377803-08$15.00/0.
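The core analytic move described above, relating trial-to-trial fluctuations in oscillatory EEG power to trial-to-trial fluctuations in fMRI response amplitude, can be sketched generically as follows. The rank correlation and the input format are illustrative assumptions, not the authors' pipeline:

    import numpy as np
    from scipy.stats import spearmanr

    def trialwise_covariation(eeg_band_power, bold_amplitudes):
        """Correlate single-trial EEG band power with single-trial BOLD amplitude.

        eeg_band_power  : (n_trials,) e.g. occipital alpha or gamma power per trial
        bold_amplitudes : (n_trials,) single-trial response estimate for one region
        A rank correlation is used here purely as a robustness choice.
        """
        rho, p = spearmanr(eeg_band_power, bold_amplitudes)
        return rho, p

A positive covariation of contralateral visual-cortex BOLD with gamma power, and a negative covariation of ipsilateral BOLD with alpha power, is the qualitative pattern the study above reports.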
Activity in human visual and parietal cortex reveals object-based attention in working memory.
Peters, Benjamin; Kaiser, Jochen; Rahm, Benjamin; Bledowski, Christoph
2015-02-25
Visual attention enables observers to select behaviorally relevant information based on spatial locations, features, or objects. Attentional selection is not limited to physically present visual information, but can also operate on internal representations maintained in working memory (WM) in service of higher-order cognition. However, only little is known about whether attention to WM contents follows the same principles as attention to sensory stimuli. To address this question, we investigated in humans whether the typically observed effects of object-based attention in perception are also evident for object-based attentional selection of internal object representations in WM. In full accordance with effects in visual perception, the key behavioral and neuronal characteristics of object-based attention were observed in WM. Specifically, we found that reaction times were shorter when shifting attention to memory positions located on the currently attended object compared with equidistant positions on a different object. Furthermore, functional magnetic resonance imaging and multivariate pattern analysis of visuotopic activity in visual (areas V1-V4) and parietal cortex revealed that directing attention to one position of an object held in WM also enhanced brain activation for other positions on the same object, suggesting that attentional selection in WM activates the entire object. This study demonstrated that all characteristic features of object-based attention are present in WM and thus follows the same principles as in perception. Copyright © 2015 the authors 0270-6474/15/353360-10$15.00/0.
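The multivariate pattern analysis mentioned above amounts to asking whether the attended memory position can be read out from visuotopic voxel patterns. Below is a generic cross-validated decoding sketch; the classifier choice and input format are assumptions, not the authors' pipeline:

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def decode_attended_position(voxel_patterns, position_labels, n_folds=5):
        """Cross-validated decoding of the attended position from voxel patterns.

        voxel_patterns  : (n_trials, n_voxels) single-trial activity estimates
        position_labels : (n_trials,) attended memory position on each trial
        """
        clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000))
        scores = cross_val_score(clf, voxel_patterns, position_labels, cv=n_folds)
        return float(scores.mean())

Above-chance readout for unattended positions on the attended object, relative to positions on the other object, would be one way to express the object-based spreading effect in decoding terms.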
Reinke, Karen S.; LaMontagne, Pamela J.; Habib, Reza
2011-01-01
Spatial attention has been argued to be adaptive by enhancing the processing of visual stimuli within the ‘spotlight of attention’. We previously reported that crude threat cues (backward masked fearful faces) facilitate spatial attention through a network of brain regions consisting of the amygdala, anterior cingulate and contralateral visual cortex. However, results from previous functional magnetic resonance imaging (fMRI) dot-probe studies have been inconclusive regarding a fearful face-elicited contralateral modulation of visual targets. Here, we tested the hypothesis that the capture of spatial attention by crude threat cues would facilitate processing of subsequently presented visual stimuli within the masked fearful face-elicited ‘spotlight of attention’ in the contralateral visual cortex. Participants performed a backward masked fearful face dot-probe task while brain activity was measured with fMRI. Masked fearful face left visual field trials enhanced activity for spatially congruent targets in the right superior occipital gyrus, fusiform gyrus and lateral occipital complex, while masked fearful face right visual field trials enhanced activity in the left middle occipital gyrus. These data indicate that crude threat elicited spatial attention enhances the processing of subsequent visual stimuli in contralateral occipital cortex, which may occur by lowering neural activation thresholds in this retinotopic location. PMID:20702500
Giuliano, Ryan J.; Karns, Christina M.; Neville, Helen J.; Hillyard, Steven A.
2015-01-01
A growing body of research suggests that the predictive power of working memory (WM) capacity for measures of intellectual aptitude is due to the ability to control attention and select relevant information. Crucially, attentional mechanisms implicated in controlling access to WM are assumed to be domain-general, yet reports of enhanced attentional abilities in individuals with larger WM capacities are primarily within the visual domain. Here, we directly test the link between WM capacity and early attentional gating across sensory domains, hypothesizing that measures of visual WM capacity should predict an individual’s capacity to allocate auditory selective attention. To address this question, auditory ERPs were recorded in a linguistic dichotic listening task, and individual differences in ERP modulations by attention were correlated with estimates of WM capacity obtained in a separate visual change detection task. Auditory selective attention enhanced ERP amplitudes at an early latency (ca. 70–90 msec), with larger P1 components elicited by linguistic probes embedded in an attended narrative. Moreover, this effect was associated with greater individual estimates of visual WM capacity. These findings support the view that domain-general attentional control mechanisms underlie the wide variation of WM capacity across individuals. PMID:25000526
Visual attention in posterior stroke and relations to alexia.
Petersen, A; Vangkilde, S; Fabricius, C; Iversen, H K; Delfi, T S; Starrfelt, R
2016-11-01
Impaired visual attention is common following strokes in the territory of the middle cerebral artery, particularly in the right hemisphere, while attentional effects of more posterior lesions are less clear. Commonly, such deficits are investigated in relation to specific syndromes like visual agnosia or pure alexia. The aim of this study was to characterize visual processing speed and apprehension span following posterior cerebral artery (PCA) stroke. In addition, the relationship between these attentional parameters and single word reading is investigated, as previous studies have suggested that reduced visual speed and span may explain pure alexia. Eight patients with unilateral PCA strokes (four left hemisphere, four right hemisphere) were selected on the basis of lesion location, rather than the presence of any visual symptoms. Visual attention was characterized by a whole report paradigm allowing for hemifield-specific measurements of processing speed and apprehension span. All patients showed reductions in visual span contralateral to the lesion site, and four patients showed bilateral reductions in visual span despite unilateral lesions (2L; 2R). Six patients showed selective deficits in visual span, though processing speed was unaffected in the same field (ipsi- or contralesionally). Only patients with right hemifield reductions in visual span were impaired in reading, and this could follow either right or left lateralized stroke and was irrespective of visual field impairments. In conclusion, visual span may be affected bilaterally by unilateral PCA-lesions. Reductions in visual span may also be confined to one hemifield, and may be affected in spite of preserved visual processing speed. Furthermore, reduced span in the right visual field seems to be related to reading impairment in this group, regardless of lesion lateralization. Copyright © 2016 Elsevier Ltd. All rights reserved.
Markant, Julie; Worden, Michael S; Amso, Dima
2015-04-01
Learning through visual exploration often requires orienting of attention to meaningful information in a cluttered world. Previous work has shown that attention modulates visual cortex activity, with enhanced activity for attended targets and suppressed activity for competing inputs, thus enhancing the visual experience. Here we examined the idea that learning may be engaged differentially with variations in attention orienting mechanisms that drive eye movements during visual search and exploration. We hypothesized that attention orienting mechanisms that engaged suppression of a previously attended location would boost memory encoding of the currently attended target objects to a greater extent than those that involve target enhancement alone. To test this hypothesis we capitalized on the classic spatial cueing task and the inhibition of return (IOR) mechanism (Posner, 1980; Posner, Rafal, & Choate, 1985) to demonstrate that object images encoded in the context of concurrent suppression at a previously attended location were encoded more effectively and remembered better than those encoded without concurrent suppression. Furthermore, fMRI analyses revealed that this memory benefit was driven by attention modulation of visual cortex activity, as increased suppression of the previously attended location in visual cortex during target object encoding predicted better subsequent recognition memory performance. These results suggest that not all attention orienting impacts learning and memory equally. Copyright © 2015 Elsevier Inc. All rights reserved.
Franceschini, Sandro; Trevisan, Piergiorgio; Ronconi, Luca; Bertoni, Sara; Colmar, Susan; Double, Kit; Facoetti, Andrea; Gori, Simone
2017-07-19
Dyslexia is characterized by difficulties in learning to read, and there is some evidence that action video games (AVG), without any direct phonological or orthographic stimulation, improve reading efficiency in Italian children with dyslexia. However, the cognitive mechanism underlying this improvement and the extent to which the benefits of AVG training would generalize to the deep English orthography remain two critical questions. During reading acquisition, children have to integrate written letters with speech sounds, rapidly shifting their attention from the visual to the auditory modality. In our study, we tested reading skills and phonological working memory, visuo-spatial attention, auditory, visual and audio-visual stimulus localization, and cross-sensory attentional shifting in two matched groups of English-speaking children with dyslexia before and after they played AVG or non-action video games. The speed of word recognition and phonological decoding increased after playing AVG, but not non-action video games. Furthermore, focused visuo-spatial attention and visual-to-auditory attentional shifting also improved only after AVG training. This unconventional reading remediation program also increased phonological short-term memory and phoneme blending skills. Our report shows that an enhancement of visuo-spatial attention and phonological working memory, and an acceleration of visual-to-auditory attentional shifting, can directly translate into better reading in English-speaking children with dyslexia.
Neural Basis of Visual Attentional Orienting in Childhood Autism Spectrum Disorders.
Murphy, Eric R; Norr, Megan; Strang, John F; Kenworthy, Lauren; Gaillard, William D; Vaidya, Chandan J
2017-01-01
We examined spontaneous attention orienting to visual salience in stimuli without social significance using a modified Dot-Probe task during functional magnetic resonance imaging in high-functioning preadolescent children with Autism Spectrum Disorder (ASD) and age- and IQ-matched control children. While the magnitude of attentional bias (faster response to probes in the location of solid color patch) to visually salient stimuli was similar in the groups, activation differences in frontal and temporoparietal regions suggested hyper-sensitivity to visual salience or to sameness in ASD children. Further, activation in a subset of those regions was associated with symptoms of restricted and repetitive behavior. Thus, atypicalities in response to visual properties of stimuli may drive attentional orienting problems associated with ASD.
Behold the voice of wrath: cross-modal modulation of visual attention by anger prosody.
Brosch, Tobias; Grandjean, Didier; Sander, David; Scherer, Klaus R
2008-03-01
Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined within-modality effects, most frequently using pictures of emotional stimuli to modulate visual attention. In this study, we used simultaneously presented utterances with emotional and neutral prosody as cues for a visually presented target in a cross-modal dot probe task. Response times towards targets were faster when they appeared at the location of the source of the emotional prosody. Our results show for the first time a cross-modal attentional modulation of visual attention by auditory affective prosody.
Data of ERPs and spectral alpha power when attention is engaged on visual or verbal/auditory imagery
Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio
2016-01-01
This article provides data from statistical analysis of event-related brain potentials (ERPs) and spectral power from 20 participants during three attentional conditions. Specifically, the P1, N1 and P300 ERP amplitudes were compared when participants' attention was oriented to an external task, to visual imagery, or to inner speech. The spectral power in the alpha band was also compared across these three attentional conditions. These data are related to the research article entitled “Orienting attention to visual or verbal/auditory imagery differentially impairs the processing of visual stimuli” (Villena-Gonzalez et al., 2016) [1], in which sensory processing of external information was compared across the same three conditions. PMID:27077090
Towards the quantitative evaluation of visual attention models.
Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K
2015-11-01
Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt to organize and classify models, but they are not sufficient to quantify which classes of models are most capable of explaining the available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.
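As an illustration of what "operationalizing" such a benchmark can look like, the sketch below computes the Normalized Scanpath Saliency (NSS), one common metric for scoring a model's saliency map against human fixations. This is not the evaluation protocol proposed by the authors; the array shapes, the toy models, and the simulated fixations are assumptions made only for the example.

```python
import numpy as np

def nss(saliency_map: np.ndarray, fixation_xy: np.ndarray) -> float:
    """Normalized Scanpath Saliency: mean z-scored saliency at human fixation points.

    saliency_map: 2D array of model-predicted salience.
    fixation_xy:  (N, 2) array of (row, col) fixation coordinates.
    """
    z = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    rows, cols = fixation_xy[:, 0], fixation_xy[:, 1]
    return float(z[rows, cols].mean())

# Toy comparison of two hypothetical models on one simulated image
rng = np.random.default_rng(0)
fix = rng.integers(0, 64, size=(20, 2))          # 20 simulated fixations
model_a = rng.random((64, 64))                   # near-uniform "model"
model_b = model_a.copy()
model_b[fix[:, 0], fix[:, 1]] += 3.0             # model that peaks at the fixated locations
print(nss(model_a, fix), nss(model_b, fix))      # model_b should score higher
```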
Selective Maintenance in Visual Working Memory Does Not Require Sustained Visual Attention
Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.
2012-01-01
In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in VWM. Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention. PMID:23067118
Johnson, Aaron W; Duda, Kevin R; Sheridan, Thomas B; Oman, Charles M
2017-03-01
This article describes a closed-loop, integrated human-vehicle model designed to help understand the underlying cognitive processes that influenced changes in subject visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment. Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator's estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness. We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator's estimates of system states. The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention ≤3.6% for all simulator instruments. The model's predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data. Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates. Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator's visual attention during control mode transitions can produce reallocations in situation awareness of certain states.
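The SEEV model mentioned above is usually described as a weighted combination of salience, effort, expectancy, and value per area of interest. The sketch below is a minimal, hypothetical rendering of that idea, with an extra uncertainty term standing in for the authors' modification that directs attention toward uncertain state estimates; the weights, instrument names, and the softmax step are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def seev_scores(salience, effort, expectancy, value, uncertainty,
                w=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """SEEV-style attention scores for a set of instruments (areas of interest).

    Each argument is a 1D sequence with one entry per instrument. The uncertainty
    term is an assumption here, standing in for the idea that growing uncertainty
    about a system state draws more attention to its display.
    """
    s, ef, ex, v, u = map(np.asarray, (salience, effort, expectancy, value, uncertainty))
    ws, wef, wex, wv, wu = w
    return ws * s - wef * ef + wex * ex + wv * v + wu * u

def dwell_fractions(scores):
    """Convert scores to predicted fractions of attention (dwell time) via a softmax."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Three hypothetical cockpit instruments: altimeter, attitude indicator, fuel gauge
scores = seev_scores(salience=[0.2, 0.8, 0.1], effort=[0.1, 0.1, 0.3],
                     expectancy=[0.6, 0.9, 0.2], value=[0.9, 0.9, 0.4],
                     uncertainty=[0.5, 0.2, 0.1])
print(dwell_fractions(scores))   # predicted share of visual attention per instrument
```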
Effects of attention and laterality on motion and orientation discrimination in deaf signers.
Bosworth, Rain G; Petrich, Jennifer A F; Dobkins, Karen R
2013-06-01
Previous studies have asked whether visual sensitivity and attentional processing in deaf signers are enhanced or altered as a result of their different sensory experiences during development, i.e., auditory deprivation and exposure to a visual language. In particular, deaf and hearing signers have been shown to exhibit a right visual field/left hemisphere advantage for motion processing, while hearing nonsigners do not. To examine whether this finding extends to other aspects of visual processing, we compared deaf signers and hearing nonsigners on motion, form, and brightness discrimination tasks. Secondly, to examine whether hemispheric lateralities are affected by attention, we employed a dual-task paradigm to measure form and motion thresholds under "full" vs. "poor" attention conditions. Deaf signers, but not hearing nonsigners, exhibited a right visual field advantage for motion processing. This effect was also seen for form processing and not for the brightness task. Moreover, no group differences were observed in attentional effects, and the motion and form visual field asymmetries were not modulated by attention, suggesting they occur at early levels of sensory processing. In sum, the results show that processing of motion and form, believed to be mediated by dorsal and ventral visual pathways, respectively, are left-hemisphere dominant in deaf signers. Published by Elsevier Inc.
Lee, Kyoung-Min; Ahn, Kyung-Ha; Keller, Edward L.
2012-01-01
The frontal eye fields (FEF), originally identified as an oculomotor cortex, have also been implicated in perceptual functions, such as constructing a visual saliency map and shifting visual attention. Further dissecting the area’s role in the transformation from visual input to oculomotor command has been difficult because of spatial confounding between stimuli and responses and consequently between intermediate cognitive processes, such as attention shift and saccade preparation. Here we developed two tasks in which the visual stimulus and the saccade response were dissociated in space (the extended memory-guided saccade task), and bottom-up attention shift and saccade target selection were independent (the four-alternative delayed saccade task). Reversible inactivation of the FEF in rhesus monkeys disrupted, as expected, contralateral memory-guided saccades, but visual detection was demonstrated to be intact at the same field. Moreover, saccade behavior was impaired when a bottom-up shift of attention was not a prerequisite for saccade target selection, indicating that the inactivation effect was independent of the previously reported dysfunctions in bottom-up attention control. These findings underscore the motor aspect of the area’s functions, especially in situations where saccades are generated by internal cognitive processes, including visual short-term memory and long-term associative memory. PMID:22761923
Splitting Attention across the Two Visual Fields in Visual Short-Term Memory
ERIC Educational Resources Information Center
Delvenne, Jean-Francois; Holt, Jessica L.
2012-01-01
Humans have the ability to attentionally select the most relevant visual information from their extrapersonal world and to retain it in a temporary buffer, known as visual short-term memory (VSTM). Research suggests that at least two non-contiguous items can be selected simultaneously when they are distributed across the two visual hemifields. In…
Visual Attention to Movement and Color in Children with Cortical Visual Impairment
ERIC Educational Resources Information Center
Cohen-Maitre, Stacey Ann; Haerich, Paul
2005-01-01
This study investigated the ability of color and motion to elicit and maintain visual attention in a sample of children with cortical visual impairment (CVI). It found that colorful and moving objects may be used to engage children with CVI, increase their motivation to use their residual vision, and promote visual learning.
Interlateral Asymmetry in the Time Course of the Effect of a Peripheral Prime Stimulus
ERIC Educational Resources Information Center
Castro-Barros, B. A.; Righi, L. L.; Grechi, G.; Ribeiro-do-Valle, L. E.
2008-01-01
Evidence exists that both right and left hemisphere attentional mechanisms are mobilized when attention is directed to the right visual hemifield and only right hemisphere attentional mechanisms are mobilized when attention is directed to the left visual hemifield. This arrangement might lead to a rightward bias of automatic attention. The…
Attention Gating in Short-Term Visual Memory.
ERIC Educational Resources Information Center
Reeves, Adam; Sperling, George
1986-01-01
An experiment is conducted showing that an attention shift to a stream of numerals presented in rapid serial visual presentation mode produces not a total loss, but a systematic distortion of order. An attention gating model (AGM) is developed from a more general attention model. (Author/LMO)
Bersani, Giuseppe; Quartini, Adele; Ratti, Flavia; Pagliuca, Giulio; Gallo, Andrea
2013-11-30
Olfactory identification ability implicates the integrity of the orbitofrontal cortex (OFC). The fronto-striatal circuits including the OFC have been implicated in the neuropathology of Obsessive Compulsive Disorder (OCD). However, only a few studies have examined olfactory function in patients with OCD. The Brief Smell Identification Test (B-SIT) and tests from the Cambridge Neuropsychological Automated Battery (CANTAB) were administered to 25 patients with OCD and to 21 healthy matched controls. OCD patients showed a significant impairment in olfactory identification ability as well as widely distributed cognitive deficits in visual memory, executive functions, attention, and response inhibition. The degree of behavioural impairment in motor impulsivity (a prolonged Stop-Signal Reaction Time, a measure of response inhibition) strongly correlated with the B-SIT score. Our study is the first to indicate a shared OFC pathological neural substrate underlying olfactory identification impairment, impulsivity, and OCD. Deficits in visual memory, executive functions and attention further indicate that regions outside of the orbitofronto-striatal loop may be involved in this disorder. Such results may help delineate the clinical complexity of OCD and support more targeted investigations and interventions. In this regard, research on the potential diagnostic utility of olfactory identification deficits in the assessment of OCD would certainly be useful. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Modulation of early cortical processing during divided attention to non-contiguous locations
Frey, Hans-Peter; Schmid, Anita M.; Murphy, Jeremy W.; Molholm, Sophie; Lalor, Edmund C.; Foxe, John J.
2015-01-01
We often face the challenge of simultaneously attending to multiple non-contiguous regions of space. There is ongoing debate as to how spatial attention is divided under these situations. While for several years the predominant view was that humans could divide the attentional spotlight, several recent studies argue in favor of a unitary spotlight that rhythmically samples relevant locations. Here, this issue was addressed using high-density electrophysiology in concert with the multifocal m-sequence technique to examine visual evoked responses to multiple simultaneous streams of stimulation. Concurrently, we assayed the topographic distribution of alpha-band oscillatory mechanisms, a measure of attentional suppression. Participants performed a difficult detection task that required simultaneous attention to two stimuli in contiguous (undivided) or non-contiguous parts of space. In the undivided condition, the classical pattern of attentional modulation was observed, with increased amplitude of the early visual evoked response and increased alpha amplitude ipsilateral to the attended hemifield. For the divided condition, early visual responses to attended stimuli were also enhanced and the observed multifocal topographic distribution of alpha suppression was in line with the divided attention hypothesis. These results support the existence of divided attentional spotlights, providing evidence that the corresponding modulation occurs during initial sensory processing timeframes in hierarchically early visual regions and that suppressive mechanisms of visual attention selectively target distracter locations during divided spatial attention. PMID:24606564
Cognitive programs: software for attention's executive
Tsotsos, John K.; Kruijne, Wouter
2014-01-01
What are the computational tasks that an executive controller for visual attention must solve? This question is posed in the context of the Selective Tuning model of attention. The range of required computations goes beyond top-down bias signals or region-of-interest determinations, and must deal with overt and covert fixations, process timing and synchronization, information routing, memory, matching control to task, spatial localization, priming, and coordination of bottom-up with top-down information. During task execution, results must be monitored to ensure that the expected outcomes are obtained. This description includes the kinds of elements that are common in the control of any kind of complex machine or system. We seek a mechanistic integration of the above, in other words, algorithms that accomplish control. Such algorithms operate on representations, transforming a representation of one kind into another, which then forms the input to yet another algorithm. Cognitive Programs (CPs) are hypothesized to capture exactly such representational transformations via stepwise sequences of operations. CPs, an updated and modernized offspring of Ullman's Visual Routines, impose an algorithmic structure on the set of attentional functions and play a role in the overall shaping of attentional modulation of the visual system so that it provides its best performance. This requires that we consider the visual system as a dynamic, yet general-purpose processor tuned to the task and input of the moment. This differs dramatically from the almost universal cognitive and computational views, which regard vision as a passively observing module to which simple questions about percepts can be posed, regardless of task. Differing from Visual Routines, CPs explicitly involve the critical elements of Visual Task Executive (vTE), Visual Attention Executive (vAE), and Visual Working Memory (vWM). Cognitive Programs provide the software that directs the actions of the Selective Tuning model of visual attention. PMID:25505430
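To make the idea of a Cognitive Program as a stepwise sequence of representational transformations concrete, here is a minimal sketch (not the Selective Tuning implementation): a tiny executive runs each operation, checks that the result matches expectations, and passes the transformed representation on. All names and the toy "find the brightest location" program are hypothetical.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Step:
    """One representational transformation in a hypothetical Cognitive Program."""
    name: str
    op: Callable[[Any], Any]          # transforms one representation into the next
    check: Callable[[Any], bool]      # executive monitoring of the expected result

def run_program(steps: List[Step], representation: Any) -> Any:
    """Execute the steps in sequence, monitoring each result (a toy vTE/vAE loop)."""
    for step in steps:
        representation = step.op(representation)
        if not step.check(representation):
            raise RuntimeError(f"step '{step.name}' did not produce the expected result")
    return representation

# Toy "find the brightest location" program over a 2D image-like list
image = [[0.1, 0.9, 0.3],
         [0.2, 0.4, 0.8]]
program = [
    Step("compute_salience",
         lambda img: [(v, (r, c)) for r, row in enumerate(img) for c, v in enumerate(row)],
         lambda rep: len(rep) > 0),
    Step("select_peak", lambda rep: max(rep)[1], lambda rep: isinstance(rep, tuple)),
]
print(run_program(program, image))    # -> (0, 1), the location of the brightest value
```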
Hartzler, Andrea L.; Chaudhuri, Shomir; Fey, Brett C.; Flum, David R.; Lavallee, Danielle
2015-01-01
Introduction: The collection of patient-reported outcomes (PROs) draws attention to issues of importance to patients—physical function and quality of life. The integration of PRO data into clinical decisions and discussions with patients requires thoughtful design of user-friendly interfaces that consider user experience and present data in personalized ways to enhance patient care. Whereas most prior work on PROs focuses on capturing data from patients, little research details how to design effective user interfaces that facilitate use of this data in clinical practice. We share lessons learned from engaging health care professionals to inform design of visual dashboards, an emerging type of health information technology (HIT). Methods: We employed human-centered design (HCD) methods to create visual displays of PROs to support patient care and quality improvement. HCD aims to optimize the design of interactive systems through iterative input from representative users who are likely to use the system in the future. Through three major steps, we engaged health care professionals in targeted, iterative design activities to inform the development of a PRO Dashboard that visually displays patient-reported pain and disability outcomes following spine surgery. Findings: Design activities to engage health care administrators, providers, and staff guided our work from design concept to specifications for dashboard implementation. Stakeholder feedback from these health care professionals shaped user interface design features, including predefined overviews that illustrate at-a-glance trends and quarterly snapshots, granular data filters that enable users to dive into detailed PRO analytics, and user-defined views to share and reuse. Feedback also revealed important considerations for quality indicators and privacy-preserving sharing and use of PROs. Conclusion: Our work illustrates a range of engagement methods guided by human-centered principles and design recommendations for optimizing PRO Dashboards for patient care and quality improvement. Engaging health care professionals as stakeholders is a critical step toward the design of user-friendly HIT that is accepted, usable, and has the potential to enhance quality of care and patient outcomes. PMID:25988187
Self-reflection Orients Visual Attention Downward
Liu, Yi; Tong, Yu; Li, Hong
2017-01-01
Previous research has demonstrated that abstract concepts associated with spatial location (e.g., God in the Heavens) could direct visual attention upward or downward, because thinking about the abstract concepts activates the corresponding vertical perceptual symbols. For self-concept, there are similar metaphors (e.g., “I am above others”). However, whether thinking about the self can induce visual attention orientation is still unknown. Therefore, the current study tested whether self-reflection can direct visual attention. Individuals often display the tendency of self-enhancement in social comparison, which reminds the individual of the higher position one possesses relative to others within the social environment. As the individual is the agent of the attention orientation, and high status tends to make an individual look down upon others to obtain a sense of pride, it was hypothesized that thinking about the self would lead to a downward attention orientation. Using reflection of personality traits and a target discrimination task, Study 1 found that, after self-reflection, visual attention was directed downward. Similar effects were also found after friend-reflection, with the level of downward attention being correlated with the likability rating scores of the friend. Thus, in Study 2, a disliked other was used as a control and the positive self-view was measured with an above-average judgment task. We found downward attention orientation after self-reflection, but not after reflection upon the disliked other. Moreover, the attentional bias after self-reflection was correlated with the above-average self-view. The current findings provide the first evidence that thinking about the self could direct visual-spatial attention downward, and suggest that this effect is probably derived from a positive self-view within the social context. PMID:28928694
Facing Sorrow as a Group Unites. Facing Sorrow in a Group Divides
Rennung, Miriam; Göritz, Anja S.
2015-01-01
Collective gatherings foster group cohesion through providing occasion for emotional sharing among participants. However, prior studies have failed to disentangle two processes that are involved in emotional sharing: 1) focusing shared attention on the same emotion-eliciting event and 2) actively sharing one’s experiences and disclosing one’s feelings to others. To date, it has remained untested whether shared attention influences group cohesion independently of active emotional sharing. Our experiment investigated the effect of shared versus individual attention on cohesion in groups of strangers. We predicted that differences in group cohesion as called forth by shared vs. individual attention are most pronounced when experiencing highly arousing negative affect, in that the act of experiencing intensely negative affect with others buffers negative affect’s otherwise detrimental effect on group cohesion. Two hundred sixteen participants were assembled in groups of 3 to 4 people to either watch an emotion-eliciting film simultaneously on a common screen or to watch the same emotion-eliciting film clip on a laptop in front of each group member using earphones. The film clips were chosen to elicit either highly arousing negative affect or one of three other affective states representing the other poles in Russell’s circumplex model of affect. We examined self-reported affective and cognitive group cohesion and a behavioral measure of group cohesion. Results support our buffer hypothesis, in that experiencing intense negative affect in unison leads to higher levels of group cohesion than experiencing this affect individually despite the group setting. The present study demonstrates that shared attention to intense negative emotional stimuli affects group cohesion independently of active emotional sharing. PMID:26335924
A Salient and Task-Irrelevant Collinear Structure Hurts Visual Search
Tseng, Chia-huei; Jingling, Li
2015-01-01
Salient distractors draw our attention spontaneously, even when we intentionally want to ignore them. When this occurs, the real targets close to or overlapping with the distractors benefit from attention capture and thus are detected and discriminated more quickly. However, a puzzling opposite effect was observed in a search display with a column of vertical collinear bars presented as a task-irrelevant distractor [6]. In this case, it was harder to discriminate the targets overlapping with the salient distractor. Here we examined whether this effect originated from factors known to modulate attentional capture: (a) low probability: targets occurred at the collinear column far less often (14%) than in the rest of the display (86%), so observers might strategically have directed their attention away from the collinear distractor; (b) attentional control settings: the distractor and the target task interfered with each other because both engaged the same attentional set for continuity; and/or (c) lack of time to establish an optimal strategy. We tested these hypotheses by (a) increasing to 60% the proportion of trials in which targets overlapped with the collinear distractor column, (b) replacing the target task with a connectivity-irrelevant one (i.e., luminance discrimination), and (c) having our observers practice the same search task for 10 days. Our results speak against all of these hypotheses and lead us to conclude that a collinear distractor impairs search at a level that is unaffected by probabilistic information, attentional setting, and learning. PMID:25909986
Yeari, Menahem; Isser, Michal; Schiff, Rachel
2017-07-01
A controversy has recently developed regarding the hypothesis that developmental dyslexia may be caused, in some cases, by a reduced visual attention span (VAS). To examine this hypothesis, independent of phonological abilities, researchers tested the ability of dyslexic participants to recognize arrays of unfamiliar visual characters. Findings with this test have been rather equivocal: dyslexic participants exhibited poor performance in some studies but normal performance in others. The present study explored four methodological differences revealed between the two sets of studies that might underlie their conflicting results. Specifically, in two experiments we examined whether a VAS deficit is (a) specific to recognition of multi-character arrays as wholes rather than of individual characters within arrays, (b) specific to characters' position within arrays rather than to characters' identity, or revealed only under a higher attention load due to (c) low-discriminable characters, and/or (d) characters' short exposure. Furthermore, in this study we examined whether pure dyslexic participants who do not have attention disorder exhibit a reduced VAS. Although comorbidity of dyslexia and attention disorder is common and the ability to sustain attention for a long time plays a major role in the visual recognition task, the presence of attention disorder was neither evaluated nor ruled out in previous studies. Findings did not reveal any differences between the performance of dyslexic and control participants on eight versions of the visual recognition task. These findings suggest that pure dyslexic individuals do not present a reduced visual attention span.
Stuart, Samuel; Galna, Brook; Delicato, Louise S; Lord, Sue; Rochester, Lynn
2017-07-01
Gait impairment is a core feature of Parkinson's disease (PD), which has been linked to cognitive and visual deficits, but interactions between these features are poorly understood. Monitoring saccades allows investigation of real-time cognitive and visual processes and their impact on gait when walking. This study explored: (i) saccade frequency when walking under different attentional manipulations of turning and dual-task; and (ii) direct and indirect relationships between saccades, gait impairment, vision and attention. Saccade frequency (number of fast eye movements per second) was measured during gait in 60 PD and 40 age-matched control participants using a mobile eye-tracker. Saccade frequency was significantly reduced in PD compared to controls during all conditions. However, saccade frequency increased with a turn and decreased under dual-task for both groups. Poorer attention directly related to saccade frequency, visual function and gait impairment in PD, but not controls. Saccade frequency did not directly relate to gait in PD, but did in controls. Instead, saccade frequency and visual function deficit indirectly impacted gait impairment in PD, which was underpinned by their relationship with attention. In conclusion, our results suggest a vital role for attention with direct and indirect influences on gait impairment in PD. Attention directly impacted saccade frequency, visual function and gait impairment in PD, with connotations for falls. It also underpinned the indirect impact of visual and saccadic impairment on gait. Attention therefore represents a key therapeutic target that should be considered in future research. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Orienting Attention within Visual Short-Term Memory: Development and Mechanisms
ERIC Educational Resources Information Center
Shimi, Andria; Nobre, Anna C.; Astle, Duncan; Scerif, Gaia
2014-01-01
How does developing attentional control operate within visual short-term memory (VSTM)? Seven-year-olds, 11-year-olds, and adults (total n = 205) were asked to report whether probe items were part of preceding visual arrays. In Experiment 1, central or peripheral cues oriented attention to the location of to-be-probed items either prior to…
ERIC Educational Resources Information Center
Horowitz-Kraus, Tzipi
2017-01-01
Reading difficulty (RD; or dyslexia) is a heritable condition characterized by slow, inaccurate reading accompanied by executive dysfunction, specifically with respect to visual attention. The current study was designed to examine the effect of familial history of RD on the relationship between reading and visual attention abilities in children…
ERIC Educational Resources Information Center
Harasawa, Masamitsu; Shioiri, Satoshi
2011-01-01
The effect of the visual hemifield to which spatial attention was oriented on the activities of the posterior parietal and occipital visual cortices was examined using functional near-infrared spectroscopy in order to investigate the neural substrates of voluntary visuospatial attention. Our brain imaging data support the theory put forth in a…
Interactions between attention, context and learning in primary visual cortex.
Gilbert, C; Ito, M; Kapadia, M; Westheimer, G
2000-01-01
Attention in early visual processing engages the higher-order, context-dependent properties of neurons. Even at the earliest stages of visual cortical processing, neurons play a role in intermediate-level vision: contour integration and surface segmentation. The contextual influences mediating this process may be derived from long-range connections within primary visual cortex (V1). These influences are subject to perceptual learning, and are strongly modulated by visuospatial attention, which is itself a learning-dependent process. The attentional influences may involve interactions between feedback and horizontal connections in V1. V1 is therefore a dynamic and active processor, subject to top-down influences.
TVA-based assessment of visual attentional functions in developmental dyslexia
Bogon, Johanna; Finke, Kathrin; Stenneken, Prisca
2014-01-01
There is an ongoing debate about whether an impairment of visual attentional functions constitutes an additional or even an isolated deficit of developmental dyslexia (DD). In particular, performance in tasks that require the processing of multiple visual elements in parallel has been reported to be impaired in DD. We review studies that used parameter-based assessment for identifying and quantifying impaired aspect(s) of visual attention that underlie this multi-element processing deficit in DD. These studies used the mathematical framework provided by the “theory of visual attention” (Bundesen, 1990) to derive quantitative measures of general attentional resources and attentional weighting aspects on the basis of behavioral performance in whole- and partial-report tasks. Based on parameter estimates in children and adults with DD, the reviewed studies support a slowed perceptual processing speed as an underlying primary deficit in DD. Moreover, a reduction in visual short-term memory storage capacity seems to present a modulating component, contributing to difficulties in written language processing. Furthermore, comparing the spatial distributions of attentional weights in children and adults suggests that having limited reading and writing skills might impair the development of the slight leftward bias that is typical of unimpaired adult readers. PMID:25360129
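For readers unfamiliar with TVA's parameter-based approach, the sketch below writes out the standard TVA rate equation, v(x, i) = eta(x, i) * beta_i * w_x / sum_z w_z, whose summed rates give the processing-capacity parameter C referred to above. The numbers are invented purely for illustration; they are not estimates from the reviewed studies.

```python
import numpy as np

def tva_rates(eta, beta, w):
    """TVA rate equation: v(x, i) = eta(x, i) * beta_i * w_x / sum_z w_z.

    eta:  (objects x categories) sensory evidence strengths
    beta: (categories,) perceptual decision biases
    w:    (objects,) attentional weights
    Returns the (objects x categories) matrix of processing rates; their total
    corresponds to the processing-capacity parameter C estimated in whole- and
    partial-report studies.
    """
    eta, beta, w = map(np.asarray, (eta, beta, w))
    rel_w = w / w.sum()
    return eta * beta[None, :] * rel_w[:, None]

# Made-up example: 4 letters, 2 report categories, leftmost letters weighted slightly higher
eta = np.full((4, 2), 10.0)
beta = np.array([0.9, 0.9])
w = np.array([1.2, 1.1, 1.0, 0.9])        # a slight leftward attentional bias
v = tva_rates(eta, beta, w)
print("C (total rate):", v.sum())
print("per-object rates:", v.sum(axis=1))
```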
Object-based attentional selection modulates anticipatory alpha oscillations
Knakker, Balázs; Weiss, Béla; Vidnyánszky, Zoltán
2015-01-01
Visual cortical alpha oscillations are involved in attentional gating of incoming visual information. It has been shown that spatial and feature-based attentional selection result in increased alpha oscillations over the cortical regions representing sensory input originating from the unattended visual field and task-irrelevant visual features, respectively. However, whether attentional gating in the case of object based selection is also associated with alpha oscillations has not been investigated before. Here we measured anticipatory electroencephalography (EEG) alpha oscillations while participants were cued to attend to foveal face or word stimuli, the processing of which is known to have right and left hemispheric lateralization, respectively. The results revealed that in the case of simultaneously displayed, overlapping face and word stimuli, attending to the words led to increased power of parieto-occipital alpha oscillations over the right hemisphere as compared to when faces were attended. This object category-specific modulation of the hemispheric lateralization of anticipatory alpha oscillations was maintained during sustained attentional selection of sequentially presented face and word stimuli. These results imply that in the case of object-based attentional selection—similarly to spatial and feature-based attention—gating of visual information processing might involve visual cortical alpha oscillations. PMID:25628554
Capturing Attention When Attention "Blinks"
ERIC Educational Resources Information Center
Wee, Serena; Chua, Fook K.
2004-01-01
Four experiments addressed the question of whether attention may be captured when the visual system is in the midst of an attentional blink (AB). Participants identified 2 target letters embedded among distractor letters in a rapid serial visual presentation sequence. In some trials, a square frame was inserted between the targets; as the only…
Motivationally Significant Stimuli Show Visual Prior Entry: Evidence for Attentional Capture
ERIC Educational Resources Information Center
West, Greg L.; Anderson, Adam A. K.; Pratt, Jay
2009-01-01
Previous studies that have found attentional capture effects for stimuli of motivational significance do not directly measure initial attentional deployment, leaving it unclear to what extent these items produce attentional capture. Visual prior entry, as measured by temporal order judgments (TOJs), rests on the premise that allocated attention…
Braun, J
1994-02-01
In more than one respect, visual search for the most salient item in a display and visual search for the least salient item are different kinds of visual task. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained the target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.
Lateralization in Alpha-Band Oscillations Predicts the Locus and Spatial Distribution of Attention.
Ikkai, Akiko; Dandekar, Sangita; Curtis, Clayton E
2016-01-01
Attending to a task-relevant location changes how neural activity oscillates in the alpha band (8-13 Hz) in posterior visual cortical areas. However, the relationships between top-down attention, changes in alpha oscillations in visual cortex, and attentional performance remain poorly understood. Here, we tested the degree to which the posterior alpha power tracked the locus of attention, the distribution of attention, and how well the topography of alpha could predict the locus of attention. We recorded magnetoencephalographic (MEG) data while subjects performed an attention-demanding visual discrimination task that dissociated the direction of attention from the direction of a saccade to indicate choice. On some trials, an endogenous cue predicted the target's location, while on others it contained no spatial information. When the target's location was cued, alpha power decreased in sensors over occipital cortex contralateral to the attended visual field. When the cue did not predict the target's location, alpha power again decreased in sensors over occipital cortex, but bilaterally, and increased in sensors over frontal cortex. Thus, the distribution and the topography of alpha reliably indicated the locus of covert attention. Together, these results suggest that alpha synchronization reflects changes in the excitability of populations of neurons whose receptive fields match the locus of attention. This is consistent with the hypothesis that alpha oscillations reflect the neural mechanisms by which top-down control of attention biases information processing and modulates the activity of neurons in visual cortex.
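A common way to quantify the lateralization described here is an alpha lateralization index contrasting power over sensors contralateral versus ipsilateral to the attended field. The sketch below is a generic illustration with made-up power values, not the MEG analysis pipeline used in the study.

```python
import numpy as np

def alpha_lateralization_index(power_contra, power_ipsi):
    """Lateralization index for posterior alpha power (8-13 Hz).

    power_contra / power_ipsi: alpha power over occipital sensors contralateral /
    ipsilateral to the attended visual field (arrays of trials or subjects).
    Negative values indicate the typical contralateral alpha decrease with attention.
    """
    pc, pi = np.asarray(power_contra, float), np.asarray(power_ipsi, float)
    return (pc - pi) / (pc + pi)

# Made-up alpha-power values (arbitrary units) across four attend-left/attend-right blocks
contra = np.array([4.1, 3.8, 4.0, 3.6])   # sensors opposite the attended hemifield
ipsi   = np.array([5.2, 5.0, 4.9, 5.3])
print(alpha_lateralization_index(contra, ipsi).mean())   # negative: contralateral decrease
```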
Preserved figure-ground segregation and symmetry perception in visual neglect.
Driver, J; Baylis, G C; Rafal, R D
1992-11-05
A central controversy in current research on visual attention is whether figures are segregated from their background preattentively, or whether attention is first directed to unstructured regions of the image. Here we present neurological evidence for the former view from studies of a brain-injured patient with visual neglect. His attentional impairment arises after normal segmentation of the image into figures and background has taken place. Our results indicate that information which is neglected and unavailable to higher levels of visual processing can nevertheless be processed by earlier stages in the visual system concerned with segmentation.
Norman, Luke J; Carlisi, Christina O; Christakou, Anastasia; Cubillo, Ana; Murphy, Clodagh M; Chantiluke, Kaylita; Simmons, Andrew; Giampietro, Vincent; Brammer, Michael; Mataix-Cols, David; Rubia, Katya
2017-01-01
Patients with Attention-Deficit/Hyperactivity Disorder (ADHD) and obsessive/compulsive disorder (OCD) share problems with sustained attention, and are proposed to share deficits in switching between default mode and task positive networks. The aim of this study was to investigate shared and disorder-specific brain activation abnormalities during sustained attention in the two disorders. Twenty boys with ADHD, 20 boys with OCD and 20 age-matched healthy controls aged between 12 and 18 years completed a functional magnetic resonance imaging (fMRI) version of a parametrically modulated sustained attention task with a progressively increasing sustained attention load. Performance and brain activation were compared between groups. Only ADHD patients were impaired in performance. Group by sustained attention load interaction effects showed that OCD patients had disorder-specific middle anterior cingulate underactivation relative to controls and ADHD patients, while ADHD patients showed disorder-specific underactivation in left dorsolateral prefrontal cortex/dorsal inferior frontal gyrus (IFG). ADHD and OCD patients shared left insula/ventral IFG underactivation and increased activation in posterior default mode network relative to controls, but had disorder-specific overactivation in anterior default mode regions, in dorsal anterior cingulate for ADHD and in anterior ventromedial prefrontal cortex for OCD. In sum, ADHD and OCD patients showed mostly disorder-specific patterns of brain abnormalities in both task positive salience/ventral attention networks with lateral frontal deficits in ADHD and middle ACC deficits in OCD, as well as in their deactivation patterns in medial frontal DMN regions. The findings suggest that attention performance in the two disorders is underpinned by disorder-specific activation patterns.
Störmer, Viola S; Passow, Susanne; Biesenack, Julia; Li, Shu-Chen
2012-05-01
Attention and working memory are fundamental for selecting and maintaining behaviorally relevant information. Not only do both processes closely intertwine at the cognitive level, but they implicate similar functional brain circuitries, namely the frontoparietal and the frontostriatal networks, which are innervated by cholinergic and dopaminergic pathways. Here we review the literature on cholinergic and dopaminergic modulations of visual-spatial attention and visual working memory processes to gain insights into aging-related changes in these processes. Some extant findings have suggested that the cholinergic system plays a role in the orienting of attention to enable the detection and discrimination of visual information, whereas the dopaminergic system has mainly been associated with working memory processes such as updating and stabilizing representations. However, since visual-spatial attention and working memory processes are not fully dissociable, there is also evidence of interacting cholinergic and dopaminergic modulations of both processes. We further review gene-cognition association studies that have shown that individual differences in visual-spatial attention and visual working memory are associated with acetylcholine- and dopamine-relevant genes. The efficiency of these 2 transmitter systems declines substantially during healthy aging. These declines, in part, contribute to age-related deficits in attention and working memory functions. We report novel data showing an effect of the dopamine COMT gene on spatial updating processes in older but not in younger adults, indicating potential magnification of genetic effects in old age.
Feature-based attentional modulations in the absence of direct visual stimulation.
Serences, John T; Boynton, Geoffrey M
2007-07-19
When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field, even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.
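The pattern classification step can be illustrated with a generic decoding analysis: train a linear classifier on voxel response patterns labeled by the attended motion direction and estimate accuracy with cross-validation. The simulated data, the classifier choice (a linear SVM from scikit-learn), and all parameters below are assumptions for the sketch, not the authors' exact method.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Simulated voxel patterns: n_trials x n_voxels, labels = attended motion direction (0 or 1)
rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)
direction_bias = np.where(labels[:, None] == 0, 0.3, -0.3)   # weak feature-specific signal
patterns = rng.normal(0.0, 1.0, size=(n_trials, n_voxels)) + direction_bias

# Cross-validated decoding of the attended feature from the response pattern
clf = SVC(kernel="linear")
accuracy = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f}")    # well above the 0.5 chance level
```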
Emotion and anxiety potentiate the way attention alters visual appearance.
Barbot, Antoine; Carrasco, Marisa
2018-04-12
The ability to swiftly detect and prioritize the processing of relevant information around us is critical for the way we interact with our environment. Selective attention is a key mechanism that serves this purpose, improving performance in numerous visual tasks. Reflexively attending to sudden information helps detect impending threat or danger, a possible reason why emotion modulates the way selective attention affects perception. For instance, the sudden appearance of a fearful face potentiates the effects of exogenous (involuntary, stimulus-driven) attention on performance. Internal states such as trait anxiety can also modulate the impact of attention on early visual processing. However, attention does not only improve performance; it also alters the way visual information appears to us, e.g. by enhancing perceived contrast. Here we show that emotion potentiates the effects of exogenous attention on both performance and perceived contrast. Moreover, we found that trait anxiety mediates these effects, with stronger influences of attention and emotion in anxious observers. Finally, changes in performance and appearance correlated with each other, likely reflecting common attentional modulations. Altogether, our findings show that emotion and anxiety interact with selective attention to truly alter how we see.
Visual attention in violent offenders: Susceptibility to distraction.
Slotboom, Jantine; Hoppenbrouwers, Sylco S; Bouman, Yvonne H A; In 't Hout, Willem; Sergiou, Carmen; van der Stigchel, Stefan; Theeuwes, Jan
2017-05-01
Impairments in executive functioning give rise to reduced control of behavior and impulses, and are therefore a risk factor for violence and criminal behavior. However, the contribution of specific underlying processes remains unclear. A crucial element of executive functioning, and essential for cognitive control and goal-directed behavior, is visual attention. To further elucidate the importance of attentional functioning in the general offender population, we employed an attentional capture task to measure visual attention. We expected offenders to have impaired visual attention, as revealed by increased attentional capture, compared to healthy controls. When comparing the performance of 62 offenders to 69 healthy community controls, we found our hypothesis to be partly confirmed. Offenders were more accurate overall and more accurate in the absence of distracting information, suggesting superior attention. In the presence of distracting information, however, offenders were significantly less accurate than when no distracting information was present. Together, these findings indicate that violent offenders may have superior attention, yet worse control over attention. As such, violent offenders may have trouble adjusting to unexpected, irrelevant stimuli, which may relate to failures in self-regulation and inhibitory control. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Cross-modal orienting of visual attention.
Hillyard, Steven A; Störmer, Viola S; Feng, Wenfeng; Martinez, Antigona; McDonald, John J
2016-03-01
This article reviews a series of experiments that combined behavioral and electrophysiological recording techniques to explore the hypothesis that salient sounds attract attention automatically and facilitate the processing of visual stimuli at the sound's location. This cross-modal capture of visual attention was found to occur even when the attracting sound was irrelevant to the ongoing task and was non-predictive of subsequent events. A slow positive component in the event-related potential (ERP) that was localized to the visual cortex was found to be closely coupled with the orienting of visual attention to a sound's location. This neural sign of visual cortex activation was predictive of enhanced perceptual processing and was paralleled by a desynchronization (blocking) of the ongoing occipital alpha rhythm. Further research is needed to determine the nature of the relationship between the slow positive ERP evoked by the sound and the alpha desynchronization and to understand how these electrophysiological processes contribute to improved visual-perceptual processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Does visual attention drive the dynamics of bistable perception?
Dieter, Kevin C.; Brascamp, Jan; Tadin, Duje; Blake, Randolph
2016-01-01
How does attention interact with incoming sensory information to determine what we perceive? One domain in which this question has received serious consideration is that of bistable perception: a captivating class of phenomena that involves fluctuating visual experience in the face of physically unchanging sensory input. Here, some investigations have yielded support for the idea that attention alone determines what is seen, while others have implicated entirely attention-independent processes in driving alternations during bistable perception. We review the body of literature addressing this divide and conclude that in fact both sides are correct – depending on the form of bistable perception being considered. Converging evidence suggests that visual attention is required for alternations in the type of bistable perception called binocular rivalry, while alternations during other types of bistable perception appear to continue without requiring attention. We discuss some implications of this differential effect of attention for our understanding of the mechanisms underlying bistable perception, and examine how these mechanisms operate during our everyday visual experiences. PMID:27230785
Huang, Liqiang; Mo, Lei; Li, Ying
2012-04-01
A large part of the empirical research in the field of visual attention has focused on various concrete paradigms. However, as yet, there has been no clear demonstration of whether or not these paradigms are indeed measuring the same underlying construct. We collected a very large data set (nearly 1.3 million trials) to address this question. We tested 257 participants on nine paradigms: conjunction search, configuration search, counting, tracking, feature access, spatial pattern, response selection, visual short-term memory, and change blindness. A fairly general attention factor was identified. Some of the participants were also tested on eight other paradigms. This general attention factor was found to be correlated with intelligence, visual marking, task switching, mental rotation, and the Stroop task. On the other hand, a few paradigms that are very important in the attention literature (attentional capture, consonance-driven orienting, and inhibition of return) were found to be dissociated from this general attention factor.
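A simple way to picture how a "general attention factor" is extracted from many paradigms is to take the first principal component of z-scored task scores and correlate it with an external measure. The sketch below does exactly that on simulated data; the loadings, the use of PCA rather than a formal factor model, and the simulated intelligence measure are all assumptions for illustration, not the study's analysis.

```python
import numpy as np

# Simulated z-scored scores for 257 participants on nine hypothetical attention paradigms
rng = np.random.default_rng(2)
n_subjects, n_tasks = 257, 9
g = rng.normal(size=(n_subjects, 1))                      # latent "general attention" ability
scores = 0.6 * g + 0.8 * rng.normal(size=(n_subjects, n_tasks))
scores = (scores - scores.mean(axis=0)) / scores.std(axis=0)

# First principal component as a simple stand-in for the general attention factor
_, _, vt = np.linalg.svd(scores, full_matrices=False)
factor_scores = scores @ vt[0]                            # note: the sign of a PC is arbitrary

# Correlate the factor with an outside measure (here, a simulated intelligence score)
iq = 0.5 * g.ravel() + rng.normal(size=n_subjects)
print("loadings:", np.round(vt[0], 2))
print("r(factor, IQ):", round(float(np.corrcoef(factor_scores, iq)[0, 1]), 2))
```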
Gaze-independent brain-computer interfaces based on covert attention and feature attention
NASA Astrophysics Data System (ADS)
Treder, M. S.; Schmidt, N. M.; Blankertz, B.
2011-10-01
There is evidence that conventional visual brain-computer interfaces (BCIs) based on event-related potentials cannot be operated efficiently when eye movements are not allowed. To overcome this limitation, the aim of this study was to develop a visual speller that does not require eye movements. Three different variants of a two-stage visual speller based on covert spatial attention and non-spatial feature attention (i.e. attention to colour and form) were tested in an online experiment with 13 healthy participants. All participants achieved highly accurate BCI control. They could select one out of thirty symbols (chance level 3.3%) with mean accuracies of 88%-97% for the different spellers. The best results were obtained for a speller that was operated using non-spatial feature attention only. These results show that, using feature attention, it is possible to realize high-accuracy, fast-paced visual spellers that have a large vocabulary and are independent of eye gaze.
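The reported chance level and speller throughput follow from the size of the symbol set. The sketch below shows the 1-in-30 chance level, a hypothetical 6 x 5 two-stage layout (group first, then symbol within the group; the actual layout is not specified here), and the standard bits-per-selection formula often used to benchmark spellers. The per-stage accuracies are assumed values, not results from the study.

```python
import math

n_symbols = 30
chance_level = 1 / n_symbols
print(f"chance level: {chance_level:.1%}")        # ~3.3%, as reported

# Hypothetical two-stage layout: first pick 1 of 6 groups, then 1 of 5 symbols within it
n_groups, per_group = 6, 5
assert n_groups * per_group == n_symbols

# If each covert/feature-attention stage is decoded with some accuracy,
# a symbol is only selected correctly when both stages succeed.
stage1_acc, stage2_acc = 0.95, 0.95               # assumed per-stage decoding accuracies
print(f"two-stage symbol accuracy: {stage1_acc * stage2_acc:.1%}")

# Information transferred per selection (bits), a common speller benchmark (Wolpaw formula)
def bits_per_selection(p, n):
    if p <= 1 / n:
        return 0.0
    return math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))

print(f"bits per selection at 90% accuracy: {bits_per_selection(0.90, n_symbols):.2f}")
```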
Dye, Matthew W G; Seymour, Jenessa L; Hauser, Peter C
2016-04-01
Deafness results in cross-modal plasticity, whereby visual functions are altered as a consequence of a lack of hearing. Here, we present a reanalysis of data originally reported by Dye et al. (PLoS One 4(5):e5640, 2009) with the aim of testing additional hypotheses concerning the spatial redistribution of visual attention due to deafness and the use of a visuogestural language (American Sign Language). By looking at the spatial distribution of errors made by deaf and hearing participants performing a visuospatial selective attention task, we sought to determine whether there was evidence for (1) a shift in the hemispheric lateralization of visual selective function as a result of deafness, and (2) a shift toward attending to the inferior visual field in users of a signed language. While no evidence was found for or against a shift in lateralization of visual selective attention as a result of deafness, a shift in the allocation of attention from the superior toward the inferior visual field was inferred in native signers of American Sign Language, possibly reflecting an adaptation to the perceptual demands imposed by a visuogestural language.
Heuer, Anna; Schubö, Anna
2016-01-01
Visual working memory can be modulated according to changes in the cued task relevance of maintained items. Here, we investigated the mechanisms underlying this modulation. In particular, we studied the consequences of attentional selection for selected and unselected items, and the role of individual differences in the efficiency with which attention is deployed. To this end, performance in a visual working memory task as well as the CDA/SPCN and the N2pc, ERP components associated with visual working memory and attentional processes, were analysed. Selection during the maintenance stage was manipulated by means of two successively presented retrocues providing spatial information as to which items were most likely to be tested. Results show that attentional selection serves to robustly protect relevant representations in the focus of attention, while unselected representations, which may become relevant again, still remain available. Individuals with larger retrocueing benefits showed higher efficiency of attentional selection, as indicated by the N2pc, and showed stronger maintenance-associated activity (CDA/SPCN). The findings add to converging evidence that focused representations are protected, and highlight the flexibility of visual working memory, in which information can be weighted according to its relevance.
Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees
2014-01-01
Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation. PMID:24639586
Ip, Ifan Betina; Bridge, Holly; Parker, Andrew J.
2014-01-01
An important advance in the study of visual attention has been the identification of a non-spatial component of attention that enhances the response to similar features or objects across the visual field. Here we test whether this non-spatial component can co-select individual features that are perceptually bound into a coherent object. We combined human psychophysics and functional magnetic resonance imaging (fMRI) to demonstrate the ability to co-select individual features from perceptually coherent objects. Our study used binocular disparity and visual motion to define disparity structure-from-motion (dSFM) stimuli. Although the spatial attention system induced strong modulations of the fMRI response in visual regions, the non-spatial system’s ability to co-select features of the dSFM stimulus was less pronounced and variable across subjects. Our results demonstrate that feature and global feature attention effects are variable across participants, suggesting that the feature attention system may be limited in its ability to automatically select features within the attended object. Careful comparison of the task design suggests that even minor differences in the perceptual task may be critical in revealing the presence of global feature attention. PMID:24936974
Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus
2016-06-01
Visual working memory (VWM) has a limited capacity. This limitation can be mitigated by the use of focused attention: if attention is drawn to the relevant working memory content before test, performance improves (the so-called retro-cue benefit). This study tests 2 explanations of the retro-cue benefit: (a) Focused attention protects memory representations from interference by visual input at test, and (b) focusing attention enhances retrieval. Across 6 experiments using color recognition and color reproduction tasks, we varied the amount of color interference at test, and the delay between a retrieval cue (i.e., the retro-cue) and the memory test. Retro-cue benefits were larger when the memory test introduced interfering visual stimuli, showing that the retro-cue effect is in part because of protection from visual interference. However, when visual interference was held constant, retro-cue benefits were still obtained whenever the retro-cue enabled retrieval of an object from VWM but delayed response selection. Our results show that accessible information in VWM might be lost in the processes of testing memory because of visual interference and incomplete retrieval. This is not an inevitable state of affairs, though: Focused attention can be used to get the most out of VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Evans, Karla K; Horowitz, Todd S; Howe, Piers; Pedersini, Roccardo; Reijnen, Ester; Pinto, Yair; Kuzmova, Yoana; Wolfe, Jeremy M
2011-09-01
A typical visual scene we encounter in everyday life is complex and filled with a huge amount of perceptual information. The term, 'visual attention' describes a set of mechanisms that limit some processing to a subset of incoming stimuli. Attentional mechanisms shape what we see and what we can act upon. They allow for concurrent selection of some (preferably, relevant) information and inhibition of other information. This selection permits the reduction of complexity and informational overload. Selection can be determined both by the 'bottom-up' saliency of information from the environment and by the 'top-down' state and goals of the perceiver. Attentional effects can take the form of modulating or enhancing the selected information. A central role for selective attention is to enable the 'binding' of selected information into unified and coherent representations of objects in the outside world. In the overview on visual attention presented here we review the mechanisms and consequences of selection and inhibition over space and time. We examine theoretical, behavioral and neurophysiologic work done on visual attention. We also discuss the relations between attention and other cognitive processes such as automaticity and awareness. WIREs Cogni Sci 2011 2 503-514 DOI: 10.1002/wcs.127 For further resources related to this article, please visit the WIREs website. Copyright © 2011 John Wiley & Sons, Ltd.
Cognitive load reducing in destination decision system
NASA Astrophysics Data System (ADS)
Wu, Chunhua; Wang, Cong; Jiang, Qien; Wang, Jian; Chen, Hong
2007-12-01
Because cognitive resources are limited, the amount of information a person can process is also limited; once that limit is exceeded, the whole cognitive process, and with it the final decision, suffers. We investigate two ways of reducing cognitive load: cutting down the number of alternatives, and directing the user's limited attentional resources on the basis of selective visual attention theory. Decision-making is a complex process, and people usually have difficulty expressing their requirements completely. This paper puts forward an effective method for capturing users' hidden requirements; the more requirements are captured, the more inappropriate alternatives the destination decision system can filter out. Different pieces of information also differ in utility: if high-utility information attracts attention easily, decisions become easier to make. After analyzing current selective visual attention theory, the paper also proposes a new presentation style based on the user's visual attention, which arranges the presentation of information according to the movement of the line of sight, so that users can devote their limited attentional resources to the important information. Capturing hidden requirements and presenting information on the basis of selective visual attention are effective ways to reduce cognitive load.
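A loose sketch of the two load-reducing steps described, filtering alternatives against captured requirements and then ordering information by utility along the presumed gaze path, is given below. All field names, requirement values, and utility scores are hypothetical; the paper's actual system is not reproduced here.

```python
# Rough sketch of the two steps described: filter destination alternatives against
# captured requirements, then order the remaining items by utility so that
# high-utility information is presented where attention lands first.
# All field names, requirements, and utility scores are hypothetical.
destinations = [
    {"name": "A", "budget": 500, "beach": True,  "utility": 0.9},
    {"name": "B", "budget": 900, "beach": True,  "utility": 0.7},
    {"name": "C", "budget": 450, "beach": False, "utility": 0.6},
]

# Step 1: hidden requirements captured from the user cut down the alternatives
requirements = {"max_budget": 600, "beach": True}
candidates = [d for d in destinations
              if d["budget"] <= requirements["max_budget"]
              and d["beach"] == requirements["beach"]]

# Step 2: present the remaining items in descending utility, matching the assumed
# order in which the user's gaze will sweep the layout
presentation_order = sorted(candidates, key=lambda d: d["utility"], reverse=True)
print([d["name"] for d in presentation_order])
```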
Memory-guided attention during active viewing of edited dynamic scenes.
Valuch, Christian; König, Peter; Ansorge, Ulrich
2017-01-01
Films, TV shows, and other edited dynamic scenes contain many cuts, which are abrupt transitions from one video shot to the next. Cuts occur within or between scenes, and often join together visually and semantically related shots. Here, we tested the degree to which memory for the visual features of the precut shot facilitates shifting attention to the postcut shot. We manipulated visual similarity across cuts, and measured how this affected covert attention (Experiment 1) and overt attention (Experiments 2 and 3). In Experiments 1 and 2, participants actively viewed a target movie that randomly switched locations with a second, distractor movie at the time of the cuts. In both experiments, participants were able to deploy attention more rapidly and accurately to the target movie's continuation when visual similarity was high than when it was low. Experiment 3 tested whether this could be explained by stimulus-driven (bottom-up) priming by feature similarity, using one clip at screen center that was followed by two alternative continuations to the left and right. Here, even the highest similarity across cuts did not capture attention. We conclude that following cuts of high visual similarity, memory-guided attention facilitates the deployment of attention, but this effect is (top-down) dependent on the viewer's active matching of scene content across cuts.
Attended but unseen: visual attention is not sufficient for visual awareness.
Kentridge, R W; Nijboer, T C W; Heywood, C A
2008-02-12
Does any one psychological process give rise to visual awareness? One candidate is selective attention: when we attend to something it seems we always see it. But if attention can selectively enhance our response to an unseen stimulus then attention cannot be a sufficient precondition for awareness. Kentridge, Heywood & Weiskrantz [Kentridge, R. W., Heywood, C. A., & Weiskrantz, L. (1999). Attention without awareness in blindsight. Proceedings of the Royal Society of London, Series B, 266, 1805-1811; Kentridge, R. W., Heywood, C. A., & Weiskrantz, L. (2004). Spatial attention speeds discrimination without awareness in blindsight. Neuropsychologia, 42, 831-835.] demonstrated just such a dissociation in the blindsight subject GY. Here, we test whether the dissociation generalizes to the normal population. We presented observers with pairs of coloured discs, each masked by the subsequent presentation of a coloured annulus. The discs acted as primes, speeding discrimination of the colour of the annulus when they matched in colour and slowing it when they differed. We show that the location of attention modulated the size of this priming effect. However, the primes were rendered invisible by metacontrast-masking and remained unseen despite being attended. Visual attention can thus facilitate the processing of an invisible target and therefore cannot be a sufficient precondition for visual awareness.
What we remember affects how we see: spatial working memory steers saccade programming.
Wong, Jason H; Peterson, Matthew S
2013-02-01
Relationships between visual attention, saccade programming, and visual working memory have been hypothesized for over a decade. Awh, Jonides, and Reuter-Lorenz (Journal of Experimental Psychology: Human Perception and Performance 24(3):780-90, 1998) and Awh et al. (Psychological Science 10(5):433-437, 1999) proposed that rehearsing a location in memory also leads to enhanced attentional processing at that location. In regard to eye movements, Belopolsky and Theeuwes (Attention, Perception & Psychophysics 71(3):620-631, 2009) found that holding a location in working memory affects saccade programming, albeit negatively. In three experiments, we attempted to replicate the findings of Belopolsky and Theeuwes (Attention, Perception & Psychophysics 71(3):620-631, 2009) and determine whether the spatial memory effect can occur in other saccade-cuing paradigms, including endogenous central arrow cues and exogenous irrelevant singletons. In the first experiment, our results were the opposite of those in Belopolsky and Theeuwes (Attention, Perception & Psychophysics 71(3):620-631, 2009), in that we found facilitation (shorter saccade latencies) instead of inhibition when the saccade target matched the region in spatial working memory. In Experiment 2, we sought to determine whether the spatial working memory effect would generalize to other endogenous cuing tasks, such as a central arrow that pointed to one of six possible peripheral locations. As in Experiment 1, we found that saccade programming was facilitated when the cued location coincided with the saccade target. In Experiment 3, we explored how spatial memory interacts with other types of cues, such as a peripheral color singleton target or irrelevant onset. In both cases, the eyes were more likely to go to either singleton when it coincided with the location held in spatial working memory. On the basis of these results, we conclude that spatial working memory and saccade programming are likely to share common overlapping circuitry.
Alvarez, George A.; Cavanagh, Patrick
2014-01-01
It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. PMID:25164651
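The frequency-tagging logic behind such steady-state measurements can be illustrated with a brief sketch: the amplitude of the response at each stimulus flicker frequency is read off the Fourier spectrum of the recorded signal. The sampling rate, epoch length, and simulated data below are assumptions, not the authors' recording parameters.

```python
# Sketch on simulated data (not the authors' pipeline): read out SSVEP amplitude
# at two tagging frequencies from the Fourier spectrum of one epoch.
import numpy as np

fs = 500.0                        # assumed sampling rate in Hz
t = np.arange(0, 4.0, 1 / fs)     # one 4-s tracking epoch
f_target, f_distractor = 12.0, 15.0
rng = np.random.default_rng(11)

# Simulated occipital channel: attended (target) tag larger than ignored tag, plus noise
eeg = (1.5 * np.sin(2 * np.pi * f_target * t)
       + 0.8 * np.sin(2 * np.pi * f_distractor * t)
       + rng.normal(0, 1, t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2      # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def amp_at(f):
    return spectrum[np.argmin(np.abs(freqs - f))]     # amplitude at the nearest bin

print("target tag amplitude:     %.2f" % amp_at(f_target))
print("distractor tag amplitude: %.2f" % amp_at(f_distractor))
```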
Visual attention is required for multiple object tracking.
Tran, Annie; Hoffman, James E
2016-12-01
In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Attention modulates specific motor cortical circuits recruited by transcranial magnetic stimulation.
Mirdamadi, J L; Suzuki, L Y; Meehan, S K
2017-09-17
Skilled performance and acquisition are dependent upon afferent input to motor cortex. The present study used short-latency afferent inhibition (SAI) to probe how manipulation of sensory afference by attention affects different circuits projecting to pyramidal tract neurons in motor cortex. SAI was assessed in the first dorsal interosseous muscle while participants performed a low or high attention-demanding visual detection task. SAI was evoked by preceding a suprathreshold transcranial magnetic stimulus with electrical stimulation of the median nerve at the wrist. To isolate different afferent intracortical circuits in motor cortex, SAI was evoked using either posterior-anterior (PA) or anterior-posterior (AP) monophasic current. In an independent sample, somatosensory processing during the same attention-demanding visual detection tasks was assessed using somatosensory-evoked potentials (SEP) elicited by median nerve stimulation. SAI elicited by AP TMS was reduced under high compared to low visual attention demands. SAI elicited by PA TMS was not affected by visual attention demands. SEPs revealed that the high visual attention load reduced the fronto-central P20-N30 but not the contralateral parietal N20-P25 SEP component. P20-N30 reduction confirmed that the visual attention task altered sensory afference. The current results offer further support that PA and AP TMS recruit different neuronal circuits. AP circuits may be one substrate by which cognitive strategies shape sensorimotor processing during skilled movement by altering sensory processing in premotor areas. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
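SAI is commonly expressed as the conditioned MEP amplitude as a percentage of the unconditioned test MEP, with smaller values indicating stronger inhibition. A minimal sketch of that computation on simulated amplitudes follows; the specific values and condition labels are illustrative only.

```python
# Sketch (simulated amplitudes): short-latency afferent inhibition (SAI) expressed
# as the mean conditioned MEP amplitude (median-nerve pulse preceding TMS) as a
# percentage of the mean unconditioned test MEP. Smaller values = stronger inhibition.
import numpy as np

rng = np.random.default_rng(1)

def sai_percent(test_mep, conditioned_mep):
    return 100.0 * conditioned_mep.mean() / test_mep.mean()

# Hypothetical peak-to-peak MEP amplitudes (mV) for one participant
test = rng.normal(1.0, 0.2, 20)
cond_low_attention = rng.normal(0.55, 0.15, 20)    # strong inhibition
cond_high_attention = rng.normal(0.80, 0.15, 20)   # reduced inhibition

print("SAI, low attention demand:  %.1f%%" % sai_percent(test, cond_low_attention))
print("SAI, high attention demand: %.1f%%" % sai_percent(test, cond_high_attention))
```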
Tas, A. Caglar; Luck, Steven J.; Hollingworth, Andrew
2016-01-01
There is substantial debate over whether visual working memory (VWM) and visual attention constitute a single system for the selection of task-relevant perceptual information or whether they are distinct systems that can be dissociated when their representational demands diverge. In the present study, we focused on the relationship between visual attention and the encoding of objects into visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a secondary object, irrelevant to the memory task, was presented. Participants were instructed either to execute an overt shift of gaze to this object (Experiments 1–3) or to attend it covertly (Experiments 4 and 5). Our goal was to determine whether these overt and covert shifts of attention disrupted the information held in VWM. We hypothesized that saccades, which typically introduce a memorial demand to bridge perceptual disruption, would lead to automatic encoding of the secondary object. However, purely covert shifts of attention, which introduce no such demand, would not result in automatic memory encoding. The results supported these predictions. Saccades to the secondary object produced substantial interference with VWM performance, but covert shifts of attention to this object produced no interference with VWM performance. These results challenge prevailing theories that consider attention and VWM to reflect a common mechanism. In addition, they indicate that the relationship between attention and VWM is dependent on the memorial demands of the orienting behavior. PMID:26854532
Standardization of Performance Tests: A Proposal for Further Steps.
1986-07-01
obviously demand substantial attention can sometimes be time shared perfectly. Wickens describes cases in which skilled pianists can time share sight-reading...effects of divided attention on information processing in tracking. Journal of Experimental Psychology, 1, 1-13. Wickens, C.D. (1984). Processing resources... attention he regards focused- divided attention tasks (e.g. dichotic listening, dual task situations) as theoretically useful. From his point of view good
Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun
2015-08-01
Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Zhang, Dan; Hong, Bo; Gao, Shangkai; Röder, Brigitte
2017-05-01
While the behavioral dynamics as well as the functional network of sustained and transient attention have extensively been studied, their underlying neural mechanisms have most often been investigated in separate experiments. In the present study, participants were instructed to perform an audio-visual spatial attention task. They were asked to attend to either the left or the right hemifield and to respond to transient deviant stimuli that were either auditory or visual. Steady-state visual evoked potentials (SSVEPs) elicited by two task-irrelevant pattern-reversing checkerboards flickering at 10 and 15 Hz in the left and the right hemifields, respectively, were used to continuously monitor the locus of spatial attention. The amplitude and phase of the SSVEPs were extracted for single trials and were separately analyzed. Sustained attention to one hemifield (spatial attention) as well as to the auditory modality (intermodal attention) increased the inter-trial phase locking of the SSVEP responses, whereas briefly presented visual and auditory stimuli decreased the single-trial SSVEP amplitude between 200 and 500 ms post-stimulus. This transient change of the single-trial amplitude was restricted to the SSVEPs elicited by the reversing checkerboard in the spatially attended hemifield and thus might reflect a transient re-orienting of attention towards the brief stimuli. Thus, the present results demonstrate independent, but interacting neural mechanisms of sustained and transient attentional orienting.
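The two single-trial SSVEP measures named here, amplitude and inter-trial phase locking at a tagging frequency, can both be derived from the complex Fourier coefficient of each trial, as sketched below. The simulated trials, window length, and exact estimator are assumptions rather than the authors' pipeline.

```python
# Sketch (simulated trials): single-trial SSVEP amplitude and inter-trial phase
# locking at one tagging frequency, via the complex Fourier coefficient per trial.
import numpy as np

fs, f_tag, n_trials = 500.0, 10.0, 60
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)

# Trials share the tag frequency but have jittered phase and added noise
phases = rng.normal(0.0, 0.6, n_trials)            # tighter phases -> higher locking
trials = np.array([np.sin(2 * np.pi * f_tag * t + p) + rng.normal(0, 1, t.size)
                   for p in phases])

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
k = np.argmin(np.abs(freqs - f_tag))
coeffs = np.fft.rfft(trials, axis=1)[:, k]          # one complex value per trial

single_trial_amplitude = np.abs(coeffs) / t.size * 2
phase_locking = np.abs(np.mean(coeffs / np.abs(coeffs)))   # 0 = random, 1 = perfectly locked

print("mean single-trial amplitude:", single_trial_amplitude.mean())
print("inter-trial phase locking:  ", phase_locking)
```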
Arend, Isabel; Machado, Liana; Ward, Robert; McGrath, Michelle; Ro, Tony; Rafal, Robert D
2008-01-01
The pulvinar nucleus of the thalamus has been considered as a key structure for visual attention functions (Grieve, K.L. et al. (2000). Trends Neurosci., 23: 35-39; Shipp, S. (2003). Philos. Trans. R. Soc. Lond. B Biol. Sci., 358(1438): 1605-1624). During the past several years, we have studied the role of the human pulvinar in visual attention and oculomotor behaviour by testing a small group of patients with unilateral pulvinar lesions. Here we summarize some of these findings, and present new evidence for the role of this structure in both eye movements and visual attention through two versions of a temporal-order judgment task and an antisaccade task. Pulvinar damage induces an ipsilesional bias in perceptual temporal-order judgments and in saccadic decision, and also increases the latency of antisaccades away from contralesional targets. The demonstration that pulvinar damage affects both attention and oculomotor behaviour highlights the role of this structure in the integration of visual and oculomotor signals and, more generally, its role in flexibly linking visual stimuli with context-specific motor responses.
Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.
Li, Linyi; Xu, Tingbao; Chen, Yun
2017-01-01
In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) was then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the other methods according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.
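The abstract does not specify the FC-VAF algorithm, so the sketch below only illustrates the general ingredients it names: multiscale features from a 2-D wavelet decomposition (here, per-subband energies via PyWavelets) and soft, fuzzy class memberships (here, inverse-distance weights to class centroids). Every detail of this sketch is an assumption, not the published method.

```python
# Loose sketch under assumptions (the paper's FC-VAF algorithm is not given here):
# multiscale features as per-subband energies of a 2-D wavelet decomposition, and
# soft "fuzzy" class memberships from inverse distances to class centroids.
import numpy as np
import pywt

def multiscale_features(image, wavelet="db2", level=3):
    """Energy of each wavelet subband as a coarse multiscale descriptor."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]                       # approximation energy
    for (cH, cV, cD) in coeffs[1:]:                         # detail subbands per level
        feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    return np.array(feats)

def fuzzy_memberships(feature_vec, centroids, m=2.0):
    """Soft memberships in [0, 1] that sum to 1 across classes (fuzzy c-means style)."""
    d = np.linalg.norm(centroids - feature_vec, axis=1) + 1e-12
    w = d ** (-2.0 / (m - 1.0))
    return w / w.sum()

# Hypothetical usage with random patches standing in for remote sensing scenes
rng = np.random.default_rng(3)
scene = rng.random((128, 128))
centroids = np.stack([multiscale_features(rng.random((128, 128))) for _ in range(3)])
print(fuzzy_memberships(multiscale_features(scene), centroids))
```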
Long-term musical training may improve different forms of visual attention ability.
Rodrigues, Ana Carolina; Loureiro, Maurício Alves; Caramelli, Paulo
2013-08-01
Many studies have suggested that structural and functional cerebral neuroplastic processes result from long-term musical training, which in turn may produce cognitive differences between musicians and non-musicians. We aimed to investigate whether intensive, long-term musical practice is associated with improvements in three different forms of visual attention ability: selective, divided and sustained attention. Musicians from symphony orchestras (n=38) and non-musicians (n=38), who were comparable in age, gender and education, were submitted to three neuropsychological tests, measuring reaction time and accuracy. Musicians showed better performance relative to non-musicians on four variables of the three visual attention tests, and such an advantage could not solely be explained by better sensorimotor integration. Moreover, in the group of musicians, significant correlations were observed between the age at the commencement of musical studies and reaction time in all visual attention tests. The results suggest that musicians present augmented ability in different forms of visual attention, thus illustrating the possible cognitive benefits of long-term musical training. Copyright © 2013 Elsevier Inc. All rights reserved.
An integrated theory of attention and decision making in visual signal detection.
Smith, Philip L; Ratcliff, Roger
2009-04-01
The simplest attentional task, detecting a cued stimulus in an otherwise empty visual field, produces complex patterns of performance. Attentional cues interact with backward masks and with spatial uncertainty, and there is a dissociation in the effects of these variables on accuracy and on response time. A computational theory of performance in this task is described. The theory links visual encoding, masking, spatial attention, visual short-term memory (VSTM), and perceptual decision making in an integrated dynamic framework. The theory assumes that decisions are made by a diffusion process driven by a neurally plausible, shunting VSTM. The VSTM trace encodes the transient outputs of early visual filters in a durable form that is preserved for the time needed to make a decision. Attention increases the efficiency of VSTM encoding, either by increasing the rate of trace formation or by reducing the delay before trace formation begins. The theory provides a detailed, quantitative account of attentional effects in spatial cuing tasks at the level of response accuracy and the response time distributions. (c) 2009 APA, all rights reserved
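The core idea, a diffusion decision process whose drift is supplied by a VSTM trace that builds toward an asymptote after stimulus onset, can be caricatured in a few lines, with attention modeled as a faster trace-formation rate. The parameters and simplifications below are illustrative and omit most of the theory's machinery (masking, spatial uncertainty, the full shunting dynamics).

```python
# Simplified sketch (not the full model of the paper): a diffusion decision process
# whose drift is proportional to a VSTM trace that builds toward an asymptote after
# stimulus onset; attention is modeled here as a faster trace-formation rate.
import numpy as np

def simulate_trial(build_rate, asymptote=1.0, drift_gain=1.5, noise=1.0,
                   bound=1.0, dt=0.001, max_t=3.0, rng=None):
    rng = rng or np.random.default_rng()
    x, t, vstm = 0.0, 0.0, 0.0
    while abs(x) < bound and t < max_t:
        vstm += build_rate * (asymptote - vstm) * dt       # saturating VSTM trace
        x += drift_gain * vstm * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t, x >= bound            # RT and whether the upper (correct) bound was hit

rng = np.random.default_rng(4)
for label, rate in [("attended (fast trace formation)  ", 8.0),
                    ("unattended (slow trace formation)", 2.0)]:
    rts, acc = zip(*(simulate_trial(rate, rng=rng) for _ in range(500)))
    print(label, "mean RT %.3f s, accuracy %.2f" % (np.mean(rts), np.mean(acc)))
```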
MacLeod, Jeffrey; Stewart, Brandie M; Newman, Aaron J; Arnell, Karen M
2017-06-01
When two targets are presented within approximately 500 ms of each other in the context of rapid serial visual presentation (RSVP), participants' ability to report the second target is reduced compared to when the targets are presented further apart in time. This phenomenon is known as the attentional blink (AB). The AB is increased in magnitude when the first target is emotionally arousing. Emotionally arousing stimuli can also capture attention and create an AB-like effect even when these stimuli are presented as to-be-ignored distractor items in a single-target RSVP task. This phenomenon is known as emotion-induced blindness (EIB). The phenomenological similarity between the behavioral results associated with the AB with an emotional T1 and those associated with EIB suggests that these effects may result from similar underlying mechanisms, a hypothesis that we tested using event-related electrical brain potentials (ERPs). Behavioral results replicated those reported previously, demonstrating an enhanced AB following an emotionally arousing target and a clear EIB effect. In both paradigms highly arousing taboo/sexual words resulted in an increased early posterior negativity (EPN) component that has been suggested to represent early semantic activation and selection for further processing in working memory. In both paradigms taboo/sexual words also produced an increased late positive potential (LPP) component that has been suggested to represent consolidation of a stimulus in working memory. Therefore, ERP results provide evidence that the EIB and emotion-enhanced AB effects share a common underlying mechanism.
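ERP components such as the EPN and LPP are typically quantified as mean amplitude within a time window over a set of electrodes, and condition differences are then compared. The sketch below shows that kind of measurement on simulated data; the windows and electrode indices are placeholders, not the parameters used in the study.

```python
# Sketch (simulated condition-average ERPs): quantify components such as the EPN
# and LPP as mean amplitude in a time window over a chosen electrode set.
# Windows and electrode indices below are illustrative assumptions.
import numpy as np

fs = 250.0
times = np.arange(-0.2, 0.8, 1 / fs)                 # epoch from -200 to 800 ms
rng = np.random.default_rng(9)

def mean_amplitude(erp, window, electrodes):
    sel = (times >= window[0]) & (times <= window[1])
    return erp[np.ix_(electrodes, sel)].mean()

# Simulated condition-average ERPs: channels x time
erp_taboo = rng.normal(0, 0.3, (32, times.size))
erp_neutral = rng.normal(0, 0.3, (32, times.size))
erp_taboo[:, (times > 0.2) & (times < 0.3)] -= 1.0   # larger (more negative) EPN
erp_taboo[:, (times > 0.4) & (times < 0.7)] += 1.5   # larger LPP

epn_window, lpp_window = (0.2, 0.3), (0.4, 0.7)
posterior, centroparietal = [28, 29, 30, 31], [10, 11, 12]

print("EPN difference:", mean_amplitude(erp_taboo, epn_window, posterior)
      - mean_amplitude(erp_neutral, epn_window, posterior))
print("LPP difference:", mean_amplitude(erp_taboo, lpp_window, centroparietal)
      - mean_amplitude(erp_neutral, lpp_window, centroparietal))
```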
Visual Attention and Autistic Behavior in Infants with Fragile X Syndrome
ERIC Educational Resources Information Center
Roberts, Jane E.; Hatton, Deborah D.; Long, Anna C. J.; Anello, Vittoria; Colombo, John
2012-01-01
Aberrant attention is a core feature of fragile X syndrome (FXS), however, little is known regarding the developmental trajectory and underlying physiological processes of attention deficits in FXS. Atypical visual attention is an early emerging and robust indicator of autism in idiopathic (non-FXS) autism. Using a biobehavioral approach with gaze…
Infant Visual Recognition Memory: Independent Contributions of Speed and Attention.
ERIC Educational Resources Information Center
Rose, Susan A.; Feldman, Judith F.; Jankowski, Jeffery J.
2003-01-01
Examined contributions of cognitive processing speed, short-term memory capacity, and attention to infant visual recognition memory. Found that infants who showed better attention and faster processing had better recognition memory. Contributions of attention and processing speed were independent of one another and similar at all ages studied--5,…
Focusing the Spotlight: Individual Differences in Visual Attention Control
ERIC Educational Resources Information Center
Heitz, Richard P.; Engle, Randall W.
2007-01-01
A time-course analysis of visual attention focusing (attentional constraint) was conducted in groups of participants with high and low working memory spans, a dimension the authors have argued reflects the ability to control attention. In 4 experiments, participants performed the Eriksen flanker paradigm under increasing levels of speed stress.…
Learning to Look for Language: Development of Joint Attention in Young Deaf Children
ERIC Educational Resources Information Center
Lieberman, Amy M.; Hatrak, Marla; Mayberry, Rachel I.
2014-01-01
Joint attention between hearing children and their caregivers is typically achieved when the adult provides spoken, auditory linguistic input that relates to the child's current visual focus of attention. Deaf children interacting through sign language must learn to continually switch visual attention between people and objects in order to achieve…
Insights into the Control of Attentional Set in ADHD Using the Attentional Blink Paradigm
ERIC Educational Resources Information Center
Mason, Deanna J.; Humphreys, Glyn W.; Kent, Lindsey
2005-01-01
Background: Previous work on visual selective attention in Attention Deficit Hyperactivity Disorder (ADHD) has utilised spatial search paradigms. This study compared ADHD to control children on a temporal search task using Rapid Serial Visual Presentation (RSVP). In addition, the effects of irrelevant singleton distractors on search performance…
The µ-opioid system promotes visual attention to faces and eyes.
Chelnokova, Olga; Laeng, Bruno; Løseth, Guro; Eikemo, Marie; Willoch, Frode; Leknes, Siri
2016-12-01
Paying attention to others' faces and eyes is a cornerstone of human social behavior. The µ-opioid receptor (MOR) system, central to social reward-processing in rodents and primates, has been proposed to mediate the capacity for affiliative reward in humans. We assessed the role of the human MOR system in visual exploration of faces and eyes of conspecifics. Thirty healthy males received a novel, bidirectional battery of psychopharmacological treatment (an MOR agonist, a non-selective opioid antagonist, or placebo, on three separate days). Eye-movements were recorded while participants viewed facial photographs. We predicted that the MOR system would promote visual exploration of faces, and hypothesized that MOR agonism would increase, and antagonism decrease, overt attention to the information-rich eye region. The expected linear effect of MOR manipulation on visual attention to the stimuli was observed, such that MOR agonism increased while antagonism decreased visual exploration of faces and overt attention to the eyes. The observed effects suggest that the human MOR system promotes overt visual attention to socially significant cues, in line with theories linking reward value to gaze control and target selection. Enhanced attention to others' faces and eyes represents a putative behavioral mechanism through which the human MOR system promotes social interest. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Wirth, Maria; Isaacowitz, Derek M; Kunzmann, Ute
2017-09-01
Prominent life span theories of emotion propose that older adults attend less to negative emotional information and report less negative emotional reactions to the same information than younger adults do. Although parallel age differences in affective information processing and age differences in emotional reactivity have been proposed, they have rarely been investigated within the same study. In this eye-tracking study, we tested age differences in visual attention and emotional reactivity, using standardized emotionally negative stimuli. Additionally, we investigated age differences in the association between visual attention and emotional reactivity, and whether these are moderated by cognitive reappraisal. Older as compared with younger adults showed fixation patterns away from negative image content, while they reacted with greater negative emotions. The association between visual attention and emotional reactivity differed by age group and positive reappraisal. Younger adults felt better when they attended more to negative content rather than less, but this relationship only held for younger adults who did not attach a positive meaning to the negative situation. For older adults, overall, there was no significant association between visual attention and emotional reactivity. However, for older adults who did not use positive reappraisal, decreases in attention to negative information were associated with less negative emotions. The present findings point to a complex relationship between younger and older adults' visual attention and emotional reactions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Anderson, Brian A
2017-03-01
Through associative reward learning, arbitrary cues acquire the ability to automatically capture visual attention. Previous studies have examined the neural correlates of value-driven attentional orienting, revealing elevated activity within a network of brain regions encompassing the visual corticostriatal loop [caudate tail, lateral occipital complex (LOC) and early visual cortex] and intraparietal sulcus (IPS). Such attentional priority signals raise a broader question concerning how visual signals are combined with reward signals during learning to create a representation that is sensitive to the confluence of the two. This study examines reward signals during the cued reward training phase commonly used to generate value-driven attentional biases. High, compared with low, reward feedback preferentially activated the value-driven attention network, in addition to regions typically implicated in reward processing. Further examination of these reward signals within the visual system revealed information about the identity of the preceding cue in the caudate tail and LOC, and information about the location of the preceding cue in IPS, while early visual cortex represented both location and identity. The results reveal teaching signals within the value-driven attention network during associative reward learning, and further suggest functional specialization within different regions of this network during the acquisition of an integrated representation of stimulus value. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Perception of ensemble statistics requires attention.
Jackson-Nielsen, Molly; Cohen, Michael A; Pitts, Michael A
2017-02-01
To overcome inherent limitations in perceptual bandwidth, many aspects of the visual world are represented as summary statistics (e.g., average size, orientation, or density of objects). Here, we investigated the relationship between summary (ensemble) statistics and visual attention. Recently, it was claimed that one ensemble statistic in particular, color diversity, can be perceived without focal attention. However, a broader debate exists over the attentional requirements of conscious perception, and it is possible that some form of attention is necessary for ensemble perception. To test this idea, we employed a modified inattentional blindness paradigm and found that multiple types of summary statistics (color and size) often go unnoticed without attention. In addition, we found attentional costs in dual-task situations, further implicating a role for attention in statistical perception. Overall, we conclude that while visual ensembles may be processed efficiently, some amount of attention is necessary for conscious perception of ensemble statistics. Copyright © 2016 Elsevier Inc. All rights reserved.
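For concreteness, the kinds of ensemble statistics at issue can be computed directly from a display's item parameters, as in the sketch below. Treating color diversity as the circular variance of item hues is an illustrative assumption, not necessarily the measure used in the cited work.

```python
# Sketch: ensemble statistics (mean size, color diversity) computed for a simulated
# display of items. Color diversity as circular variance of hue angles is an
# illustrative assumption.
import numpy as np

rng = np.random.default_rng(10)
n_items = 12
sizes = rng.uniform(0.5, 2.0, n_items)               # item radii in degrees of visual angle
hues = rng.uniform(0, 2 * np.pi, n_items)            # item hues as angles on the color wheel

mean_size = sizes.mean()
# Circular variance: 0 = all items share one hue, 1 = hues maximally spread
color_diversity = 1.0 - np.abs(np.mean(np.exp(1j * hues)))

print("mean size: %.2f deg, color diversity: %.2f" % (mean_size, color_diversity))
```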
2014-01-01
Background Neurofibromatosis type 1 (NF1) affects several areas of cognitive function including visual processing and attention. We investigated the neural mechanisms underlying the visual deficits of children and adolescents with NF1 by studying visual evoked potentials (VEPs) and brain oscillations during visual stimulation and rest periods. Methods Electroencephalogram/event-related potential (EEG/ERP) responses were measured during visual processing (NF1 n = 17; controls n = 19) and idle periods with eyes closed and eyes open (NF1 n = 12; controls n = 14). Visual stimulation was chosen to bias activation of the three detection mechanisms: achromatic, red-green and blue-yellow. Results We found significant differences between the groups for late chromatic VEPs and a specific enhancement in the amplitude of the parieto-occipital alpha amplitude both during visual stimulation and idle periods. Alpha modulation and the negative influence of alpha oscillations in visual performance were found in both groups. Conclusions Our findings suggest abnormal later stages of visual processing and enhanced amplitude of alpha oscillations supporting the existence of deficits in basic sensory processing in NF1. Given the link between alpha oscillations, visual perception and attention, these results indicate a neural mechanism that might underlie the visual sensitivity deficits and increased lapses of attention observed in individuals with NF1. PMID:24559228
Modulation of early cortical processing during divided attention to non-contiguous locations.
Frey, Hans-Peter; Schmid, Anita M; Murphy, Jeremy W; Molholm, Sophie; Lalor, Edmund C; Foxe, John J
2014-05-01
We often face the challenge of simultaneously attending to multiple non-contiguous regions of space. There is ongoing debate as to how spatial attention is divided under these situations. Whereas, for several years, the predominant view was that humans could divide the attentional spotlight, several recent studies argue in favor of a unitary spotlight that rhythmically samples relevant locations. Here, this issue was addressed by the use of high-density electrophysiology in concert with the multifocal m-sequence technique to examine visual evoked responses to multiple simultaneous streams of stimulation. Concurrently, we assayed the topographic distribution of alpha-band oscillatory mechanisms, a measure of attentional suppression. Participants performed a difficult detection task that required simultaneous attention to two stimuli in contiguous (undivided) or non-contiguous parts of space. In the undivided condition, the classic pattern of attentional modulation was observed, with increased amplitude of the early visual evoked response and increased alpha amplitude ipsilateral to the attended hemifield. For the divided condition, early visual responses to attended stimuli were also enhanced, and the observed multifocal topographic distribution of alpha suppression was in line with the divided attention hypothesis. These results support the existence of divided attentional spotlights, providing evidence that the corresponding modulation occurs during initial sensory processing time-frames in hierarchically early visual regions, and that suppressive mechanisms of visual attention selectively target distracter locations during divided spatial attention. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Visual Attention and Math Performance in Survivors of Childhood Acute Lymphoblastic Leukemia.
Richard, Annette E; Hodges, Elise K; Heinrich, Kimberley P
2018-01-24
Attentional and academic difficulties, particularly in math, are common in survivors of childhood acute lymphoblastic leukemia (ALL). Of cognitive deficits experienced by survivors of childhood ALL, attention deficits may be particularly responsive to intervention. However, it is unknown whether deficits in particular aspects of attention are associated with deficits in math skills. The current study investigated relationships between math calculation skills, performance on an objective measure of sustained attention, and parent- and teacher-reported attention difficulties. Twenty-four survivors of childhood ALL (mean age = 13.5 years, SD = 2.8 years) completed a computerized measure of sustained attention and response control and a written measure of math calculation skills in the context of a comprehensive clinical neuropsychological evaluation. Parent and teacher ratings of inattention and impulsivity were obtained. Visual response control and visual attention accounted for 26.4% of the variance observed among math performance scores after controlling for IQ (p < .05). Teacher-rated, but not parent-rated, inattention was significantly negatively correlated with math calculation scores. Consistency of responses to visual stimuli on a computerized measure of attention is a unique predictor of variance in math performance among survivors of childhood ALL. Objective testing of visual response control, rather than parent-rated attentional problems, may have clinical utility in identifying ALL survivors at risk for math difficulties. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
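The reported figure (26.4% of variance after controlling for IQ) implies a hierarchical regression in which attention measures are added to an IQ-only model and the increase in R-squared is taken as their unique contribution. A sketch of that computation on simulated data follows; variable names and effect sizes are invented.

```python
# Sketch (simulated data): incremental variance in math scores explained by an
# attention measure after controlling for IQ, i.e. the R-squared increase from an
# IQ-only regression to one that also includes the attention measure.
import numpy as np

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])             # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(5)
n = 24
iq = rng.normal(100, 15, n)
attention = rng.normal(0, 1, n)                            # e.g., response-control score
math = 0.4 * (iq - 100) / 15 + 0.5 * attention + rng.normal(0, 1, n)

r2_iq = r_squared(iq[:, None], math)
r2_full = r_squared(np.column_stack([iq, attention]), math)
print("variance added by the attention measure: %.1f%%" % (100 * (r2_full - r2_iq)))
```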
Internal and external spatial attention examined with lateralized EEG power spectra.
Van der Lubbe, Rob H J; Bundt, Carsten; Abrahamse, Elger L
2014-10-02
Several authors argued that retrieval of an item from visual short term memory (internal spatial attention) and focusing attention on an externally presented item (external spatial attention) are similar. Part of the neuroimaging support for this view may be due to the employed experimental procedures. Furthermore, as internal spatial attention may have a more induced than evoked nature some effects may not have been visible in event related analyses of the electroencephalogram (EEG), which limits the possibility to demonstrate differences. In the current study, a colored frame cued which stimulus, one out of four presented in separate quadrants, required a response, which depended on the form of the cued stimulus (circle or square). Importantly, the frame occurred either before (precue), simultaneously with (simultaneous cue), or after the stimuli (postcue). The precue and simultaneous cue condition both concern external attention, while the postcue condition implies the involvement of internal spatial attention. Event-related lateralizations (ERLs), reflecting evoked effects, and lateralized power spectra (LPS), reflecting both evoked and induced effects, were determined. ERLs revealed a posterior contralateral negativity (PCN) only in the precue condition. LPS analyses on the raw EEG showed early increased contralateral theta power at posterior sites and later increased ipsilateral alpha power at occipito-temporal sites in all cue conditions. Responses were faster when the internally or externally attended location corresponded with the required response side than when not. These findings provide further support for the view that internal and external spatial attention share their underlying mechanism. Copyright © 2014 Elsevier B.V. All rights reserved.
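Lateralized power spectra contrast spectral power at electrodes ipsilateral versus contralateral to the attended (or remembered) side. A common normalization is (ipsi - contra) / (ipsi + contra) per frequency; the sketch below uses that form on simulated posterior channels, which may not match the paper's exact computation.

```python
# Sketch, assuming the common normalization LPS = (ipsi - contra) / (ipsi + contra)
# per frequency; the paper's exact computation may differ. Data are simulated.
import numpy as np
from scipy.signal import welch

fs = 256.0
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(6)

# Simulated posterior channels: stronger 10-Hz alpha ipsilateral to the attended side
ipsi = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
contra = 0.8 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

freqs, p_ipsi = welch(ipsi, fs=fs, nperseg=256)
_, p_contra = welch(contra, fs=fs, nperseg=256)
lps = (p_ipsi - p_contra) / (p_ipsi + p_contra)

alpha = (freqs >= 8) & (freqs <= 12)
print("mean alpha-band LPS: %.2f (positive = more ipsilateral alpha power)" % lps[alpha].mean())
```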
Smid, Henderikus G. O. M.; Bruggeman, Richard; Martens, Sander
2013-01-01
Background Schizophrenia is associated with impairments of the perception of objects, but how this affects higher cognitive functions, whether this impairment is already present after recent onset of psychosis, and whether it is specific for schizophrenia related psychosis, is not clear. We therefore tested the hypothesis that because schizophrenia is associated with impaired object perception, schizophrenia patients should differ in shifting attention between objects compared to healthy controls. To test this hypothesis, a task was used that allowed us to separately observe space-based and object-based covert orienting of attention. To examine whether impairment of object-based visual attention is related to higher order cognitive functions, standard neuropsychological tests were also administered. Method Patients with recent onset psychosis and normal controls performed the attention task, in which space- and object-based attention shifts were induced by cue-target sequences that required reorienting of attention within an object, or reorienting attention between objects. Results Patients with and without schizophrenia showed slower than normal spatial attention shifts, but the object-based component of attention shifts in patients was smaller than normal. Schizophrenia was specifically associated with slowed right-to-left attention shifts. Reorienting speed was significantly correlated with verbal memory scores in controls, and with visual attention scores in patients, but not with speed-of-processing scores in either group. Conclusions deficits of object-perception and spatial attention shifting are not only associated with schizophrenia, but are common to all psychosis patients. Schizophrenia patients only differed by having abnormally slow right-to-left visual field reorienting. Deficits of object-perception and spatial attention shifting are already present after recent onset of psychosis. Studies investigating visual spatial attention should take into account the separable effects of space-based and object-based shifting of attention. Impaired reorienting in patients was related to impaired visual attention, but not to deficits of processing speed and verbal memory. PMID:23536901
(C)overt attention and visual speller design in an ERP-based brain-computer interface.
Treder, Matthias S; Blankertz, Benjamin
2010-05-28
In a visual oddball paradigm, attention to an event usually modulates the event-related potential (ERP). An ERP-based brain-computer interface (BCI) exploits this neural mechanism for communication. Hitherto, it was unclear to what extent the accuracy of such a BCI requires eye movements (overt attention) or whether it is also feasible for targets in the visual periphery (covert attention). Also unclear was how the visual design of the BCI can be improved to meet peculiarities of peripheral vision such as low spatial acuity and crowding. Healthy participants (N = 13) performed a copy-spelling task wherein they had to count target intensifications. EEG and eye movements were recorded concurrently. First, (c)overt attention was investigated by way of a target fixation condition and a central fixation condition. In the latter, participants had to fixate a dot in the center of the screen and allocate their attention to a target in the visual periphery. Second, the effect of visual speller layout was investigated by comparing the symbol Matrix to an ERP-based Hex-o-Spell, a two-levels speller consisting of six discs arranged on an invisible hexagon. We assessed counting errors, ERP amplitudes, and offline classification performance. There is an advantage (i.e., less errors, larger ERP amplitude modulation, better classification) of overt attention over covert attention, and there is also an advantage of the Hex-o-Spell over the Matrix. Using overt attention, P1, N1, P2, N2, and P3 components are enhanced by attention. Using covert attention, only N2 and P3 are enhanced for both spellers, and N1 and P2 are modulated when using the Hex-o-Spell but not when using the Matrix. Consequently, classifiers rely mainly on early evoked potentials in overt attention and on later cognitive components in covert attention. Both overt and covert attention can be used to drive an ERP-based BCI, but performance is markedly lower for covert attention. The Hex-o-Spell outperforms the Matrix, especially when eye movements are not permitted, illustrating that performance can be increased if one accounts for peculiarities of peripheral vision.
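The offline classification step of such an ERP speller can be sketched as follows: epochs are reduced to mean-amplitude features in successive time windows per channel, and a linear discriminant separates target from non-target epochs. The simulated data, windowing, and plain LDA (rather than any regularized variant the authors may have used) are assumptions.

```python
# Sketch (simulated epochs, not the authors' exact pipeline): target vs non-target
# classification of ERP epochs with mean-amplitude features and an LDA classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_epochs, n_channels, n_samples = 600, 16, 100        # 100 samples ~ 0-800 ms epochs
labels = rng.integers(0, 2, n_epochs)                 # 1 = attended target

epochs = rng.normal(0, 1, (n_epochs, n_channels, n_samples))
p300 = np.exp(-0.5 * ((np.arange(n_samples) - 40) / 8.0) ** 2)   # simulated P3-like bump
epochs[labels == 1] += 1.2 * p300                     # added to every channel of targets

# Features: mean amplitude in consecutive 10-sample windows, per channel
feats = epochs.reshape(n_epochs, n_channels, 10, 10).mean(axis=3).reshape(n_epochs, -1)

clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, feats, labels, cv=5).mean())
```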
Bressler, David W.; Silver, Michael A.
2010-01-01
Spatial attention improves visual perception and increases the amplitude of neural responses in visual cortex. In addition, spatial attention tasks and fMRI have been used to discover topographic visual field representations in regions outside visual cortex. We therefore hypothesized that requiring subjects to attend to a retinotopic mapping stimulus would facilitate the characterization of visual field representations in a number of cortical areas. In our study, subjects attended either a central fixation point or a wedge-shaped stimulus that rotated about the fixation point. Response reliability was assessed by computing coherence between the fMRI time series and a sinusoid with the same frequency as the rotating wedge stimulus. When subjects attended to the rotating wedge instead of ignoring it, the reliability of retinotopic mapping signals increased by approximately 50% in early visual cortical areas (V1, V2, V3, V3A/B, V4) and ventral occipital cortex (VO1) and by approximately 75% in lateral occipital (LO1, LO2) and posterior parietal (IPS0, IPS1 and IPS2) cortical areas. Additionally, one 5-minute run of retinotopic mapping in the attention-to-wedge condition produced responses as reliable as the average of three to five (early visual cortex) or more than five (lateral occipital, ventral occipital, and posterior parietal cortex) attention-to-fixation runs. These results demonstrate that allocating attention to the retinotopic mapping stimulus substantially reduces the amount of scanning time needed to determine the visual field representations in occipital and parietal topographic cortical areas. Attention significantly increased response reliability in every cortical area we examined and may therefore be a general mechanism for improving the fidelity of neural representations of sensory stimuli at multiple levels of the cortical processing hierarchy. PMID:20600961
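The reliability measure described here follows a convention common in phase-encoded retinotopic mapping: coherence is the Fourier amplitude of a voxel time series at the stimulus (wedge-rotation) frequency divided by the root sum of squared amplitudes across all frequencies. The sketch below assumes that definition and uses synthetic data; it is not the authors' exact implementation.

```python
# Coherence of a voxel time series with a sinusoid at the stimulus frequency.
import numpy as np

def coherence_at_frequency(ts, stim_cycles):
    """ts: 1-D voxel time series; stim_cycles: stimulus cycles per scanning run."""
    ts = ts - ts.mean()
    amps = np.abs(np.fft.rfft(ts))
    return amps[stim_cycles] / np.sqrt(np.sum(amps[1:] ** 2))  # skip the DC term

# Example: 10 wedge cycles across 200 volumes, plus noise.
rng = np.random.default_rng(1)
n_vols, cycles = 200, 10
t = np.arange(n_vols)
voxel = np.sin(2 * np.pi * cycles * t / n_vols) + rng.normal(scale=1.0, size=n_vols)
print(round(coherence_at_frequency(voxel, cycles), 3))
```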
Schwartz, Sophie; Vuilleumier, Patrik; Hutton, Chloe; Maravita, Angelo; Dolan, Raymond J; Driver, Jon
2005-06-01
Perceptual suppression of distractors may depend on both endogenous and exogenous factors, such as attentional load of the current task and sensory competition among simultaneous stimuli, respectively. We used functional magnetic resonance imaging (fMRI) to compare these two types of attentional effects and examine how they may interact in the human brain. We varied the attentional load of a visual monitoring task performed on a rapid stream at central fixation without altering the central stimuli themselves, while measuring the impact on fMRI responses to task-irrelevant peripheral checkerboards presented either unilaterally or bilaterally. Activations in visual cortex for irrelevant peripheral stimulation decreased with increasing attentional load at fixation. This relative decrease was present even in V1, but became larger for successive visual areas through to V4. Decreases in activation for contralateral peripheral checkerboards due to higher central load were more pronounced within retinotopic cortex corresponding to 'inner' peripheral locations relatively near the central targets than for more eccentric 'outer' locations, demonstrating a predominant suppression of nearby surround rather than strict 'tunnel vision' during higher task load at central fixation. Contralateral activations for peripheral stimulation in one hemifield were reduced by competition with concurrent stimulation in the other hemifield only in inferior parietal cortex, not in retinotopic areas of occipital visual cortex. In addition, central attentional load interacted with competition due to bilateral versus unilateral peripheral stimuli specifically in posterior parietal and fusiform regions. These results reveal that task-dependent attentional load, and interhemifield stimulus-competition, can produce distinct influences on the neural responses to peripheral visual stimuli within the human visual system. These distinct mechanisms in selective visual processing may be integrated within posterior parietal areas, rather than earlier occipital cortex.
Sex differences in visual attention to sexually explicit videos: a preliminary study.
Tsujimura, Akira; Miyagawa, Yasushi; Takada, Shingo; Matsuoka, Yasuhiro; Takao, Tetsuya; Hirai, Toshiaki; Matsushita, Masateru; Nonomura, Norio; Okuyama, Akihiko
2009-04-01
Although men appear to be more interested in sexual stimuli than women, this difference is not completely understood. Eye-tracking technology has been used to investigate visual attention to still sexual images; however, it has not been applied to moving sexual images. To investigate whether sex difference exists in visual attention to sexual videos. Eleven male and 11 female healthy volunteers were studied by our new methodology. The subjects viewed two sexual videos (one depicting sexual intercourse and one not) in which several regions were designated for eye-gaze analysis in each frame. Visual attention was measured across each designated region according to gaze duration. Sex differences, the region attracting the most attention, and visually favored sex were evaluated. In the nonintercourse clip, gaze time for the face and body of the actress was significantly shorter among women than among men. Gaze time for the face and body of the actor and nonhuman regions was significantly longer for women than men. The region attracting the most attention was the face of the actress for both men and women. Men viewed the opposite sex for a significantly longer period than did women, and women viewed their own sex for a significantly longer period than did men. However, gaze times for the clip showing intercourse were not significantly different between sexes. A sex difference existed in visual attention to a sexual video without heterosexual intercourse; men viewed the opposite sex for longer periods than did women, and women viewed the same sex for longer periods than did men. There was no statistically significant sex difference in viewing patterns in a sexual video showing heterosexual intercourse, and we speculate that men and women may have similar visual attention patterns if the sexual stimuli are sufficiently explicit.
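The gaze-duration measure used here reduces to summing fixation time inside each designated region of interest. The sketch below illustrates that computation under assumed region names, rectangles, and fixation data; it is not the authors' software.

```python
# Total fixation time (dwell time) per designated region (AOI) of a video frame.
from collections import defaultdict

aois = {  # region -> (x_min, y_min, x_max, y_max) in pixels; hypothetical values
    "actress_face": (300, 50, 420, 180),
    "actor_face":   (700, 60, 820, 190),
    "background":   (0, 0, 1024, 768),
}

fixations = [  # (x, y, duration_ms) from the eye tracker; placeholder values
    (350, 100, 220), (760, 120, 180), (100, 600, 150),
]

def dwell_times(fixations, aois):
    totals = defaultdict(float)
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
                break  # assign each fixation to the first matching region
    return dict(totals)

print(dwell_times(fixations, aois))
```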
Theory of Visual Attention (TVA) applied to mice in the 5-choice serial reaction time task.
Fitzpatrick, C M; Caballero-Puntiverio, M; Gether, U; Habekost, T; Bundesen, C; Vangkilde, S; Woldbye, D P D; Andreasen, J T; Petersen, A
2017-03-01
The 5-choice serial reaction time task (5-CSRTT) is widely used to measure rodent attentional functions. In humans, many attention studies in healthy and clinical populations have used testing based on Bundesen's Theory of Visual Attention (TVA) to estimate visual processing speeds and other parameters of attentional capacity. We aimed to bridge these research fields by modifying the 5-CSRTT's design and by mathematically modelling data to derive attentional parameters analogous to human TVA-based measures. C57BL/6 mice were tested in two 1-h sessions on consecutive days with a version of the 5-CSRTT where stimulus duration (SD) probe length was varied based on information from previous TVA studies. Thereafter, a scopolamine hydrobromide (HBr; 0.125 or 0.25 mg/kg) pharmacological challenge was undertaken, using a Latin square design. Mean score values were modelled using a new three-parameter version of TVA to obtain estimates of visual processing speeds, visual thresholds and motor response baselines in each mouse. The parameter estimates for each animal were reliable across sessions, showing that the data were stable enough to support analysis on an individual level. Scopolamine HBr dose-dependently reduced 5-CSRTT attentional performance while also increasing reward collection latency at the highest dose. Upon TVA modelling, scopolamine HBr significantly reduced visual processing speed at both doses, while having less pronounced effects on visual thresholds and motor response baselines. This study shows for the first time how 5-CSRTT performance in mice can be mathematically modelled to yield estimates of attentional capacity that are directly comparable to estimates from human studies.
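The abstract describes a three-parameter TVA-style model with a visual processing speed, a visual threshold, and a motor/response baseline. One plausible parameterization, consistent with TVA's exponential accrual of evidence above a threshold exposure duration, is sketched below with synthetic data; the paper's exact formulation may differ.

```python
# Fit accuracy as a function of stimulus duration (SD) with a TVA-style curve:
# baseline b below threshold t0, exponential rise at processing speed v above it.
import numpy as np
from scipy.optimize import curve_fit

def tva_accuracy(sd, v, t0, b):
    sd = np.asarray(sd, dtype=float)
    growth = 1.0 - (1.0 - b) * np.exp(-v * np.clip(sd - t0, 0.0, None))
    return np.where(sd > t0, growth, b)

# Synthetic per-mouse data: stimulus durations (s) and proportion correct.
durations = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0])
observed  = np.array([0.22, 0.35, 0.48, 0.58, 0.66, 0.78, 0.84])

params, _ = curve_fit(tva_accuracy, durations, observed,
                      p0=[1.0, 0.1, 0.2], bounds=([0, 0, 0], [20, 1, 1]))
v, t0, b = params
print(f"processing speed v={v:.2f}/s, threshold t0={t0*1000:.0f} ms, baseline b={b:.2f}")
```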
Cross-Modal Attention Effects in the Vestibular Cortex during Attentive Tracking of Moving Objects.
Frank, Sebastian M; Sun, Liwei; Forster, Lisa; Tse, Peter U; Greenlee, Mark W
2016-12-14
The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking. In this study we investigate cross-modal attention effects in the human vestibular cortex. We applied the visual multiple-object tracking task because it is known to evoke attentional load effects on neural activity in visual motion-processing and attention-processing areas. Here we demonstrate a load-dependent effect of attention on the activation in the vestibular cortex, despite constant visual motion stimulation. We find that activity in the parietoinsular vestibular cortex is more strongly suppressed the greater the attentional load on the visual tracking task. These findings suggest cross-modal attentional modulation in the vestibular cortex. Copyright © 2016 the authors 0270-6474/16/3612720-09$15.00/0.
Searching in clutter : visual attention strategies of expert pilots
DOT National Transportation Integrated Search
2012-10-22
Clutter can slow visual search. However, experts may develop attention strategies that alleviate the effects of clutter on search performance. In the current study we examined the effects of global and local clutter on visual search performance and a...
Wiese, Holger; Schweinberger, Stefan R
2015-01-01
The present study examined whether semantic memory for newly learned people is structured by visual co-occurrence, shared semantics, or both. Participants were trained with pairs of simultaneously presented (i.e., co-occurring) preexperimentally unfamiliar faces, which either did or did not share additionally provided semantic information (occupation, place of living, etc.). Semantic information could also be shared between faces that did not co-occur. A subsequent priming experiment revealed faster responses for both co-occurrence/no shared semantics and no co-occurrence/shared semantics conditions, than for an unrelated condition. Strikingly, priming was strongest in the co-occurrence/shared semantics condition, suggesting additive effects of these factors. Additional analysis of event-related brain potentials yielded priming in the N400 component only for combined effects of visual co-occurrence and shared semantics, with more positive amplitudes in this than in the unrelated condition. Overall, these findings suggest that both semantic relatedness and visual co-occurrence are important when novel information is integrated into person-related semantic memory.
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Manipulating the disengage operation of covert visual spatial attention.
Danckert, J; Maruff, P
1997-05-01
Processes of covert visual spatial attention have been closely linked to the programming of saccadic eye movements. In particular, it has been hypothesized that the reduction in saccadic latency that occurs in the gap paradigm is due to the prior disengagement of covert visual spatial attention. This explanation has received considerable criticism. No study has yet attempted to demonstrate a facilitation of the disengagement of attention from a covertly attended object. If such facilitation were possible, it would support the hypothesis that the predisengagement of covert attention is necessary for the generation of express saccades. In two experiments using covert orienting of visual attention tasks (COVAT), with a high probability that targets would appear contralateral to the cued location, we attempted to facilitate the disengagement of covert attention by extinguishing peripheral cues prior to the appearance of targets. We hypothesized that the gap between cue offset and target onset would facilitate disengagement of attention from a covertly attended object. For both experiments, responses to targets appearing after a gap were slower than were responses in the no-gap condition. These results suggest that the prior offset of a covertly attended object does not facilitate the disengagement of attention.
Williams, Isla M; Schofield, Peter; Khade, Neha; Abel, Larry A
2016-12-01
Multiple sclerosis (MS) frequently causes impairment of cognitive function. We compared patients with MS with controls on divided visual attention tasks. The MS patients' and controls' stare optokinetic nystagmus (OKN) was recorded in response to a 24°/s full field stimulus. Suppression of the OKN response, judged by the gain, was measured during tasks dividing visual attention between the fixation target and a second stimulus, central or peripheral, static or dynamic. All participants completed the Audio Recorded Cognitive Screen. MS patients had lower gain on the baseline stare OKN. OKN suppression in divided attention tasks was the same in MS patients as in controls but in both groups was better maintained in static than in dynamic tasks. In only dynamic tasks, older age was associated with less effective OKN suppression. MS patients had lower scores on a timed attention task and on memory. There was no significant correlation between attention or memory and eye movement parameters. Attention, a complex multifaceted construct, has different neural combinations for each task. Despite impairments on some measures of attention, MS patients completed the divided visual attention tasks normally. Copyright © 2016 Elsevier Ltd. All rights reserved.
Color categories affect pre-attentive color perception.
Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna
2010-10-01
Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.
Munafò, Marcus R; Roberts, Nicole; Bauld, Linda; Leonards, Ute
2011-08-01
To assess the impact of plain packaging on visual attention towards health warning information on cigarette packs. Mixed-model experimental design, comprising smoking status as a between-subjects factor, and package type (branded versus plain) as a within-subjects factor. University laboratory. Convenience sample of young adults, comprising non-smokers (n = 15), weekly smokers (n = 14) and daily smokers (n = 14). Number of saccades (eye movements) towards health warnings on cigarette packs, to directly index visual attention. Analysis of variance indicated more eye movements (i.e. greater visual attention) towards health warnings compared to brand information on plain packs versus branded packs. This effect was observed among non-smokers and weekly smokers, but not daily smokers. Among non-smokers and non-daily cigarette smokers, plain packaging appears to increase visual attention towards health warning information and away from brand information. © 2011 The Authors, Addiction © 2011 Society for the Study of Addiction.
Briand, K A; Klein, R M
1987-05-01
In the present study we investigated whether the visually allocated "beam" studied by Posner and others is the same visual attentional resource that performs the role of feature integration in Treisman's model. Subjects were cued to attend to a certain spatial location by a visual cue, and performance at expected and unexpected stimulus locations was compared. Subjects searched for a target letter (R) with distractor letters that either could give rise to illusory conjunctions (PQ) or could not (PB). Results from three separate experiments showed that orienting attention in response to central cues (endogenous orienting) showed similar effects for both conjunction and feature search. However, when attention was oriented with peripheral visual cues (exogenous orienting), conjunction search showed larger effects of attention than did feature search. It is suggested that the attentional systems that are oriented in response to central and peripheral cues may not be the same and that only the latter performs a role in feature integration. Possibilities for future research are discussed.
Attentional enhancement of spatial resolution: linking behavioural and neurophysiological evidence
Anton-Erxleben, Katharina; Carrasco, Marisa
2014-01-01
Attention allows us to select relevant sensory information for preferential processing. Behaviourally, it improves performance in various visual tasks. One prominent effect of attention is the modulation of performance in tasks that involve the visual system’s spatial resolution. Physiologically, attention modulates neuronal responses and alters the profile and position of receptive fields near the attended location. Here, we develop a hypothesis linking the behavioural and electrophysiological evidence. The proposed framework seeks to explain how these receptive field changes enhance the visual system’s effective spatial resolution and how the same mechanisms may also underlie attentional effects on the representation of spatial information. PMID:23422910
Modulation of Neuronal Responses by Exogenous Attention in Macaque Primary Visual Cortex.
Wang, Feng; Chen, Minggui; Yan, Yin; Zhaoping, Li; Li, Wu
2015-09-30
Visual perception is influenced by attention deployed voluntarily or triggered involuntarily by salient stimuli. Modulation of visual cortical processing by voluntary or endogenous attention has been extensively studied, but much less is known about how involuntary or exogenous attention affects responses of visual cortical neurons. Using implanted microelectrode arrays, we examined the effects of exogenous attention on neuronal responses in the primary visual cortex (V1) of awake monkeys. A bright annular cue was flashed either around the receptive fields of recorded neurons or in the opposite visual field to capture attention. A subsequent grating stimulus probed the cue-induced effects. In a fixation task, when the cue-to-probe stimulus onset asynchrony (SOA) was <240 ms, the cue induced a transient increase of neuronal responses to the probe at the cued location during 40-100 ms after the onset of neuronal responses to the probe. This facilitation diminished and disappeared after repeated presentations of the same cue but recurred for a new cue of a different color. In another task to detect the probe, relative shortening of monkey's reaction times for the validly cued probe depended on the SOA in a way similar to the cue-induced V1 facilitation, and the behavioral and physiological cueing effects remained after repeated practice. Flashing two cues simultaneously in the two opposite visual fields weakened or diminished both the physiological and behavioral cueing effects. Our findings indicate that exogenous attention significantly modulates V1 responses and that the modulation strength depends on both novelty and task relevance of the stimulus. Significance statement: Visual attention can be involuntarily captured by a sudden appearance of a conspicuous object, allowing rapid reactions to unexpected events of significance. The current study discovered a correlate of this effect in monkey primary visual cortex. An abrupt, salient, flash enhanced neuronal responses, and shortened the animal's reaction time, to a subsequent visual probe stimulus at the same location. However, the enhancement of the neural responses diminished after repeated exposures to this flash if the animal was not required to react to the probe. Moreover, a second, simultaneous, flash at another location weakened the neuronal and behavioral effects of the first one. These findings revealed, beyond the observations reported so far, the effects of exogenous attention in the brain. Copyright © 2015 the authors 0270-6474/15/3513419-11$15.00/0.
Attention affects visual perceptual processing near the hand.
Cosman, Joshua D; Vecera, Shaun P
2010-09-01
Specialized, bimodal neural systems integrate visual and tactile information in the space near the hand. Here, we show that visuo-tactile representations allow attention to influence early perceptual processing, namely, figure-ground assignment. Regions that were reached toward were more likely than other regions to be assigned as foreground figures, and hand position competed with image-based information to bias figure-ground assignment. Our findings suggest that hand position allows attention to influence visual perceptual processing and that visual processes typically viewed as unimodal can be influenced by bimodal visuo-tactile representations.
Lateralization in Alpha-Band Oscillations Predicts the Locus and Spatial Distribution of Attention
Ikkai, Akiko; Dandekar, Sangita; Curtis, Clayton E.
2016-01-01
Attending to a task-relevant location changes how neural activity oscillates in the alpha band (8–13 Hz) in posterior visual cortical areas. However, the relationships between top-down attention, changes in alpha oscillations in visual cortex, and attentional performance are still poorly understood. Here, we tested the degree to which the posterior alpha power tracked the locus of attention, the distribution of attention, and how well the topography of alpha could predict the locus of attention. We recorded magnetoencephalographic (MEG) data while subjects performed an attention-demanding visual discrimination task that dissociated the direction of attention from the direction of a saccade to indicate choice. On some trials, an endogenous cue predicted the target’s location, while on others it contained no spatial information. When the target’s location was cued, alpha power decreased in sensors over occipital cortex contralateral to the attended visual field. When the cue did not predict the target’s location, alpha power again decreased in sensors over occipital cortex, but bilaterally, and increased in sensors over frontal cortex. Thus, the distribution and the topography of alpha reliably indicated the locus of covert attention. Together, these results suggest that alpha synchronization reflects changes in the excitability of populations of neurons whose receptive fields match the locus of attention. This is consistent with the hypothesis that alpha oscillations reflect the neural mechanisms by which top-down control of attention biases information processing and modulates the activity of neurons in visual cortex. PMID:27144717
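A common way to summarize the hemispheric alpha effects reported here is a lateralization index computed from band power in left- versus right-hemisphere posterior sensors. The sketch below is a generic illustration with synthetic signals and assumed sensor groupings, not the authors' MEG analysis.

```python
# Alpha-band (8-13 Hz) power per sensor via Welch's method, then a
# contralateral-vs-ipsilateral lateralization index.
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs):
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band = (freqs >= 8) & (freqs <= 13)
    return psd[band].mean()

fs = 250
rng = np.random.default_rng(3)
left_occ = rng.normal(size=fs * 10)    # placeholder left occipital sensor
right_occ = rng.normal(size=fs * 10)   # placeholder right occipital sensor

# For attention to the right visual field, the left hemisphere is contralateral.
contra, ipsi = alpha_power(left_occ, fs), alpha_power(right_occ, fs)
ali = (ipsi - contra) / (ipsi + contra)   # > 0 indicates contralateral alpha suppression
print(f"alpha lateralization index: {ali:.3f}")
```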
ERIC Educational Resources Information Center
Vergauwe, Evie; Barrouillet, Pierre; Camos, Valerie
2009-01-01
Examinations of interference between visual and spatial materials in working memory have suggested domain- and process-based fractionations of visuo-spatial working memory. The present study examined the role of central time-based resource sharing in visuo-spatial working memory and assessed its role in obtained interference patterns. Visual and…
What Is the Unit of Visual Attention? Object for Selection, but Boolean Map for Access
ERIC Educational Resources Information Center
Huang, Liqiang
2010-01-01
In the past 20 years, numerous theories and findings have suggested that the unit of visual attention is the object. In this study, I first clarify 2 different meanings of unit of visual attention, namely the unit of access in the sense of measurement and the unit of selection in the sense of division. In accordance with this distinction, I argue…
Visual attention to food cues in obesity: an eye-tracking study.
Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M
2014-12-01
Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males, who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts; however, no between-group differences by weight were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors and thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.
Integrating mechanisms of visual guidance in naturalistic language production.
Coco, Moreno I; Keller, Frank
2015-05-01
Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice span of the cued object and its perceptual competitor are similar; its latency is mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
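One way to operationalize the "entropy of attentional landscapes" mentioned here is to build a smoothed fixation-density map over the scene and compute its Shannon entropy. The sketch below illustrates that idea with assumed screen dimensions, grid size, smoothing, and synthetic fixations; it is not the authors' exact measure.

```python
# Shannon entropy of a fixation-density ("attentional landscape") map.
import numpy as np
from scipy.ndimage import gaussian_filter

def landscape_entropy(fix_x, fix_y, screen=(1024, 768), bins=(32, 24), sigma=1.0):
    hist, _, _ = np.histogram2d(fix_x, fix_y, bins=bins,
                                range=[[0, screen[0]], [0, screen[1]]])
    density = gaussian_filter(hist, sigma=sigma)
    p = density / density.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()   # bits; higher = more dispersed attention

rng = np.random.default_rng(4)
# Placeholder fixations: clustered on one scene region vs spread widely.
clustered = landscape_entropy(rng.normal(512, 40, 100), rng.normal(384, 40, 100))
dispersed = landscape_entropy(rng.uniform(0, 1024, 100), rng.uniform(0, 768, 100))
print(f"clustered: {clustered:.2f} bits, dispersed: {dispersed:.2f} bits")
```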
Both hand position and movement direction modulate visual attention
Festman, Yariv; Adam, Jos J.; Pratt, Jay; Fischer, Martin H.
2013-01-01
The current study explored effects of continuous hand motion on the allocation of visual attention. A concurrent paradigm was used to combine visually concealed continuous hand movements with an attentionally demanding letter discrimination task. The letter probe appeared contingent upon the moving right hand passing through one of six positions. Discrimination responses were then collected via a keyboard press with the static left hand. Both the right hand's position and its movement direction systematically contributed to participants' visual sensitivity. Discrimination performance increased substantially when the right hand was distant from, but moving toward the visual probe location (replicating the far-hand effect, Festman et al., 2013). However, this effect disappeared when the probe appeared close to the static left hand, supporting the view that static and dynamic features of both hands combine in modulating pragmatic maps of attention. PMID:24098288
A theta rhythm in macaque visual cortex and its attentional modulation
Spyropoulos, Georgios; Fries, Pascal
2018-01-01
Theta rhythms govern rodent sniffing and whisking, and human language processing. Human psychophysics suggests a role for theta also in visual attention. However, little is known about theta in visual areas and its attentional modulation. We used electrocorticography (ECoG) to record local field potentials (LFPs) simultaneously from areas V1, V2, V4, and TEO of two macaque monkeys performing a selective visual attention task. We found a ≈4-Hz theta rhythm within both the V1–V2 and the V4–TEO region, and theta synchronization between them, with a predominantly feedforward directed influence. ECoG coverage of large parts of these regions revealed a surprising spatial correspondence between theta and visually induced gamma. Furthermore, gamma power was modulated with theta phase. Selective attention to the respective visual stimulus strongly reduced these theta-rhythmic processes, leading to an unusually strong attention effect for V1. Microsaccades (MSs) were partly locked to theta. However, neuronal theta rhythms tended to be even more pronounced for epochs devoid of MSs. Thus, we find an MS-independent theta rhythm specific to visually driven parts of V1–V2, which rhythmically modulates local gamma and entrains V4–TEO, and which is strongly reduced by attention. We propose that the less theta-rhythmic and thereby more continuous processing of the attended stimulus serves the exploitation of this behaviorally most relevant information. The theta-rhythmic and thereby intermittent processing of the unattended stimulus likely reflects the ecologically important exploration of less relevant sources of information. PMID:29848632
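The finding that "gamma power was modulated with theta phase" is the signature of phase-amplitude coupling. A generic sketch of such an analysis is shown below: band-pass filter the LFP, extract theta phase and gamma amplitude with the Hilbert transform, and compute a mean-vector-length coupling measure. The frequency bands and the synthetic signal are illustrative assumptions, not the study's actual analysis.

```python
# Theta-gamma phase-amplitude coupling via a mean-vector-length measure.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 1000
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(5)
# Synthetic LFP: 4-Hz theta plus gamma whose amplitude follows theta phase.
theta = np.sin(2 * np.pi * 4 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 60 * t) * 0.3
lfp = theta + gamma + rng.normal(scale=0.5, size=t.size)

theta_phase = np.angle(hilbert(bandpass(lfp, 2, 6, fs)))
gamma_amp = np.abs(hilbert(bandpass(lfp, 50, 70, fs)))
mvl = np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))
print(f"theta-gamma coupling (mean vector length): {mvl:.3f}")
```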
Enhancing Cognition with Video Games: A Multiple Game Training Study
Oei, Adam C.; Patterson, Michael D.
2013-01-01
Background Previous evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands. Methodology/Principal Findings We instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden-object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden-object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training. Conclusion/Significance Cognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects. PMID:23516504
Phonological Skills, Visual Attention Span, and Visual Stress in Developmental Dyslexia
ERIC Educational Resources Information Center
Saksida, Amanda; Iannuzzi, Stéphanie; Bogliotti, Caroline; Chaix, Yves; Démonet, Jean-François; Bricout, Laure; Billard, Catherine; Nguyen-Morel, Marie-Ange; Le Heuzey, Marie-France; Soares-Boucaud, Isabelle; George, Florence; Ziegler, Johannes C.; Ramus, Franck
2016-01-01
In this study, we concurrently investigated 3 possible causes of dyslexia--a phonological deficit, visual stress, and a reduced visual attention span--in a large population of 164 dyslexic and 118 control French children, aged between 8 and 13 years old. We found that most dyslexic children showed a phonological deficit, either in terms of…
ERIC Educational Resources Information Center
Yeari, Menahem; Isser, Michal; Schiff, Rachel
2017-01-01
A controversy has recently developed regarding the hypothesis that developmental dyslexia may be caused, in some cases, by a reduced visual attention span (VAS). To examine this hypothesis, independent of phonological abilities, researchers tested the ability of dyslexic participants to recognize arrays of unfamiliar visual characters. Employing…
The Effect of Visual Threat on Spatial Attention to Touch
ERIC Educational Resources Information Center
Poliakoff, Ellen; Miles, Eleanor; Li, Xinying; Blanchette, Isabelle
2007-01-01
Viewing a threatening stimulus can bias visual attention toward that location. Such effects have typically been investigated only in the visual modality, despite the fact that many threatening stimuli are most dangerous when close to or in contact with the body. Recent multisensory research indicates that a neutral visual stimulus, such as a light…
Visual Search Deficits Are Independent of Magnocellular Deficits in Dyslexia
ERIC Educational Resources Information Center
Wright, Craig M.; Conlon, Elizabeth G.; Dyck, Murray
2012-01-01
The aim of this study was to investigate the theory that visual magnocellular deficits seen in groups with dyslexia are linked to reading via the mechanisms of visual attention. Visual attention was measured with a serial search task and magnocellular function with a coherent motion task. A large group of children with dyslexia (n = 70) had slower…
Infants' Selective Attention to Reliable Visual Cues in the Presence of Salient Distractors
ERIC Educational Resources Information Center
Tummeltshammer, Kristen Swan; Mareschal, Denis; Kirkham, Natasha Z.
2014-01-01
With many features competing for attention in their visual environment, infants must learn to deploy attention toward informative cues while ignoring distractions. Three eye tracking experiments were conducted to investigate whether 6- and 8-month-olds (total N = 102) would shift attention away from a distractor stimulus to learn a cue-reward…
A Unique Role of Endogenous Visual-Spatial Attention in Rapid Processing of Multiple Targets
ERIC Educational Resources Information Center
Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Palafox, German; Suzuki, Satoru
2011-01-01
Visual spatial attention can be exogenously captured by a salient stimulus or can be endogenously allocated by voluntary effort. Whether these two attention modes serve distinctive functions is debated, but for processing of single targets the literature suggests superiority of exogenous attention (it is faster acting and serves more functions).…
Attention and Memory Play Different Roles in Syntactic Choice during Sentence Production
ERIC Educational Resources Information Center
Myachykov, Andriy; Garrod, Simon; Scheepers, Christoph
2018-01-01
Attentional control of referential information is an important contributor to the structure of discourse. We investigated how attention and memory interplay during visually situated sentence production. We manipulated speakers' attention to the agent or the patient of a described event by means of a referential or a dot visual cue. We also…
Exploring conflict- and target-related movement of visual attention.
Wendt, Mike; Garling, Marco; Luna-Rodriguez, Aquiles; Jacobsen, Thomas
2014-01-01
Intermixing trials of a visual search task with trials of a modified flanker task, the authors investigated whether the presentation of conflicting distractors at only one side (left or right) of a target stimulus triggers shifts of visual attention towards the contralateral side. Search time patterns provided evidence for lateral attention shifts only when participants performed the flanker task under an instruction assumed to widen the focus of attention, demonstrating that instruction-based control settings of an otherwise identical task can impact performance in an unrelated task. Contrasting conditions with response-related and response-unrelated distractors showed that shifting attention does not depend on response conflict and may be explained as stimulus-conflict-related withdrawal or target-related deployment of attention.
Xie, Jun; Xu, Guanghua; Luo, Ailing; Li, Min; Zhang, Sicong; Han, Chengcheng; Yan, Wenqiang
2017-08-14
As a spatial selective attention-based brain-computer interface (BCI) paradigm, steady-state visual evoked potential (SSVEP) BCI has the advantages of high information transfer rate, high tolerance to artifacts, and robust performance across users. However, its benefits come at the cost of mental load and fatigue that occur while concentrating on the visual stimuli. Noise, a ubiquitous random perturbation, may be exploited by the human visual system to enhance higher-level brain functions. In this study, a novel steady-state motion visual evoked potential (SSMVEP, i.e., one kind of SSVEP)-based BCI paradigm with spatiotemporal visual noise was used to investigate the influence of noise on the compensation of mental load and fatigue deterioration during prolonged attention tasks. Changes in α, θ, and θ+α power, the θ/α ratio, and electroencephalography (EEG) properties of amplitude, signal-to-noise ratio (SNR), and online accuracy were used to evaluate mental load and fatigue. We showed that presenting a moderate visual noise to participants could reliably alleviate the mental load and fatigue during online operation of a visual BCI that places demands on attentional processes. This demonstrated that noise could provide a superior solution for implementing visual attention control-based BCI applications.
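The SNR property mentioned in this abstract is typically computed as the spectral amplitude at the stimulation frequency divided by the mean amplitude of neighbouring frequency bins. The sketch below assumes that standard definition and uses a synthetic EEG segment; the stimulation frequency and parameters are placeholders rather than the study's values.

```python
# SSVEP/SSMVEP signal-to-noise ratio at the stimulation frequency.
import numpy as np

def ssvep_snr(eeg, fs, stim_freq, n_neighbours=5):
    amps = np.abs(np.fft.rfft(eeg - eeg.mean()))
    freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
    k = np.argmin(np.abs(freqs - stim_freq))
    neighbours = np.r_[amps[k - n_neighbours:k], amps[k + 1:k + 1 + n_neighbours]]
    return amps[k] / neighbours.mean()

fs, stim_freq = 250, 15.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(6)
eeg = 0.8 * np.sin(2 * np.pi * stim_freq * t) + rng.normal(size=t.size)
print(f"SNR at {stim_freq} Hz: {ssvep_snr(eeg, fs, stim_freq):.1f}")
```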
ERIC Educational Resources Information Center
Buchholz, J.; Davies, A.A.
2005-01-01
Performance on a covert visual attention task is compared between a group of adults with developmental dyslexia (specifically phonological difficulties) and a group of age and IQ matched controls. The group with dyslexia were generally slower to detect validly-cued targets. Costs of shifting attention toward the periphery when the target was…
Cognitive Food Processing in Binge-Eating Disorder: An Eye-Tracking Study.
Sperling, Ingmar; Baldofski, Sabrina; Lüthold, Patrick; Hilbert, Anja
2017-08-19
Studies indicate an attentional bias towards food in binge-eating disorder (BED); however, more evidence on attentional engagement and disengagement and processing of multiple attention-competing stimuli is needed. This study aimed to examine visual attention to food and non-food stimuli in BED. In n = 23 participants with full-syndrome and subsyndromal BED and n = 23 individually matched healthy controls, eye-tracking was used to assess attention to food and non-food stimuli during a free exploration paradigm and a visual search task. In the free exploration paradigm, groups did not differ in their initial fixation position. While both groups fixated non-food stimuli significantly longer than food stimuli, the BED group allocated significantly more attention towards food than controls. In the visual search task, groups did not differ in detection times. However, a significant detection bias for food was found in full-syndrome BED, but not in controls. An increased initial attention towards food was related to greater BED symptomatology and lower body mass index (BMI) only in full-syndrome BED, while a greater maintained attention to food was associated with lower BMI in controls. The results suggest food-biased visual attentional processing in adults with BED. Further studies should clarify the implications of attentional processes for the etiology and maintenance of BED.
Perceptual Learning Induces Persistent Attentional Capture by Nonsalient Shapes.
Qu, Zhe; Hillyard, Steven A; Ding, Yulong
2017-02-01
Visual attention can be attracted automatically by salient simple features, but whether and how nonsalient complex stimuli such as shapes may capture attention in humans remains unclear. Here, we present strong electrophysiological evidence that a nonsalient shape presented among similar shapes can provoke a robust and persistent capture of attention as a consequence of extensive training in visual search (VS) for that shape. Strikingly, this attentional capture that followed perceptual learning (PL) was evident even when the trained shape was task-irrelevant, was presented outside the focus of top-down spatial attention, and was undetected by the observer. Moreover, this attentional capture persisted for at least 3-5 months after training had been terminated. This involuntary capture of attention was indexed by electrophysiological recordings of the N2pc component of the event-related brain potential, which was localized to ventral extrastriate visual cortex, and was highly predictive of stimulus-specific improvement in VS ability following PL. These findings provide the first evidence that nonsalient shapes can capture visual attention automatically following PL and challenge the prominent view that detection of feature conjunctions requires top-down focal attention. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
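The N2pc component indexed here is conventionally quantified as the contralateral-minus-ipsilateral difference at posterior electrodes (e.g., PO7/PO8), averaged over roughly 200-300 ms post-stimulus. The sketch below illustrates that computation with placeholder waveforms; it is not the study's recording pipeline.

```python
# Contralateral-minus-ipsilateral difference wave and mean N2pc amplitude.
import numpy as np

fs = 500
times = np.arange(-0.1, 0.5, 1 / fs)
rng = np.random.default_rng(7)

# Placeholder ERPs (uV): electrode contralateral vs ipsilateral to the trained shape.
contra = (rng.normal(scale=0.3, size=times.size)
          - 1.2 * np.exp(-((times - 0.25) ** 2) / (2 * 0.02 ** 2)))
ipsi = rng.normal(scale=0.3, size=times.size)

n2pc = contra - ipsi
window = (times >= 0.2) & (times <= 0.3)   # assumed N2pc window
print(f"mean N2pc amplitude: {n2pc[window].mean():.2f} uV")
```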
Störmer, Viola S; Alvarez, George A; Cavanagh, Patrick
2014-08-27
It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. Copyright © 2014 the authors 0270-6474/14/3311526-08$15.00/0.
Gidlöf, Kerstin; Anikin, Andrey; Lingonblad, Martin; Wallin, Annika
2017-09-01
There is a battle in the supermarket aisle, a battle between what the consumer wants and what the retailer and others want her to see, and subsequently to buy. Product packages and displays contain a number of features and attributes tailored to catch consumers' attention. These are what we call external factors, comprising visual saliency, the number of facings, and the placement of each product. But a consumer also brings with her a number of goals and interests related to the products and their attributes. These are important internal factors, including brand preferences, price sensitivity, and dietary inclinations. We fitted mobile eye trackers to consumers visiting real-life supermarkets in order to investigate to what extent external and internal factors affect consumers' visual attention and purchases. Both external and internal factors influenced what products consumers looked at, with a strong positive interaction between visual saliency and consumer preferences. Consumers appear to take advantage of visual saliency in their decision making, using their knowledge about products' appearance to guide their visual attention towards those that fit their preferences. When it comes to actual purchases, however, visual attention was by far the most important predictor, even after controlling for all other internal and external factors. In other words, the very act of looking longer or repeatedly at a package, for any reason, makes it more likely that this product will be bought. Visual attention is thus crucial for understanding consumer behaviour, even in the cluttered supermarket environment, but it cannot be captured by measurements of visual saliency alone. Copyright © 2017 Elsevier Ltd. All rights reserved.
Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio
2015-01-01
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how “attentional shrouds” are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, though learning or momentary changes in volition, by the basal ganglia. A new explanation of crowding shows how the cortical magnification factor, among other variables, can cause multiple object surfaces to share a single surface-shroud resonance, thereby preventing recognition of the individual objects. PMID:22425615
Liebel, Spencer W; Nelson, Jason M
2017-12-01
We investigated auditory and visual working memory functioning in college students with attention-deficit/hyperactivity disorder, learning disabilities, and clinical controls. We examined the role attention-deficit/hyperactivity disorder subtype status played in working memory functioning. The unique influence that both domains of working memory have on reading and math abilities was investigated. A sample of 268 individuals seeking postsecondary education comprised four groups of the present study: 110 had an attention-deficit/hyperactivity disorder diagnosis only, 72 had a learning disability diagnosis only, 35 had comorbid attention-deficit/hyperactivity disorder and learning disability diagnoses, and 60 individuals without either of these disorders comprised a clinical control group. Participants underwent a comprehensive neuropsychological evaluation, and licensed psychologists employed a multi-informant, multi-method approach in obtaining diagnoses. In the attention-deficit/hyperactivity disorder only group, there was no difference between auditory and visual working memory functioning, t(100) = -1.57, p = .12. In the learning disability group, however, auditory working memory functioning was significantly weaker compared with visual working memory, t(71) = -6.19, p < .001, d = -0.85. Within the attention-deficit/hyperactivity disorder only group, there were no auditory or visual working memory functioning differences between participants with either a predominantly inattentive type or a combined type diagnosis. Visual working memory did not incrementally contribute to the prediction of academic achievement skills. Individuals with attention-deficit/hyperactivity disorder did not demonstrate significant working memory differences compared with clinical controls. Individuals with a learning disability demonstrated weaker auditory working memory than individuals in either the attention-deficit/hyperactivity disorder or clinical control groups. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
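For readers who want to reproduce this kind of within-group contrast, a minimal sketch in Python of a paired comparison between auditory and visual working memory index scores with a within-subject Cohen's d; the data are simulated and the variable names are assumptions, not the study's materials:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 72                                            # e.g. a learning-disability group
    auditory = rng.normal(92, 12, size=n)             # hypothetical standard scores
    visual = auditory + rng.normal(8, 10, size=n)     # hypothetical visual advantage

    t, p = stats.ttest_rel(auditory, visual)          # paired t-test
    diff = auditory - visual
    d = diff.mean() / diff.std(ddof=1)                # Cohen's d for paired samples
    print(f"t({n - 1}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")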
A unique role of endogenous visual-spatial attention in rapid processing of multiple targets
Guzman, Emmanuel; Grabowecky, Marcia; Palafox, German; Suzuki, Satoru
2012-01-01
Visual spatial attention can be exogenously captured by a salient stimulus or can be endogenously allocated by voluntary effort. Whether these two attention modes serve distinctive functions is debated, but for processing of single targets the literature suggests superiority of exogenous attention (it is faster acting and serves more functions). We report that endogenous attention uniquely contributes to processing of multiple targets. For speeded visual discrimination, response times are faster for multiple redundant targets than for single targets due to probability summation and/or signal integration. This redundancy gain was unaffected when attention was exogenously diverted from the targets, but was completely eliminated when attention was endogenously diverted. This was not due to weaker manipulation of exogenous attention because our exogenous and endogenous cues similarly affected overall response times. Thus, whereas exogenous attention is superior for processing single targets, endogenous attention plays a unique role in allocating resources crucial for rapid concurrent processing of multiple targets. PMID:21517209
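A common way to ask whether such a redundancy gain exceeds probability summation is Miller's race-model inequality, which bounds the redundant-target RT distribution by the sum of the single-target distributions. A minimal sketch in Python with simulated RTs (the data and time grid are illustrative assumptions, not the authors' analysis):

    import numpy as np

    def ecdf(rts, t_grid):
        # empirical cumulative distribution of RTs evaluated on a common time grid
        rts = np.sort(np.asarray(rts))
        return np.searchsorted(rts, t_grid, side="right") / rts.size

    rng = np.random.default_rng(1)
    rt_single_left = rng.normal(480, 60, 200)     # simulated single-target RTs (ms)
    rt_single_right = rng.normal(485, 60, 200)
    rt_redundant = rng.normal(445, 55, 200)       # simulated redundant-target RTs (ms)

    t_grid = np.arange(250, 751, 10)
    bound = np.minimum(ecdf(rt_single_left, t_grid) + ecdf(rt_single_right, t_grid), 1.0)
    violated = np.any(ecdf(rt_redundant, t_grid) > bound)
    # A violation implies the gain cannot be explained by probability summation alone.
    print("race-model bound violated:", violated)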
Attention bias to threat faces in severe mood dysregulation.
Hommer, Rebecca E; Meyer, Allison; Stoddard, Joel; Connolly, Megan E; Mogg, Karin; Bradley, Brendan P; Pine, Daniel S; Leibenluft, Ellen; Brotman, Melissa A
2014-07-01
We used a dot-probe paradigm to examine attention bias toward threat (i.e., angry) and happy face stimuli in severe mood dysregulation (SMD) versus healthy comparison (HC) youth. The tendency to allocate attention to threat is well established in anxiety and other disorders of negative affect. SMD is characterized by the negative affect of irritability, and longitudinal studies suggest childhood irritability predicts adult anxiety and depression. Therefore, it is important to study pathophysiologic connections between irritability and anxiety disorders. SMD patients (N = 74) and HC youth (N = 42) completed a visual probe paradigm to assess attention bias to emotional faces. Diagnostic interviews were conducted and measures of irritability and anxiety were obtained in patients. SMD youth differed from HC youth in having a bias toward threatening faces (P < .01). Threat bias was positively correlated with the severity of the SMD syndrome and depressive symptoms; degree of threat bias did not differ between SMD youth with and without co-occurring anxiety disorders or depression. SMD and HC youth did not differ in bias toward or away from happy faces. SMD youth demonstrate an attention bias toward threat, with greater threat bias associated with higher levels of SMD symptom severity. Our findings suggest that irritability may share a pathophysiological link with anxiety and depressive disorders. This finding suggests the value of exploring further whether attention bias modification treatments that are effective for anxiety are also helpful in the treatment of irritability. © 2013 Wiley Periodicals, Inc.
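As a point of reference, the attention-bias index in dot-probe designs is usually computed as the RT difference between trials in which the probe replaces the neutral face and trials in which it replaces the threat face. A minimal sketch with simulated data (names and values are hypothetical):

    import numpy as np

    rng = np.random.default_rng(2)
    rt_probe_at_neutral = rng.normal(560, 70, 80)   # threat-incongruent trials (ms)
    rt_probe_at_threat = rng.normal(535, 70, 80)    # threat-congruent trials (ms)

    # Positive scores indicate faster responding when the probe appears at the
    # threat location, i.e. an attention bias toward threat.
    bias = rt_probe_at_neutral.mean() - rt_probe_at_threat.mean()
    print(f"attention bias toward threat: {bias:.1f} ms")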
Spatial attention does not require preattentive grouping.
Vecera, S P; Behrmann, M
1997-01-01
Does spatial attention follow a full preattentive analysis of the visual field, or can attention select from ungrouped regions of the visual field? We addressed this question by testing an apperceptive agnosic patient, J. W., in tasks involving both spatial selection and preattentive grouping. Results suggest that J. W. had intact spatial attention: He was faster to detect targets appearing at a cued location relative to targets appearing at uncued locations. However, his preattentive processes were severely disrupted. Gestalt grouping and symmetry perception, both thought to involve preattentive processes, were impaired in J. W. Also, he could not use Gestalt grouping cues to guide spatial attention. These results suggest that spatial attention is not completely dependent on preattentive grouping processes. We argue that preattentive grouping processes and spatial attention may mutually constrain one another in guiding the attentional selection of visual stimuli but that these 2 processes are isolated from one another.
Attention Priority Map of Face Images in Human Early Visual Cortex.
Mo, Ce; He, Dongjun; Fang, Fang
2018-01-03
Attention priority maps are topographic representations that are used for attention selection and guidance of task-related behavior during visual processing. Previous studies have identified attention priority maps of simple artificial stimuli in multiple cortical and subcortical areas, but investigating neural correlates of priority maps of natural stimuli is complicated by the complexity of their spatial structure and the difficulty of behaviorally characterizing their priority map. To overcome these challenges, we reconstructed the topographic representations of upright/inverted face images from fMRI BOLD signals in human early visual areas, primary visual cortex (V1) and extrastriate cortex (V2 and V3), based on a voxelwise population receptive field model. We characterized the priority map behaviorally as the first saccadic eye movement pattern when subjects performed a face-matching task relative to the condition in which subjects performed a phase-scrambled face-matching task. We found that the differential first saccadic eye movement pattern between upright/inverted and scrambled faces could be predicted from the reconstructed topographic representations in V1-V3 in humans of either sex. The coupling between the reconstructed representation and the eye movement pattern increased from V1 to V2/3 for the upright faces, whereas no such effect was found for the inverted faces. Moreover, face inversion modulated the coupling in V2/3, but not in V1. Our findings provide new evidence for priority maps of natural stimuli in early visual areas and extend traditional attention priority map theories by revealing another critical factor that affects priority maps in extrastriate cortex in addition to physical salience and task-goal relevance: image configuration. SIGNIFICANCE STATEMENT Prominent theories of attention posit that attention sampling of visual information is mediated by a series of interacting topographic representations of visual space known as attention priority maps. Until now, neural evidence of attention priority maps has been limited to studies involving simple artificial stimuli, and much remains unknown about the neural correlates of priority maps of natural stimuli. Here, we show that attention priority maps of face stimuli could be found in primary visual cortex (V1) and extrastriate cortex (V2 and V3). Moreover, representations in extrastriate visual areas are strongly modulated by image configuration. These findings extend our understanding of attention priority maps significantly by showing that they are modulated not only by physical salience and task-goal relevance but also by the configuration of stimulus images. Copyright © 2018 the authors 0270-6474/18/380149-09$15.00/0.
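The reconstruction step can be illustrated with a deliberately simplified sketch: treat each voxel's population receptive field as a 2-D Gaussian in visual space and sum the Gaussians weighted by the voxel's response amplitude. This is an assumption-laden stand-in for the authors' voxelwise pipeline, with all numbers invented:

    import numpy as np

    rng = np.random.default_rng(3)
    n_vox = 500
    x0 = rng.uniform(-8, 8, n_vox)                 # pRF centres in degrees of visual angle
    y0 = rng.uniform(-8, 8, n_vox)
    sigma = rng.uniform(0.5, 3.0, n_vox)           # pRF sizes
    beta = rng.normal(1.0, 0.3, n_vox)             # hypothetical per-voxel BOLD amplitudes

    xx, yy = np.meshgrid(np.linspace(-10, 10, 101), np.linspace(-10, 10, 101))
    recon = np.zeros_like(xx)
    for i in range(n_vox):
        prf = np.exp(-((xx - x0[i]) ** 2 + (yy - y0[i]) ** 2) / (2 * sigma[i] ** 2))
        recon += beta[i] * prf
    # 'recon' is a map over visual space whose peaks show where the measured responses
    # concentrate representational weight: the kind of topographic read-out that can be
    # compared against first-saccade landing patterns.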
Visual selective attention in amnestic mild cognitive impairment.
McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E
2014-11-01
Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Shang, Chi-Yung; Gau, Susan Shur-Fen
2012-10-01
Atomoxetine is efficacious in reducing symptoms of attention-deficit/hyperactivity disorder (ADHD), but its effect on visual memory and attention needs more investigation. This study aimed to assess the effect of atomoxetine on visual memory, attention, and school function in boys with ADHD in Taiwan. This was an open-label 12-week atomoxetine treatment trial among 30 drug-naïve boys with ADHD, aged 8-16 years. Before administration of atomoxetine, the participants were assessed using psychiatric interviews, the Wechsler Intelligence Scale for Children, 3rd edition (WISC-III), the school function of the Chinese version of the Social Adjustment Inventory for Children and Adolescents (SAICA), the Conners' Continuous Performance Test (CPT), and the tasks of the Cambridge Neuropsychological Test Automated Battery (CANTAB) involving visual memory and attention: Pattern Recognition Memory, Spatial Recognition Memory, and Reaction Time, which were reassessed at weeks 4 and 12. Our results showed that there was significant improvement in pattern recognition memory and spatial recognition memory as measured by the CANTAB tasks, sustained attention and response inhibition as measured by the CPT, and reaction time as measured by the CANTAB after treatment with atomoxetine for 4 weeks or 12 weeks. In addition, atomoxetine significantly enhanced school functioning in children with ADHD. Our findings suggested that atomoxetine was associated with significant improvement in visual memory, attention, and school functioning in boys with ADHD.
Walter, Sabrina; Keitel, Christian; Müller, Matthias M
2016-01-01
Visual attention can be focused concurrently on two stimuli at noncontiguous locations while intermediate stimuli remain ignored. Nevertheless, behavioral performance in multifocal attention tasks falters when attended stimuli fall within one visual hemifield as opposed to when they are distributed across left and right hemifields. This "different-hemifield advantage" has been ascribed to largely independent processing capacities of each cerebral hemisphere in early visual cortices. Here, we investigated how this advantage influences the sustained division of spatial attention. We presented six isoeccentric light-emitting diodes (LEDs) in the lower visual field, each flickering at a different frequency. Participants attended to two LEDs that were spatially separated by an intermediate LED and responded to synchronous events at to-be-attended LEDs. Task-relevant pairs of LEDs were either located in the same hemifield ("within-hemifield" conditions) or separated by the vertical meridian ("across-hemifield" conditions). Flicker-driven brain oscillations, steady-state visual evoked potentials (SSVEPs), indexed the allocation of attention to individual LEDs. Both behavioral performance and SSVEPs indicated enhanced processing of attended LED pairs during "across-hemifield" relative to "within-hemifield" conditions. Moreover, SSVEPs demonstrated effective filtering of intermediate stimuli in the "across-hemifield" conditions only. Thus, despite identical physical distances between LEDs of attended pairs, the spatial profiles of gain effects differed profoundly between "across-hemifield" and "within-hemifield" conditions. These findings corroborate that early cortical visual processing stages rely on hemisphere-specific processing capacities and highlight their limiting role in the concurrent allocation of visual attention to multiple locations.
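Frequency tagging of this kind is typically read out by taking the amplitude spectrum of the EEG epoch at each LED's flicker frequency. A self-contained sketch with a simulated signal (sampling rate, epoch length, and tag frequencies are assumptions):

    import numpy as np

    fs = 500.0                                   # sampling rate (Hz)
    t = np.arange(0, 3.0, 1 / fs)                # one 3-s epoch
    tag_freqs = [8.57, 10.0, 12.0, 15.0]         # hypothetical LED flicker frequencies
    rng = np.random.default_rng(4)
    eeg = rng.normal(0, 1, t.size) + 0.8 * np.sin(2 * np.pi * 10.0 * t)  # simulated channel

    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(t.size))) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    for f in tag_freqs:
        amp = spectrum[np.argmin(np.abs(freqs - f))]   # amplitude at the tagged frequency
        print(f"SSVEP amplitude at {f:5.2f} Hz: {amp:.3f}")

Comparing such amplitudes between attended and ignored LEDs, and between within- and across-hemifield conditions, is the logic behind the gain effects reported above.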
Effects of subjective preference of colors on attention-related occipital theta oscillations.
Kawasaki, Masahiro; Yamaguchi, Yoko
2012-01-02
Human daily behaviors are often affected by subjective preferences. Studies have shown that physical responses are affected by unconscious preferences before conscious decision making. Accordingly, attention-related neural activities could be influenced by unconscious preferences. However, few neurological data exist on the relationship between visual attention and subjective preference. To address this issue, we focused on lateralization during visual attention and investigated the effects of subjective color preferences on visual attention-related brain activities. We recorded electroencephalogram (EEG) data during a preference judgment task that required 19 participants to choose their preferred color from 2 colors simultaneously presented to the right and left hemifields. In addition, to identify oscillatory activity during visual attention, we conducted a control experiment in which the participants focused on either the right or the left color without stating their preference. The EEG results showed enhanced theta (4-6 Hz) and decreased alpha (10-12 Hz) activities in the right and left occipital electrodes when the participants focused on the color in the opposite hemifield. Occipital theta synchronizations also increased contralaterally to the hemifield to which the preferred color was presented, whereas the alpha desynchronizations showed no lateralization. The contralateral occipital theta activity lasted longer than the ipsilateral occipital theta activity. Interestingly, theta lateralization was observed even when the preferred color was presented to the unattended side in the control experiment, revealing the strength of the preference-related theta-modulation effect irrespective of visual attention. These results indicate that subjective preferences modulate visual attention-related brain activities. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.
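The lateralization analysis can be sketched as band power in the theta range at left and right occipital channels, combined into a contralateral-minus-ipsilateral index; the channels, data, and band edges below are illustrative assumptions:

    import numpy as np
    from scipy.signal import welch

    def band_power(signal, fs, lo, hi):
        freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
        mask = (freqs >= lo) & (freqs <= hi)
        return np.trapz(psd[mask], freqs[mask])

    fs = 250.0
    rng = np.random.default_rng(5)
    left_occipital = rng.normal(0, 1, int(10 * fs))    # e.g. channel O1 (simulated)
    right_occipital = rng.normal(0, 1, int(10 * fs))   # e.g. channel O2 (simulated)

    # Assume the preferred colour was shown in the left hemifield, so the right
    # hemisphere is contralateral to it.
    contra = band_power(right_occipital, fs, 4, 6)
    ipsi = band_power(left_occipital, fs, 4, 6)
    print(f"theta lateralization index: {(contra - ipsi) / (contra + ipsi):.3f}")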
Attention is required for maintenance of feature binding in visual working memory.
Zokaei, Nahid; Heider, Maike; Husain, Masud
2014-01-01
Working memory and attention are intimately connected. However, understanding the relationship between the two is challenging. Currently, there is an important controversy about whether objects in working memory are maintained automatically or require resources that are also deployed for visual or auditory attention. Here we investigated the effects of loading attention resources on precision of visual working memory, specifically on correct maintenance of feature-bound objects, using a dual-task paradigm. Participants were presented with a memory array and were asked to remember either direction of motion of random dot kinematograms of different colour, or orientation of coloured bars. During the maintenance period, they performed a secondary visual or auditory task, with varying levels of load. Following a retention period, they adjusted a coloured probe to match either the motion direction or orientation of stimuli with the same colour in the memory array. This allowed us to examine the effects of an attention-demanding task performed during maintenance on precision of recall on the concurrent working memory task. Systematic increase in attention load during maintenance resulted in a significant decrease in overall working memory performance. Changes in overall performance were specifically accompanied by an increase in feature misbinding errors: erroneous reporting of nontarget motion or orientation. Thus in trials where attention resources were taxed, participants were more likely to respond with nontarget values rather than simply making random responses. Our findings suggest that resources used during attention-demanding visual or auditory tasks also contribute to maintaining feature-bound representations in visual working memory, but not necessarily other aspects of working memory.
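A simplified way to see how misbinding (swap) errors are identified in continuous-report data is to compare each response's circular error to the target feature against its error to the nontarget feature; the sketch below uses simulated data and is a heuristic, not the probabilistic mixture modelling usually applied to such tasks:

    import numpy as np

    def circ_dist(a, b):
        # smallest signed angular difference, in radians
        return np.angle(np.exp(1j * (a - b)))

    rng = np.random.default_rng(6)
    n = 200
    target = rng.uniform(-np.pi, np.pi, n)
    nontarget = rng.uniform(-np.pi, np.pi, n)
    response = target + rng.vonmises(0, 8, n)             # mostly target-centred reports
    swap = rng.random(n) < 0.15                           # simulate occasional swaps
    response[swap] = nontarget[swap] + rng.vonmises(0, 8, swap.sum())

    err_target = np.abs(circ_dist(response, target))
    err_nontarget = np.abs(circ_dist(response, nontarget))
    swap_rate = np.mean(err_nontarget < err_target)       # responses closer to a nontarget
    precision = 1 / np.std(circ_dist(response, target))   # crude overall precision estimate
    print(f"swap-consistent responses: {swap_rate:.2%}, precision: {precision:.2f} rad^-1")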
UnAdulterated - children and adults' visual attention to healthy and unhealthy food.
Junghans, Astrid F; Hooge, Ignace T C; Maas, Josje; Evers, Catharine; De Ridder, Denise T D
2015-04-01
Visually attending to unhealthy food creates a desire to consume the food. To resist the temptation, people have to employ self-regulation strategies, such as visual avoidance. Past research has shown that self-regulatory skills develop throughout childhood and adolescence, suggesting that adults have superior self-regulation skills compared with children. This study employed a novel method to investigate self-regulatory skills. Children's and adults' initial (bottom-up) and maintained (top-down) visual attention to simultaneously presented healthy and unhealthy food was examined in an eye-tracking paradigm. Results showed that both children and adults initially attended most to the unhealthy food. Subsequently, adults self-regulated their visual attention away from the unhealthy food. Despite children's high self-reported attempts to eat healthily and the importance they placed on eating healthily, children did not self-regulate visual attention away from unhealthy food. Children remained influenced by the attention-driven desire to consume the unhealthy food, whereas adults visually attended more strongly to the healthy food, thereby avoiding the desire to consume the unhealthy option. The findings emphasize the necessity of improving children's self-regulatory skills to support their desire to remain healthy and to protect children from the influences of the obesogenic environment. Copyright © 2015. Published by Elsevier Ltd.
Schneider, Werner X.
2013-01-01
The goal of this review is to introduce a theory of task-driven visual attention and working memory (TRAM). Based on a specific biased competition model, the ‘theory of visual attention’ (TVA) and its neural interpretation (NTVA), TRAM introduces the following assumptions. First, selective visual processing over time is structured in competition episodes. Within an episode, that is, during its first two phases, a limited number of proto-objects are competitively encoded—modulated by the current task—in activation-based visual working memory (VWM). In processing phase 3, relevant VWM objects are transferred via a short-term consolidation into passive VWM. Second, each time attentional priorities change (e.g. after an eye movement), a new competition episode is initiated. Third, if a phase 3 VWM process (e.g. short-term consolidation) is not finished when a new episode is called, a protective maintenance process allows its completion. After a VWM object change, its protective maintenance process is followed by an encapsulation of the VWM object, causing attentional resource costs in trailing competition episodes. Viewed from this perspective, a new explanation of key findings of the attentional blink will be offered. Finally, a new suggestion will be made as to how VWM items might interact with visual search processes. PMID:24018722
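For orientation, the biased-competition formalism that TVA contributes (and on which, per the abstract, TRAM builds) is usually summarized by Bundesen's rate equation; the version below is the standard textbook form and does not include TRAM's own additions such as competition episodes or protective maintenance:

    v(x, i) = \eta(x, i)\, \beta_i \, \frac{w_x}{\sum_{z \in S} w_z},
    \qquad
    w_x = \sum_{j \in R} \eta(x, j)\, \pi_j

Here \(v(x,i)\) is the rate at which the categorization "object \(x\) belongs to category \(i\)" is encoded into VWM, \(\eta(x,i)\) is the sensory evidence, \(\beta_i\) a decision bias, \(\pi_j\) a pertinence (priority) weight, and \(S\) and \(R\) the sets of objects and relevant categories.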
Wilkinson, Krista M; Light, Janice
2011-12-01
Many individuals with complex communication needs may benefit from visual aided augmentative and alternative communication systems. In visual scene displays (VSDs), language concepts are embedded into a photograph of a naturalistic event. Humans play a central role in communication development and might be important elements in VSDs. However, many VSDs omit human figures. In this study, the authors sought to describe the distribution of visual attention to humans in naturalistic scenes as compared with other elements. Nineteen college students observed 8 photographs in which a human figure appeared near 1 or more items that might be expected to compete for visual attention (such as a Christmas tree or a table loaded with food). Eye-tracking technology allowed precise recording of participants' gaze. The fixation duration over a 7-s viewing period and latency to view elements in the photograph were measured. Participants fixated on the human figures more rapidly and for longer than expected based on the size of these figures, regardless of the other elements in the scene. Human figures attract attention in a photograph even when presented alongside other attractive distracters. Results suggest that humans may be a powerful means to attract visual attention to key elements in VSDs.
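The two gaze measures reported here, fixation duration within an element and latency to first fixation on it, can be computed from a fixation list and an area of interest (AOI) as in the following sketch; the fixation records and AOI rectangle are invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Fixation:
        onset_ms: float
        duration_ms: float
        x: float
        y: float

    def aoi_metrics(fixations, x0, y0, x1, y1):
        # total dwell time in the AOI and latency to the first fixation landing in it
        inside = [f for f in fixations if x0 <= f.x <= x1 and y0 <= f.y <= y1]
        total_duration = sum(f.duration_ms for f in inside)
        latency = min((f.onset_ms for f in inside), default=None)
        return total_duration, latency

    fixations = [Fixation(0, 180, 50, 40), Fixation(200, 350, 420, 310),
                 Fixation(580, 260, 430, 300)]
    dur, lat = aoi_metrics(fixations, 400, 280, 520, 360)   # AOI drawn around the human figure
    print(f"AOI dwell time: {dur} ms; latency to first AOI fixation: {lat} ms")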
Park, George D; Reed, Catherine L
2015-02-01
Researchers acknowledge the interplay between action and attention, but typically consider action as a response to successful attentional selection or examine correlations between performance on separate action and attention tasks. We investigated how concurrent action with spatial monitoring affects the distribution of attention across the visual field. We embedded a functional field of view (FFOV) paradigm with concurrent central object recognition and peripheral target localization tasks in a simulated driving environment. Peripheral targets varied across 20-60 deg eccentricity at 11 radial spokes. Three conditions assessed the effects of visual complexity and concurrent action on the size and shape of the FFOV: (1) with no background, (2) with a driving background, and (3) with a driving background and vehicle steering. The addition of visual complexity slowed task performance and reduced the FFOV size but did not change the baseline shape. In contrast, the addition of steering produced not only shrinkage of the FFOV, but also changes in the FFOV shape. Nonuniform performance decrements occurred in proximal regions used for the central task and for steering, independent of interference from context elements. Multifocal attention models should consider the role of action and account for nonhomogeneities in the distribution of attention. © 2015 SAGE Publications.
Auditory Selective Attention to Speech Modulates Activity in the Visual Word Form Area
Yoncheva, Yuliya N.; Zevin, Jason D.; Maurer, Urs
2010-01-01
Selective attention to speech versus nonspeech signals in complex auditory input could produce top-down modulation of cortical regions previously linked to perception of spoken, and even visual, words. To isolate such top-down attentional effects, we contrasted 2 equally challenging active listening tasks, performed on the same complex auditory stimuli (words overlaid with a series of 3 tones). Instructions required selectively attending to either the speech signals (in service of rhyme judgment) or the melodic signals (tone-triplet matching). Selective attention to speech, relative to attention to melody, was associated with blood oxygenation level–dependent (BOLD) increases during functional magnetic resonance imaging (fMRI) in left inferior frontal gyrus, temporal regions, and the visual word form area (VWFA). Further investigation of the activity in visual regions revealed overall deactivation relative to baseline rest for both attention conditions. Topographic analysis demonstrated that while attending to melody drove deactivation equivalently across all fusiform regions of interest examined, attending to speech produced a regionally specific modulation: deactivation of all fusiform regions, except the VWFA. Results indicate that selective attention to speech can topographically tune extrastriate cortex, leading to increased activity in VWFA relative to surrounding regions, in line with the well-established connectivity between areas related to spoken and visual word perception in skilled readers. PMID:19571269
The neural basis of visual dominance in the context of audio-visual object processing.
Schmid, Carmen; Büchel, Christian; Rose, Michael
2011-03-01
Visual dominance refers to the observation that in bimodal environments vision often has an advantage over other senses in humans. Therefore, better memory performance for visual compared with, e.g., auditory material is assumed. However, the reason for this preferential processing and its relation to memory formation is largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously at two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting. Visual was superior to auditory object memory only when allocating attention towards the competing modality. During encoding, cross-modal competition and attention towards the competing modality reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system and only in auditory cortex was this competition further modulated by attention. Furthermore, neural activity reduction in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system against competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.
Evidence for unlimited capacity processing of simple features in visual cortex
White, Alex L.; Runeson, Erik; Palmer, John; Ernst, Zachary R.; Boynton, Geoffrey M.
2017-01-01
Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level–dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity. PMID:28654964