Sample records for feature-based visual attention

  1. Object-based attention underlies the rehearsal of feature binding in visual working memory.

    PubMed

    Shen, Mowei; Huang, Xiang; Gao, Zaifeng

    2015-04-01

    Feature binding is a core concept in many research fields, including the study of working memory (WM). Over the past decade, it has been debated whether retaining feature bindings in visual WM consumes more visual attention than retaining the constituent single features. Previous studies have only explored the contribution of domain-general attention or space-based attention in the binding process; no study so far has explored the role of object-based attention in retaining binding in visual WM. We hypothesized that object-based attention underlies the rehearsal of feature binding in visual WM. Therefore, during the maintenance phase of a visual WM task, we inserted a secondary mental rotation (Experiments 1-3), transparent motion (Experiment 4), or an object-based feature report task (Experiment 5) to consume the object-based attention available for binding. In line with the prediction of the object-based attention hypothesis, Experiments 1-5 revealed a significantly greater impairment for binding than for the constituent single features. However, this selective binding impairment was not observed when inserting a space-based visual search task (Experiment 6). We conclude that object-based attention underlies the rehearsal of binding representation in visual WM. (c) 2015 APA, all rights reserved.

  2. Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model

    PubMed Central

    Wagatsuma, Nobuhiko; Potjans, Tobias C.; Diesmann, Markus; Sakai, Ko; Fukai, Tomoki

    2013-01-01

    Directing attention to the spatial location or the distinguishing feature of a visual object modulates neuronal responses in the visual cortex and the stimulus discriminability of subjects. However, the spatial and feature-based modes of attention differently influence visual processing by changing the tuning properties of neurons. Intriguingly, neurons' tuning curves are modulated similarly across different visual areas under both these modes of attention. Here, we explored the mechanism underlying the effects of these two modes of visual attention on the orientation selectivity of visual cortical neurons. To do this, we developed a layered microcircuit model. This model describes multiple orientation-specific microcircuits sharing their receptive fields and consisting of layers 2/3, 4, 5, and 6. These microcircuits represent a functional grouping of cortical neurons and mutually interact via lateral inhibition and excitatory connections between groups with similar selectivity. The individual microcircuits receive bottom-up visual stimuli and top-down attention in different layers. A crucial assumption of the model is that feature-based attention activates orientation-specific microcircuits for the relevant feature selectively, whereas spatial attention activates all microcircuits homogeneously, irrespective of their orientation selectivity. Consequently, our model simultaneously accounts for the multiplicative scaling of neuronal responses in spatial attention and the additive modulations of orientation tuning curves in feature-based attention, which have been observed widely in various visual cortical areas. Simulations of the model predict contrasting differences between excitatory and inhibitory neurons in the two modes of attentional modulations. Furthermore, the model replicates the modulation of the psychophysical discriminability of visual stimuli in the presence of external noise. Our layered model with a biologically suggested laminar structure describes the basic circuit mechanism underlying the attention-mode specific modulations of neuronal responses and visual perception. PMID:24324628
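    The two attentional effects the model accounts for can be sketched as simple operations on a Gaussian orientation tuning curve: spatial attention as a multiplicative gain on the whole curve, feature-based attention as an additive, feature-specific offset. A minimal sketch in Python, with arbitrary tuning parameters that are not taken from the model:

```python
import numpy as np

def tuning(theta, pref, width=20.0, amp=10.0, base=2.0):
    """Gaussian orientation tuning curve of one cell (spikes/s)."""
    return base + amp * np.exp(-0.5 * ((theta - pref) / width) ** 2)

theta = np.linspace(-90, 90, 181)   # stimulus orientation (deg)
r = tuning(theta, pref=0.0)         # baseline curve of a 0-deg-preferring cell

# Spatial attention: homogeneous top-down drive -> multiplicative scaling.
r_spatial = 1.3 * r

# Feature-based attention to 0 deg: orientation-specific top-down drive ->
# an additive offset for cells preferring the attended feature.
r_feature = r + 3.0
```

    Multiplicative gain scales peak and baseline together, whereas the additive offset lifts the whole curve by a constant, which is the qualitative signature distinguishing the two modes in the recordings the model reproduces.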

  3. Short-term retention of visual information: Evidence in support of feature-based attention as an underlying mechanism.

    PubMed

    Sneve, Markus H; Sreenivasan, Kartik K; Alnæs, Dag; Endestad, Tor; Magnussen, Svein

    2015-01-01

    Retention of features in visual short-term memory (VSTM) involves maintenance of sensory traces in early visual cortex. However, the mechanism through which this is accomplished is not known. Here, we formulate specific hypotheses derived from studies on feature-based attention to test the prediction that visual cortex is recruited by attentional mechanisms during VSTM of low-level features. Functional magnetic resonance imaging (fMRI) of human visual areas revealed that neural populations coding for task-irrelevant feature information are suppressed during maintenance of detailed spatial frequency memory representations. The narrow spectral extent of this suppression agrees well with known effects of feature-based attention. Additionally, analyses of effective connectivity during maintenance between retinotopic areas in visual cortex show that the observed highlighting of task-relevant parts of the feature spectrum originates in V4, a visual area strongly connected with higher-level control regions and known to convey top-down influence to earlier visual areas during attentional tasks. In line with this property of V4 during attentional operations, we demonstrate that modulations of earlier visual areas during memory maintenance have behavioral consequences, and that these modulations are a result of influences from V4. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Feature-based attentional modulations in the absence of direct visual stimulation.

    PubMed

    Serences, John T; Boynton, Geoffrey M

    2007-07-19

    When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field, even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.
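    The decoding logic (predicting the attended state from a distributed response pattern) can be illustrated with a toy nearest-centroid classifier on synthetic "voxel" patterns. The data, dimensionality, and classifier below are invented for illustration and differ from the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 50

# Synthetic response templates for attending motion direction A vs. B.
template_a = rng.normal(0.0, 1.0, n_vox)
template_b = rng.normal(0.0, 1.0, n_vox)

def simulate_trial(template, noise=0.8):
    """One trial = template pattern plus independent voxel noise."""
    return template + rng.normal(0.0, noise, n_vox)

def decode(pattern, cent_a, cent_b):
    """Predict the attended direction by nearest centroid (correlation)."""
    ca = np.corrcoef(pattern, cent_a)[0, 1]
    cb = np.corrcoef(pattern, cent_b)[0, 1]
    return "A" if ca > cb else "B"

# Train: average patterns per attentional state to get centroids.
cent_a = np.mean([simulate_trial(template_a) for _ in range(20)], axis=0)
cent_b = np.mean([simulate_trial(template_b) for _ in range(20)], axis=0)

# Test on held-out trials.
tests = [(simulate_trial(template_a), "A") for _ in range(50)] + \
        [(simulate_trial(template_b), "B") for _ in range(50)]
accuracy = np.mean([decode(p, cent_a, cent_b) == lab for p, lab in tests])
```

    The point of the sketch is only that a distributed pattern can carry the attentional state even when no single unit is decisive, which is what the classification analysis in the study exploits.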

  5. Gaze-independent brain-computer interfaces based on covert attention and feature attention

    NASA Astrophysics Data System (ADS)

    Treder, M. S.; Schmidt, N. M.; Blankertz, B.

    2011-10-01

    There is evidence that conventional visual brain-computer interfaces (BCIs) based on event-related potentials cannot be operated efficiently when eye movements are not allowed. To overcome this limitation, the aim of this study was to develop a visual speller that does not require eye movements. Three different variants of a two-stage visual speller based on covert spatial attention and non-spatial feature attention (i.e. attention to colour and form) were tested in an online experiment with 13 healthy participants. All participants achieved highly accurate BCI control. They could select one out of thirty symbols (chance level 3.3%) with mean accuracies of 88%-97% for the different spellers. The best results were obtained for a speller that was operated using non-spatial feature attention only. These results show that, using feature attention, it is possible to realize high-accuracy, fast-paced visual spellers that have a large vocabulary and are independent of eye gaze.
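    The reported chance level follows from the selection problem: picking one of 30 symbols at random succeeds with probability 1/30 (about 3.3%). A minimal sketch of the two-stage arithmetic, assuming a hypothetical 6 x 5 split between the stages (the study's actual layout may differ):

```python
# Two-stage speller: stage 1 selects a group, stage 2 a symbol within it.
n_groups = 6                       # hypothetical first-stage options
n_per_group = 5                    # hypothetical second-stage options
n_symbols = n_groups * n_per_group # 30 selectable symbols
chance = 1.0 / n_symbols           # ~3.3% chance level

# Per-stage accuracies multiply: both stages must succeed.
p_stage = 0.97                     # illustrative per-stage accuracy
p_symbol = p_stage ** 2            # overall symbol-selection accuracy
```

    The multiplicative cost of staging is why per-stage accuracies must be very high for the overall speller accuracies (88%-97%) reported above.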

  6. Feature-based attention elicits surround suppression in feature space.

    PubMed

    Störmer, Viola S; Alvarez, George A

    2014-09-08

    It is known that focusing attention on a particular feature (e.g., the color red) facilitates the processing of all objects in the visual field containing that feature [1-7]. Here, we show that such feature-based attention not only facilitates processing but also actively inhibits processing of similar, but not identical, features globally across the visual field. We combined behavior and electrophysiological recordings of frequency-tagged potentials in human observers to measure this inhibitory surround in feature space. We found that sensory signals of an attended color (e.g., red) were enhanced, whereas sensory signals of colors similar to the target color (e.g., orange) were suppressed relative to colors more distinct from the target color (e.g., yellow). Importantly, this inhibitory effect spreads globally across the visual field, thus operating independently of location. These findings suggest that feature-based attention comprises an excitatory peak surrounded by a narrow inhibitory zone in color space to attenuate the most distracting and potentially confusable stimuli during visual perception. This selection profile is akin to what has been reported for location-based attention [8-10] and thus suggests that such center-surround mechanisms are an overarching principle of attention across different domains in the human brain. Copyright © 2014 Elsevier Ltd. All rights reserved.
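    The selection profile described above (enhancement at the attended color, suppression of similar colors, little effect on distinct ones) is naturally modeled as a difference of Gaussians over feature space. A sketch with arbitrary widths and amplitudes, not fitted to the reported data:

```python
import numpy as np

def attention_profile(delta_hue, exc_w=15.0, inh_w=40.0, exc_a=1.0, inh_a=0.5):
    """Center-surround ("Mexican hat") modulation as a function of the
    distance in hue (deg) between a stimulus color and the attended color."""
    exc = exc_a * np.exp(-0.5 * (delta_hue / exc_w) ** 2)  # narrow excitation
    inh = inh_a * np.exp(-0.5 * (delta_hue / inh_w) ** 2)  # broad inhibition
    return exc - inh

hues = np.array([0.0, 30.0, 90.0])  # target-like, similar, distinct colors
mod = attention_profile(hues)
# mod[0] > 0: enhancement at the attended color (e.g., red)
# mod[1] < 0: suppression of similar colors (e.g., orange)
# mod[2] near 0: little change for distinct colors (e.g., yellow)
```

    A narrow excitatory Gaussian minus a broader, weaker inhibitory one yields exactly the enhancement-with-inhibitory-surround shape inferred from the frequency-tagged potentials.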

  7. Impaired visual search in rats reveals cholinergic contributions to feature binding in visuospatial attention.

    PubMed

    Botly, Leigh C P; De Rosa, Eve

    2012-10-01

    The visual search task established the feature integration theory of attention in humans and measures visuospatial attentional contributions to feature binding. We recently demonstrated that the neuromodulator acetylcholine (ACh), from the nucleus basalis magnocellularis (NBM), supports the attentional processes required for feature binding using a rat digging-based task. Additional research has demonstrated cholinergic contributions from the NBM to visuospatial attention in rats. Here, we combined these lines of evidence and employed visual search in rats to examine whether cortical cholinergic input supports visuospatial attention specifically for feature binding. We trained 18 male Long-Evans rats to perform visual search using touch screen-equipped operant chambers. Sessions comprised Feature Search (no feature binding required) and Conjunctive Search (feature binding required) trials using multiple stimulus set sizes. Following acquisition of visual search, 8 rats received bilateral NBM lesions using 192 IgG-saporin to selectively reduce cholinergic afferentation of the neocortex, which we hypothesized would selectively disrupt the visuospatial attentional processes needed for efficient conjunctive visual search. As expected, relative to sham-lesioned rats, ACh-NBM-lesioned rats took significantly longer to locate the target stimulus on Conjunctive Search, but not Feature Search trials, thus demonstrating that cholinergic contributions to visuospatial attention are important for feature binding in rats.

  8. Causal involvement of visual area MT in global feature-based enhancement but not contingent attentional capture.

    PubMed

    Painter, David R; Dux, Paul E; Mattingley, Jason B

    2015-09-01

    When visual attention is set for a particular target feature, such as color or shape, neural responses to that feature are enhanced across the visual field. This global feature-based enhancement is hypothesized to underlie the contingent attentional capture effect, in which task-irrelevant items with the target feature capture spatial attention. In humans, however, different cortical regions have been implicated in global feature-based enhancement and contingent capture. Here, we applied intermittent theta-burst stimulation (iTBS) to assess the causal roles of two regions of extrastriate cortex, right area MT and the right temporoparietal junction (TPJ), in both global feature-based enhancement and contingent capture. We recorded cortical activity using EEG while participants monitored centrally for targets defined by color and ignored peripheral checkerboards that matched the distractor or target color. In central vision, targets were preceded by colored cues designed to capture attention. Stimuli flickered at unique frequencies, evoking distinct cortical oscillations. Analyses of these oscillations and behavioral performance revealed contingent capture in central vision and global feature-based enhancement in the periphery. Stimulation of right area MT selectively increased global feature-based enhancement, but did not influence contingent attentional capture. By contrast, stimulation of the right TPJ left both processes unaffected. Our results reveal a causal role for right area MT in feature-based attention, and suggest that global feature-based enhancement does not underlie the contingent capture effect. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Research on metallic material defect detection based on bionic sensing of human visual properties

    NASA Astrophysics Data System (ADS)

    Zhang, Pei Jiang; Cheng, Tao

    2018-05-01

    Because the human visual system can quickly lock onto areas of interest in a complex natural environment and focus on them, this paper proposes a bionic-sensing visual inspection model for detecting defects in metallic materials in the mechanical field, built by simulating the imaging characteristics of the human visual attention mechanism. First, biologically salient low-level features are extracted, and expert defect annotations are used as the intermediate features of simulated visual perception. An SVM is then trained on the high-level features of metal-material visual defects. Finally, by weighting the contribution of each feature level, a bionic detection model for metal-material defects that simulates human visual characteristics is obtained.

  10. Toward the influence of temporal attention on the selection of targets in a visual search task: An ERP study.

    PubMed

    Rolke, Bettina; Festl, Freya; Seibold, Verena C

    2016-11-01

    We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.

  11. Object-based attentional selection modulates anticipatory alpha oscillations

    PubMed Central

    Knakker, Balázs; Weiss, Béla; Vidnyánszky, Zoltán

    2015-01-01

    Visual cortical alpha oscillations are involved in attentional gating of incoming visual information. It has been shown that spatial and feature-based attentional selection result in increased alpha oscillations over the cortical regions representing sensory input originating from the unattended visual field and task-irrelevant visual features, respectively. However, whether attentional gating in the case of object-based selection is also associated with alpha oscillations has not been investigated before. Here we measured anticipatory electroencephalography (EEG) alpha oscillations while participants were cued to attend to foveal face or word stimuli, the processing of which is known to have right and left hemispheric lateralization, respectively. The results revealed that in the case of simultaneously displayed, overlapping face and word stimuli, attending to the words led to increased power of parieto-occipital alpha oscillations over the right hemisphere as compared to when faces were attended. This object category-specific modulation of the hemispheric lateralization of anticipatory alpha oscillations was maintained during sustained attentional selection of sequentially presented face and word stimuli. These results imply that in the case of object-based attentional selection—similarly to spatial and feature-based attention—gating of visual information processing might involve visual cortical alpha oscillations. PMID:25628554

  12. Feature confirmation in object perception: Feature integration theory 26 years on from the Treisman Bartlett lecture.

    PubMed

    Humphreys, Glyn W

    2016-10-01

    The Treisman Bartlett lecture, reported in the Quarterly Journal of Experimental Psychology in 1988, provided a major overview of the feature integration theory of attention. This has continued to be a dominant account of human visual attention to this day. The current paper provides a summary of the work reported in the lecture and an update on critical aspects of the theory as applied to visual object perception. The paper highlights the emergence of findings that pose significant challenges to the theory and which suggest that revisions are required that allow for (a) several rather than a single form of feature integration, (b) some forms of feature integration to operate preattentively, (c) stored knowledge about single objects and interactions between objects to modulate perceptual integration, (d) the application of feature-based inhibition to object files where visual features are specified, which generates feature-based spreading suppression and scene segmentation, and (e) a role for attention in feature confirmation rather than feature integration in visual selection. A feature confirmation account of attention in object perception is outlined.

  13. Feature-Based Memory-Driven Attentional Capture: Visual Working Memory Content Affects Visual Attention

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan

    2006-01-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…

  14. Visual attention: The past 25 years

    PubMed Central

    Carrasco, Marisa

    2012-01-01

    This review focuses on covert attention and how it alters early vision. I explain why attention is considered a selective process, the constructs of covert attention, spatial endogenous and exogenous attention, and feature-based attention. I explain how in the last 25 years research on attention has characterized the effects of covert attention on spatial filters and how attention influences the selection of stimuli of interest. This review includes the effects of spatial attention on discriminability and appearance in tasks mediated by contrast sensitivity and spatial resolution; the effects of feature-based attention on basic visual processes, and a comparison of the effects of spatial and feature-based attention. The emphasis of this review is on psychophysical studies, but relevant electrophysiological and neuroimaging studies and models regarding how and where neuronal responses are modulated are also discussed. PMID:21549742

  15. Visual attention: the past 25 years.

    PubMed

    Carrasco, Marisa

    2011-07-01

    This review focuses on covert attention and how it alters early vision. I explain why attention is considered a selective process, the constructs of covert attention, spatial endogenous and exogenous attention, and feature-based attention. I explain how in the last 25 years research on attention has characterized the effects of covert attention on spatial filters and how attention influences the selection of stimuli of interest. This review includes the effects of spatial attention on discriminability and appearance in tasks mediated by contrast sensitivity and spatial resolution; the effects of feature-based attention on basic visual processes, and a comparison of the effects of spatial and feature-based attention. The emphasis of this review is on psychophysical studies, but relevant electrophysiological and neuroimaging studies and models regarding how and where neuronal responses are modulated are also discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Object-based target templates guide attention during visual search.

    PubMed

    Berggren, Nick; Eimer, Martin

    2018-05-03

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (sustained posterior contralateral negativity; SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms poststimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. The role of lightness, hue and saturation in feature-based visual attention.

    PubMed

    Stuart, Geoffrey W; Barsdell, Wendy N; Day, Ross H

    2014-03-01

    Visual attention is used to select part of the visual array for higher-level processing. Visual selection can be based on spatial location, but it has also been demonstrated that multiple locations can be selected simultaneously on the basis of a visual feature such as color. One task that has been used to demonstrate feature-based attention is the judgement of the symmetry of simple four-color displays. In a typical task, when symmetry is violated, four squares on either side of the display do not match. When four colors are involved, symmetry judgements are made more quickly than when only two of the four colors are involved. This indicates that symmetry judgements are made one color at a time. Previous studies have confounded lightness, hue, and saturation when defining the colors used in such displays. In three experiments, symmetry was defined by lightness alone, lightness plus hue, or by hue or saturation alone, with lightness levels randomised. The difference between judgements of two- and four-color asymmetry was maintained, showing that hue and saturation can provide the sole basis for feature-based attentional selection. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  18. Direct evidence for attention-dependent influences of the frontal eye-fields on feature-responsive visual cortex.

    PubMed

    Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon

    2014-11-01

    Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased the blood oxygen level-dependent (BOLD) signals in visual areas processing "target feature" but not in "distracter feature"-processing regions. TMS-induced BOLD signals increase in motion-responsive visual cortex (MT+) when motion was attended in a display with moving dots superimposed on face stimuli, but in face-responsive fusiform area (FFA) when faces were attended to. These TMS effects on BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property. © The Author 2013. Published by Oxford University Press.

  19. Visual search, visual streams, and visual architectures.

    PubMed

    Green, M

    1991-10-01

    Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

  20. Temporal Correlation Mechanisms and Their Role in Feature Selection: A Single-Unit Study in Primate Somatosensory Cortex

    PubMed Central

    Gomez-Ramirez, Manuel; Trzcinski, Natalie K.; Mihalas, Stefan; Niebur, Ernst

    2014-01-01

    Studies in vision show that attention enhances the firing rates of cells when it is directed towards their preferred stimulus feature. However, it is unknown whether other sensory systems employ this mechanism to mediate feature selection within their modalities. Moreover, whether feature-based attention modulates the correlated activity of a population is unclear. Indeed, temporal correlation codes such as spike-synchrony and spike-count correlations (rsc) are believed to play a role in stimulus selection by increasing the signal and reducing the noise in a population, respectively. Here, we investigate (1) whether feature-based attention biases the correlated activity between neurons when attention is directed towards their common preferred feature, (2) the interplay between spike-synchrony and rsc during feature selection, and (3) whether feature attention effects are common across the visual and tactile systems. Single-unit recordings were made in secondary somatosensory cortex of three non-human primates while animals engaged in tactile feature (orientation and frequency) and visual discrimination tasks. We found that both firing rate and spike-synchrony between neurons with similar feature selectivity were enhanced when attention was directed towards their preferred feature. However, attention effects on spike-synchrony were twice as large as those on firing rate, and had a tighter relationship with behavioral performance. Further, we observed increased rsc when attention was directed towards the visual modality (i.e., away from touch). These data suggest that similar feature selection mechanisms are employed in vision and touch, and that temporal correlation codes such as spike-synchrony play a role in mediating feature selection. We posit that feature-based selection operates by implementing multiple mechanisms that reduce the overall noise levels in the neural population and synchronize activity across subpopulations that encode the relevant features of sensory stimuli. PMID:25423284
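    Spike-count correlation (rsc) as used above is simply the Pearson correlation of trial-by-trial spike counts between two neurons. A toy computation on synthetic counts, where a shared trial-to-trial gain fluctuation induces the correlation (all parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# A fluctuation shared across both neurons produces correlated counts.
shared = rng.normal(0.0, 1.0, n_trials)
counts_a = np.maximum(0, np.round(10 + 2.0 * shared + rng.normal(0, 1, n_trials)))
counts_b = np.maximum(0, np.round(12 + 2.0 * shared + rng.normal(0, 1, n_trials)))

def spike_count_correlation(x, y):
    """rsc: Pearson correlation of spike counts across trials."""
    return np.corrcoef(x, y)[0, 1]

rsc = spike_count_correlation(counts_a, counts_b)
```

    Because the shared component here is large relative to the independent noise, rsc comes out well above zero; reducing the shared gain shrinks rsc, which is the sense in which lower rsc reflects reduced common noise in a population.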

  1. Feature-based attentional modulation increases with stimulus separation in divided-attention tasks.

    PubMed

    Sally, Sharon L; Vidnyánsky, Zoltán; Papathomas, Thomas V

    2009-01-01

    Attention modifies our visual experience by selecting certain aspects of a scene for further processing. It is therefore important to understand factors that govern the deployment of selective attention over the visual field. Both location and feature-specific mechanisms of attention have been identified and their modulatory effects can interact at a neural level (Treue and Martinez-Trujillo, 1999). The effects of spatial parameters on feature-based attentional modulation were examined for the feature dimensions of orientation, motion and color using three divided-attention tasks. Subjects performed concurrent discriminations of two briefly presented targets (Gabor patches) to the left and right of a central fixation point at eccentricities of ±2.5°, 5°, 10° and 15° in the horizontal plane. Gabors were size-scaled to maintain consistent single-task performance across eccentricities. For all feature dimensions, the data show a linear increase in the attentional effects with target separation. In a control experiment, Gabors were presented on an isoeccentric viewing arc at 10° and 15° at the closest spatial separation (±2.5°) of the main experiment. Under these conditions, feature-based attentional effects were largely eliminated. Our results are consistent with the hypothesis that feature-based attention prioritizes the processing of attended features. Feature-based attentional mechanisms may have helped direct the attentional focus to the appropriate target locations at greater separations, whereas similar assistance may not have been necessary at closer target spacings. The results of the present study specify conditions under which dual-task performance benefits from sharing similar target features and may therefore help elucidate the processes by which feature-based attention operates.

  2. Feature-based memory-driven attentional capture: visual working memory content affects visual attention.

    PubMed

    Olivers, Christian N L; Meijer, Frank; Theeuwes, Jan

    2006-10-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by an additional memory task. Singleton distractors interfered even more when they were identical or related to the object held in memory, but only when it was difficult to verbalize the memory content. Furthermore, this content-specific interaction occurred for features that were relevant to the memory task but not for irrelevant features of the same object or for once-remembered objects that could be forgotten. Finally, memory-related distractors attracted more eye movements but did not result in longer fixations. The results demonstrate memory-driven attentional capture on the basis of content-specific representations. Copyright 2006 APA.

  3. Interactions between space-based and feature-based attention.

    PubMed

    Leonard, Carly J; Balestreri, Angela; Luck, Steven J

    2015-02-01

    Although early research suggested that attention to nonspatial features (e.g., red) was confined to stimuli appearing at an attended spatial location, more recent research has emphasized the global nature of feature-based attention. For example, a distractor sharing a target feature may capture attention even if it occurs at a task-irrelevant location. Such findings have been used to argue that feature-based attention operates independently of spatial attention. However, feature-based attention may nonetheless interact with spatial attention, yielding larger feature-based effects at attended locations than at unattended locations. The present study tested this possibility. In 2 experiments, participants viewed a rapid serial visual presentation (RSVP) stream and identified a target letter defined by its color. Target-colored distractors were presented at various task-irrelevant locations during the RSVP stream. We found that feature-driven attentional capture effects were largest when the target-colored distractor was closer to the attended location. These results demonstrate that spatial attention modulates the strength of feature-based attention capture, calling into question the prior evidence that feature-based attention operates in a global manner that is independent of spatial attention.

  4. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness

    PubMed Central

    Spering, Miriam; Carrasco, Marisa

    2012-01-01

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids -- stimuli composed of two orthogonally drifting gratings, presented separately to each eye -- in human observers. Monocular adaptation to one grating prior to the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating's motion direction or to both (neutral condition). We show that observers were better in detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating's motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted towards the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. PMID:22649238

  5. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness.

    PubMed

    Spering, Miriam; Carrasco, Marisa

    2012-05-30

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth-pursuit eye movements in response to moving dichoptic plaids--stimuli composed of two orthogonally drifting gratings, presented separately to each eye--in human observers. Monocular adaptation to one grating before the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating's motion direction or to both (neutral condition). We show that observers were better at detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating's motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted toward the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it.

  6. Activity in human visual and parietal cortex reveals object-based attention in working memory.

    PubMed

    Peters, Benjamin; Kaiser, Jochen; Rahm, Benjamin; Bledowski, Christoph

    2015-02-25

    Visual attention enables observers to select behaviorally relevant information based on spatial locations, features, or objects. Attentional selection is not limited to physically present visual information, but can also operate on internal representations maintained in working memory (WM) in service of higher-order cognition. However, little is known about whether attention to WM contents follows the same principles as attention to sensory stimuli. To address this question, we investigated in humans whether the typically observed effects of object-based attention in perception are also evident for object-based attentional selection of internal object representations in WM. In full accordance with effects in visual perception, the key behavioral and neuronal characteristics of object-based attention were observed in WM. Specifically, we found that reaction times were shorter when shifting attention to memory positions located on the currently attended object compared with equidistant positions on a different object. Furthermore, functional magnetic resonance imaging and multivariate pattern analysis of visuotopic activity in visual (areas V1-V4) and parietal cortex revealed that directing attention to one position of an object held in WM also enhanced brain activation for other positions on the same object, suggesting that attentional selection in WM activates the entire object. This study demonstrated that all characteristic features of object-based attention are present in WM, which thus follows the same principles as perception. Copyright © 2015 the authors 0270-6474/15/353360-10$15.00/0.

  7. Anticipatory Attentional Suppression of Visual Features Indexed by Oscillatory Alpha-Band Power Increases: A High-Density Electrical Mapping Study

    PubMed Central

    Snyder, Adam C.; Foxe, John J.

    2010-01-01

    Retinotopically specific increases in alpha-band (~10 Hz) oscillatory power have been strongly implicated in the suppression of processing for irrelevant parts of the visual field during the deployment of visuospatial attention. Here, we asked whether this alpha suppression mechanism also plays a role in the nonspatial anticipatory biasing of feature-based attention. Visual word cues informed subjects what the task-relevant feature of an upcoming visual stimulus (S2) was, while high-density electroencephalographic recordings were acquired. We examined anticipatory oscillatory activity in the Cue-to-S2 interval (~2 s). Subjects were cued on a trial-by-trial basis to attend to either the color or direction of motion of an upcoming dot field array, and to respond when they detected that a subset of the dots differed from the majority along the target feature dimension. We used the features of color and motion, expressly because they have well known, spatially separated cortical processing areas, to distinguish shifts in alpha power over areas processing each feature. Alpha power from dorsal regions increased when motion was the irrelevant feature (i.e., color was cued), and alpha power from ventral regions increased when color was irrelevant. Thus, alpha-suppression mechanisms appear to operate during feature-based selection in much the same manner as has been shown for space-based attention. PMID:20237273
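
The alpha-power index used in studies like this one can be illustrated with a toy computation: band power near ~10 Hz is the summed squared spectral amplitude over roughly 8-12 Hz, and "alpha suppression" of a region corresponds to that power being higher over areas processing the irrelevant feature. A minimal pure-Python sketch (synthetic one-second signals; the sampling rate, band edges, and all names are illustrative assumptions, not taken from the study):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Estimate power in [f_lo, f_hi] Hz by a direct DFT over the band."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2):
        f = k * fs / n  # frequency of DFT bin k
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / (n * n)
    return power

fs = 250  # Hz; a plausible EEG sampling rate (assumption)
t = [i / fs for i in range(fs)]  # 1 s of data
# Hypothetical traces: low alpha over the relevant region, high alpha over the irrelevant one.
relevant = [0.2 * math.sin(2 * math.pi * 10 * ti) for ti in t]
irrelevant = [1.0 * math.sin(2 * math.pi * 10 * ti) for ti in t]

assert band_power(irrelevant, fs, 8, 12) > band_power(relevant, fs, 8, 12)
```

In practice such an index would be computed with an FFT- or wavelet-based estimator over many trials and electrodes; the sketch only shows the quantity being compared.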

  8. Attentional Selection of Feature Conjunctions Is Accomplished by Parallel and Independent Selection of Single Features.

    PubMed

    Andersen, Søren K; Müller, Matthias M; Hillyard, Steven A

    2015-07-08

    Experiments that study feature-based attention have often examined situations in which selection is based on a single feature (e.g., the color red). However, in more complex situations relevant stimuli may not be set apart from other stimuli by a single defining property but by a specific combination of features. Here, we examined sustained attentional selection of stimuli defined by conjunctions of color and orientation. Human observers attended to one out of four concurrently presented superimposed fields of randomly moving horizontal or vertical bars of red or blue color to detect brief intervals of coherent motion. Selective stimulus processing in early visual cortex was assessed by recordings of steady-state visual evoked potentials (SSVEPs) elicited by each of the flickering fields of stimuli. We directly contrasted attentional selection of single features and feature conjunctions and found that SSVEP amplitudes in conditions in which selection was based on a single feature only (color or orientation) exactly predicted the magnitude of attentional enhancement of SSVEPs when attending to a conjunction of both features. Furthermore, enhanced SSVEP amplitudes elicited by attended stimuli were accompanied by equivalent reductions of SSVEP amplitudes elicited by unattended stimuli in all cases. We conclude that attentional selection of a feature-conjunction stimulus is accomplished by the parallel and independent facilitation of its constituent feature dimensions in early visual cortex. The ability to perceive the world is limited by the brain's processing capacity. Attention affords adaptive behavior by selectively prioritizing processing of relevant stimuli based on their features (location, color, orientation, etc.).
We found that attentional mechanisms for selection of different features belonging to the same object operate independently and in parallel: concurrent attentional selection of two stimulus features is simply the sum of attending to each of those features separately. This result is key to understanding attentional selection in complex (natural) scenes, where relevant stimuli are likely to be defined by a combination of stimulus features. Copyright © 2015 the authors 0270-6474/15/359912-08$15.00/0.

  9. Bindings in working memory: The role of object-based attention.

    PubMed

    Gao, Zaifeng; Wu, Fan; Qiu, Fangfang; He, Kaifeng; Yang, Yue; Shen, Mowei

    2017-02-01

    Over the past decade, it has been debated whether retaining bindings in working memory (WM) requires more attention than retaining constituent features, focusing on domain-general attention and space-based attention. Recently, we proposed that retaining bindings in WM needs more object-based attention than retaining constituent features (Shen, Huang, & Gao, 2015, Journal of Experimental Psychology: Human Perception and Performance, doi: 10.1037/xhp0000018 ). However, only unitized visual bindings were examined; to establish the role of object-based attention in retaining bindings in WM, more empirical evidence is required. We tested 4 new bindings that had been suggested to require no more attention than their constituent features in the WM maintenance phase: The two constituent features of binding were stored in different WM modules (cross-module binding, Experiment 1), from auditory and visual modalities (cross-modal binding, Experiment 2), or temporally (cross-time binding, Experiment 3) or spatially (cross-space binding, Experiments 4-6) separated. In the critical condition, we added a secondary object feature-report task during the delay interval of the change-detection task, such that the secondary task competed for object-based attention with the to-be-memorized stimuli. If more object-based attention is required for retaining bindings than for retaining constituent features, the secondary task should impair the binding performance to a larger degree relative to the performance of constituent features. Indeed, Experiments 1-6 consistently revealed a significantly larger impairment for bindings than for the constituent features, suggesting that object-based attention plays a pivotal role in retaining bindings in WM.

  10. Feature-based interference from unattended visual field during attentional tracking in younger and older adults.

    PubMed

    Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman

    2011-02-01

    The ability to attend to multiple objects that move in the visual field is important for many aspects of daily functioning. The attentional capacity for such dynamic tracking, however, is highly limited and undergoes age-related decline. Several aspects of the tracking process can influence performance. Here, we investigated effects of feature-based interference from distractor objects that appear in unattended regions of the visual field with a hemifield-tracking task. Younger and older participants performed an attentional tracking task in one hemifield while distractor objects were concurrently presented in the unattended hemifield. Feature similarity between objects in the attended and unattended hemifields as well as motion speed and the number of to-be-tracked objects were parametrically manipulated. The results show that increasing feature overlap leads to greater interference from the unattended visual field. This effect of feature-based interference was only present in the slow speed condition, indicating that the interference is mainly modulated by perceptual demands. High-performing older adults showed a similar interference effect as younger adults, whereas low-performing older adults showed poor tracking performance overall.

  11. Toward a Unified Theory of Visual Area V4

    PubMed Central

    Roe, Anna W.; Chelazzi, Leonardo; Connor, Charles E.; Conway, Bevil R.; Fujita, Ichiro; Gallant, Jack L.; Lu, Haidong; Vanduffel, Wim

    2016-01-01

    Visual area V4 is a midtier cortical area in the ventral visual pathway. It is crucial for visual object recognition and has been a focus of many studies on visual attention. However, there is no unifying view of V4’s role in visual processing. Neither is there an understanding of how its role in feature processing interfaces with its role in visual attention. This review captures our current knowledge of V4, largely derived from electrophysiological and imaging studies in the macaque monkey. Based on recent discovery of functionally specific domains in V4, we propose that the unifying function of V4 circuitry is to enable selective extraction of specific functional domain-based networks, whether it be by bottom-up specification of object features or by top-down attentionally driven selection. PMID:22500626

  12. Anatomical constraints on attention: Hemifield independence is a signature of multifocal spatial selection

    PubMed Central

    Alvarez, George A; Gill, Jonathan; Cavanagh, Patrick

    2012-01-01

    Previous studies have shown independent attentional selection of targets in the left and right visual hemifields during attentional tracking (Alvarez & Cavanagh, 2005) but not during a visual search (Luck, Hillyard, Mangun, & Gazzaniga, 1989). Here we tested whether multifocal spatial attention is the critical process that operates independently in the two hemifields. It is explicitly required in tracking (attend to a subset of object locations, suppress the others) but not in the standard visual search task (where all items are potential targets). We used a modified visual search task in which observers searched for a target within a subset of display items, where the subset was selected based on location (Experiments 1 and 3A) or based on a salient feature difference (Experiments 2 and 3B). The results show hemifield independence in this subset visual search task with location-based selection but not with feature-based selection; this effect cannot be explained by general difficulty (Experiment 4). Combined, these findings suggest that hemifield independence is a signature of multifocal spatial attention and highlight the need for cognitive and neural theories of attention to account for anatomical constraints on selection mechanisms. PMID:22637710

  13. The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.

    PubMed

    Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli

    2009-11-18

    We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.

  14. Attention improves encoding of task-relevant features in the human visual cortex.

    PubMed

    Jehee, Janneke F M; Brady, Devin K; Tong, Frank

    2011-06-01

    When spatial attention is directed toward a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer's task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in visual cortical areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature and not when the contrast of the grating had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks, but color-selective responses were enhanced only when color was task relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information.

  15. Feature-based attentional weighting and spreading in visual working memory

    PubMed Central

    Niklaus, Marcel; Nobre, Anna C.; van Ede, Freek

    2017-01-01

    Attention can be directed at features and feature dimensions to facilitate perception. Here, we investigated whether feature-based attention (FBA) can also dynamically weight feature-specific representations within multi-feature objects held in visual working memory (VWM). Across three experiments, participants retained coloured arrows in working memory and, during the delay, were cued to either the colour or the orientation dimension. We show that directing attention towards a feature dimension (1) improves the performance in the cued feature dimension at the expense of the uncued dimension, (2) is more efficient if directed to the same rather than to different dimensions for different objects, and (3) at least for colour, automatically spreads to the colour representation of non-attended objects in VWM. We conclude that FBA continues to operate on VWM representations (with principles similar to those that govern FBA in the perceptual domain) and challenge the classical view that VWM representations are stored solely as integrated objects. PMID:28233830

  16. The spread of attention across features of a surface

    PubMed Central

    Ernst, Zachary Raymond; Jazayeri, Mehrdad

    2013-01-01

    Contrasting theories of visual attention have emphasized selection by spatial location, individual features, and whole objects. We used functional magnetic resonance imaging to ask whether and how attention to one feature of an object spreads to other features of the same object. Subjects viewed two spatially superimposed surfaces of random dots that were segregated by distinct color-motion conjunctions. The color and direction of motion of each surface changed smoothly and in a cyclical fashion. Subjects were required to track one feature (e.g., color) of one of the two surfaces and detect brief moments when the attended feature diverged from its smooth trajectory. To tease apart the effect of attention to individual features on the hemodynamic response, we used a frequency-tagging scheme. In this scheme, the stimulus features (color and direction of motion) are modulated periodically at distinct frequencies so that the contribution of each feature to the hemodynamics can be inferred from the harmonic response at the corresponding frequency. We found that attention to one feature (e.g., color) of one surface increased the response modulation not only to the attended feature but also to the other feature (e.g., motion) of the same surface. This attentional modulation was evident in multiple visual areas and was present as early as V1. The spread of attention to the behaviorally irrelevant features of a surface suggests that attention may automatically select all features of a single object. Thus object-based attention may be supported by an enhancement of feature-specific sensory signals in the visual cortex. PMID:23883860
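
The frequency-tagging logic described in this abstract can be sketched in a few lines: if each feature's modulation has its own tag frequency, the amplitude of the aggregate response at that frequency estimates the (attention-weighted) contribution of that feature. A minimal pure-Python illustration; the tag frequencies, gain values, and function names are hypothetical, chosen so each modulation completes an integer number of cycles:

```python
import math

def amp_at(signal, fs, f):
    """Amplitude of the component at frequency f (direct DFT bin readout)."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * f * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * f * i / fs) for i, x in enumerate(signal))
    return 2 * math.hypot(re, im) / n

fs, dur = 100, 2.0                  # sample rate (Hz) and duration (s) -- assumptions
f_color, f_motion = 7.0, 11.0       # tag frequencies (integer cycles over 2 s)
gain_color, gain_motion = 1.5, 1.0  # hypothetical attentional gains per feature

n = int(fs * dur)
# Aggregate response: each feature contributes at its own tag frequency.
response = [gain_color * math.sin(2 * math.pi * f_color * i / fs)
            + gain_motion * math.sin(2 * math.pi * f_motion * i / fs)
            for i in range(n)]

# Reading out each tag frequency recovers that feature's gain.
assert abs(amp_at(response, fs, f_color) - gain_color) < 1e-6
assert abs(amp_at(response, fs, f_motion) - gain_motion) < 1e-6
```

The same readout principle applies whether the tagged response is an SSVEP or, as here, a slow hemodynamic modulation; only the usable tag frequencies differ.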

  17. Spatial Attention Reduces Burstiness in Macaque Visual Cortical Area MST.

    PubMed

    Xue, Cheng; Kaping, Daniel; Ray, Sonia Baloni; Krishna, B Suresh; Treue, Stefan

    2017-01-01

    Visual attention modulates the firing rate of neurons in many primate cortical areas. In V4, a cortical area in the ventral visual pathway, spatial attention has also been shown to reduce the tendency of neurons to fire closely separated spikes (burstiness). A recent model proposes that a single mechanism accounts for both the firing rate enhancement and the burstiness reduction in V4, but this has not been empirically tested. It is also unclear if the burstiness reduction by spatial attention is found in other visual areas and for other attentional types. We therefore recorded from single neurons in the medial superior temporal area (MST), a key motion-processing area along the dorsal visual pathway, of two rhesus monkeys while they performed a task engaging both spatial and feature-based attention. We show that in MST, spatial attention is associated with a clear reduction in burstiness that is independent of the concurrent enhancement of firing rate. In contrast, feature-based attention enhances firing rate but is not associated with a significant reduction in burstiness. These results establish burstiness reduction as a widespread effect of spatial attention. They also suggest that in contrast to the recently proposed model, the effects of spatial attention on burstiness and firing rate emerge from different mechanisms. © The Author 2016. Published by Oxford University Press.

  18. Spatial Attention Reduces Burstiness in Macaque Visual Cortical Area MST

    PubMed Central

    Xue, Cheng; Kaping, Daniel; Ray, Sonia Baloni; Krishna, B. Suresh; Treue, Stefan

    2017-01-01

    Visual attention modulates the firing rate of neurons in many primate cortical areas. In V4, a cortical area in the ventral visual pathway, spatial attention has also been shown to reduce the tendency of neurons to fire closely separated spikes (burstiness). A recent model proposes that a single mechanism accounts for both the firing rate enhancement and the burstiness reduction in V4, but this has not been empirically tested. It is also unclear if the burstiness reduction by spatial attention is found in other visual areas and for other attentional types. We therefore recorded from single neurons in the medial superior temporal area (MST), a key motion-processing area along the dorsal visual pathway, of two rhesus monkeys while they performed a task engaging both spatial and feature-based attention. We show that in MST, spatial attention is associated with a clear reduction in burstiness that is independent of the concurrent enhancement of firing rate. In contrast, feature-based attention enhances firing rate but is not associated with a significant reduction in burstiness. These results establish burstiness reduction as a widespread effect of spatial attention. They also suggest that in contrast to the recently proposed model, the effects of spatial attention on burstiness and firing rate emerge from different mechanisms. PMID:28365773

  19. The relationship between visual working memory and attention: retention of precise colour information in the absence of effects on perceptual selection.

    PubMed

    Hollingworth, Andrew; Hwang, Seongmin

    2013-10-19

    We examined the conditions under which a feature value in visual working memory (VWM) recruits visual attention to matching stimuli. Previous work has suggested that VWM supports two qualitatively different states of representation: an active state that interacts with perceptual selection and a passive (or accessory) state that does not. An alternative hypothesis is that VWM supports a single form of representation, with the precision of feature memory controlling whether or not the representation interacts with perceptual selection. The results of three experiments supported the dual-state hypothesis. We established conditions under which participants retained a relatively precise representation of a particular colour. If the colour was immediately task relevant, it reliably recruited attention to matching stimuli. However, if the colour was not immediately task relevant, it failed to interact with perceptual selection. Feature maintenance in VWM is not necessarily equivalent to feature-based attentional selection.

  20. An object-based visual attention model for robotic applications.

    PubMed

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

    By extending the integrated competition hypothesis, this paper presents an object-based visual attention model that selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed model involves seven modules: learning of object representations stored in long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between the top-down and bottom-up pathways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically while an object is attended. A dual-coding object representation consisting of local and global codings is proposed: intensity, color, and orientation features build the local coding, and a contour feature constitutes the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. Mediation between automatic bottom-up competition and conscious top-down biasing then yields a location-based saliency map. Proto-object-based saliency is evaluated by combining location-based saliency within each proto-object. The most salient proto-object is selected for attention and finally passed to the perceptual completion processing module to yield a complete object region. The model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions validate the model.
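
The pipeline this abstract describes (normalized bottom-up feature maps, top-down weights, proto-object-based selection) follows the general logic of Itti-Koch-style saliency models. Below is a minimal sketch of that generic scheme, not the authors' implementation; the toy 2-D maps, weights, and function names are all illustrative assumptions:

```python
def normalize(m):
    """Scale a 2-D map (list of rows) to [0, 1]; constant maps become all zeros."""
    lo = min(min(r) for r in m)
    hi = max(max(r) for r in m)
    rng = (hi - lo) or 1.0
    return [[(v - lo) / rng for v in r] for r in m]

def saliency(feature_maps, top_down=None):
    """Combine normalized feature maps, weighted by optional top-down biases."""
    top_down = top_down or {name: 1.0 for name in feature_maps}
    maps = {name: normalize(m) for name, m in feature_maps.items()}
    h = len(next(iter(maps.values())))
    w = len(next(iter(maps.values()))[0])
    return [[sum(top_down[name] * maps[name][y][x] for name in maps)
             for x in range(w)] for y in range(h)]

def most_salient(proto_objects, sal):
    """Select the proto-object (a set of (y, x) pixels) with the highest mean saliency."""
    return max(proto_objects, key=lambda obj: sum(sal[y][x] for y, x in obj) / len(obj))

# Toy 2x2 feature maps; the top-down bias favors the color dimension.
intensity = [[0.1, 0.9], [0.1, 0.1]]
color = [[0.2, 0.8], [0.2, 0.2]]
sal = saliency({"intensity": intensity, "color": color},
               top_down={"intensity": 1.0, "color": 2.0})
proto_objects = [frozenset({(0, 0), (1, 0)}), frozenset({(0, 1), (1, 1)})]
winner = most_salient(proto_objects, sal)  # the right-hand proto-object wins
```

In the full model the winning proto-object would then be handed to perceptual completion; the sketch stops at selection.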

  1. Interaction Between Spatial and Feature Attention in Posterior Parietal Cortex

    PubMed Central

    Ibos, Guilhem; Freedman, David J.

    2016-01-01

    Lateral intraparietal (LIP) neurons encode a vast array of sensory and cognitive variables. Recently, we proposed that the flexibility of feature representations in LIP reflects the bottom-up integration of sensory signals, modulated by feature-based attention (FBA), from upstream feature-selective cortical neurons. Moreover, LIP activity is also strongly modulated by the position of space-based attention (SBA). However, the mechanisms by which SBA and FBA interact to facilitate the representation of task-relevant spatial and non-spatial features in LIP remain unclear. We recorded from LIP neurons during performance of a task which required monkeys to detect specific conjunctions of color, motion-direction, and stimulus position. Here we show that FBA and SBA potentiate each other's effect in a manner consistent with attention gating the flow of visual information along the cortical visual pathway. Our results suggest that linear bottom-up integrative mechanisms allow LIP neurons to emphasize task-relevant spatial and non-spatial features. PMID:27499082

  3. Incidental biasing of attention from visual long-term memory.

    PubMed

    Fan, Judith E; Turk-Browne, Nicholas B

    2016-06-01

    Holding recently experienced information in mind can help us achieve our current goals. However, such immediate and direct forms of guidance from working memory are less helpful over extended delays or when other related information in long-term memory is useful for reaching these goals. Here we show that information that was encoded in the past but is no longer present or relevant to the task also guides attention. We examined this by associating multiple unique features with novel shapes in visual long-term memory (VLTM), and subsequently testing how memories for these objects biased the deployment of attention. In Experiment 1, VLTM for associated features guided visual search for the shapes, even when these features had never been task-relevant. In Experiment 2, associated features captured attention when presented in isolation during a secondary task that was completely unrelated to the shapes. These findings suggest that long-term memory enables a durable and automatic type of memory-based attentional control. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Feature-selective attention enhances color signals in early visual areas of the human brain.

    PubMed

    Müller, M M; Andersen, S; Trujillo, N J; Valdés-Sosa, P; Malinowski, P; Hillyard, S A

    2006-09-19

    We used an electrophysiological measure of selective stimulus processing (the steady-state visual evoked potential, SSVEP) to investigate feature-specific attention to color cues. Subjects viewed a display consisting of spatially intermingled red and blue dots that continually shifted their positions at random. The red and blue dots flickered at different frequencies and thereby elicited distinguishable SSVEP signals in the visual cortex. Paying attention selectively to either the red or blue dot population produced an enhanced amplitude of its frequency-tagged SSVEP, which was localized by source modeling to early levels of the visual cortex. A control experiment showed that this selection was based on color rather than flicker frequency cues. This signal amplification of attended color items provides an empirical basis for the rapid identification of feature conjunctions during visual search, as proposed by "guided search" models.
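
The frequency-tagging logic, recovering each dot population's SSVEP from its distinct flicker frequency, can be sketched as below. This is a simplified single-channel illustration; the study's actual pipeline involved source modeling, and the function name and normalization here are assumptions.

```python
import numpy as np

def ssvep_amplitude(eeg, fs, tag_freq):
    """Amplitude of the frequency-tagged SSVEP at one flicker frequency.

    eeg      : 1-D array, one channel's time course.
    fs       : sampling rate in Hz.
    tag_freq : flicker frequency of one dot population in Hz.
    Assumes the epoch length makes tag_freq an exact FFT bin, so no
    window is applied and 2*|X[k]|/N recovers the sinusoid amplitude.
    """
    n = len(eeg)
    spectrum = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - tag_freq))
    return 2.0 * np.abs(spectrum[bin_idx]) / n
```

Comparing this amplitude for attend-red versus attend-blue conditions is how the frequency-tagged attentional enhancement would be quantified.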

  5. Attention improves encoding of task-relevant features in the human visual cortex

    PubMed Central

    Jehee, Janneke F.M.; Brady, Devin K.; Tong, Frank

    2011-01-01

    When spatial attention is directed towards a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer’s task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature, and not when the grating’s contrast had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks but color-selective responses were enhanced only when color was task-relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location, but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features, and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information. PMID:21632942

  6. The effect of visual salience on memory-based choices.

    PubMed

    Pooresmaeili, Arezoo; Bach, Dominik R; Dolan, Raymond J

    2014-02-01

    Deciding whether a stimulus is the "same" as or "different" from a previously presented one involves integrating incoming sensory information, working memory, and perceptual decision making. Visual selective attention plays a crucial role in selecting the relevant information that informs a subsequent course of action. Previous studies have mainly investigated the role of visual attention during the encoding phase of working memory tasks. In this study, we investigate whether manipulating bottom-up attention by changing stimulus visual salience impacts later stages of memory-based decisions. In two experiments, we asked subjects to identify whether a stimulus had the same or a different feature as a memorized sample. We manipulated the visual salience of the test stimuli by varying a task-irrelevant feature contrast. Subjects chose a visually salient item more often when they looked for matching features and less often when they looked for a nonmatch. This pattern of results indicates that salient items are more likely to be identified as a match. We interpret the findings in terms of capacity limitations at a comparison stage, where a visually salient item is more likely to exhaust resources, leading it to be prematurely parsed as a match.

  7. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-08-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.
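
The combination of the three properties can be sketched as below, reading the "simple intersection operation" as an elementwise product of normalized maps. That reading, and all names, are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def combine_saliency(feature_prior, position_prior, feature_distribution):
    """Combine three property maps into one saliency map.

    Each argument is a 2-D map with values in [0, 1]. The intersection
    is sketched as an elementwise product (an assumption), followed by
    renormalization to [0, 1].
    """
    s = feature_prior * position_prior * feature_distribution
    peak = s.max()
    return s / peak if peak > 0 else s
```

In the paper, the individual maps themselves come from SVR, statistical analysis of training samples, and information-theoretic measures over low-level features, which this sketch does not reproduce.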

  8. Category-based guidance of spatial attention during visual search for feature conjunctions.

    PubMed

    Nako, Rebecca; Grubert, Anna; Eimer, Martin

    2016-10-01

    The question whether alphanumerical category is involved in the control of attentional target selection during visual search remains a contentious issue. We tested whether category-based attentional mechanisms would guide the allocation of attention under conditions where targets were defined by a combination of alphanumerical category and a basic visual feature, and search displays could contain both targets and partially matching distractor objects. The N2pc component was used as an electrophysiological marker of attentional object selection in tasks where target objects were defined by a conjunction of color and category (Experiment 1) or shape and category (Experiment 2). Some search displays contained the target or a nontarget object that matched either the target color/shape or its category among 3 nonmatching distractors. In other displays, the target and a partially matching nontarget object appeared together. N2pc components were elicited not only by targets and by color- or shape-matching nontargets, but also by category-matching nontarget objects, even on trials where a target was present in the same display. On these trials, the summed N2pc components to the 2 types of partially matching nontargets were initially equal in size to the target N2pc, suggesting that attention was allocated simultaneously and independently to all objects with target-matching features during the early phase of attentional processing. Results demonstrate that alphanumerical category is a genuine guiding feature that can operate in parallel with color or shape information to control the deployment of attention during visual search. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Feature-based and spatial attentional selection in visual working memory.

    PubMed

    Heuer, Anna; Schubö, Anna

    2016-05-01

    The contents of visual working memory (VWM) can be modulated by spatial cues presented during the maintenance interval ("retrocues"). Here, we examined whether attentional selection of representations in VWM can also be based on features. In addition, we investigated whether the mechanisms of feature-based and spatial attention in VWM differ with respect to parallel access to noncontiguous locations. In two experiments, we tested the efficacy of valid retrocues relying on different kinds of information. Specifically, participants were presented with a typical spatial retrocue pointing to two locations, a symbolic spatial retrocue (numbers mapping onto two locations), and two feature-based retrocues: a color retrocue (a blob of the same color as two of the items) and a shape retrocue (an outline of the shape of two of the items). The two cued items were presented at either contiguous or noncontiguous locations. Overall retrocueing benefits, as compared to a neutral condition, were observed for all retrocue types. Whereas feature-based retrocues yielded benefits for cued items presented at both contiguous and noncontiguous locations, spatial retrocues were only effective when the cued items had been presented at contiguous locations. These findings demonstrate that attentional selection and updating in VWM can operate on different kinds of information, allowing for a flexible and efficient use of this limited system. The observation that the representations of items presented at noncontiguous locations could only be reliably selected with feature-based retrocues suggests that feature-based and spatial attentional selection in VWM rely on different mechanisms, as has been shown for attentional orienting in the external world.

  10. Visual Attention Model Based on Statistical Properties of Neuron Responses

    PubMed Central

    Duan, Haibin; Wang, Xiaohua

    2015-01-01

    Visual attention is a mechanism of the visual system that can select relevant objects from a specific scene. Interactions among neurons in multiple cortical areas are considered to be involved in attentional allocation. However, the characteristics of the encoded features and neuron responses in those attention-related cortices remain unclear. Therefore, this study aims to demonstrate that unusual regions arousing more attention generally cause particular neuron responses. We suppose that visual saliency is obtained on the basis of neuron responses to contexts in natural scenes. A bottom-up visual attention model based on the self-information of neuron responses is proposed to test this hypothesis. Four different color spaces are adopted, and a novel entropy-based combination scheme is designed to make full use of color information. Valuable regions are highlighted while redundant backgrounds are suppressed in the saliency maps obtained by the proposed model. Comparative results reveal that the proposed model outperforms several state-of-the-art models. This study provides insights into neuron-response-based saliency detection and may shed light on the neural mechanism of early visual cortices for bottom-up visual attention. PMID:25747859
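
The core idea, saliency as the self-information of neuron responses (rare responses signal unusual, attention-grabbing regions), can be sketched as below. This is a toy illustration using a histogram estimate of response probability; the model's actual features, multiple color spaces, and entropy-based combination scheme are not reproduced.

```python
import numpy as np

def self_information_saliency(responses, bins=16):
    """Bottom-up saliency as the self-information of responses.

    responses : 2-D array of (model) neuron responses over the image.
    A response value that is rare in the scene has low probability and
    hence high self-information -log p(r), so unusual regions stand
    out. Histogram estimation of p(r) is a simplifying assumption.
    """
    hist, edges = np.histogram(responses, bins=bins)
    p = hist / hist.sum()
    # Map each response to its histogram bin (interior edges only).
    idx = np.clip(np.digitize(responses, edges[1:-1]), 0, bins - 1)
    eps = 1e-12  # avoid log(0) for empty bins
    return -np.log(p[idx] + eps)
```

A region whose responses are common across the scene receives near-zero saliency, while an outlier response dominates the map.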

  11. Reward-associated features capture attention in the absence of awareness: Evidence from object-substitution masking.

    PubMed

    Harris, Joseph A; Donohue, Sarah E; Schoenfeld, Mircea A; Hopf, Jens-Max; Heinze, Hans-Jochen; Woldorff, Marty G

    2016-08-15

    Reward-associated visual features have been shown to capture visual attention, evidenced in faster and more accurate behavioral performance, as well as in neural responses reflecting lateralized shifts of visual attention to those features. Specifically, the contralateral N2pc event-related-potential (ERP) component that reflects attentional shifting exhibits increased amplitude in response to task-relevant targets containing a reward-associated feature. In the present study, we examined the automaticity of such reward-association effects using object-substitution masking (OSM) in conjunction with MEG measures of visual attentional shifts. In OSM, a visual-search array is presented, with the target item to be detected indicated by a surrounding mask (here, four surrounding squares). Delaying the offset of the target-surrounding four-dot mask relative to the offset of the rest of the target/distracter array disrupts the viewer's awareness of the target (masked condition), whereas simultaneous offsets do not (unmasked condition). Here we manipulated whether the color of the OSM target was or was not of a previously reward-associated color. By tracking reward-associated enhancements of behavior and the N2pc in response to masked targets containing a previously rewarded or unrewarded feature, the automaticity of attentional capture by reward could be probed. We found an enhanced N2pc response to targets containing a previously reward-associated color feature. Moreover, this enhancement of the N2pc by reward did not differ between masking conditions, nor did it differ as a function of the apparent visibility of the target within the masked condition. Overall, these results underscore the automaticity of attentional capture by reward-associated features, and demonstrate the ability of feature-based reward associations to shape attentional capture and allocation outside of perceptual awareness. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Global facilitation of attended features is obligatory and restricts divided attention.

    PubMed

    Andersen, Søren K; Hillyard, Steven A; Müller, Matthias M

    2013-11-13

    In many common situations such as driving an automobile it is advantageous to attend concurrently to events at different locations (e.g., the car in front, the pedestrian to the side). While spatial attention can be divided effectively between separate locations, studies investigating attention to nonspatial features have often reported a "global effect", whereby items having the attended feature may be preferentially processed throughout the entire visual field. These findings suggest that spatial and feature-based attention may at times act in direct opposition: spatially divided foci of attention cannot be truly independent if feature attention is spatially global and thereby affects all foci equally. In two experiments, human observers attended concurrently to one of two overlapping fields of dots of different colors presented in both the left and right visual fields. When the same color or two different colors were attended on the two sides, deviant targets were detected accurately, and visual-cortical potentials elicited by attended dots were enhanced. However, when the attended color on one side matched the ignored color on the opposite side, attentional modulation of cortical potentials was abolished. This loss of feature selectivity could be attributed to enhanced processing of unattended items that shared the color of the attended items in the opposite field. Thus, while it is possible to attend to two different colors at the same time, this ability is fundamentally constrained by spatially global feature enhancement in early visual-cortical areas, which is obligatory and persists even when it explicitly conflicts with task demands.

  13. Modality-specificity of Selective Attention Networks.

    PubMed

    Stewart, Hannah J; Amitay, Sygal

    2015-01-01

    To establish the modality specificity and generality of selective attention networks, forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled "general attention." The third component was labeled "auditory attention," as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled "spatial orienting" and "spatial conflict," respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task, all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). These results do not support our a priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selective attention to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality specific.

  14. More than a filter: Feature-based attention regulates the distribution of visual working memory resources.

    PubMed

    Dube, Blaire; Emrich, Stephen M; Al-Aidroos, Naseem

    2017-10-01

    Across 2 experiments we revisited the filter account of how feature-based attention regulates visual working memory (VWM). Originally drawing from discrete-capacity ("slot") models, the filter account proposes that attention operates like the "bouncer in the brain," preventing distracting information from being encoded so that VWM resources are reserved for relevant information. Given recent challenges to the assumptions of discrete-capacity models, we investigated whether feature-based attention plays a broader role in regulating memory. Both experiments used partial report tasks in which participants memorized the colors of circle and square stimuli, and we provided a feature-based goal by manipulating the likelihood that 1 shape would be probed over the other across a range of probabilities. By decomposing participants' responses using mixture and variable-precision models, we estimated the contributions of guesses, nontarget responses, and imprecise memory representations to their errors. Consistent with the filter account, participants were less likely to guess when the probed memory item matched the feature-based goal. Interestingly, this effect varied with goal strength, even across high probabilities where goal-matching information should always be prioritized, demonstrating strategic control over filter strength. Beyond this effect of attention on which stimuli were encoded, we also observed effects on how they were encoded: Estimates of both memory precision and nontarget errors varied continuously with feature-based attention. The results offer support for an extension to the filter account, where feature-based attention dynamically regulates the distribution of resources within working memory so that the most relevant items are encoded with the greatest precision. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
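
The mixture-model decomposition used above can be sketched as below: a Bays-style three-component likelihood over circular report errors, separating precise target reports, uniform guesses, and nontarget reports. The von Mises form and parameter names are standard in this literature but are illustrative here, not taken from the paper.

```python
import numpy as np

def mixture_likelihood(errors, nontarget_errors,
                       p_guess, p_nontarget, kappa):
    """Per-trial likelihood under a three-component mixture model.

    errors           : array (n_trials,) of response minus target
                       value, in radians.
    nontarget_errors : array (n_trials, n_nontargets) of response
                       minus each nontarget value, in radians.
    p_guess          : probability of a uniform random guess.
    p_nontarget      : probability of reporting a nontarget item.
    kappa            : von Mises concentration (memory precision).
    """
    def vonmises_pdf(x):
        # Von Mises density centered at 0 with concentration kappa.
        return np.exp(kappa * np.cos(x)) / (2 * np.pi * np.i0(kappa))

    p_target = 1.0 - p_guess - p_nontarget
    return (p_target * vonmises_pdf(errors)
            + p_guess / (2 * np.pi)
            + p_nontarget * vonmises_pdf(nontarget_errors).mean(axis=1))
```

Fitting such a model (e.g., by maximizing the summed log of these likelihoods) yields the guess, nontarget, and precision estimates the abstract refers to.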

  15. Effects of Spatial and Feature Attention on Disparity-Rendered Structure-From-Motion Stimuli in the Human Visual Cortex

    PubMed Central

    Ip, Ifan Betina; Bridge, Holly; Parker, Andrew J.

    2014-01-01

    An important advance in the study of visual attention has been the identification of a non-spatial component of attention that enhances the response to similar features or objects across the visual field. Here we test whether this non-spatial component can co-select individual features that are perceptually bound into a coherent object. We combined human psychophysics and functional magnetic resonance imaging (fMRI) to demonstrate the ability to co-select individual features from perceptually coherent objects. Our study used binocular disparity and visual motion to define disparity structure-from-motion (dSFM) stimuli. Although the spatial attention system induced strong modulations of the fMRI response in visual regions, the non-spatial system’s ability to co-select features of the dSFM stimulus was less pronounced and variable across subjects. Our results demonstrate that feature and global feature attention effects are variable across participants, suggesting that the feature attention system may be limited in its ability to automatically select features within the attended object. Careful comparison of the task design suggests that even minor differences in the perceptual task may be critical in revealing the presence of global feature attention. PMID:24936974

  16. An Extended Normalization Model of Attention Accounts for Feature-Based Attentional Enhancement of Both Response and Coherence Gain

    PubMed Central

    Krishna, B. Suresh; Treue, Stefan

    2016-01-01

    Paying attention to a sensory feature improves its perception and impairs that of others. Recent work has shown that a Normalization Model of Attention (NMoA) can account for a wide range of physiological findings and the influence of different attentional manipulations on visual performance. A key prediction of the NMoA is that attention to a visual feature like an orientation or a motion direction will increase the response of neurons preferring the attended feature (response gain) rather than increase the sensory input strength of the attended stimulus (input gain). This effect of feature-based attention on neuronal responses should translate to similar patterns of improvement in behavioral performance, with psychometric functions showing response gain rather than input gain when attention is directed to the task-relevant feature. In contrast, we report here that when human subjects are cued to attend to one of two motion directions in a transparent motion display, attentional effects manifest as a combination of input and response gain. Further, the impact on input gain is greater when attention is directed towards a narrow range of motion directions than when it is directed towards a broad range. These results are captured by an extended NMoA, which either includes a stimulus-independent attentional contribution to normalization or utilizes direction-tuned normalization. The proposed extensions are consistent with the feature-similarity gain model of attention and the attentional modulation in extrastriate area MT, where neuronal responses are enhanced and suppressed by attention to preferred and non-preferred motion directions respectively. PMID:27977679
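
The contrast between the two gain modes can be illustrated with a standard Naka-Rushton contrast-response function. This is a generic sketch, not the paper's extended NMoA; parameter values are arbitrary.

```python
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0,
                 response_gain=1.0, input_gain=1.0):
    """Contrast-response function with two attentional gain modes.

    response_gain multiplicatively scales the output (response gain);
    input_gain scales the effective stimulus contrast before the
    nonlinearity (input gain), shifting the curve leftward on a
    log-contrast axis. Parameter values are illustrative.
    """
    c_eff = input_gain * np.asarray(c, dtype=float)
    return response_gain * r_max * c_eff**n / (c_eff**n + c50**n)
```

At saturating contrasts the two modes dissociate: input gain barely changes the asymptotic response, while response gain scales it directly, which is what makes the two patterns distinguishable in psychometric data.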

  17. Feature-based and object-based attention orientation during short-term memory maintenance.

    PubMed

    Ku, Yixuan

    2015-12-01

    Top-down attention biases the short-term memory (STM) processing at multiple stages. Orienting attention during the maintenance period of STM by a retrospective cue (retro-cue) strengthens the representation of the cued item and improves the subsequent STM performance. In a recent article, Backer et al. (Backer KC, Binns MA, Alain C. J Neurosci 35: 1307-1318, 2015) extended these findings from the visual to the auditory domain and combined electroencephalography to dissociate neural mechanisms underlying feature-based and object-based attention orientation. Both event-related potentials and neural oscillations explained the behavioral benefits of retro-cues and favored the theory that feature-based and object-based attention orientation were independent. Copyright © 2015 the American Physiological Society.

  18. Early Visual Cortex Dynamics during Top-Down Modulated Shifts of Feature-Selective Attention.

    PubMed

    Müller, Matthias M; Trautmann, Mireille; Keitel, Christian

    2016-04-01

    Shifting attention from one color to another color, or from color to another feature dimension such as shape or orientation, is imperative when searching for a certain object in a cluttered scene. Most attention models that emphasize feature-based selection implicitly assume that all shifts in feature-selective attention follow identical temporal dynamics. Here, we recorded time courses of behavioral data and steady-state visual evoked potentials (SSVEPs), an objective electrophysiological measure of neural dynamics in early visual cortex, to investigate temporal dynamics when participants shifted attention from color or orientation toward color or orientation, respectively. SSVEPs were elicited by four random dot kinematograms that flickered at different frequencies. Each random dot kinematogram was composed of dashes that uniquely combined two features from the dimensions color (red or blue) and orientation (slash or backslash). Participants were cued to attend to one feature (such as color or orientation) and respond to coherent motion targets of the to-be-attended feature. We found that shifts toward color occurred earlier after the shifting cue compared with shifts toward orientation, regardless of the original feature (i.e., color or orientation). This was paralleled in SSVEP amplitude modulations as well as in the time course of behavioral data. Overall, our results suggest different neural dynamics during shifts of attention from color and orientation and the respective shifting destinations, namely, either toward color or toward orientation.

  19. Attentive Tracking Disrupts Feature Binding in Visual Working Memory

    PubMed Central

    Fougnie, Daryl; Marois, René

    2009-01-01

    One of the most influential theories in visual cognition proposes that attention is necessary to bind different visual features into coherent object percepts (Treisman & Gelade, 1980). While considerable evidence supports a role for attention in perceptual feature binding, whether attention plays a similar function in visual working memory (VWM) remains controversial. To test the attentional requirements of VWM feature binding, here we gave participants an attention-demanding multiple object tracking task during the retention interval of a VWM task. Results show that the tracking task disrupted memory for color-shape conjunctions above and beyond any impairment to working memory for object features, and that this impairment was larger when the VWM stimuli were presented at different spatial locations. These results demonstrate that the role of visuospatial attention in feature binding is not unique to perception, but extends to the working memory of these perceptual representations as well. PMID:19609460

  20. The relationship between visual working memory and attention: retention of precise colour information in the absence of effects on perceptual selection

    PubMed Central

    Hollingworth, Andrew; Hwang, Seongmin

    2013-01-01

    We examined the conditions under which a feature value in visual working memory (VWM) recruits visual attention to matching stimuli. Previous work has suggested that VWM supports two qualitatively different states of representation: an active state that interacts with perceptual selection and a passive (or accessory) state that does not. An alternative hypothesis is that VWM supports a single form of representation, with the precision of feature memory controlling whether or not the representation interacts with perceptual selection. The results of three experiments supported the dual-state hypothesis. We established conditions under which participants retained a relatively precise representation of a particular colour. If the colour was immediately task relevant, it reliably recruited attention to matching stimuli. However, if the colour was not immediately task relevant, it failed to interact with perceptual selection. Feature maintenance in VWM is not necessarily equivalent to feature-based attentional selection. PMID:24018723

  1. Feature-based attention to unconscious shapes and colors.

    PubMed

    Schmidt, Filipp; Schmidt, Thomas

    2010-08-01

    Two experiments employed feature-based attention to modulate the impact of completely masked primes on subsequent pointing responses. Participants processed a color cue to select a pair of possible pointing targets out of multiple targets on the basis of their color, and then pointed to the one of those two targets with a prespecified shape. All target pairs were preceded by prime pairs triggering either the correct or the opposite response. The time interval between cue and primes was varied to modulate the time course of feature-based attentional selection. In a second experiment, the roles of color and shape were switched. Pointing trajectories showed large priming effects that were amplified by feature-based attention, indicating that attention modulated the earliest phases of motor output. Priming effects as well as their attentional modulation occurred even though participants remained unable to identify the primes, indicating distinct processes underlying visual awareness, attention, and response control.

  2. Global Enhancement but Local Suppression in Feature-based Attention.

    PubMed

    Forschack, Norman; Andersen, Søren K; Müller, Matthias M

    2017-04-01

    A key property of feature-based attention is global facilitation of the attended feature throughout the visual field. Previously, we presented superimposed red and blue randomly moving dot kinematograms (RDKs), each flickering at a different frequency, to elicit frequency-specific steady-state visual evoked potentials (SSVEPs) that allowed us to analyze neural dynamics in early visual cortex when participants shifted attention to one of the two colors. Results showed amplification of the attended and suppression of the unattended color as measured by SSVEP amplitudes. Here, we tested whether the suppression of the unattended color also operates globally. To this end, we presented superimposed flickering red and blue RDKs in the center of a screen and a red and a blue RDK in the left and right periphery, respectively, also flickering at different frequencies. Participants shifted attention to one color of the superimposed RDKs in the center to discriminate coherent motion events in the attended from the unattended color RDK, whereas the peripheral RDKs were task irrelevant. SSVEP amplitudes elicited by the centrally presented RDKs confirmed the previous findings of amplification and suppression. For the peripherally located RDKs, we found the expected SSVEP amplitude increase relative to precue baseline when their color matched that of the centrally attended RDK. We found no reduction in SSVEP amplitude relative to precue baseline when the peripheral color matched the unattended color of the central RDK, indicating that, while facilitation in feature-based attention operates globally, suppression seems to be linked to the location of focused attention.

  3. Determinants of Global Color-Based Selection in Human Visual Cortex.

    PubMed

    Bartsch, Mandy V; Boehler, Carsten N; Stoppel, Christian M; Merkel, Christian; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Hopf, Jens-Max

    2015-09-01

    Feature attention operates in a spatially global way, with attended feature values being prioritized for selection outside the focus of attention. Accounts of global feature attention have emphasized feature competition as a determining factor. Here, we use magnetoencephalographic recordings in humans to test whether competition is critical for global feature selection to arise. Subjects performed a color/shape discrimination task in one visual field (VF), while irrelevant color probes were presented in the other unattended VF. Global effects of color attention were assessed by analyzing the response to the probe as a function of whether or not the probe's color was a target-defining color. We find that global color selection involves a sequence of modulations in extrastriate cortex, with an initial phase in higher tier areas (lateral occipital complex) followed by a later phase in lower tier retinotopic areas (V3/V4). Importantly, these modulations appeared with and without color competition in the focus of attention. Moreover, early parts of the modulation emerged for a task-relevant color not even present in the focus of attention. All modulations, however, were eliminated during simple onset-detection of the colored target. These results indicate that global color-based attention depends on target discrimination independent of feature competition in the focus of attention. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Visual short-term memory always requires general attention.

    PubMed

    Morey, Candice C; Bieler, Malte

    2013-02-01

    The role of attention in visual memory remains controversial; while some evidence has suggested that memory for binding between features demands no more attention than does memory for the same features, other evidence has indicated cognitive costs or mnemonic benefits for explicitly attending to bindings. We attempted to reconcile these findings by examining how memory for binding, for features, and for features during binding is affected by a concurrent attention-demanding task. We demonstrated that performing a concurrent task impairs memory for as few as two visual objects, regardless of whether each object includes one or more features. We argue that this pattern of results reflects an essential role for domain-general attention in visual memory, regardless of the simplicity of the to-be-remembered stimuli. We then discuss the implications of these findings for theories of visual working memory.

  5. Attention to the Color of a Moving Stimulus Modulates Motion-Signal Processing in Macaque Area MT: Evidence for a Unified Attentional System.

    PubMed

    Katzner, Steffen; Busse, Laura; Treue, Stefan

    2009-01-01

    Directing visual attention to spatial locations or to non-spatial stimulus features can strongly modulate responses of individual cortical sensory neurons. Effects of attention typically vary in magnitude, not only between visual cortical areas but also between individual neurons from the same area. Here, we investigate whether the size of attentional effects depends on the match between the tuning properties of the recorded neuron and the perceptual task at hand. We recorded extracellular responses from individual direction-selective neurons in the middle temporal area (MT) of rhesus monkeys trained to attend either to the color or the motion signal of a moving stimulus. We found that effects of spatial and feature-based attention in MT, which are typically observed in tasks allocating attention to motion, were very similar even when attention was directed to the color of the stimulus. We conclude that attentional modulation can occur in extrastriate cortex, even under conditions without a match between the tuning properties of the recorded neuron and the perceptual task at hand. Our data are consistent with theories of object-based attention describing a transfer of attention from relevant to irrelevant features, within the attended object and across the visual field. These results argue for a unified attentional system that modulates responses to a stimulus across cortical areas, even if a given area is specialized for processing task-irrelevant aspects of that stimulus.

  6. Visual affective classification by combining visual and text features.

    PubMed

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

Affective analysis of images in social networks has drawn much attention, and the text surrounding images has been shown to provide valuable semantic meaning about image content that can hardly be represented by low-level visual features. In this paper, we propose a novel approach to the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) evidence theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features that effectively capture emotional semantics from the short text associated with images, based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos, and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.
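
Dempster-Shafer fusion of the kind described above can be sketched with Dempster's rule of combination. The sketch below is a minimal illustration under assumed inputs: the two mass functions, the class labels, and their numeric values are all invented for demonstration and are not the paper's actual features or frame of discernment.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two belief mass functions with Dempster's rule.

    Each mass function is a dict mapping a frozenset of hypotheses to a
    mass in [0, 1], with masses summing to 1. Masses assigned to
    intersecting hypothesis sets reinforce each other; mass assigned to
    disjoint sets is conflict, which is renormalized away.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Hypothetical example: a visual classifier leans "positive" while a
# text classifier is less certain; EITHER expresses residual ignorance.
POS, NEG = frozenset({"pos"}), frozenset({"neg"})
EITHER = POS | NEG
visual = {POS: 0.7, NEG: 0.2, EITHER: 0.1}
text = {POS: 0.5, NEG: 0.3, EITHER: 0.2}
fused = dempster_combine(visual, text)
```

Because both sources lean toward "positive", the fused mass on POS exceeds either individual source's, which is the agreement-reinforcing behavior that makes D-S theory attractive for combining heterogeneous classifiers.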

  7. Visual affective classification by combining visual and text features

    PubMed Central

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

Affective analysis of images in social networks has drawn much attention, and the text surrounding images has been shown to provide valuable semantic meaning about image content that can hardly be represented by low-level visual features. In this paper, we propose a novel approach to the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) evidence theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features that effectively capture emotional semantics from the short text associated with images, based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos, and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task. PMID:28850566

  8. a Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual Lbp Feature and Visual Attention Model

    NASA Astrophysics Data System (ADS)

    Haigang, Sui; Zhina, Song

    2016-06-01

Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, the problem is very difficult against complex backgrounds such as waves, clouds, and small islands. To address these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically inspired visual features. The model first selects salient candidate regions across large-scale images using a mechanism that combines biologically inspired visual features with a visual attention model with local binary patterns (CVLBP). Unlike traditional approaches, the proposed algorithm is fast and focuses directly on suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detailed signatures using LBP features, consistent with the sparseness of ship distribution across images. These features are then used to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are identified, some false alarms such as microwaves and small ribbon clouds remain, so simple shape and texture analyses are adopted to distinguish ships from non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination, and ship size.
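
The per-chip texture signature mentioned above can be sketched as a plain local-binary-pattern histogram. This is a minimal, pure-Python illustration of the generic 8-neighbour LBP descriptor, not the paper's CVLBP feature: the saliency component and the SVM classification stage are omitted, and the function name is our own.

```python
def lbp_histogram(patch):
    """Normalised 256-bin LBP histogram for a 2-D grayscale patch
    (list of lists of intensities). Each interior pixel gets an 8-bit
    code: one bit per neighbour, set when that neighbour's intensity is
    >= the centre pixel's. The histogram of codes summarises texture.
    """
    h, w = len(patch), len(patch[0])
    # 8 neighbours, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    total = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = patch[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if patch[y + dy][x + dx] >= center:
                    code |= 1 << bit
            hist[code] += 1
            total += 1
    return [v / total for v in hist]

# A perfectly uniform chip: every neighbour ties the centre, so every
# interior pixel produces code 255 and the histogram is a single spike.
flat_chip = [[7] * 5 for _ in range(5)]
h = lbp_histogram(flat_chip)
```

In a pipeline like the one described, each chip's histogram would then be fed to an SVM trained on ship versus non-ship examples.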

  9. Preattentive binding of auditory and visual stimulus features.

    PubMed

    Winkler, István; Czigler, István; Sussman, Elyse; Horváth, János; Balázs, Lászlo

    2005-02-01

We investigated the role of attention in feature binding in the auditory and visual modalities. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and therefore attended, or task-irrelevant and ignored. In the latter case, the primary task was a continuous, demanding within-modality task. The test sequences were composed of two frequently occurring stimuli (standards), which differed from each other in two stimulus features, and two infrequently occurring stimuli (deviants), each of which combined one feature of one standard with the other feature of the other standard. Deviant stimuli elicited MMN responses of similar parameters across the different attentional conditions. These results suggest that the memory representations underlying the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities; (2) there exist preattentive processes of feature binding; however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.

  10. Modality-specificity of Selective Attention Networks

    PubMed Central

    Stewart, Hannah J.; Amitay, Sygal

    2015-01-01

Objective: To establish the modality specificity and generality of selective attention networks. Method: Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. Results: The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled “general attention.” The third component was labeled “auditory attention,” as it contained only measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled “spatial orienting” and “spatial conflict,” respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and the TAiL attend-location task, all of which are based upon spatial judgments (e.g., the direction of a target arrow or a sound's location). Conclusions: These results do not support our a priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selective attention to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting that spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research is required to ascertain whether non-spatial attention is modality specific. PMID:26635709

  11. Visual search for feature and conjunction targets with an attention deficit.

    PubMed

    Arguin, M; Joanette, Y; Cavanagh, P

    1993-01-01

Brain-damaged subjects who had previously been identified as suffering from a visual attention deficit for contralesional stimulation were tested on a series of visual search tasks. The experiments examined the hypothesis that the processing of single features is preattentive but that feature integration, necessary for the correct perception of conjunctions of features, requires attention (Treisman & Gelade, 1980; Treisman & Sato, 1990). Subjects searched for a feature target (orientation or color) or for a conjunction target (orientation and color) in unilateral displays in which the number of items presented was variable. Ocular fixation was controlled so that trials on which eye movements occurred were cancelled. While brain-damaged subjects with a visual attention disorder (VAD subjects) performed similarly to normal controls in feature search tasks, they showed a marked deficit in conjunction search. Specifically, VAD subjects exhibited a substantial reduction of their serial search rates for a conjunction target in contralesional displays. In support of Treisman's feature integration theory, a visual attention deficit leads to a marked impairment in feature integration, whereas it does not appear to affect feature encoding.

  12. Attention to Color Sharpens Neural Population Tuning via Feedback Processing in the Human Visual Cortex Hierarchy.

    PubMed

    Bartsch, Mandy V; Loewe, Kristian; Merkel, Christian; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Tsotsos, John K; Hopf, Jens-Max

    2017-10-25

    Attention can facilitate the selection of elementary object features such as color, orientation, or motion. This is referred to as feature-based attention and it is commonly attributed to a modulation of the gain and tuning of feature-selective units in visual cortex. Although gain mechanisms are well characterized, little is known about the cortical processes underlying the sharpening of feature selectivity. Here, we show with high-resolution magnetoencephalography in human observers (men and women) that sharpened selectivity for a particular color arises from feedback processing in the human visual cortex hierarchy. To assess color selectivity, we analyze the response to a color probe that varies in color distance from an attended color target. We find that attention causes an initial gain enhancement in anterior ventral extrastriate cortex that is coarsely selective for the target color and transitions within ∼100 ms into a sharper tuned profile in more posterior ventral occipital cortex. We conclude that attention sharpens selectivity over time by attenuating the response at lower levels of the cortical hierarchy to color values neighboring the target in color space. These observations support computational models proposing that attention tunes feature selectivity in visual cortex through backward-propagating attenuation of units less tuned to the target. SIGNIFICANCE STATEMENT Whether searching for your car, a particular item of clothing, or just obeying traffic lights, in everyday life, we must select items based on color. But how does attention allow us to select a specific color? Here, we use high spatiotemporal resolution neuromagnetic recordings to examine how color selectivity emerges in the human brain. We find that color selectivity evolves as a coarse to fine process from higher to lower levels within the visual cortex hierarchy. 
Our observations support computational models proposing that feature selectivity increases over time by attenuating the responses of less-selective cells in lower-level brain areas. These data emphasize that color perception involves multiple areas across a hierarchy of regions, interacting with each other in a complex, recursive manner. Copyright © 2017 the authors 0270-6474/17/3710346-12$15.00/0.

  13. Featural and temporal attention selectively enhance task-appropriate representations in human V1

    PubMed Central

    Warren, Scott; Yacoub, Essa; Ghose, Geoffrey

    2015-01-01

Our perceptions are often shaped by focusing attention on specific features or periods of time, irrespective of location. We explore the physiological bases of these non-spatial forms of attention by imaging brain activity while subjects perform a challenging change detection task. The task employs a continuously varying visual stimulus that, at any moment in time, selectively activates functionally distinct subpopulations of primary visual cortex (V1) neurons. When subjects are cued to the timing and nature of the change, the mapping of orientation preference across V1 systematically shifts toward the cued stimulus just prior to its appearance. A simple linear model can explain this shift: attentional changes are selectively targeted toward the neural subpopulations representing the attended feature at the times the feature is anticipated. Our results suggest that featural attention is mediated by a linear change in the responses of task-appropriate neurons across cortex during appropriate periods of time. PMID:25501983

  14. Feature-based attention: it is all bottom-up priming.

    PubMed

    Theeuwes, Jan

    2013-10-19

    Feature-based attention (FBA) enhances the representation of image characteristics throughout the visual field, a mechanism that is particularly useful when searching for a specific stimulus feature. Even though most theories of visual search implicitly or explicitly assume that FBA is under top-down control, we argue that the role of top-down processing in FBA may be limited. Our review of the literature indicates that all behavioural and neuro-imaging studies investigating FBA suffer from the shortcoming that they cannot rule out an effect of priming. The mere attending to a feature enhances the mandatory processing of that feature across the visual field, an effect that is likely to occur in an automatic, bottom-up way. Studies that have investigated the feasibility of FBA by means of cueing paradigms suggest that the role of top-down processing in FBA is limited (e.g. prepare for red). Instead, the actual processing of the stimulus is needed to cause the mandatory tuning of responses throughout the visual field. We conclude that it is likely that all FBA effects reported previously are the result of bottom-up priming.

  15. Feature-based attention: it is all bottom-up priming

    PubMed Central

    Theeuwes, Jan

    2013-01-01

    Feature-based attention (FBA) enhances the representation of image characteristics throughout the visual field, a mechanism that is particularly useful when searching for a specific stimulus feature. Even though most theories of visual search implicitly or explicitly assume that FBA is under top-down control, we argue that the role of top-down processing in FBA may be limited. Our review of the literature indicates that all behavioural and neuro-imaging studies investigating FBA suffer from the shortcoming that they cannot rule out an effect of priming. The mere attending to a feature enhances the mandatory processing of that feature across the visual field, an effect that is likely to occur in an automatic, bottom-up way. Studies that have investigated the feasibility of FBA by means of cueing paradigms suggest that the role of top-down processing in FBA is limited (e.g. prepare for red). Instead, the actual processing of the stimulus is needed to cause the mandatory tuning of responses throughout the visual field. We conclude that it is likely that all FBA effects reported previously are the result of bottom-up priming. PMID:24018717

  16. Feature-selective attention in healthy old age: a selective decline in selective attention?

    PubMed

    Quigley, Cliodhna; Müller, Matthias M

    2014-02-12

    Deficient selection against irrelevant information has been proposed to underlie age-related cognitive decline. We recently reported evidence for maintained early sensory selection when older and younger adults used spatial selective attention to perform a challenging task. Here we explored age-related differences when spatial selection is not possible and feature-selective attention must be deployed. We additionally compared the integrity of feedforward processing by exploiting the well established phenomenon of suppression of visual cortical responses attributable to interstimulus competition. Electroencephalogram was measured while older and younger human adults responded to brief occurrences of coherent motion in an attended stimulus composed of randomly moving, orientation-defined, flickering bars. Attention was directed to horizontal or vertical bars by a pretrial cue, after which two orthogonally oriented, overlapping stimuli or a single stimulus were presented. Horizontal and vertical bars flickered at different frequencies and thereby elicited separable steady-state visual-evoked potentials, which were used to examine the effect of feature-based selection and the competitive influence of a second stimulus on ongoing visual processing. Age differences were found in feature-selective attentional modulation of visual responses: older adults did not show consistent modulation of magnitude or phase. In contrast, the suppressive effect of a second stimulus was robust and comparable in magnitude across age groups, suggesting that bottom-up processing of the current stimuli is essentially unchanged in healthy old age. Thus, it seems that visual processing per se is unchanged, but top-down attentional control is compromised in older adults when space cannot be used to guide selection.

  17. Is attentional prioritisation of infant faces unique in humans?: Comparative demonstrations by modified dot-probe task in monkeys.

    PubMed

    Koda, Hiroki; Sato, Anna; Kato, Akemi

    2013-09-01

Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations with human participants have shown visual attentional prioritisation of newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on the dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in the subject monkeys, our results, unlike those for humans, showed no evidence of attentional prioritisation of newborn faces by monkeys. Our demonstrations establish the validity of the dot-probe task for visual attention studies in monkeys and suggest a novel approach to bridging the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not common to macaques, although it remains unclear whether nursing experience influences their perception and recognition of infantile appraisal stimuli. Additional comparative studies are needed to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Retrospective cues based on object features improve visual working memory performance in older adults.

    PubMed

    Gilchrist, Amanda L; Duarte, Audrey; Verhaeghen, Paul

    2016-01-01

    Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were presented either with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an uninformative, neutral cue. Although older adults were less accurate overall, both age groups benefited from the presentation of an informative, feature-based cue relative to a neutral cue. Surprisingly, we also observed differences in the effectiveness of shape versus color cues and their effects upon post-cue memory load. These results suggest that older adults can use top-down attention to remove irrelevant items from visual working memory, provided that task-relevant features function as cues.

  19. Classification of visual and linguistic tasks using eye-movement features.

    PubMed

    Coco, Moreno I; Keller, Frank

    2014-03-07

The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrate that eye-movement responses make it possible to characterize the goals of these tasks. Then, we train three different types of classifiers and predict the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
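
As a toy illustration of classifying tasks from eye-movement features, the sketch below uses a nearest-centroid classifier on two hypothetical features (initiation time and mean fixation duration). All numbers are invented for demonstration, and the classifier is far simpler than those trained in the study.

```python
import math

# Hypothetical per-trial feature vectors:
# (initiation time in ms, mean fixation duration in ms).
TRAIN = {
    "visual search":     [(180, 210), (190, 230), (175, 205)],
    "object naming":     [(420, 260), (450, 280), (430, 270)],
    "scene description": [(900, 310), (950, 330), (880, 300)],
}

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature tuples."""
    n = len(vectors)
    dims = len(vectors[0])
    return tuple(sum(v[d] for v in vectors) / n for d in range(dims))

def classify(sample, train):
    """Assign the task whose training centroid is nearest in
    Euclidean distance to the sample's feature vector."""
    cents = {task: centroid(vs) for task, vs in train.items()}
    return min(cents, key=lambda task: math.dist(sample, cents[task]))

predicted = classify((185, 215), TRAIN)
```

Note how a single discriminative feature such as initiation time already separates the invented classes here, mirroring the study's finding that one feature can support above-chance classification.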

  20. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    PubMed

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
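
The spatial-saliency step described above, feature contrast between patches weighted by spatial proximity, can be sketched as a generic center-surround contrast computation. This is an illustration under assumed inputs (per-patch feature vectors and normalized 2-D positions), not the paper's full model: the DCT feature extraction, temporal saliency, and Gestalt-based uncertainty weighting are all omitted, and `sigma` is an arbitrary choice.

```python
import math

def spatial_saliency(features, positions, sigma=0.25):
    """Saliency of each patch as its summed feature difference to all
    other patches, down-weighted by a Gaussian of spatial distance, so
    contrast with nearby patches counts most. Output is normalized so
    the most salient patch scores 1.0.
    """
    n = len(features)
    sal = []
    for i in range(n):
        total = 0.0
        for j in range(n):
            if i == j:
                continue
            fdiff = sum(abs(a - b) for a, b in zip(features[i], features[j]))
            d = math.dist(positions[i], positions[j])
            total += fdiff * math.exp(-d * d / (2 * sigma * sigma))
        sal.append(total)
    peak = max(sal)
    return [v / peak for v in sal] if peak > 0 else sal

# Four patches at the corners of a unit square; patch 2 has a
# distinctive feature value and should pop out as the most salient.
features = [(0.0,), (0.0,), (1.0,), (0.0,)]
positions = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
sal = spatial_saliency(features, positions)
```

In a full model, per-feature maps of this kind (luminance, color, texture, depth) would be fused with learned or heuristic weights to yield the final saliency map.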

  1. Visual feature integration with an attention deficit.

    PubMed

    Arguin, M; Cavanagh, P; Joanette, Y

    1994-01-01

    Treisman's feature integration theory proposes that the perception of illusory conjunctions of correctly encoded visual features is due to the failure of an attentional process. This hypothesis was examined by studying brain-damaged subjects who had previously been shown to have difficulty in attending to contralesional stimulation. These subjects exhibited a massive feature integration deficit for contralesional stimulation relative to ipsilesional displays. In contrast, both normal age-matched controls and brain-damaged subjects who did not exhibit any evidence of an attention deficit showed comparable feature integration performance with left- and right-hemifield stimulation. These observations indicate the crucial function of attention for visual feature integration in normal perception.

  2. Category-based attentional guidance can operate in parallel for multiple target objects.

    PubMed

    Jenkins, Michael; Grubert, Anna; Eimer, Martin

    2018-05-01

    The question whether the control of attention during visual search is always feature-based or can also be based on the category of objects remains unresolved. Here, we employed the N2pc component as an on-line marker for target selection processes to compare the efficiency of feature-based and category-based attentional guidance. Two successive displays containing pairs of real-world objects (line drawings of kitchen or clothing items) were separated by a 10 ms SOA. In Experiment 1, target objects were defined by their category. In Experiment 2, one specific visual object served as target (exemplar-based search). On different trials, targets appeared either in one or in both displays, and participants had to report the number of targets (one or two). Target N2pc components were larger and emerged earlier during exemplar-based search than during category-based search, demonstrating the superior efficiency of feature-based attentional guidance. On trials where target objects appeared in both displays, both targets elicited N2pc components that overlapped in time, suggesting that attention was allocated in parallel to these target objects. Critically, this was the case not only in the exemplar-based task, but also when targets were defined by their category. These results demonstrate that attention can be guided by object categories, and that this type of category-based attentional control can operate concurrently for multiple target objects. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. The Theory-based Influence of Map Features on Risk Beliefs: Self-reports of What is Seen and Understood for Maps Depicting an Environmental Health Hazard

    PubMed Central

    Vatovec, Christine

    2013-01-01

    Theory-based research is needed to understand how maps of environmental health risk information influence risk beliefs and protective behavior. Using theoretical concepts from multiple fields of study including visual cognition, semiotics, health behavior, and learning and memory supports a comprehensive assessment of this influence. We report results from thirteen cognitive interviews that provide theory-based insights into how visual features influenced what participants saw and the meaning of what they saw as they viewed three formats of water test results for private wells (choropleth map, dot map, and a table). The unit of perception, color, proximity to hazards, geographic distribution, and visual salience had substantial influences on what participants saw and their resulting risk beliefs. These influences are explained by theoretical factors that shape what is seen, properties of features that shape cognition (pre-attentive, symbolic, visual salience), information processing (top-down and bottom-up), and the strength of concrete compared to abstract information. Personal relevance guided top-down attention to proximal and larger hazards that shaped stronger risk beliefs. Meaning was more local for small perceptual units and global for large units. Three aspects of color were important: pre-attentive “incremental risk” meaning of sequential shading, symbolic safety meaning of stoplight colors, and visual salience that drew attention. The lack of imagery, geographic information, and color diminished interest in table information. Numeracy and prior beliefs influenced comprehension for some participants. Results guided the creation of an integrated conceptual framework for application to future studies. Ethics should guide the selection of map features that support appropriate communication goals. PMID:22715919

  4. The effect of category learning on attentional modulation of visual cortex.

    PubMed

    Folstein, Jonathan R; Fuller, Kelly; Howard, Dorothy; DePatie, Thomas

    2017-09-01

    Learning about visual object categories causes changes in the way we perceive those objects. One likely mechanism by which this occurs is the application of attention to potentially relevant objects. Here we test the hypothesis that category membership influences the allocation of attention, allowing attention to be applied not only to object features but to entire categories. Participants briefly learned to categorize a set of novel cartoon animals, after which EEG was recorded while participants distinguished between a target and a non-target category. A second identical EEG session was conducted after two sessions of categorization practice. The category structure and task design allowed parametric manipulation of the number of target features while holding feature frequency and category membership constant. We found no evidence that category membership influenced attentional selection: a postero-lateral negative component, labeled the selection negativity/N250, increased over time and was sensitive to the number of target features, not target categories. In contrast, the right hemisphere N170 was not sensitive to target features. The P300 appeared sensitive to category in the first session, but showed a graded sensitivity to the number of target features in the second session, possibly suggesting a transition from rule-based to similarity-based categorization. Copyright © 2017. Published by Elsevier Ltd.

  5. The guidance of visual search by shape features and shape configurations.

    PubMed

    McCants, Cody W; Berggren, Nick; Eimer, Martin

    2018-03-01

    Representations of target features (attentional templates) guide attentional object selection during visual search. In many search tasks, target objects are defined not by a single feature but by the spatial configuration of their component shapes. We used electrophysiological markers of attentional selection processes to determine whether the guidance of shape configuration search is entirely part-based or sensitive to the spatial relationship between shape features. Participants searched for targets defined by the spatial arrangement of two shape components (e.g., hourglass above circle). N2pc components were triggered not only by targets but also by partially matching distractors with one target shape (e.g., hourglass above hexagon) and by distractors that contained both target shapes in the reverse arrangement (e.g., circle above hourglass), in line with part-based attentional control. Target N2pc components were delayed when a reverse distractor was present on the opposite side of the same display, suggesting that early shape-specific attentional guidance processes could not distinguish between targets and reverse distractors. The control of attention then became sensitive to spatial configuration, which resulted in a stronger attentional bias for target objects relative to reverse and partially matching distractors. Results demonstrate that search for target objects defined by the spatial arrangement of their component shapes is initially controlled in a feature-based fashion but can later be guided by templates for spatial configurations. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Parallel perceptual enhancement and hierarchic relevance evaluation in an audio-visual conjunction task.

    PubMed

    Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E

    2008-10-21

    Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have examined conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known whether the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not the auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e., conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality.
In contrast, an FSP was present when either the visual feature alone or both the auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically, with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality: through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.

  7. Visual attention based bag-of-words model for image classification

    NASA Astrophysics Data System (ADS)

    Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che

    2014-04-01

    Bag-of-words is a classical method for image classification. The core problems are how to count the frequencies of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model utilizes a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the counting of visual-word frequencies. In addition, the VABOW model combines shape, color, and texture cues and uses L1-regularized logistic regression to select the most relevant and most efficient features. We compare our approach with a traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms state-of-the-art methods for image classification.
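The saliency-weighted counting step described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the pixel-level word-assignment input are hypothetical (real bag-of-words pipelines typically assign words to local descriptors such as SIFT patches rather than to single pixels).

```python
import numpy as np

def weighted_bow_histogram(word_map, saliency, n_words):
    """Accumulate a bag-of-words histogram in which each pixel's
    visual-word vote is weighted by its saliency value, so that
    salient regions contribute more to the image representation."""
    # Normalize saliency so the histogram sums to 1.
    sal = saliency / (saliency.sum() + 1e-12)
    hist = np.zeros(n_words)
    for w in range(n_words):
        hist[w] = sal[word_map == w].sum()
    return hist
```

With a uniform saliency map this reduces to the ordinary (normalized) word-frequency histogram; a peaked saliency map shifts mass toward the words found in attended regions.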

  8. Behavioral model of visual perception and recognition

    NASA Astrophysics Data System (ADS)

    Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.

    1993-09-01

    In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separated processing of `what' (object features) and `where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using `where' information; (3) representation of `what' information in an object-based frame of reference (OFR). However, most recent models of vision based on OFR have demonstrated invariant recognition of only simple objects like letters or binary objects without background, i.e., objects to which a frame of reference is easily attached. In contrast, we use not OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This has provided our model with the ability to represent complex objects in gray-level images invariantly, but demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision which extracts a set of primary features (edges) in each fixation, and a high-level subsystem consisting of `what' (Sensory Memory) and `where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory.
Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention, and successive verification of the expected sets of features (stored in Sensory Memory). The model demonstrates recognition of complex objects (such as faces) in gray-level images, invariant with respect to shift, rotation, and scale.

  9. Collinearity Impairs Local Element Visual Search

    ERIC Educational Resources Information Center

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  10. Illusory conjunctions in visual short-term memory: Individual differences in corpus callosum connectivity and splitting attention between the two hemifields.

    PubMed

    Qin, Shuo; Ray, Nicholas R; Ramakrishnan, Nithya; Nashiro, Kaoru; O'Connell, Margaret A; Basak, Chandramallika

    2016-11-01

    Overloading the capacity of visual attention can result in mistakenly combining the various features of an object, that is, illusory conjunctions. We hypothesize that if the two hemispheres separately process visual information by splitting attention, the connectivity of the corpus callosum, a brain structure integrating the two hemispheres, would predict the degree of illusory conjunctions. In the current study, we assessed two types of illusory conjunctions using a memory-scanning paradigm; the features were presented either across the two opposite hemifields or within the same hemifield. Four objects, each with two visual features, were briefly presented together, followed by probe recognition and a confidence rating for recognition accuracy. MRI scans were also obtained. Results indicated that successful recollection during probe recognition was better for across-hemifield conjunctions than for within-hemifield conjunctions, lending support to the bilateral advantage of the two hemispheres in visual short-term memory. Age-related differences in the underlying mechanisms of the bilateral advantage indicated greater reliance on recollection-based processing in younger adults and on familiarity-based processing in older adults. Moreover, the integrity of the posterior corpus callosum was more predictive of opposite-hemifield illusory conjunctions than of within-hemifield illusory conjunctions, even after controlling for age. That is, individuals with lesser posterior corpus callosum connectivity had better recognition for objects when their features were recombined from the opposite hemifields than from the same hemifield. This study is the first to investigate the role of the corpus callosum in splitting attention between versus within hemifields. © 2016 Society for Psychophysiological Research.

  11. The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search

    ERIC Educational Resources Information Center

    Becker, Stefanie I.

    2010-01-01

    Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…

  12. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    NASA Astrophysics Data System (ADS)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

    Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, in which color information is not sufficiently considered. Actually, color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows a parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than the state-of-the-art SIQA methods.
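As a rough illustration of the factorization underlying the feature detector, here is plain NMF via Lee-Seung multiplicative updates. This is a minimal stand-in sketch: the paper's actual detector adds manifold regularization and color channels, which are omitted here.

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Factor a nonnegative matrix V (n x m) into W (n x rank) and
    H (rank x m) with V ~ W @ H, using multiplicative updates that
    keep both factors nonnegative (yielding a parts-based basis)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(n_iter):
        # Multiplicative updates minimizing the Frobenius error.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

The columns of W act as parts-like basis features; the manifold-regularized variant would add a graph Laplacian penalty on H to preserve local geometry of the training data.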

  13. Fast and robust generation of feature maps for region-based visual attention.

    PubMed

    Aziz, Muhammad Zaheer; Mertsching, Bärbel

    2008-05-01

    Visual attention is one of the important phenomena in biological vision which artificial vision systems can follow to achieve greater efficiency, intelligence, and robustness. This paper investigates a region-based approach that performs pixel clustering prior to the processes of attention, in contrast to the late clustering done by contemporary methods. The foundation steps of feature map construction for the region-based attention model are proposed here. The color contrast map is generated based upon extended findings from color theory, the symmetry map is constructed using a novel scanning-based method, and a new algorithm is proposed to compute a size contrast map as a formal feature channel. Eccentricity and orientation are computed using the moments of the obtained regions, and then saliency is evaluated using rarity criteria. The efficient design of the proposed algorithms allows incorporating five feature channels while maintaining a processing rate of multiple frames per second. Another salient advantage over existing techniques is the reusability of the salient regions in high-level machine vision procedures, due to the preservation of their shapes and precise locations. The results indicate that the proposed model has the potential to efficiently integrate the phenomenon of attention into the mainstream of machine vision, and that systems with restricted computing resources, such as mobile robots, can benefit from its advantages.
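The rarity criterion mentioned in the abstract can be illustrated with a minimal sketch: a region's feature value is salient in proportion to how rarely it occurs among the segmented regions. The function and the quantized per-region feature input are illustrative assumptions, not the paper's exact formulation.

```python
from collections import Counter

def rarity_saliency(features):
    """Given one quantized feature value per region, score each
    region by the rarity of its value: values shared by many
    regions get low saliency, unique values get high saliency."""
    counts = Counter(features)
    n = len(features)
    return [1.0 - counts[f] / n for f in features]
```

In the full model such per-channel rarity scores (for color contrast, symmetry, size contrast, eccentricity, and orientation) would be combined into a single saliency value per region.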

  14. Is Posner's "beam" the same as Treisman's "glue"?: On the relation between visual orienting and feature integration theory.

    PubMed

    Briand, K A; Klein, R M

    1987-05-01

    In the present study we investigated whether the visually allocated "beam" studied by Posner and others is the same visual attentional resource that performs the role of feature integration in Treisman's model. Subjects were cued to attend to a certain spatial location by a visual cue, and performance at expected and unexpected stimulus locations was compared. Subjects searched for a target letter (R) with distractor letters that either could give rise to illusory conjunctions (PQ) or could not (PB). Results from three separate experiments showed that orienting attention in response to central cues (endogenous orienting) showed similar effects for both conjunction and feature search. However, when attention was oriented with peripheral visual cues (exogenous orienting), conjunction search showed larger effects of attention than did feature search. It is suggested that the attentional systems that are oriented in response to central and peripheral cues may not be the same and that only the latter performs a role in feature integration. Possibilities for future research are discussed.

  15. Development of a Computerized Visual Search Test

    ERIC Educational Resources Information Center

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

    Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed…

  16. Ventromedial Frontal Cortex Is Critical for Guiding Attention to Reward-Predictive Visual Features in Humans.

    PubMed

    Vaidya, Avinash R; Fellows, Lesley K

    2015-09-16

    Adaptively interacting with our environment requires extracting information that will allow us to successfully predict reward. This can be a challenge, particularly when there are many candidate cues, and when rewards are probabilistic. Recent work has demonstrated that visual attention is allocated to stimulus features that have been associated with reward on previous trials. The ventromedial frontal lobe (VMF) has been implicated in learning in dynamic environments of this kind, but the mechanism by which this region influences this process is not clear. Here, we hypothesized that the VMF plays a critical role in guiding attention to reward-predictive stimulus features based on feedback. We tested the effects of VMF damage in human subjects on a visual search task in which subjects were primed to attend to task-irrelevant colors associated with different levels of reward, incidental to the search task. Consistent with previous work, we found that distractors had a greater influence on reaction time when they appeared in colors associated with high reward in the previous trial compared with colors associated with low reward in healthy control subjects and patients with prefrontal damage sparing the VMF. However, this reward modulation of attentional priming was absent in patients with VMF damage. Thus, an intact VMF is necessary for directing attention based on experience with cue-reward associations. We suggest that this region plays a role in selecting reward-predictive cues to facilitate future learning. There has been a swell of interest recently in the ventromedial frontal cortex (VMF), a brain region critical to associative learning. However, the underlying mechanism by which this region guides learning is not well understood. Here, we tested the effects of damage to this region in humans on a task in which rewards were linked incidentally to visual features, resulting in trial-by-trial attentional priming. 
Controls and subjects with prefrontal damage sparing the VMF showed normal reward priming, but VMF-damaged patients did not. This work sheds light on a potential mechanism through which this region influences behavior. We suggest that the VMF is necessary for directing attention to reward-predictive visual features based on feedback, facilitating future learning and decision-making. Copyright © 2015 the authors 0270-6474/15/3512813-11$15.00/0.

  17. Top-down attention based on object representation and incremental memory for knowledge building and inference.

    PubMed

    Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho

    2013-10-01

    Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task-specific top-down attention model to locate a target object based on its form and color representation, along with a bottom-up saliency based on the relativity of primitive visual features and some memory modules. In the proposed model, top-down bias signals corresponding to the target form and color features are generated, which draw preferential attention to the desired object by the proposed selective attention model in concert with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color- and form-biased attention: one is to incrementally learn and memorize the color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference, which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Binding in visual working memory: the role of the episodic buffer.

    PubMed

    Baddeley, Alan D; Allen, Richard J; Hitch, Graham J

    2011-05-01

    The episodic buffer component of working memory is assumed to play a central role in the binding of features into objects, a process that was initially assumed to depend upon executive resources. Here, we review a program of work in which we specifically tested this assumption by studying the effects of a range of attentionally demanding concurrent tasks on the capacity to encode and retain both individual features and bound objects. We found no differential effect of concurrent load, even when the process of binding was made more demanding by separating the shape and color features spatially, temporally, or across visual and auditory modalities. Bound features were, however, more readily disrupted by subsequent stimuli, a process we studied using a suffix paradigm. This suggested a need to assume a feature-based attentional filter followed by an object-based storage process. Our results are interpreted within a modified version of the multicomponent working memory model. We also discuss work examining the role of the hippocampus in visual feature binding. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Associative learning in baboons (Papio papio) and humans (Homo sapiens): species differences in learned attention to visual features.

    PubMed

    Fagot, J; Kruschke, J K; Dépy, D; Vauclair, J

    1998-10-01

    We examined attention shifting in baboons and humans during the learning of visual categories. Within a conditional matching-to-sample task, participants of the two species sequentially learned two two-feature categories which shared a common feature. Results showed that humans encoded both features of the initially learned category, but predominantly only the distinctive feature of the subsequently learned category. Although baboons initially encoded both features of the first category, they ultimately retained only the distinctive features of each category. Empirical data from the two species were analyzed with Kruschke's (1996) ADIT connectionist model. ADIT fits the baboon data when the attentional shift rate is zero, and the human data when the attentional shift rate is nonzero. These empirical and modeling results suggest species differences in learned attention to visual features.

  20. Internal attention to features in visual short-term memory guides object learning

    PubMed Central

    Fan, Judith E.; Turk-Browne, Nicholas B.

    2013-01-01

    Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. PMID:23954925

  1. Internal attention to features in visual short-term memory guides object learning.

    PubMed

    Fan, Judith E; Turk-Browne, Nicholas B

    2013-11-01

    Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    PubMed

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al., Journal of Neuroscience, 20(17), 6594-6611, 2000; Qiu et al., Nature Neuroscience, 10(11), 1492-1499, 2007; Chen et al., Neuron, 82(3), 682-694, 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.

  3. Attention Determines Contextual Enhancement versus Suppression in Human Primary Visual Cortex.

    PubMed

    Flevaris, Anastasia V; Murray, Scott O

    2015-09-02

    Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli, but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus, the exact same contextual stimulus arrangement can have completely opposite effects on neural responses, in some cases leading to orientation-tuned suppression and in other cases to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depend on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in the primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and the underlying neural mechanisms are debated. Here we show that the response in V1 to a stimulus in the same context can be either suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well-established properties of visual cortical responses: response normalization and feature-based enhancement. Copyright © 2015 the authors 0270-6474/15/3512273-08$15.00/0.
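The combination of feature-based attentional gain with response normalization described in this abstract follows the general form of normalization models of attention, in which an attentional gain multiplies each unit's stimulus drive before divisive normalization by the pooled drive of the population. The following one-dimensional sketch illustrates that general form under stated assumptions; it is not the authors' fitted model, and the function and parameter names are illustrative.

```python
import numpy as np

def normalized_response(drive, attn_gain, sigma=1.0):
    """Attention-weighted divisive normalization: each unit's drive
    is scaled by its attentional gain, then divided by the summed
    (attention-weighted) drive of the population plus a constant."""
    excitatory = drive * attn_gain          # feature-based gain
    suppressive = excitatory.sum()          # normalization pool
    return excitatory / (suppressive + sigma)
```

Because the attended unit boosts both the numerator and the shared normalization pool, shifting attention can turn net enhancement of one unit into net suppression of its neighbors, capturing how the same stimulus arrangement yields opposite contextual effects.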

  4. Interaction between object-based attention and pertinence values shapes the attentional priority map of a multielement display.

    PubMed

    Gillebert, Celine R; Petersen, Anders; Van Meel, Chayenne; Müller, Tanja; McIntyre, Alexandra; Wagemans, Johan; Humphreys, Glyn W

    2016-06-01

    Previous studies have shown that the perceptual organization of the visual scene constrains the deployment of attention. Here we investigated how the organization of multiple elements into larger configurations alters their attentional weight, depending on the "pertinence" or behavioral importance of the elements' features. We assessed object-based effects on distinct aspects of the attentional priority map: top-down control, reflecting the tendency to encode targets rather than distracters, and the spatial distribution of attention weights across the visual scene, reflecting the tendency to report elements belonging to the same rather than different objects. In 2 experiments participants had to report the letters in briefly presented displays containing 8 letters and digits, in which pairs of characters could be connected with a line. Quantitative estimates of top-down control were obtained using Bundesen's Theory of Visual Attention (1990). The spatial distribution of attention weights was assessed using the "paired response index" (PRI), indicating responses for within-object pairs of letters. In Experiment 1, grouping along the task-relevant dimension (targets with targets and distracters with distracters) increased top-down control and enhanced the PRI; in contrast, task-irrelevant grouping (targets with distracters) did not affect performance. In Experiment 2, we disentangled the effect of target-target and distracter-distracter grouping: Pairwise grouping of distracters enhanced top-down control whereas pairwise grouping of targets changed the PRI. We conclude that object-based perceptual representations interact with pertinence values (of the elements' features and location) in the computation of attention weights, thereby creating a widespread pattern of attentional facilitation across the visual scene. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
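In Bundesen's Theory of Visual Attention, referenced in this abstract, the attentional weight of object x is w_x = sum_j eta(x, j) * pi_j, where eta(x, j) is the sensory evidence that x has feature j and pi_j is that feature's pertinence. A minimal sketch of that weight computation follows (the variable names are illustrative, and the full theory additionally models race-based encoding rates, which are omitted here):

```python
def tva_weights(eta, pertinence):
    """Compute TVA attentional weights: for each object (row of eta),
    sum the sensory evidence for each feature scaled by that
    feature's pertinence (behavioral importance)."""
    return [sum(e * p for e, p in zip(row, pertinence))
            for row in eta]
```

Objects with high-pertinence features thus receive larger weights and win a greater share of encoding capacity, which is the quantity the paired-response analyses in this study probe.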

  5. Feature-Selective Attentional Modulations in Human Frontoparietal Cortex.

    PubMed

    Ester, Edward F; Sutterer, David W; Serences, John T; Awh, Edward

    2016-08-03

    Control over visual selection has long been framed in terms of a dichotomy between "source" and "site," where top-down feedback signals originating in frontoparietal cortical areas modulate or bias sensory processing in posterior visual areas. This distinction is motivated in part by observations that frontoparietal cortical areas encode task-level variables (e.g., what stimulus is currently relevant or what motor outputs are appropriate), while posterior sensory areas encode continuous or analog feature representations. Here, we present evidence that challenges this distinction. We used fMRI, a roving searchlight analysis, and an inverted encoding model to examine representations of an elementary feature property (orientation) across the entire human cortical sheet while participants attended either the orientation or luminance of a peripheral grating. Orientation-selective representations were present in a multitude of visual, parietal, and prefrontal cortical areas, including portions of the medial occipital cortex, the lateral parietal cortex, and the superior precentral sulcus (thought to contain the human homolog of the macaque frontal eye fields). Additionally, representations in many-but not all-of these regions were stronger when participants were instructed to attend orientation relative to luminance. Collectively, these findings challenge models that posit a strict segregation between sources and sites of attentional control on the basis of representational properties by demonstrating that simple feature values are encoded by cortical regions throughout the visual processing hierarchy, and that representations in many of these areas are modulated by attention. Influential models of visual attention posit a distinction between top-down control and bottom-up sensory processing networks. 
These models are motivated in part by demonstrations showing that frontoparietal cortical areas associated with top-down control represent abstract or categorical stimulus information, while visual areas encode parametric feature information. Here, we show that multivariate activity in human visual, parietal, and frontal cortical areas encodes representations of a simple feature property (orientation). Moreover, representations in several (though not all) of these areas were modulated by feature-based attention in a similar fashion. These results provide an important challenge to models that posit dissociable top-down control and sensory processing networks on the basis of representational properties. Copyright © 2016 the authors.
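
The inverted encoding model named in this record is, in the general form popularized by Brouwer and Heeger, a linear two-step procedure: voxel responses are modeled as weighted sums of idealized orientation channels, channel weights are estimated from training data, and the weight matrix is inverted to recover channel responses on held-out data. A minimal sketch under assumed names (rectified-cosine basis, plain least squares), not the authors' exact pipeline:

```python
import numpy as np

def make_basis(orientations_deg, n_channels=6):
    """Idealized orientation channels: rectified cosines (180-deg period) raised to a power."""
    centers = np.arange(n_channels) * (180.0 / n_channels)
    theta = np.deg2rad(np.asarray(orientations_deg, dtype=float))[:, None]  # trials x 1
    mu = np.deg2rad(centers)[None, :]                                       # 1 x channels
    resp = np.cos(2.0 * (theta - mu))
    return np.maximum(resp, 0.0) ** 5                                       # trials x channels

def fit_iem(B_train, C_train):
    """Estimate voxel-by-channel weights W such that B is approximately W @ C.T."""
    # B_train: voxels x trials, C_train: trials x channels
    W_T, *_ = np.linalg.lstsq(C_train, B_train.T, rcond=None)
    return W_T.T                                                            # voxels x channels

def invert_iem(W, B_test):
    """Invert the encoding model to recover channel responses on held-out trials."""
    C_hat, *_ = np.linalg.lstsq(W, B_test, rcond=None)
    return C_hat.T                                                          # trials x channels
```

Attentional modulation is then assessed by comparing the amplitude of the reconstructed channel-response profile between attend-orientation and attend-luminance conditions.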

  6. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

Most previous studies on visual saliency focused on two-dimensional (2D) scenes. With the rapid growth of three-dimensional (3D) video applications, it is highly desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy, specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.
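
The learning step described here can be illustrated in miniature: derive a center-surround contrast map from depth, stack it with other feature maps, and fit per-feature weights against a fixation map. A toy sketch with assumed names, using plain least squares rather than the authors' actual learning machinery:

```python
import numpy as np

def depth_contrast(depth, k=3):
    """Center-surround contrast on a depth map (box-blur surround; illustrative only)."""
    pad = k // 2
    padded = np.pad(depth, pad, mode='edge')
    h, w = depth.shape
    surround = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            surround[i, j] = padded[i:i + k, j:j + k].mean()
    return np.abs(depth - surround)

def fit_saliency(feature_maps, fixation_map):
    """Learn one weight per feature map by regressing onto the fixation map."""
    X = np.stack([f.ravel() for f in feature_maps], axis=1)   # pixels x features
    w, *_ = np.linalg.lstsq(X, fixation_map.ravel(), rcond=None)
    return w

def predict_saliency(feature_maps, w):
    """Weighted combination of feature maps -> predicted saliency map."""
    X = np.stack([f.ravel() for f in feature_maps], axis=1)
    return (X @ w).reshape(feature_maps[0].shape)
```

In this scheme, appending a map like `depth_contrast(depth)` to the 2D feature stack is the kind of extension the abstract reports as improving predictions for close-up objects on cluttered backgrounds.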

  7. Color-Change Detection Activity in the Primate Superior Colliculus.

    PubMed

    Herman, James P; Krauzlis, Richard J

    2017-01-01

    The primate superior colliculus (SC) is a midbrain structure that participates in the control of spatial attention. Previous studies examining the role of the SC in attention have mostly used luminance-based visual features (e.g., motion, contrast) as the stimuli and saccadic eye movements as the behavioral response, both of which are known to modulate the activity of SC neurons. To explore the limits of the SC's involvement in the control of spatial attention, we recorded SC neuronal activity during a task using color, a visual feature dimension not traditionally associated with the SC, and required monkeys to detect threshold-level changes in the saturation of a cued stimulus by releasing a joystick during maintained fixation. Using this color-based spatial attention task, we found substantial cue-related modulation in all categories of visually responsive neurons in the intermediate layers of the SC. Notably, near-threshold changes in color saturation, both increases and decreases, evoked phasic bursts of activity with magnitudes as large as those evoked by stimulus onset. This change-detection activity had two distinctive features: activity for hits was larger than for misses, and the timing of change-detection activity accounted for 67% of joystick release latency, even though it preceded the release by at least 200 ms. We conclude that during attention tasks, SC activity denotes the behavioral relevance of the stimulus regardless of feature dimension and that phasic event-related SC activity is suitable to guide the selection of manual responses as well as saccadic eye movements.

  8. Object-based selection from spatially-invariant representations: evidence from a feature-report task.

    PubMed

    Matsukura, Michi; Vecera, Shaun P

    2011-02-01

    Attention selects objects as well as locations. When attention selects an object's features, observers identify two features from a single object more accurately than two features from two different objects (object-based effect of attention; e.g., Duncan, Journal of Experimental Psychology: General, 113, 501-517, 1984). Several studies have demonstrated that object-based attention can operate at a late visual processing stage that is independent of objects' spatial information (Awh, Dhaliwal, Christensen, & Matsukura, Psychological Science, 12, 329-334, 2001; Matsukura & Vecera, Psychonomic Bulletin & Review, 16, 529-536, 2009; Vecera, Journal of Experimental Psychology: General, 126, 14-18, 1997; Vecera & Farah, Journal of Experimental Psychology: General, 123, 146-160, 1994). In the present study, we asked two questions regarding this late object-based selection mechanism. In Part I, we investigated how observers' foreknowledge of to-be-reported features allows attention to select objects, as opposed to individual features. Using a feature-report task, a significant object-based effect was observed when to-be-reported features were known in advance but not when this advance knowledge was absent. In Part II, we examined what drives attention to select objects rather than individual features in the absence of observers' foreknowledge of to-be-reported features. Results suggested that, when there was no opportunity for observers to direct their attention to objects that possess to-be-reported features at the time of stimulus presentation, these stimuli must retain strong perceptual cues to establish themselves as separate objects.

  9. Context and competition in the capture of visual attention.

    PubMed

    Hickey, Clayton; Theeuwes, Jan

    2011-10-01

    Competition-based models of visual attention propose that perceptual ambiguity is resolved through inhibition, which is stronger when objects share a greater number of neural receptive fields (RFs). According to this theory, the misallocation of attention to a salient distractor--that is, the capture of attention--can be indexed in RF-scaled interference costs. We used this pattern to investigate distractor-related costs in visual search across several manipulations of temporal context. Distractor costs are generally larger under circumstances in which the distractor can be defined by features that have recently characterised the target, suggesting that capture occurs in these trials. However, our results show that search for a target in the presence of a salient distractor also produces RF-scaled costs when the features defining the target and distractor do not vary from trial to trial. Contextual differences in distractor costs appear to reflect something other than capture, perhaps a qualitative difference in the type of attentional mechanism deployed to the distractor.

  10. Is that a belt or a snake? object attentional selection affects the early stages of visual sensory processing

    PubMed Central

    2012-01-01

    Background There is at present growing empirical evidence, deriving from different lines of ERP research, that, contrary to earlier observations, the earliest sensory visual response, known as the C1 component or P/N80, generated within the striate cortex, might be modulated by selective attention to visual stimulus features. Up to now, evidence of this modulation has been related to spatial location and to simple features such as spatial frequency, luminance, and texture. Additionally, neurophysiological conditions, such as emotion, vigilance, the reflexive or voluntary nature of input attentional selection, and workload have also been related to C1 modulations, although at least the workload status has received controversial indications. No such information is available, at present, for the attentional selection of objects. Methods In this study object- and space-based attention mechanisms were conjointly investigated by presenting complex, familiar shapes of artefacts and animals, intermixed with distracters, in different tasks requiring the selection of a relevant target-category within a relevant spatial location, while ignoring the other shape categories within this location, and, overall, all the categories at an irrelevant location. EEG was recorded from 30 scalp electrode sites in 21 right-handed participants. Results and Conclusions ERP findings showed that visual processing was modulated by both shape- and location-relevance per se, beginning separately at the latency of the early phase of a precocious negativity (60-80 ms) at mesial scalp sites consistent with the C1 component, and a positivity at more lateral sites. The data also showed that the attentional modulation progressed conjointly at the latency of the subsequent P1 (100-120 ms) and N1 (120-180 ms), as well as later-latency components. 
These findings support the views that (1) V1 may be modulated early on by direct top-down influences, and participates in the attentional selection of objects as well as of simple features; (2) the selection of objects' spatial and non-spatial features might begin with an early, parallel detection of a target object in the visual field, followed by the progressive focusing of spatial attention onto the location of an actual target for its identification, somehow in line with neural mechanisms reported in the literature as "object-based space selection", or with those proposed for visual search. PMID:22300540

  11. Characterizing the effects of feature salience and top-down attention in the early visual system.

    PubMed

    Poltoratski, Sonia; Ling, Sam; McCormack, Devin; Tong, Frank

    2017-07-01

    The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. We used high-resolution functional MRI (fMRI) at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or nonsalient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, whereas the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas. NEW & NOTEWORTHY While spatial attention allows for specific, goal-driven enhancement of stimuli, salient items outside of the current focus of attention must also be prioritized. We used 7T fMRI to compare salience and spatial attentional enhancement along the early visual hierarchy. 
We report additive effects of attention and bottom-up salience in early visual areas, suggesting that salience enhancement is not contingent on the observer's attentional state. Copyright © 2017 the American Physiological Society.

  12. Neural evidence reveals the rapid effects of reward history on selective attention.

    PubMed

    MacLean, Mary H; Giesbrecht, Barry

    2015-05-05

    Selective attention is often framed as being primarily driven by two factors: task-relevance and physical salience. However, factors like selection and reward history, which are neither currently task-relevant nor physically salient, can reliably and persistently influence visual selective attention. The current study investigated the nature of the persistent effects of irrelevant, physically non-salient, reward-associated features. These features affected one of the earliest reliable neural indicators of visual selective attention in humans, the P1 event-related potential, measured one week after the reward associations were learned. However, the effects of reward history were moderated by current task demands. The modulation of visually evoked activity supports the hypothesis that reward history influences the innate salience of reward associated features, such that even when no longer relevant, nor physically salient, these features have a rapid, persistent, and robust effect on early visual selective attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.

    PubMed

    Li, Linyi; Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process, and a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated on remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images, and achieved more accurate classification results than the comparison methods according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.

  14. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    PubMed Central

    Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process, and a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated on remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images, and achieved more accurate classification results than the comparison methods according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images. PMID:28761440

  15. The Spotlight of Attention Illuminates Failed Feature-based Expectancies

    PubMed Central

    Bengson, Jesse J.; Lopez-Calderon, Javier; Mangun, George R.

    2012-01-01

    A well-replicated finding is that visual stimuli presented at an attended location are afforded a processing benefit in the form of speeded reaction times and increased accuracy (Posner, 1979; Mangun, 1995). This effect has been described using a spotlight metaphor, in which all stimuli within the focus of spatial attention receive facilitated processing, irrespective of other stimulus parameters. However, the spotlight metaphor has been brought into question by a series of combined-expectancy studies which demonstrated that the behavioral benefits of spatial attention are contingent upon secondary feature-based expectancies (Kingstone, 1992). The present work used an event-related potential (ERP) approach to reveal that the early neural signature of the spotlight of spatial attention is not sensitive to the validity of secondary feature-based expectancies. PMID:22775503

  16. The effect of feature-based attention on flanker interference processing: An fMRI-constrained source analysis.

    PubMed

    Siemann, Julia; Herrmann, Manfred; Galashan, Daniela

    2018-01-25

    The present study examined whether feature-based cueing affects early or late stages of flanker conflict processing using EEG and fMRI. Feature cues either directed participants' attention to the upcoming colour of the target or were neutral. Validity-specific modulations during interference processing were investigated using the N200 event-related potential (ERP) component and BOLD signal differences. Additionally, both data sets were integrated using an fMRI-constrained source analysis. Finally, the results were compared with a previous study in which spatial instead of feature-based cueing was applied to an otherwise identical flanker task. Feature-based and spatial attention recruited a common fronto-parietal network during conflict processing. Irrespective of attention type (feature-based; spatial), this network responded to focussed attention (valid cueing) as well as context updating (invalid cueing), hinting at domain-general mechanisms. However, spatially and non-spatially directed attention also demonstrated domain-specific activation patterns for conflict processing that were observable in distinct EEG and fMRI data patterns as well as in the respective source analyses. Conflict-specific activity in visual brain regions was comparable between both attention types. We assume that the distinction between spatially and non-spatially directed attention types primarily applies to temporal differences (domain-specific dynamics) between signals originating in the same brain regions (domain-general localization).

  17. Causal implication by rhythmic transcranial magnetic stimulation of alpha frequency in feature-based local vs. global attention.

    PubMed

    Romei, Vincenzo; Thut, Gregor; Mok, Robert M; Schyns, Philippe G; Driver, Jon

    2012-03-01

    Although oscillatory activity in the alpha band was traditionally associated with lack of alertness, more recent work has linked it to specific cognitive functions, including visual attention. The emerging method of rhythmic transcranial magnetic stimulation (TMS) allows causal interventional tests for the online impact on performance of TMS administered in short bursts at a particular frequency. TMS bursts at 10 Hz have recently been shown to have an impact on spatial visual attention, but any role in featural attention remains unclear. Here we used rhythmic TMS at 10 Hz to assess the impact on attending to global or local components of a hierarchical Navon-like stimulus (D. Navon (1977) Forest before trees: The precedence of global features in visual perception. Cognit. Psychol., 9, 353), in a paradigm recently used with TMS at other frequencies (V. Romei, J. Driver, P.G. Schyns & G. Thut (2011) Rhythmic TMS over parietal cortex links distinct brain frequencies to global versus local visual processing. Curr. Biol., 21, 334-337). In separate groups, left or right posterior parietal sites were stimulated at 10 Hz just before presentation of the hierarchical stimulus. Participants had to identify either the local or global component in separate blocks. Right parietal 10 Hz stimulation (vs. sham) significantly impaired global processing without affecting local processing, while left parietal 10 Hz stimulation vs. sham impaired local processing with a minor trend to enhance global processing. These 10 Hz outcomes differed significantly from stimulation at other frequencies (i.e. 5 or 20 Hz) over the same site in other recent work with the same paradigm. These dissociations confirm differential roles of the two hemispheres in local vs. global processing, and reveal a frequency-specific role for stimulation in the alpha band for regulating feature-based visual attention. © 2012 The Authors. 
European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  18. A model of proto-object based saliency

    PubMed Central

    Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph

    2013-01-01

    Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies that show that objects predict eye-fixations better than features. In this report we present a neurally inspired algorithm of object-based, bottom-up attention. The model rivals the performance of state-of-the-art non-biologically plausible feature-based algorithms (and outperforms biologically plausible feature-based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601
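
The core idea (saliency computed over proto-objects rather than raw pixels) can be caricatured in a few lines: group pixels into connected regions and assign each region a single saliency value from its feature contrast against the background. This toy sketch, with assumed names, omits the model's border-ownership and grouping circuitry entirely:

```python
import numpy as np
from collections import deque

def label_proto_objects(feature_map, thresh):
    """Group above-threshold pixels into 4-connected components ('proto-objects')."""
    mask = feature_map > thresh
    labels = np.zeros(feature_map.shape, dtype=int)
    h, w = mask.shape
    current = 0
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and labels[si, sj] == 0:
                current += 1                       # start a new component
                q = deque([(si, sj)])
                labels[si, sj] = current
                while q:                           # breadth-first flood fill
                    i, j = q.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and labels[ni, nj] == 0:
                            labels[ni, nj] = current
                            q.append((ni, nj))
    return labels, current

def proto_object_saliency(feature_map, thresh=0.5):
    """Assign each proto-object's whole region the contrast between its mean
    feature value and the background mean."""
    labels, n = label_proto_objects(feature_map, thresh)
    background = feature_map[labels == 0]
    bg_mean = background.mean() if background.size else 0.0
    saliency = np.zeros_like(feature_map, dtype=float)
    for k in range(1, n + 1):
        region = labels == k
        saliency[region] = abs(feature_map[region].mean() - bg_mean)
    return saliency
```

The property this preserves is the one the abstract emphasizes: saliency is uniform within a proto-object, so whole regions, not isolated feature peaks, compete for attention.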

  19. Impaired search for orientation but not color in hemi-spatial neglect.

    PubMed

    Wilkinson, David; Ko, Philip; Milberg, William; McGlinchey, Regina

    2008-01-01

    Patients with hemi-spatial neglect have trouble finding targets defined by a conjunction of visual features. The problem is widely believed to stem from a high-level deficit in attentional deployment, which in turn has led to disagreement over whether the detection of basic features is also disrupted. If one assumes that the detection of salient visual features can be based on the output of spared 'preattentive' processes (Treisman and Gelade, 1980), then feature detection should remain intact. However, if one assumes that all forms of detection require at least a modicum of focused attention (Duncan and Humphreys, 1992), then all forms of search will be disrupted to some degree. Here we measured the detection of feature targets that were defined by either a unique color or orientation. Comparable detection rates were observed in non-neglected space, which indicated that both forms of search placed similar demands on attention. For either of the above accounts to be true, the two targets should therefore be detected with equal efficiency in the neglected field. We found that while the detection rate for color was normal in four of our five patients, all showed an increased reaction time and/or error rate for orientation. This result points to a selective deficit in orientation discrimination, and implies that neglect disrupts specific feature representations. That is, the effects of neglect on visual search are not only attentional but also perceptual.

  20. Feature integration theory revisited: dissociating feature detection and attentional guidance in visual search.

    PubMed

    Chan, Louis K H; Hayward, William G

    2009-02-01

    In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed results are difficult to explain in its absence. The present study measured dimension-specific performance during detection and localization, tasks that require operation of dimensional modules and the master map, respectively. Results showed a dissociation between tasks in terms of both dimension-switching costs and cross-dimension attentional capture, reflecting a dimension-specific nature for detection tasks and a dimension-general nature for localization tasks. In a feature-discrimination task, results precluded an explanation based on response mode. These results are interpreted to support FIT's postulation that different mechanisms are involved in parallel and focal attention searches. This indicates that the FIT architecture should be adopted to explain the current results and that a variety of visual attention findings can be addressed within this framework. Copyright 2009 APA, all rights reserved.

  1. Value-Driven Attentional Capture is Modulated by Spatial Context

    PubMed Central

    Anderson, Brian A.

    2014-01-01

    When stimuli are associated with reward outcome, their visual features acquire high attentional priority such that stimuli possessing those features involuntarily capture attention. Whether a particular feature is predictive of reward, however, will vary with a number of contextual factors. One such factor is spatial location: for example, red berries are likely to be found in low-lying bushes, whereas yellow bananas are likely to be found on treetops. In the present study, I explore whether the attentional priority afforded to reward-associated features is modulated by such location-based contingencies. The results demonstrate that when a stimulus feature is associated with a reward outcome in one spatial location but not another, attentional capture by that feature is selective to when it appears in the rewarded location. This finding provides insight into how reward learning effectively modulates attention in an environment with complex stimulus–reward contingencies, thereby supporting efficient foraging. PMID:26069450

  2. Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture.

    PubMed

    Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei

    2016-03-09

    Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: Whenever one feature-dimension is selected for entry into VWM, the others are also extracted. Currently most studies revealing OBE probed an 'irrelevant-change distracting effect', where changes of irrelevant-features dramatically affected the performance of the target feature. However, the existence of irrelevant-feature change may affect participants' processing manner, leading to a false-positive result. The current study conducted a strict examination of OBE in VWM, by probing whether irrelevant-features guided the deployment of attention in visual search. The participants memorized an object's colour yet ignored shape and concurrently performed a visual-search task. They searched for a target line among distractor lines, each embedded within a different object. One object in the search display could match the shape, colour, or both dimensions of the memory item, but this object never contained the target line. Relative to a neutral baseline, where there was no match between the memory and search displays, search time was significantly prolonged in all match conditions, regardless of whether the memory item was displayed for 100 or 1000 ms. These results suggest that task-irrelevant shape was extracted into VWM, supporting OBE in VWM.

  3. The role of attention in figure-ground segregation in areas V1 and V4 of the visual cortex.

    PubMed

    Poort, Jasper; Raudies, Florian; Wannig, Aurel; Lamme, Victor A F; Neumann, Heiko; Roelfsema, Pieter R

    2012-07-12

    Our visual system segments images into objects and background. Figure-ground segregation relies on the detection of feature discontinuities that signal boundaries between the figures and the background and on a complementary region-filling process that groups together image regions with similar features. The neuronal mechanisms for these processes are not well understood and it is unknown how they depend on visual attention. We measured neuronal activity in V1 and V4 in a task where monkeys either made an eye movement to texture-defined figures or ignored them. V1 activity predicted the timing and the direction of the saccade if the figures were task relevant. We found that boundary detection is an early process that depends little on attention, whereas region filling occurs later and is facilitated by visual attention, which acts in an object-based manner. Our findings are explained by a model with local, bottom-up computations for boundary detection and feedback processing for region filling. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Using neuronal populations to study the mechanisms underlying spatial and feature attention

    PubMed Central

    Cohen, Marlene R.; Maunsell, John H.R.

    2012-01-01

    Visual attention affects both perception and neuronal responses. Whether the same neuronal mechanisms mediate spatial attention, which improves perception of attended locations, and non-spatial forms of attention has been a subject of considerable debate. Spatial and feature attention have similar effects on individual neurons. Because visual cortex is retinotopically organized, however, spatial attention can co-modulate local neuronal populations, while feature attention generally requires more selective modulation. We compared the effects of feature and spatial attention on local and spatially separated populations by recording simultaneously from dozens of neurons in both hemispheres of V4. Feature and spatial attention affect the activity of local populations similarly, modulating both firing rates and correlations between pairs of nearby neurons. However, while spatial attention appears to act on local populations, feature attention is coordinated across hemispheres. Our results are consistent with a unified attentional mechanism that can modulate the responses of arbitrary subgroups of neurons. PMID:21689604

  5. The Role of Attention in the Maintenance of Feature Bindings in Visual Short-term Memory

    ERIC Educational Resources Information Center

    Johnson, Jeffrey S.; Hollingworth, Andrew; Luck, Steven J.

    2008-01-01

    This study examined the role of attention in maintaining feature bindings in visual short-term memory. In a change-detection paradigm, participants attempted to detect changes in the colors and orientations of multiple objects; the changes consisted of new feature values in a feature-memory condition and changes in how existing feature values were…

  6. Object-Based Control of Attention Is Sensitive to Recent Experience

    ERIC Educational Resources Information Center

    Lee, Hyunkyu; Mozer, Michael C.; Kramer, Arthur F.; Vecera, Shaun P.

    2012-01-01

    How is attention guided by past experience? In visual search, numerous studies have shown that recent trials influence responses to the current trial. Repeating features such as color, shape, or location of a target facilitates performance. Here we examine whether recent experience also modulates a more abstract dimension of attentional control,…

  7. Irrelevant reward and selection histories have different influences on task-relevant attentional selection.

    PubMed

    MacLean, Mary H; Giesbrecht, Barry

    2015-07-01

    Task-relevant and physically salient features influence visual selective attention. In the present study, we investigated the influence of task-irrelevant and physically nonsalient reward-associated features on visual selective attention. Two hypotheses were tested: One predicts that the effects of target-defining task-relevant and task-irrelevant features interact to modulate visual selection; the other predicts that visual selection is determined by the independent combination of relevant and irrelevant feature effects. These alternatives were tested using a visual search task that contained multiple targets, placing a high demand on the need for selectivity, and that was data-limited and required unspeeded responses, emphasizing early perceptual selection processes. One week prior to the visual search task, participants completed a training task in which they learned to associate particular colors with a specific reward value. In the search task, the reward-associated colors were presented surrounding targets and distractors, but were neither physically salient nor task-relevant. In two experiments, the irrelevant reward-associated features influenced performance, but only when they were presented in a task-relevant location. The costs induced by the irrelevant reward-associated features were greater when they oriented attention to a target than to a distractor. In a third experiment, we examined the effects of selection history in the absence of reward history and found that the interaction between task relevance and selection history differed, relative to when the features had previously been associated with reward. The results indicate that under conditions that demand highly efficient perceptual selection, physically nonsalient task-irrelevant and task-relevant factors interact to influence visual selective attention.

  8. Suppression effects in feature-based attention

    PubMed Central

    Wang, Yixue; Miller, James; Liu, Taosheng

    2015-01-01

    Attending to a feature enhances visual processing of that feature, but it is less clear what occurs to unattended features. Single-unit recording studies in the middle temporal area (MT) have shown that neuronal modulation is a monotonic function of the difference between the attended direction and the neuron's preferred direction. Such a relationship should predict a monotonic suppressive effect in psychophysical performance. However, past research on suppressive effects of feature-based attention has remained inconclusive. We investigated the suppressive effect for motion direction, orientation, and color in three experiments. We asked participants to detect a weak signal among noise and provided a partially valid feature cue to manipulate attention. We measured performance as a function of the offset between the cued and signal feature. We also included neutral trials where no feature cues were presented to provide a baseline measure of performance. Across three experiments, we consistently observed enhancement effects when the target feature and cued feature coincided and suppression effects when the target feature deviated from the cued feature. The exact profile of suppression was different across feature dimensions: Whereas the profile for direction exhibited a “rebound” effect, the profiles for orientation and color were monotonic. These results demonstrate that unattended features are suppressed during feature-based attention, but the exact suppression profile depends on the specific feature. Overall, the results are largely consistent with neurophysiological data and support the feature-similarity gain model of attention. PMID:26067533
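    The feature-similarity gain model that this record supports can be sketched numerically. The functional forms and parameter values below are illustrative assumptions, not the paper's fitted model:

    ```python
    import numpy as np

    def angular_distance(a, b):
        """Wrapped absolute difference between two directions, in degrees."""
        return np.abs((a - b + 180) % 360 - 180)

    def tuning(theta, pref, width=40.0):
        """Baseline Gaussian direction tuning of a model MT neuron."""
        return np.exp(-0.5 * (angular_distance(theta, pref) / width) ** 2)

    def similarity_gain(attended, pref, g=0.3, width=60.0):
        """Attentional gain: rises above 1 as the attended direction nears
        the neuron's preferred direction, drops below 1 when it is far away."""
        s = np.exp(-0.5 * (angular_distance(attended, pref) / width) ** 2)
        return 1.0 + g * (2.0 * s - 1.0)

    pref = 0.0
    stim = np.linspace(-180, 180, 9)
    enhanced = tuning(stim, pref) * similarity_gain(0.0, pref)      # attend preferred
    suppressed = tuning(stim, pref) * similarity_gain(180.0, pref)  # attend opposite
    ```

    Under this sketch, attending the preferred feature multiplies the whole tuning curve by a gain above 1, while attending a dissimilar feature pushes the gain below 1, yielding the monotonic suppression profile the abstract describes for orientation and color.
    
    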

  9. A foreground object features-based stereoscopic image visual comfort assessment model

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.

    2014-11-01

    Since stereoscopic images can provide observers with both a realistic and an uncomfortable viewing experience, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws most attention when humans observe stereoscopic images, this paper proposes a new foreground-object-based visual comfort assessment (VCA) metric. First, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the one having the largest average disparity. Second, three visual features, the average disparity, average width, and spatial complexity of the foreground object, are computed from the perspective of visual attention. However, an object's width and complexity do not influence the perception of visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, we divide the images into four categories on the basis of disparity and width, and apply four different models to predict visual comfort more precisely. Experimental results show that the proposed VCA metric outperforms other existing metrics and achieves high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
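    The two agreement measures reported here, PLCC and SROCC, can be computed directly; the score arrays below are illustrative placeholders, not the paper's data:

    ```python
    import numpy as np

    def plcc(x, y):
        """Pearson Linear Correlation Coefficient between predicted and subjective scores."""
        return float(np.corrcoef(x, y)[0, 1])

    def srocc(x, y):
        """Spearman Rank Order Correlation Coefficient: Pearson correlation of
        the ranks (no tie handling, which suffices for untied toy data)."""
        rank = lambda v: np.argsort(np.argsort(v))
        return float(np.corrcoef(rank(x), rank(y))[0, 1])

    # Illustrative placeholder scores, NOT the paper's data:
    objective = np.array([3.1, 4.0, 2.2, 4.8, 3.7, 1.9])   # metric predictions
    subjective = np.array([3.0, 4.2, 2.5, 4.9, 3.5, 2.0])  # mean opinion scores
    ```

    PLCC measures linear agreement between predicted and subjective comfort, while SROCC measures rank-order agreement, so the pair captures both prediction accuracy and prediction monotonicity.
    
    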

  10. Modeling global scene factors in attention

    NASA Astrophysics Data System (ADS)

    Torralba, Antonio

    2003-07-01

    Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. 2003 Optical Society of America
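    The global-statistics idea in this record can be illustrated with a toy scene descriptor; the gradient-energy features and 4x4 pooling grid below are simplifying assumptions, not the model's actual filter bank:

    ```python
    import numpy as np

    def gist_descriptor(image, grid=4):
        """Crude global scene statistic: edge energy pooled over a coarse
        spatial grid, summarizing layout without locating any single object."""
        gy, gx = np.gradient(image.astype(float))
        energy = np.hypot(gx, gy)
        h, w = energy.shape
        cells = [energy[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid].mean()
                 for i in range(grid) for j in range(grid)]
        return np.array(cells)

    # A synthetic "scene": dark left half, bright right half.
    scene = np.zeros((32, 32))
    scene[:, 16:] = 1.0
    descriptor = gist_descriptor(scene)
    ```

    A low-dimensional vector of this kind, computed over the whole image before any local analysis, is the sort of signal that could prime likely object locations and scales and modulate regional saliency, as the abstract proposes.
    
    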

  11. Effect of feature-selective attention on neuronal responses in macaque area MT

    PubMed Central

    Chen, X.; Hoffmann, K.-P.; Albright, T. D.

    2012-01-01

    Attention influences visual processing in striate and extrastriate cortex, which has been extensively studied for spatial-, object-, and feature-based attention. Most studies exploring neural signatures of feature-based attention have trained animals to attend to an object identified by a certain feature and ignore objects/displays identified by a different feature. Little is known about the effects of feature-selective attention, where subjects attend to one stimulus feature domain (e.g., color) of an object while features from different domains (e.g., direction of motion) of the same object are ignored. To study this type of feature-selective attention in area MT in the middle temporal sulcus, we trained macaque monkeys to either attend to and report the direction of motion of a moving sine wave grating (a feature for which MT neurons display strong selectivity) or attend to and report its color (a feature for which MT neurons have very limited selectivity). We hypothesized that neurons would upregulate their firing rate during attend-direction conditions compared with attend-color conditions. We found that feature-selective attention significantly affected 22% of MT neurons. Contrary to our hypothesis, these neurons did not necessarily increase firing rate when animals attended to direction of motion but fell into one of two classes. In one class, attention to color increased the gain of stimulus-induced responses compared with attend-direction conditions. The other class displayed the opposite effects. Feature-selective activity modulations occurred earlier in neurons modulated by attention to color compared with neurons modulated by attention to motion direction. Thus feature-selective attention influences neuronal processing in macaque area MT but often exhibited a mismatch between the preferred stimulus dimension (direction of motion) and the preferred attention dimension (attention to color). PMID:22170961

  12. Effect of feature-selective attention on neuronal responses in macaque area MT.

    PubMed

    Chen, X; Hoffmann, K-P; Albright, T D; Thiele, A

    2012-03-01

    Attention influences visual processing in striate and extrastriate cortex, which has been extensively studied for spatial-, object-, and feature-based attention. Most studies exploring neural signatures of feature-based attention have trained animals to attend to an object identified by a certain feature and ignore objects/displays identified by a different feature. Little is known about the effects of feature-selective attention, where subjects attend to one stimulus feature domain (e.g., color) of an object while features from different domains (e.g., direction of motion) of the same object are ignored. To study this type of feature-selective attention in area MT in the middle temporal sulcus, we trained macaque monkeys to either attend to and report the direction of motion of a moving sine wave grating (a feature for which MT neurons display strong selectivity) or attend to and report its color (a feature for which MT neurons have very limited selectivity). We hypothesized that neurons would upregulate their firing rate during attend-direction conditions compared with attend-color conditions. We found that feature-selective attention significantly affected 22% of MT neurons. Contrary to our hypothesis, these neurons did not necessarily increase firing rate when animals attended to direction of motion but fell into one of two classes. In one class, attention to color increased the gain of stimulus-induced responses compared with attend-direction conditions. The other class displayed the opposite effects. Feature-selective activity modulations occurred earlier in neurons modulated by attention to color compared with neurons modulated by attention to motion direction. Thus feature-selective attention influences neuronal processing in macaque area MT but often exhibited a mismatch between the preferred stimulus dimension (direction of motion) and the preferred attention dimension (attention to color).

  13. Active listening impairs visual perception and selectivity: an ERP study of auditory dual-task costs on visual attention.

    PubMed

    Gherri, Elena; Eimer, Martin

    2011-04-01

    The ability to drive safely is disrupted by cell phone conversations, and this has been attributed to a diversion of attention from the visual environment. We employed behavioral and ERP measures to study whether the attentive processing of spoken messages is, in itself, sufficient to produce visual-attentional deficits. Participants searched for visual targets defined by a unique feature (Experiment 1) or feature conjunction (Experiment 2), and simultaneously listened to narrated text passages that had to be recalled later (encoding condition), or heard backward-played speech sounds that could be ignored (control condition). Responses to targets were slower in the encoding condition, and ERPs revealed that the visual processing of search arrays and the attentional selection of target stimuli were less efficient in the encoding relative to the control condition. Results demonstrate that the attentional processing of visual information is impaired when concurrent spoken messages are encoded and maintained, in line with cross-modal links in selective attention, but inconsistent with the view that attentional resources are modality-specific. The distraction of visual attention by active listening could contribute to the adverse effects of cell phone use on driving performance.

  14. Visual Attention to Antismoking PSAs: Smoking Cues versus Other Attention-Grabbing Features

    ERIC Educational Resources Information Center

    Sanders-Jackson, Ashley N.; Cappella, Joseph N.; Linebarger, Deborah L.; Piotrowski, Jessica Taylor; O'Keeffe, Moira; Strasser, Andrew A.

    2011-01-01

    This study examines how addicted adult smokers attend visually to smoking-related public service announcements (PSAs). Smokers' onscreen visual fixation is an indicator of cognitive resources allocated to visual attention. Characteristic of individuals with addictive tendencies, smokers are expected to be appetitively activated by…

  15. Effects of feature-selective and spatial attention at different stages of visual processing.

    PubMed

    Andersen, Søren K; Fuchs, Sandra; Müller, Matthias M

    2011-01-01

    We investigated mechanisms of concurrent attentional selection of location and color using electrophysiological measures in human subjects. Two completely overlapping random dot kinematograms (RDKs) of two different colors were presented on either side of a central fixation cross. On each trial, participants attended one of these four RDKs, defined by its specific combination of color and location, in order to detect coherent motion targets. Sustained attentional selection while monitoring for targets was measured by means of steady-state visual evoked potentials (SSVEPs) elicited by the frequency-tagged RDKs. Attentional selection of transient targets and distractors was assessed by behavioral responses and by recording event-related potentials to these stimuli. Spatial attention and attention to color had independent and largely additive effects on the amplitudes of SSVEPs elicited in early visual areas. In contrast, behavioral false alarms and feature-selective modulation of P3 amplitudes to targets and distractors were limited to the attended location. These results suggest that feature-selective attention produces an early, global facilitation of stimuli having the attended feature throughout the visual field, whereas the discrimination of target events takes place at a later stage of processing that is only applied to stimuli at the attended position.

  16. Effects of feature-based attention on the motion aftereffect at remote locations.

    PubMed

    Boynton, Geoffrey M; Ciaramitaro, Vivian M; Arman, A Cyrus

    2006-09-01

    Previous studies have shown that attention to a particular stimulus feature, such as direction of motion or color, enhances neuronal responses to unattended stimuli sharing that feature. We studied this effect psychophysically by measuring the strength of the motion aftereffect (MAE) induced by an unattended stimulus when attention was directed to one of two overlapping fields of moving dots in a different spatial location. When attention was directed to the same direction of motion as the unattended stimulus, the unattended stimulus induced a stronger MAE than when attention was directed to the opposite direction. Also, when the unattended location contained either uncorrelated motion or no stimulus at all, an MAE was induced in the opposite direction to the attended direction of motion. The strength of the MAE was similar regardless of whether subjects attended to the speed or luminance of the attended dots. These results provide further support for a global feature-based mechanism of attention, and show that the effect spreads across all features of an attended object, and to all locations of visual space.

  17. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.

    PubMed

    Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun

    2016-01-01

    Humans can easily classify different kinds of objects, whereas this remains quite difficult for computers. As a challenging open problem, object classification has received extensive interest. Inspired by neuroscience, the concept of deep learning was proposed, and the convolutional neural network (CNN), as one deep learning method, can be used to solve classification problems. However, most deep learning methods, including CNNs, ignore the human visual information-processing mechanism at work when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we put forward a new classification method that combines a visual attention model and a CNN. First, we use the visual attention model to simulate the human visual selection mechanism. Second, we use the CNN to simulate how humans select features, extracting the local features of the selected areas. Finally, our classification method depends not only on those local features but also on added human semantic features to classify objects. Our classification method has apparent advantages in biological plausibility. Experimental results demonstrate that our method significantly improves classification efficiency.

  18. Feature precedence in processing multifeature visual information in the human brain: an event-related potential study.

    PubMed

    Liu, B; Meng, X; Wu, G; Huang, Y

    2012-05-17

    In this article, we aimed to study whether feature precedence exists in the cognitive processing of multifeature visual information in the human brain. In our experiment, we focused on two important visual features: color and shape. To avoid semantic constraints between them and the resulting impact, a pure color and a simple geometric shape were chosen as the color feature and shape feature of the visual stimulus, respectively. We adopted an "old/new" paradigm to study the cognitive processing of the color feature, the shape feature, and their combination. The experiment consisted of three tasks: a Color task, a Shape task, and a Color-Shape task. The results showed that a feature-based pattern is activated in the human brain when processing multifeature visual information without semantic association between features. Furthermore, the shape feature was processed earlier than the color feature, and the cognitive processing of the color feature was more difficult than that of the shape feature. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. Visual search in Alzheimer's disease: a deficiency in processing conjunctions of features.

    PubMed

    Tales, A; Butler, S R; Fossey, J; Gilchrist, I D; Jones, R W; Troscianko, T

    2002-01-01

    Human vision often needs to encode multiple characteristics of many elements of the visual field, for example their lightness and orientation. The paradigm of visual search allows a quantitative assessment of the function of the underlying mechanisms. It measures the ability to detect a target element among a set of distractor elements. We asked whether Alzheimer's disease (AD) patients are particularly affected in one type of search, where the target is defined by a conjunction of features (orientation and lightness) and where performance depends on some shifting of attention. Two non-conjunction control conditions were employed. The first was a pre-attentive, single-feature, "pop-out" task, detecting a vertical target among horizontal distractors. The second was a single-feature, partly attentive task in which the target element was slightly larger than the distractors, a "size" task. This was chosen to have a similar level of attentional load as the conjunction task (for the control group), but lacked the conjunction of two features. In an experiment, 15 AD patients were compared to age-matched controls. The results suggested that AD patients have a particular impairment in the conjunction task but not in the single-feature size or pre-attentive tasks. This may imply that AD particularly affects those mechanisms which compare across more than one feature type, spares the other systems, and is therefore not simply an 'attention-related' impairment. Additionally, these findings show a double dissociation with previous data on visual search in Parkinson's disease (PD), suggesting a different effect of these diseases on the visual pathway.

  20. Beyond the search surface: visual search and attentional engagement.

    PubMed

    Duncan, J; Humphreys, G

    1992-05-01

    Treisman (1991) described a series of visual search studies testing feature integration theory against an alternative (Duncan & Humphreys, 1989) in which feature and conjunction search are basically similar. Here the latter account is noted to have 2 distinct levels: (a) a summary of search findings in terms of stimulus similarities, and (b) a theory of how visual attention is brought to bear on relevant objects. Working at the 1st level, Treisman found that even when similarities were calibrated and controlled, conjunction search was much harder than feature search. The theory, however, can only really be tested at the 2nd level, because the 1st is an approximation. An account of the findings is developed at the 2nd level, based on the 2 processes of input-template matching and spreading suppression. New data show that, when both of these factors are controlled, feature and conjunction search are equally difficult. Possibilities for unification of the alternative views are considered.

  1. Spatial attention improves the quality of population codes in human visual cortex.

    PubMed

    Saproo, Sameer; Serences, John T

    2010-08-01

    Selective attention enables sensory input from behaviorally relevant stimuli to be processed in greater detail, so that these stimuli can more accurately influence thoughts, actions, and future goals. Attention has been shown to modulate the spiking activity of single feature-selective neurons that encode basic stimulus properties (color, orientation, etc.). However, the combined output from many such neurons is required to form stable representations of relevant objects and little empirical work has formally investigated the relationship between attentional modulations on population responses and improvements in encoding precision. Here, we used functional MRI and voxel-based feature tuning functions to show that spatial attention induces a multiplicative scaling in orientation-selective population response profiles in early visual cortex. In turn, this multiplicative scaling correlates with an improvement in encoding precision, as evidenced by a concurrent increase in the mutual information between population responses and the orientation of attended stimuli. These data therefore demonstrate how multiplicative scaling of neural responses provides at least one mechanism by which spatial attention may improve the encoding precision of population codes. Increased encoding precision in early visual areas may then enhance the speed and accuracy of perceptual decisions computed by higher-order neural mechanisms.
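    The core claim of this record, that multiplicative response scaling improves encoding precision, follows whenever the noise does not grow as fast as the gain. A minimal simulation under an assumed fixed-size Gaussian noise model (an assumption for illustration, not the paper's noise model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def separation(gain, delta=1.0, sigma=1.0, n=10000):
        """d'-style separation between responses to two nearby orientations,
        under a multiplicative attentional gain and additive noise of fixed
        size. Larger separation means the orientations are easier to decode."""
        r_a = gain * 0.0 + rng.normal(0.0, sigma, n)    # response to orientation A
        r_b = gain * delta + rng.normal(0.0, sigma, n)  # response to orientation B
        return (r_b.mean() - r_a.mean()) / sigma

    unattended = separation(gain=1.0)
    attended = separation(gain=2.0)
    ```

    Doubling the gain doubles the mean separation between the two response distributions while the noise stays fixed, so the population response carries more information about the attended stimulus, the relationship between multiplicative scaling and encoding precision that the abstract reports.
    
    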

  2. Attention is required for maintenance of feature binding in visual working memory

    PubMed Central

    Heider, Maike; Husain, Masud

    2013-01-01

    Working memory and attention are intimately connected. However, understanding the relationship between the two is challenging. Currently, there is an important controversy about whether objects in working memory are maintained automatically or require resources that are also deployed for visual or auditory attention. Here we investigated the effects of loading attention resources on precision of visual working memory, specifically on correct maintenance of feature-bound objects, using a dual-task paradigm. Participants were presented with a memory array and were asked to remember either direction of motion of random dot kinematograms of different colour, or orientation of coloured bars. During the maintenance period, they performed a secondary visual or auditory task, with varying levels of load. Following a retention period, they adjusted a coloured probe to match either the motion direction or orientation of stimuli with the same colour in the memory array. This allowed us to examine the effects of an attention-demanding task performed during maintenance on precision of recall on the concurrent working memory task. Systematic increase in attention load during maintenance resulted in a significant decrease in overall working memory performance. Changes in overall performance were specifically accompanied by an increase in feature misbinding errors: erroneous reporting of nontarget motion or orientation. Thus in trials where attention resources were taxed, participants were more likely to respond with nontarget values rather than simply making random responses. Our findings suggest that resources used during attention-demanding visual or auditory tasks also contribute to maintaining feature-bound representations in visual working memory—but not necessarily other aspects of working memory. PMID:24266343

  3. Attention is required for maintenance of feature binding in visual working memory.

    PubMed

    Zokaei, Nahid; Heider, Maike; Husain, Masud

    2014-01-01

    Working memory and attention are intimately connected. However, understanding the relationship between the two is challenging. Currently, there is an important controversy about whether objects in working memory are maintained automatically or require resources that are also deployed for visual or auditory attention. Here we investigated the effects of loading attention resources on precision of visual working memory, specifically on correct maintenance of feature-bound objects, using a dual-task paradigm. Participants were presented with a memory array and were asked to remember either direction of motion of random dot kinematograms of different colour, or orientation of coloured bars. During the maintenance period, they performed a secondary visual or auditory task, with varying levels of load. Following a retention period, they adjusted a coloured probe to match either the motion direction or orientation of stimuli with the same colour in the memory array. This allowed us to examine the effects of an attention-demanding task performed during maintenance on precision of recall on the concurrent working memory task. Systematic increase in attention load during maintenance resulted in a significant decrease in overall working memory performance. Changes in overall performance were specifically accompanied by an increase in feature misbinding errors: erroneous reporting of nontarget motion or orientation. Thus in trials where attention resources were taxed, participants were more likely to respond with nontarget values rather than simply making random responses. Our findings suggest that resources used during attention-demanding visual or auditory tasks also contribute to maintaining feature-bound representations in visual working memory-but not necessarily other aspects of working memory.

  4. Feature bindings endure without attention: evidence from an explicit recall task.

    PubMed

    Gajewski, Daniel A; Brockmole, James R

    2006-08-01

    Are integrated objects the unit of capacity of visual working memory, or is continued attention needed to maintain bindings between independently stored features? In a delayed recall task, participants reported the color and shape of a probed item from a memory array. During the delay, attention was manipulated with an exogenous cue. Recall was elevated at validly cued positions, indicating that the cue affected item memory. On invalid trials, participants most frequently recalled either both features (perfect object memory) or neither of the two features (no object memory); the frequency with which only one feature was recalled was significantly lower than predicted by feature independence as determined in a single-feature recall task. These data do not support the view that features are remembered independently when attention is withdrawn. Instead, integrated objects are stored in visual working memory without need for continued attention.

  5. The impact of attentional, linguistic, and visual features during object naming

    PubMed Central

    Clarke, Alasdair D. F.; Coco, Moreno I.; Keller, Frank

    2013-01-01

    Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently. PMID:24379792

  6. Demands on attention and the role of response priming in visual discrimination of feature conjunctions.

    PubMed

    Fournier, Lisa R; Herbert, Rhonda J; Farris, Carrie

    2004-10-01

    This study examined how response mapping of features within single- and multiple-feature targets affects decision-based processing and attentional capacity demands. Observers judged the presence or absence of 1 or 2 target features within an object either presented alone or with distractors. Judging the presence of 2 features relative to the less discriminable of these features alone was faster (conjunction benefits) when the task-relevant features differed in discriminability and were consistently mapped to responses. Conjunction benefits were attributed to asynchronous decision priming across attended, task-relevant dimensions. A failure to find conjunction benefits for disjunctive conjunctions was attributed to increased memory demands and variable feature-response mapping for 2- versus single-feature targets. Further, attentional demands were similar between single- and 2-feature targets when response mapping, memory demands, and discriminability of the task-relevant features were equated between targets. Implications of the findings for recent attention models are discussed. (c) 2004 APA, all rights reserved

  7. Feature integration, attention, and fixations during visual search.

    PubMed

    Khani, Abbas; Ordikhani-Seyedlar, Mehdi

    2017-01-01

    We argue that mechanistic premises of "item-based" theories are not invalidated by the fixation-based approach. We use item-based theories to propose an account that does not advocate strict serial item processing and integrates fixations. The main focus of this account is feature integration within fixations. We also suggest that perceptual load determines the size of the fixations.

  8. An integrative, experience-based theory of attentional control.

    PubMed

    Wilder, Matthew H; Mozer, Michael C; Wickens, Christopher D

    2011-02-09

    Although diverse, theories of visual attention generally share the notion that attention is controlled by some combination of three distinct strategies: (1) exogenous cuing from locally contrasting primitive visual features, such as abrupt onsets or color singletons (e.g., L. Itti, C. Koch, & E. Niebur, 1998), (2) endogenous gain modulation of exogenous activations, used to guide attention to task-relevant features (e.g., V. Navalpakkam & L. Itti, 2007; J. Wolfe, 1994, 2007), and (3) endogenous prediction of likely locations of interest, based on task and scene gist (e.g., A. Torralba, A. Oliva, M. Castelhano, & J. Henderson, 2006). However, little work has been done to synthesize these disparate theories. In this work, we propose a unifying conceptualization in which attention is controlled along two dimensions: the degree of task focus and the contextual scale of operation. Previously proposed strategies, and their combinations, can be viewed as instances of this one mechanism. Thus, this theory serves not as a replacement for existing models but as a means of bringing them into a coherent framework. We present an implementation of this theory and demonstrate its applicability to a wide range of attentional phenomena. The model accounts for key results in visual search with synthetic images and makes reasonable predictions for human eye movements in search tasks involving real-world images. In addition, the theory offers an unusual perspective on attention that places a fundamental emphasis on the role of experience and task-related knowledge.

  9. Short-term and long-term attentional biases to frequently encountered target features.

    PubMed

    Sha, Li Z; Remington, Roger W; Jiang, Yuhong V

    2017-07-01

    It has long been known that frequently occurring targets are attended better than infrequent ones in visual search. But does this frequency-based attentional prioritization reflect momentary or durable changes in attention? Here we observed both short-term and long-term attentional biases for visual features as a function of different types of statistical associations between the targets, distractors, and features. Participants searched for a target, a line oriented horizontally or vertically among diagonal distractors, and reported its length. In one set of experiments we manipulated the target's color probability: Targets were more often in Color 1 than in Color 2. The distractors were in other colors. Participants found Color 1 targets more quickly than Color 2 targets, but this preference disappeared immediately when the target's color became random in the subsequent testing phase. In the other set of experiments, we manipulated the diagnostic values of the two colors: Color 1 was more often a target than a distractor; Color 2 was more often a distractor than a target. Participants found Color 1 targets more quickly than Color 2 targets. Importantly, and in contrast to the first set of experiments, the featural preference was sustained in the testing phase. These results suggest that short-term and long-term attentional biases are products of different statistical information. Finding a target momentarily activates its features, inducing short-term repetition priming. Long-term changes in attention, on the other hand, may rely on learning diagnostic features of the targets.

  10. Hyperspectral image visualization based on a human visual model

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Peng, Honghong; Fairchild, Mark D.; Montag, Ethan D.

    2008-02-01

    Hyperspectral image data can provide very fine spectral resolution with more than 200 bands, yet displaying such rich information on a tristimulus monitor presents a challenge for visualization techniques. This study developed a visualization technique by taking advantage of both the consistent natural appearance of a true color image and the feature separation of a PCA image based on a biologically inspired visual attention model. The key step is extracting the informative regions of the scene. The model takes into account human contrast sensitivity functions and generates a topographic saliency map for both images. This is accomplished using a set of linear "center-surround" operations simulating visual receptive fields as the difference between fine and coarse scales. A difference map between the saliency map of the true color image and that of the PCA image is derived and used as a mask on the true color image to select a small number of interesting locations where the PCA image has more salient features than available in the visible bands. The resulting representations preserve hue for vegetation, water, roads, etc., while the selected attentional locations may be analyzed by more advanced algorithms.
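    The pipeline described above, center-surround saliency as a fine-minus-coarse scale difference, then a difference map used as a mask, can be sketched as follows. This is an illustrative toy, not the authors' implementation: box blurs stand in for Gaussian scales, random grayscale arrays stand in for the true-color and PCA composites, and the mask threshold is an assumed choice.

```python
import numpy as np

def box_blur(img, k):
    """Running-mean blur of width 2k+1 (a crude stand-in for one Gaussian scale)."""
    padded = np.pad(img, k, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += padded[k + dy : k + dy + img.shape[0],
                          k + dx : k + dx + img.shape[1]]
    return out / (2 * k + 1) ** 2

def saliency(img, fine=1, coarse=4):
    """Center-surround response: difference between fine and coarse scales."""
    return np.abs(box_blur(img, fine) - box_blur(img, coarse))

# Toy grayscale stand-ins for the true-color and PCA composites.
rng = np.random.default_rng(0)
true_color = rng.random((32, 32))
pca_image = true_color.copy()
pca_image[10:16, 10:16] += 2.0   # a feature visible only in the PCA bands

# Difference map: where the PCA image is more salient than the true-color one.
diff = saliency(pca_image) - saliency(true_color)
mask = diff > diff.mean() + 2 * diff.std()   # assumed threshold for "interesting"
print(mask.sum(), "pixels selected for PCA-derived detail")
```

    In the paper's terms, the masked locations are where PCA-derived detail replaces the natural-appearance rendering.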

  11. Anxious mood narrows attention in feature space.

    PubMed

    Wegbreit, Ezra; Franconeri, Steven; Beeman, Mark

    2015-01-01

    Spatial attention can operate like a spotlight whose scope can vary depending on task demands. Emotional states contribute to the spatial extent of attentional selection, with the spotlight focused more narrowly during anxious moods and more broadly during happy moods. In addition to visual space, attention can also operate over features, and we show here that mood states may also influence attentional scope in feature space. After anxious or happy mood inductions, participants focused their attention to identify a central target while ignoring flanking items. Flankers were sometimes coloured differently than targets, so focusing attention on target colour should lead to relatively less interference. Compared to happy and neutral moods, when anxious, participants showed reduced interference when colour isolated targets from flankers, but showed more interference when flankers and targets were the same colour. This pattern reveals that the anxious mood caused these individuals to attend to the irrelevant feature in both cases, regardless of its benefit or detriment. In contrast, participants showed no effect of colour on interference when happy, suggesting that positive mood did not influence attention in feature space. These mood effects on feature-based attention provide a theoretical bridge between previous findings concerning spatial and conceptual attention.

  12. Explicit attention interferes with selective emotion processing in human extrastriate cortex.

    PubMed

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2007-02-22

    Brain imaging and event-related potential studies provide strong evidence that emotional stimuli guide selective attention in visual processing. A reflection of the emotional attention capture is the increased Early Posterior Negativity (EPN) for pleasant and unpleasant compared to neutral images (approximately 150-300 ms poststimulus). The present study explored whether this early emotion discrimination reflects an automatic phenomenon or is subject to interference by competing processing demands. Thus, emotional processing was assessed while participants performed a concurrent feature-based attention task varying in processing demands. Participants successfully performed the primary visual attention task as revealed by behavioral performance and selected event-related potential components (Selection Negativity and P3b). Replicating previous results, emotional modulation of the EPN was observed in a task condition with low processing demands. In contrast, pleasant and unpleasant pictures failed to elicit increased EPN amplitudes compared to neutral images in more difficult explicit attention task conditions. Further analyses determined that even the processing of pleasant and unpleasant pictures high in emotional arousal is subject to interference in experimental conditions with high task demand. Taken together, performing demanding feature-based counting tasks interfered with differential emotion processing indexed by the EPN. The present findings demonstrate that taxing processing resources by a competing primary visual attention task markedly attenuated the early discrimination of emotional from neutral picture contents. Thus, these results provide further empirical support for an interference account of the emotion-attention interaction under conditions of competition. Previous studies revealed the interference of selective emotion processing when attentional resources were directed to locations of explicitly task-relevant stimuli. 
The present data suggest that interference of emotion processing by competing task demands is a more general phenomenon extending to the domain of feature-based attention. Furthermore, the results are inconsistent with the notion of effortlessness, i.e., early emotion discrimination despite concurrent task demands. These findings suggest that the presumed automatic nature of emotion processing should be assessed at the level of specific aspects rather than treated as an all-or-none phenomenon.

  13. Explicit attention interferes with selective emotion processing in human extrastriate cortex

    PubMed Central

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2007-01-01

    Background Brain imaging and event-related potential studies provide strong evidence that emotional stimuli guide selective attention in visual processing. A reflection of the emotional attention capture is the increased Early Posterior Negativity (EPN) for pleasant and unpleasant compared to neutral images (~150–300 ms poststimulus). The present study explored whether this early emotion discrimination reflects an automatic phenomenon or is subject to interference by competing processing demands. Thus, emotional processing was assessed while participants performed a concurrent feature-based attention task varying in processing demands. Results Participants successfully performed the primary visual attention task as revealed by behavioral performance and selected event-related potential components (Selection Negativity and P3b). Replicating previous results, emotional modulation of the EPN was observed in a task condition with low processing demands. In contrast, pleasant and unpleasant pictures failed to elicit increased EPN amplitudes compared to neutral images in more difficult explicit attention task conditions. Further analyses determined that even the processing of pleasant and unpleasant pictures high in emotional arousal is subject to interference in experimental conditions with high task demand. Taken together, performing demanding feature-based counting tasks interfered with differential emotion processing indexed by the EPN. Conclusion The present findings demonstrate that taxing processing resources by a competing primary visual attention task markedly attenuated the early discrimination of emotional from neutral picture contents. Thus, these results provide further empirical support for an interference account of the emotion-attention interaction under conditions of competition. Previous studies revealed the interference of selective emotion processing when attentional resources were directed to locations of explicitly task-relevant stimuli. 
The present data suggest that interference of emotion processing by competing task demands is a more general phenomenon extending to the domain of feature-based attention. Furthermore, the results are inconsistent with the notion of effortlessness, i.e., early emotion discrimination despite concurrent task demands. These findings suggest that the presumed automatic nature of emotion processing should be assessed at the level of specific aspects rather than treated as an all-or-none phenomenon. PMID:17316444

  14. A bio-inspired method and system for visual object-based attention and segmentation

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak

    2010-04-01

    This paper describes a method and system of human-like attention and object segmentation in visual scenes that (1) attends to regions in a scene in rank order of their saliency, (2) extracts the boundary of an attended proto-object based on feature contours, and (3) can be biased to boost the attention paid to specific features in a scene, such as those of a desired target object in static and video imagery. The purpose of the system is to identify regions of a scene of potential importance and extract the region data for processing by an object recognition and classification algorithm. The attention process can be performed in a default, bottom-up manner or a directed, top-down manner which will assign a preference to certain features over others. One can apply this system to any static scene, whether that is a still photograph or imagery captured from video. We employ algorithms that are motivated by findings in neuroscience, psychology, and cognitive science to construct a system that is novel in its modular and stepwise approach to the problems of attention and region extraction, its application of a flooding algorithm to break apart an image into smaller proto-objects based on feature density, and its ability to join smaller regions of similar features into larger proto-objects. This approach allows many complicated operations to be carried out by the system in a very short time, approaching real-time. A researcher can use this system as a robust front-end to a larger system that includes object recognition and scene understanding modules; it is engineered to function over a broad range of situations and can be applied to any scene with minimal tuning from the user.
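    The flooding step, breaking an image into proto-objects by growing regions of similar feature values, can be sketched with a simple flood fill. This is an illustrative reduction of the system described above, not its actual algorithm; the similarity tolerance is an assumed knob.

```python
from collections import deque

import numpy as np

def grow_proto_object(feature_map, seed, tol=0.2):
    """Flood fill from a seed pixel, joining 4-connected neighbors whose
    feature value stays within `tol` of the seed value. A simplified
    stand-in for feature-density flooding; `tol` is an assumed parameter."""
    h, w = feature_map.shape
    region = np.zeros((h, w), dtype=bool)
    seed_val = feature_map[seed]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w) or region[y, x]:
            continue
        if abs(feature_map[y, x] - seed_val) > tol:
            continue
        region[y, x] = True
        queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return region

# Toy feature map: a bright rectangular "proto-object" on a dark background.
fmap = np.zeros((20, 20))
fmap[5:12, 6:14] = 1.0
seed = np.unravel_index(np.argmax(fmap), fmap.shape)  # most salient pixel
region = grow_proto_object(fmap, seed)
print("proto-object size:", region.sum())  # 7 x 8 rectangle -> 56 pixels
```

    Joining smaller regions into larger proto-objects, as the system does, would amount to merging such regions when their feature statistics are close.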

  15. Late electrophysiological modulations of feature-based attention to object shapes.

    PubMed

    Stojanoski, Bobby Boge; Niemeier, Matthias

    2014-03-01

    Feature-based attention has been shown to aid object perception. Our previous ERP findings revealed temporally late feature-based modulation in response to objects relative to motion. The aim of the current study was to confirm the timing of feature-based influences on object perception while cueing within the feature dimension of shape. Participants were told to expect either "pillow" or "flower" objects embedded among random white and black lines. Participants more accurately reported the object's main color for validly cued compared to invalidly cued shapes. ERPs revealed modulation from 252-502 ms, from occipital to frontal electrodes. Our results are consistent with previous findings examining the time course for processing similar stimuli (illusory contours). Our results provide novel insights into how attending to features of higher complexity aids object perception, presumably via feed-forward and feedback mechanisms along the visual hierarchy. Copyright © 2014 Society for Psychophysiological Research.

  16. A gaze independent hybrid-BCI based on visual spatial attention

    NASA Astrophysics Data System (ADS)

    Egan, John M.; Loughnane, Gerard M.; Fletcher, Helen; Meade, Emma; Lalor, Edmund C.

    2017-08-01

    Objective. Brain-computer interfaces (BCI) use measures of brain activity to convey a user’s intent without the need for muscle movement. Hybrid designs, which use multiple measures of brain activity, have been shown to increase the accuracy of BCIs, including those based on EEG signals reflecting covert attention. Our study examined whether incorporating a measure of the P3 response improved the performance of a previously reported attention-based BCI design that incorporates measures of steady-state visual evoked potentials (SSVEP) and alpha band modulations. Approach. Subjects viewed stimuli consisting of two bilaterally located flashing white boxes on a black background. Streams of letters were presented sequentially within the boxes, in random order. Subjects were cued to attend to one of the boxes without moving their eyes, and they were tasked with counting the number of target-letters that appeared within. P3 components evoked by target appearance, SSVEPs evoked by the flashing boxes, and power in the alpha band are modulated by covert attention, and the modulations can be used to classify trials as left-attended or right-attended. Main Results. We showed that classification accuracy was improved by including a P3 feature along with the SSVEP and alpha features (the inclusion of a P3 feature led to a 9% increase in accuracy compared to the use of SSVEP and alpha features alone). We also showed that the design improves the robustness of BCI performance to individual subject differences. Significance. These results demonstrate that incorporating multiple neurophysiological indices of covert attention can improve performance in a gaze-independent BCI.
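    The benefit of concatenating several attention indices into one classifier can be illustrated with synthetic data and a nearest-class-mean rule. Nothing below reproduces the study's actual features or classifier; the class means, noise level, and decision rule are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_trials(n, attended_left):
    """Synthetic per-trial features: [SSVEP ratio, alpha lateralization, P3 amp].
    Class means and noise level are invented for illustration."""
    mean = np.array([0.6, 0.4, 1.0]) if attended_left else np.array([-0.6, -0.4, -1.0])
    return mean + rng.normal(scale=1.5, size=(n, 3))

X = np.vstack([make_trials(200, True), make_trials(200, False)])
y = np.array([1] * 200 + [0] * 200)

def nearest_mean_accuracy(X, y, cols):
    """Label each trial by the closer class mean, using feature columns `cols`."""
    Xs = X[:, cols]
    m1, m0 = Xs[y == 1].mean(axis=0), Xs[y == 0].mean(axis=0)
    pred = np.linalg.norm(Xs - m1, axis=1) < np.linalg.norm(Xs - m0, axis=1)
    return (pred == y.astype(bool)).mean()

acc_two = nearest_mean_accuracy(X, y, [0, 1])       # SSVEP + alpha only
acc_three = nearest_mean_accuracy(X, y, [0, 1, 2])  # adding the P3 feature
print(f"SSVEP+alpha: {acc_two:.2f}  with P3: {acc_three:.2f}")
```

    Any informative extra feature pushes the class means further apart in the concatenated feature space, which is the intuition behind the accuracy gain reported for the hybrid design.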

  17. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using the attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to real-time vision processing tasks: playing a video 'pong' game and, later, operating an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.

  18. Feature bindings are maintained in visual short-term memory without sustained focused attention.

    PubMed

    Delvenne, Jean-François; Cleeremans, Axel; Laloyaux, Cédric

    2010-01-01

    Does the maintenance of feature bindings in visual short-term memory (VSTM) require sustained focused attention? This issue was investigated in three experiments, in which memory for single features (i.e., colors or shapes) was compared with memory for feature bindings (i.e., the link between the color and shape of an object). Attention was manipulated during the memory retention interval with a retro-cue, which allows attention to be directed and focused on a subset of memory items. The retro-cue was presented 700 ms after the offset of the memory display and 700 ms before the onset of the test display. If the maintenance of feature bindings in memory, but not of individual features, requires sustained focused attention, the retro-cue should not affect memory performance. Contrary to this prediction, we found that both memory for feature bindings and memory for individual features were equally improved by the retro-cue. Therefore, this finding does not support the view that sustained focused attention is needed to properly maintain feature bindings in VSTM.

  19. Value-driven attentional capture in the auditory domain.

    PubMed

    Anderson, Brian A

    2016-01-01

    It is now well established that the visual attention system is shaped by reward learning. When visual features are associated with a reward outcome, they acquire high priority and can automatically capture visual attention. To date, evidence for value-driven attentional capture has been limited entirely to the visual system. In the present study, I demonstrate that previously reward-associated sounds also capture attention, interfering with the performance of a visual task. This finding suggests that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value-driven attention can bias cross-modal stimulus competition.

  20. A novel visual saliency analysis model based on dynamic multiple feature combination strategy

    NASA Astrophysics Data System (ADS)

    Lv, Jing; Ye, Qi; Lv, Wen; Zhang, Libao

    2017-06-01

    The human visual system can quickly focus on a small number of salient objects. This process is known as visual saliency analysis, and these salient objects are called the focus of attention (FOA). The visual saliency analysis mechanism can be used to extract the salient regions and analyze the saliency of objects in an image, which is time-saving and can avoid unnecessary costs of computing resources. In this paper, a novel visual saliency analysis model based on a dynamic multiple feature combination strategy is introduced. In the proposed model, we first generate multi-scale feature maps of intensity, color and orientation features using Gaussian pyramids and the center-surround difference. Then, we evaluate the contribution of all feature maps to the saliency map according to the area of salient regions and their average intensity, and attach different weights to different features according to their importance. Finally, we choose the largest salient region generated by the region growing method to perform the evaluation. Experimental results show that the proposed model can not only achieve higher accuracy in saliency map computation compared with other traditional saliency analysis models, but also extract salient regions with arbitrary shapes, which is of great value for image analysis and understanding.
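    The weighting strategy described above, scoring each feature map by the area and average intensity of its salient region, can be sketched as follows. The weighting formula and thresholds are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def feature_weight(fmap, thresh=0.5):
    """Weight a feature map by the intensity and compactness of its salient
    region: intense, small regions score high. Formula and threshold are
    assumptions for this sketch."""
    norm = (fmap - fmap.min()) / (np.ptp(fmap) + 1e-12)
    salient = norm > thresh
    area = salient.mean()                              # fraction of salient pixels
    intensity = norm[salient].mean() if salient.any() else 0.0
    return intensity * (1.0 - area)

def combine(feature_maps):
    """Weighted combination of feature maps into a single saliency map."""
    weights = np.array([feature_weight(f) for f in feature_maps])
    weights = weights / (weights.sum() + 1e-12)
    return sum(w * f for w, f in zip(weights, feature_maps))

# Toy maps: a compact bright blob (informative) vs. near-uniform noise.
rng = np.random.default_rng(2)
blob = np.zeros((16, 16))
blob[4:7, 4:7] = 1.0
noise = rng.random((16, 16))
sal = combine([blob, noise])
print("blob weighted above noise:", feature_weight(blob) > feature_weight(noise))
```

    The dynamic aspect of the paper's strategy is that these weights are recomputed per image, so whichever feature channel carries a compact, high-contrast region dominates that image's saliency map.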

  1. Dissociable Electroencephalograph Correlates of Visual Awareness and Feature-Based Attention

    PubMed Central

    Chen, Yifan; Wang, Xiaochun; Yu, Yanglan; Liu, Ying

    2017-01-01

    Background: The relationship between awareness and attention is complex and controversial. A growing body of literature has shown that the neural bases of consciousness and endogenous attention (voluntary attention) are independent. The important role of exogenous attention (reflexive attention) on conscious experience has been noted in several studies. However, exogenous attention can also modulate subliminal processing, suggesting independence between the two processes. The question of whether visual awareness and exogenous attention rely on independent mechanisms under certain circumstances remains unanswered. Methods: In the current study, electroencephalograph recordings were conducted using 64 channels from 16 subjects while they attempted to detect faint speed changes of colored rotating dots. Awareness and attention were manipulated throughout trials in order to test whether exogenous attention and visual awareness rely on independent mechanisms. Results: Neural activity related to consciousness was recorded in the following cue-locked time-windows (event-related potential, cluster-based permutation test): 0–50, 150–200, and 750–800 ms. With a more liberal threshold, the inferior occipital lobe was found to be the source of awareness-related activity in the 0–50 ms range. In the later 150–200 ms range, activity in the fusiform and post-central gyrus was related to awareness. Awareness-related activation in the later 750–800 ms range was more widely distributed. This awareness-related activation pattern was quite different from that of attention. Attention-related neural activity was emphasized in the 750–800 ms time window and the main source of attention-related activity was localized to the right angular gyrus. These results suggest that exogenous attention and visual consciousness correspond to different and relatively independent neural mechanisms and are distinct processes under certain conditions. PMID:29180950

  2. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  3. Color-selective attention need not be mediated by spatial attention.

    PubMed

    Andersen, Søren K; Müller, Matthias M; Hillyard, Steven A

    2009-06-08

    It is well-established that attention can select stimuli for preferential processing on the basis of non-spatial features such as color, orientation, or direction of motion. Evidence is mixed, however, as to whether feature-selective attention acts by increasing the signal strength of to-be-attended features irrespective of their spatial locations or whether it acts by guiding the spotlight of spatial attention to locations containing the relevant feature. To address this question, we designed a task in which feature-selective attention could not be mediated by spatial selection. Participants observed a display of intermingled dots of two colors, which rapidly and unpredictably changed positions, with the task of detecting brief intervals of reduced luminance of 20% of the dots of one or the other color. Both behavioral indices and electrophysiological measures of steady-state visual evoked potentials showed selectively enhanced processing of the attended-color items. The results demonstrate that feature-selective attention produces a sensory gain enhancement at early levels of the visual cortex that occurs without mediation by spatial attention.

  4. Modulation of neuronal responses during covert search for visual feature conjunctions

    PubMed Central

    Buracas, Giedrius T.; Albright, Thomas D.

    2009-01-01

    While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in the RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for the motion cue but also for the color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions. PMID:19805385

  5. Modulation of neuronal responses during covert search for visual feature conjunctions.

    PubMed

    Buracas, Giedrius T; Albright, Thomas D

    2009-09-29

    While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in the RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for the motion cue but also for the color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions.

  6. The effects of alcohol intoxication on attention and memory for visual scenes.

    PubMed

    Harvey, Alistair J; Kneller, Wendy; Campbell, Alison C

    2013-01-01

    This study tests the claim that alcohol intoxication narrows the focus of visual attention on to the more salient features of a visual scene. A group of alcohol intoxicated and sober participants had their eye movements recorded as they encoded a photographic image featuring a central event of either high or low salience. All participants then recalled the details of the image the following day when sober. We sought to determine whether the alcohol group would pay less attention to the peripheral features of the encoded scene than their sober counterparts, whether this effect of attentional narrowing was stronger for the high-salience event than for the low-salience event, and whether it would lead to a corresponding deficit in peripheral recall. Alcohol was found to narrow the focus of foveal attention to the central features of both images but did not facilitate recall from this region. It also reduced the overall amount of information accurately recalled from each scene. These findings demonstrate that the concept of alcohol myopia originally posited to explain the social consequences of intoxication (Steele & Josephs, 1990) may be extended to explain the relative neglect of peripheral information during the processing of visual scenes.

  7. Visual attention to features by associative learning.

    PubMed

    Gozli, Davood G; Moskowitz, Joshua B; Pratt, Jay

    2014-11-01

    Expecting a particular stimulus can facilitate processing of that stimulus over others, but what is the fate of other stimuli that are known to co-occur with the expected stimulus? This study examined the impact of learned association on feature-based attention. The findings show that the effectiveness of an uninformative color transient in orienting attention can be changed by learned associations between colors and the expected target shape. In an initial acquisition phase, participants learned two distinct sequences of stimulus-response-outcome, where stimuli were defined by shape ('S' vs. 'H'), responses were localized key-presses (left vs. right), and outcomes were colors (red vs. green). Next, in a test phase, while expecting a target shape (80% probable), participants showed reliable attentional orienting to the color transient associated with the target shape, and showed no attentional orienting to the color associated with the alternative target shape. This bias seemed to be driven by the learned association between shapes and colors, and not modulated by the response. In addition, the bias seemed to depend on observing target-color conjunctions, since encountering the two features disjunctively (without spatiotemporal overlap) did not replicate the findings. We conclude that associative learning - likely mediated by mechanisms underlying visual object representation - can extend the impact of goal-driven attention to features associated with a target stimulus. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Spatial gradient for unique-feature detection in patients with unilateral neglect: evidence from auditory and visual search.

    PubMed

    Eramudugolla, Ranmalee; Mattingley, Jason B

    2008-01-01

    Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distractor sounds, regardless of whether these sounds are spatially separated. Spatial bias in attention associated with neglect should not affect auditory search based on spectro-temporal features of a sound target. We report that a right brain damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as an auditory search task in which the target was a frequency modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.

  9. The Attentional Field Revealed by Single-Voxel Modeling of fMRI Time Courses

    PubMed Central

    DeYoe, Edgar A.

    2015-01-01

    The spatial topography of visual attention is a distinguishing and critical feature of many theoretical models of visuospatial attention. Previous fMRI-based measurements of the topography of attention have typically been too crude to adequately test the predictions of different competing models. This study demonstrates a new technique to make detailed measurements of the topography of visuospatial attention from single-voxel, fMRI time courses. Briefly, this technique involves first estimating a voxel's population receptive field (pRF) and then “drifting” attention through the pRF such that the modulation of the voxel's fMRI time course reflects the spatial topography of attention. The topography of the attentional field (AF) is then estimated using a time-course modeling procedure. Notably, we are able to make these measurements in many visual areas including smaller, higher order areas, thus enabling a more comprehensive comparison of attentional mechanisms throughout the full hierarchy of human visual cortex. Using this technique, we show that the AF scales with eccentricity and varies across visual areas. We also show that voxels in multiple visual areas exhibit suppressive attentional effects that are well modeled by an AF having an enhancing Gaussian center with a suppressive surround. These findings provide extensive, quantitative neurophysiological data for use in modeling the psychological effects of visuospatial attention. PMID:25810532
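
The attentional field described above, an enhancing Gaussian center with a suppressive surround, can be sketched as a difference of Gaussians. A minimal illustration; all parameter values are hypothetical, not the fitted values from the study:

```python
import numpy as np

def attentional_field(x, center=0.0, a_c=1.0, s_c=1.0, a_s=0.4, s_s=3.0):
    """Difference-of-Gaussians attentional field: an enhancing Gaussian
    center minus a broader, weaker suppressive surround.
    All parameter values are hypothetical."""
    d2 = (np.asarray(x, dtype=float) - center) ** 2
    return a_c * np.exp(-d2 / (2 * s_c**2)) - a_s * np.exp(-d2 / (2 * s_s**2))

ecc = np.linspace(-10, 10, 201)   # visual-field positions (deg)
af = attentional_field(ecc)       # peaks at the attended location,
                                  # dips below zero (suppression) in the surround
```

Because the surround Gaussian is broader but weaker, the field is maximally enhancing at the attended location and mildly suppressive at nearby unattended locations, matching the center-surround profile the voxel modeling recovered.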

  10. Preparatory attention in visual cortex.

    PubMed

    Battistoni, Elisa; Stein, Timo; Peelen, Marius V

    2017-05-01

    Top-down attention is the mechanism that allows us to selectively process goal-relevant aspects of a scene while ignoring irrelevant aspects. A large body of research has characterized the effects of attention on neural activity evoked by a visual stimulus. However, attention also includes a preparatory phase before stimulus onset in which the attended dimension is internally represented. Here, we review neurophysiological, functional magnetic resonance imaging, magnetoencephalography, electroencephalography, and transcranial magnetic stimulation (TMS) studies investigating the neural basis of preparatory attention, both when attention is directed to a location in space and when it is directed to nonspatial stimulus attributes (content-based attention) ranging from low-level features to object categories. Results show that both spatial and content-based attention lead to increased baseline activity in neural populations that selectively code for the attended attribute. TMS studies provide evidence that this preparatory activity is causally related to subsequent attentional selection and behavioral performance. Attention thus acts by preactivating selective neurons in the visual cortex before stimulus onset. This appears to be a general mechanism that can operate on multiple levels of representation. We discuss the functional relevance of this mechanism, its limitations, and its relation to working memory, imagery, and expectation. We conclude by outlining open questions and future directions. © 2017 New York Academy of Sciences.

  11. Top-down influences on visual attention during listening are modulated by observer sex.

    PubMed

    Shen, John; Itti, Laurent

    2012-07-15

    In conversation, women have a small advantage in decoding non-verbal communication compared to men. In light of these findings, we sought to determine whether sex differences also existed in visual attention during a related listening task, and if so, if the differences existed among attention to high-level aspects of the scene or to conspicuous visual features. Using eye-tracking and computational techniques, we present direct evidence that men and women orient attention differently during conversational listening. We tracked the eyes of 15 men and 19 women who watched and listened to 84 clips featuring 12 different speakers in various outdoor settings. At the fixation following each saccadic eye movement, we analyzed the type of object that was fixated. Men gazed more often at the mouth and women at the eyes of the speaker. Women more often exhibited "distracted" saccades directed away from the speaker and towards a background scene element. Examining the multi-scale center-surround variation in low-level visual features (static: color, intensity, orientation, and dynamic: motion energy), we found that men consistently selected regions which expressed more variation in dynamic features, which can be attributed to a male preference for motion and a female preference for areas that may contain nonverbal information about the speaker. In sum, significant differences were observed, which we speculate arise from different integration strategies of visual cues in selecting the final target of attention. Our findings have implications for studies of sex in nonverbal communication, as well as for more predictive models of visual attention. Published by Elsevier Ltd.

  12. Size matters: large objects capture attention in visual search.

    PubMed

    Proulx, Michael J

    2010-12-23

    Can objects or events ever capture one's attention in a purely stimulus-driven manner? A recent review of the literature set out the criteria required to find stimulus-driven attentional capture independent of goal-directed influences, and concluded that no published study has satisfied those criteria. Here, visual search experiments assessed whether an irrelevantly large object can capture attention. Capture of attention by this static visual feature was found. The results suggest that a large object can indeed capture attention in a stimulus-driven manner, independent of displaywide features of the task that might encourage a goal-directed bias for large items. It is concluded that these results are either consistent with the stimulus-driven criteria published previously or alternatively consistent with a flexible, goal-directed mechanism of saliency detection.

  13. Evidence for unlimited capacity processing of simple features in visual cortex

    PubMed Central

    White, Alex L.; Runeson, Erik; Palmer, John; Ernst, Zachary R.; Boynton, Geoffrey M.

    2017-01-01

    Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level–dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity. PMID:28654964

  14. Evidence for negative feature guidance in visual search is explained by spatial recoding.

    PubMed

    Beck, Valerie M; Hollingworth, Andrew

    2015-10-01

    Theories of attention and visual search explain how attention is guided toward objects with known target features. But can attention be directed away from objects with a feature known to be associated only with distractors? Most studies have found that the demand to maintain the to-be-avoided feature in visual working memory biases attention toward matching objects rather than away from them. In contrast, Arita, Carlisle, and Woodman (2012) claimed that attention can be configured to selectively avoid objects that match a cued distractor color, and they reported evidence that this type of negative cue generates search benefits. However, the colors of the search array items in Arita et al. (2012) were segregated by hemifield (e.g., blue items on the left, red on the right), which allowed for a strategy of translating the feature-cue information into a simple spatial template (e.g., avoid right, or attend left). In the present study, we replicated the negative cue benefit using the Arita et al. (2012) method (albeit within a subset of participants who reliably used the color cues to guide attention). Then, we eliminated the benefit by using search arrays that could not be grouped by hemifield. Our results suggest that feature-guided avoidance is implemented only indirectly, in this case by translating feature-cue information into a spatial template. (c) 2015 APA, all rights reserved.

  15. Guidance of attention by information held in working memory.

    PubMed

    Calleja, Marissa Ortiz; Rich, Anina N

    2013-05-01

    Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object's image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object's image. The same pattern of results held when the target was invariant (Exps. 2-3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.

  16. Emotion recognition abilities across stimulus modalities in schizophrenia and the role of visual attention.

    PubMed

    Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J

    2013-12-01

    Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual only and combined conditions, the time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio and visual only conditions but did not differ from controls on the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less to the mouth than healthy individuals in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.

  17. Coding of spatial attention priorities and object features in the macaque lateral intraparietal cortex.

    PubMed

    Levichkina, Ekaterina; Saalmann, Yuri B; Vidyasagar, Trichur R

    2017-03-01

    Primate posterior parietal cortex (PPC) is known to be involved in controlling spatial attention. Neurons in one part of the PPC, the lateral intraparietal area (LIP), show enhanced responses to objects at attended locations. Although many are selective for object features, such as the orientation of a visual stimulus, it is not clear how LIP circuits integrate feature-selective information when providing attentional feedback about behaviorally relevant locations to the visual cortex. We studied the relationship between object feature and spatial attention properties of LIP cells in two macaques by measuring the cells' orientation selectivity and the degree of attentional enhancement while performing a delayed match-to-sample task. Monkeys had to match both the location and orientation of two visual gratings presented separately in time. We found a wide range in orientation selectivity and degree of attentional enhancement among LIP neurons. However, cells with significant attentional enhancement had much less orientation selectivity in their response than cells which showed no significant modulation by attention. Additionally, orientation-selective cells showed working memory activity for their preferred orientation, whereas cells showing attentional enhancement also synchronized with local neuronal activity. These results are consistent with models of selective attention incorporating two stages, where an initial feature-selective process guides a second stage of focal spatial attention. We suggest that LIP contributes to both stages, where the first stage involves orientation-selective LIP cells that support working memory of the relevant feature, and the second stage involves attention-enhanced LIP cells that synchronize to provide feedback on spatial priorities. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.

  18. Region of interest extraction based on multiscale visual saliency analysis for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan

    2015-01-01

    Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually prior knowledge-based and depend on classification, segmentation, and a global searching solution, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which is similar to visual perception of the human eye. We first extract the intensity, orientation, and color features of the image using different methods: the visual attention mechanism is used to extract the intensity feature using a difference-of-Gaussians template; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method is proposed that addresses the different contributions of each feature map to calculate the weight of each feature image for combining them into the final saliency map. Qualitative and quantitative experimental results of the MVS model as compared with those of other models show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.
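
The combination step, normalizing each feature map and weighting it by its contribution before summing into a saliency map, can be sketched as follows. The paper's exact feature-competition rule is not given in the abstract; the max-minus-mean weighting below is an Itti-style stand-in, and the array shapes are illustrative:

```python
import numpy as np

def normalize(fmap):
    """Rescale a feature map to [0, 1]; a constant map becomes all zeros."""
    rng = fmap.max() - fmap.min()
    return (fmap - fmap.min()) / rng if rng > 0 else np.zeros_like(fmap)

def combine(feature_maps):
    """Weight each normalized map by how peaked it is (max minus mean, a
    stand-in for the paper's feature-competition step) and sum the
    weighted maps into one saliency map."""
    maps = [normalize(m) for m in feature_maps]
    weights = [m.max() - m.mean() for m in maps]
    total = sum(weights) or 1.0
    return normalize(sum(w * m for w, m in zip(weights, maps)) / total)

rng = np.random.default_rng(1)
intensity, orientation, color = rng.random((3, 32, 32))  # placeholder feature maps
saliency = combine([intensity, orientation, color])
```

Maps with a few strong peaks (likely ROIs) receive larger weights than maps with diffuse, uniform activation, which is the intuition behind feature competition.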

  19. Dynamic visual attention: motion direction versus motion magnitude

    NASA Astrophysics Data System (ADS)

    Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.

    2008-02-01

    Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity, and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing the advantages and drawbacks of each method as well as the preferred domain of application.
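
The two motion representations being compared, scalar speed magnitude versus the full velocity vector, can be illustrated on a dense optic-flow field (the arrays below are hypothetical, not the paper's model):

```python
import numpy as np

def motion_features(u, v):
    """From a flow field (u, v), compute the two motion representations
    compared in the paper: speed magnitude (direction discarded) and
    direction (needed for the speed-vector model)."""
    return np.hypot(u, v), np.arctan2(v, u)

# Two flow samples moving at the same speed in opposite directions:
u = np.array([3.0, -3.0])
v = np.array([4.0, -4.0])
magnitude, direction = motion_features(u, v)
# A magnitude-only model treats the two samples as identical;
# the vector model keeps their opposite directions distinct.
```

This is the crux of the comparison: a magnitude-based model cannot flag a region whose motion direction differs from its surroundings at equal speed, whereas a vector-based model can.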

  20. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    PubMed

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
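
The spectral quantification of frequency-tagged steady-state responses can be sketched with a synthetic signal: each stimulus "tags" the recording at its pulse rate, and the amplitude at that frequency indexes how strongly that stimulus is processed. Sampling rate, trial length, and response amplitudes below are assumptions for illustration:

```python
import numpy as np

fs = 512                       # sampling rate in Hz (assumed)
dur = 100                      # seconds; chosen so 3.14 and 3.63 Hz fall on FFT bins
t = np.arange(0, dur, 1 / fs)
f1, f2 = 3.14, 3.63            # the two pulse rates used in the study

rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * f1 * t)       # steady-state response to stimulus 1
       + 0.5 * np.sin(2 * np.pi * f2 * t)     # weaker response to stimulus 2
       + 0.2 * rng.standard_normal(t.size))   # measurement noise

amps = 2 * np.abs(np.fft.rfft(eeg)) / t.size  # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amplitude_at(f):
    """Amplitude at the FFT bin nearest to frequency f (Hz)."""
    return amps[np.argmin(np.abs(freqs - f))]
```

Comparing `amplitude_at(f1)` across attended/unattended and in-sync/out-of-sync conditions is the kind of contrast the study reports; here the two tagged amplitudes simply recover the simulated response strengths.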

  1. Is goal-directed attentional guidance just intertrial priming? A review.

    PubMed

    Lamy, Dominique F; Kristjánsson, Arni

    2013-07-01

    According to most models of selective visual attention, our goals at any given moment and saliency in the visual field determine attentional priority. But selection is not carried out in isolation--we typically track objects through space and time. This is not well captured within the distinction between goal-directed and saliency-based attentional guidance. Recent studies have shown that selection is strongly facilitated when the characteristics of the objects to be attended and of those to be ignored remain constant between consecutive selections. These studies have generated the proposal that goal-directed or top-down effects are best understood as intertrial priming effects. Here, we provide a detailed overview and critical appraisal of the arguments, experimental strategies, and findings that have been used to promote this idea, along with a review of studies providing potential counterarguments. We divide this review according to different types of attentional control settings that observers are thought to adopt during visual search: feature-based settings, dimension-based settings, and singleton detection mode. We conclude that priming accounts for considerable portions of effects attributed to top-down guidance, but that top-down guidance can be independent of intertrial priming.

  2. A neural theory of visual attention and short-term memory (NTVA).

    PubMed

    Bundesen, Claus; Habekost, Thomas; Kyllingsbæk, Søren

    2011-05-01

    The neural theory of visual attention and short-term memory (NTVA) proposed by Bundesen, Habekost, and Kyllingsbæk (2005) is reviewed. In NTVA, filtering (selection of objects) changes the number of cortical neurons in which an object is represented so that this number increases with the behavioural importance of the object. Another mechanism of selection, pigeonholing (selection of features), scales the level of activation in neurons coding for a particular feature. By these mechanisms, behaviourally important objects and features are likely to win the competition to become encoded into visual short-term memory (VSTM). The VSTM system is conceived as a feedback mechanism that sustains activity in the neurons that have won the attentional competition. NTVA accounts both for a wide range of attentional effects in human performance (reaction times and error rates) and a wide range of effects observed in firing rates of single cells in the primate visual system. Copyright © 2010 Elsevier Ltd. All rights reserved.
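
The rate equation of TVA, which NTVA inherits, makes the two selection mechanisms concrete: filtering acts through the attentional weights w (in NTVA, how many neurons represent an object), and pigeonholing acts through the perceptual bias beta. A minimal sketch with hypothetical parameter values:

```python
def v(eta_xi, beta_i, w_x, w_total):
    """TVA rate equation: v(x, i) = eta(x, i) * beta_i * (w_x / w_total).
    eta(x, i): sensory evidence that object x has feature i;
    beta_i: pigeonholing bias for categorizing feature i;
    w_x / w_total: filtering, object x's share of attentional weight."""
    return eta_xi * beta_i * w_x / w_total

# Hypothetical values: filtering gives the target twice the weight of the
# distractor (in NTVA terms, twice as many neurons represent it).
w = {"target": 2.0, "distractor": 1.0}
rate_target = v(eta_xi=1.0, beta_i=0.8, w_x=w["target"], w_total=sum(w.values()))
rate_distractor = v(eta_xi=1.0, beta_i=0.8, w_x=w["distractor"], w_total=sum(w.values()))
# The target races toward VSTM at twice the distractor's rate.
```

Because encoding into VSTM is a race among these rates, behaviourally important objects (large w) and task-relevant features (large beta) are the likely winners, as the abstract describes.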

  3. The theory-based influence of map features on risk beliefs: self-reports of what is seen and understood for maps depicting an environmental health hazard.

    PubMed

    Severtson, Dolores J; Vatovec, Christine

    2012-08-01

    Theory-based research is needed to understand how maps of environmental health risk information influence risk beliefs and protective behavior. Using theoretical concepts from multiple fields of study including visual cognition, semiotics, health behavior, and learning and memory supports a comprehensive assessment of this influence. The authors report results from 13 cognitive interviews that provide theory-based insights into how visual features influenced what participants saw and the meaning of what they saw as they viewed 3 formats of water test results for private wells (choropleth map, dot map, and a table). The unit of perception, color, proximity to hazards, geographic distribution, and visual salience had substantial influences on what participants saw and their resulting risk beliefs. These influences are explained by theoretical factors that shape what is seen, properties of features that shape cognition (preattentive, symbolic, visual salience), information processing (top-down and bottom-up), and the strength of concrete compared with abstract information. Personal relevance guided top-down attention to proximal and larger hazards that shaped stronger risk beliefs. Meaning was more local for small perceptual units and global for large units. Three aspects of color were important: preattentive "incremental risk" meaning of sequential shading, symbolic safety meaning of stoplight colors, and visual salience that drew attention. The lack of imagery, geographic information, and color diminished interest in table information. Numeracy and prior beliefs influenced comprehension for some participants. Results guided the creation of an integrated conceptual framework for application to future studies. Ethics should guide the selection of map features that support appropriate communication goals.

  4. Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition

    PubMed Central

    Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao

    2017-01-01

    Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Second, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize speeded-up robust features (SURF) to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling variation, etc. Third, based on the manifold perception characteristic of HVS, isometric mapping, a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, support vector machine, is utilized to realize fault diagnosis. Verification data were collected from the Case Western Reserve University Bearing Data Center, and the experimental results indicate that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
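
The first step of the pipeline, turning a 1-D vibration signal into a recurrence plot, can be sketched as follows. The embedding parameters and threshold are assumptions, since the abstract does not specify them:

```python
import numpy as np

def recurrence_plot(signal, eps=None):
    """Binary recurrence plot of a 1-D signal: RP[i, j] = 1 when
    |x_i - x_j| <= eps. No embedding dimension or delay is applied here,
    and the threshold heuristic is an assumption."""
    x = np.asarray(signal, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise sample distances
    if eps is None:
        eps = 0.1 * dist.max()               # heuristic threshold (assumed)
    return (dist <= eps).astype(np.uint8)

vibration = np.sin(np.linspace(0, 4 * np.pi, 64))  # stand-in vibration signal
rp = recurrence_plot(vibration)                    # 64x64 image fed to SURF next
```

The resulting binary image is symmetric with an all-ones main diagonal; in the paper's pipeline it would then pass to SURF feature extraction, Isomap, and an SVM classifier.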

  5. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention

    PubMed Central

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-01

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193

  6. A bilateral advantage in controlling access to visual short-term memory.

    PubMed

    Holt, Jessica L; Delvenne, Jean-François

    2014-01-01

    Recent research on visual short-term memory (VSTM) has revealed the existence of a bilateral field advantage (BFA, i.e., better memory when the items are distributed across the two visual fields than when they are presented within a single hemifield) for spatial location and bar orientation, but not for color (Delvenne, 2005; Umemoto, Drew, Ester, & Awh, 2010). Here, we investigated whether a BFA in VSTM is constrained by attentional selective processes. It has indeed been previously suggested that the BFA may be a general feature of selective attention (Alvarez & Cavanagh, 2005; Delvenne, 2005). Therefore, the present study examined whether VSTM for color benefits from bilateral presentation if attentional selective processes are particularly engaged. Participants completed a color change detection task whereby target stimuli were presented either across both hemifields or within a single hemifield. In order to engage attentional selective processes, some trials contained irrelevant stimuli that needed to be ignored. Targets were selected based on spatial locations (Experiment 1) or on a salient feature (Experiment 2). In both cases, the results revealed a BFA only when irrelevant stimuli were presented among the targets. Overall, the findings strongly suggest that attentional selective processes at encoding can constrain whether a BFA is observed in VSTM.

  7. Influence of semantic consistency and perceptual features on visual attention during scene viewing in toddlers.

    PubMed

    Helo, Andrea; van Ommen, Sandrien; Pannasch, Sebastian; Danteny-Dordoigne, Lucile; Rämä, Pia

    2017-11-01

    Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter, and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while they explored everyday scenes that contained either an inconsistent (e.g., soap on a breakfast table) or a consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance and clutter of the scene affected looking times in the toddler group during the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, while toddlers with higher vocabulary skills looked equally long at both consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Visual Attention and Autistic Behavior in Infants with Fragile X Syndrome

    ERIC Educational Resources Information Center

    Roberts, Jane E.; Hatton, Deborah D.; Long, Anna C. J.; Anello, Vittoria; Colombo, John

    2012-01-01

    Aberrant attention is a core feature of fragile X syndrome (FXS), however, little is known regarding the developmental trajectory and underlying physiological processes of attention deficits in FXS. Atypical visual attention is an early emerging and robust indicator of autism in idiopathic (non-FXS) autism. Using a biobehavioral approach with gaze…

  9. Threat captures attention but does not affect learning of contextual regularities.

    PubMed

    Yamaguchi, Motonori; Harwood, Sarah L

    2017-04-01

    Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.

  10. No Effect of Featural Attention on Body Size Aftereffects

    PubMed Central

    Stephen, Ian D.; Bickersteth, Chloe; Mond, Jonathan; Stevenson, Richard J.; Brooks, Kevin R.

    2016-01-01

    Prolonged exposure to images of narrow bodies has been shown to induce a perceptual aftereffect, such that observers’ point of subjective normality (PSN) for bodies shifts toward narrower bodies. The converse effect is shown for adaptation to wide bodies. In low-level stimuli, object attention (attention directed to the object) and spatial attention (attention directed to the location of the object) have been shown to increase the magnitude of visual aftereffects, while object-based attention enhances the adaptation effect in faces. It is not known whether featural attention (attention directed to a specific aspect of the object) affects the magnitude of adaptation effects in body stimuli. Here, we manipulate the attention of Caucasian observers to different featural information in body images, by asking them to rate the fatness or sex typicality of male and female bodies manipulated to appear fatter or thinner than average. PSNs for body fatness were taken at baseline and after adaptation, and a change in PSN (ΔPSN) was calculated. A body size adaptation effect was found, with observers who viewed fat bodies showing an increased PSN, and those exposed to thin bodies showing a reduced PSN. However, manipulations of featural attention to body fatness or sex typicality produced equivalent results, suggesting that featural attention may not affect the strength of the body size aftereffect. PMID:27597835

  11. No Effect of Featural Attention on Body Size Aftereffects.

    PubMed

    Stephen, Ian D; Bickersteth, Chloe; Mond, Jonathan; Stevenson, Richard J; Brooks, Kevin R

    2016-01-01

    Prolonged exposure to images of narrow bodies has been shown to induce a perceptual aftereffect, such that observers' point of subjective normality (PSN) for bodies shifts toward narrower bodies. The converse effect is shown for adaptation to wide bodies. In low-level stimuli, object attention (attention directed to the object) and spatial attention (attention directed to the location of the object) have been shown to increase the magnitude of visual aftereffects, while object-based attention enhances the adaptation effect in faces. It is not known whether featural attention (attention directed to a specific aspect of the object) affects the magnitude of adaptation effects in body stimuli. Here, we manipulate the attention of Caucasian observers to different featural information in body images, by asking them to rate the fatness or sex typicality of male and female bodies manipulated to appear fatter or thinner than average. PSNs for body fatness were taken at baseline and after adaptation, and a change in PSN (ΔPSN) was calculated. A body size adaptation effect was found, with observers who viewed fat bodies showing an increased PSN, and those exposed to thin bodies showing a reduced PSN. However, manipulations of featural attention to body fatness or sex typicality produced equivalent results, suggesting that featural attention may not affect the strength of the body size aftereffect.

  12. Identifying a "default" visual search mode with operant conditioning.

    PubMed

    Kawahara, Jun-ichiro

    2010-09-01

    The presence of a singleton in a task-irrelevant domain can impair visual search. This impairment, known as attentional capture, depends on the attentional set of participants. When narrowly searching for a specific feature (the feature search mode), only matching stimuli capture attention. When searching broadly (the singleton detection mode), any oddball captures attention. The present study examined which strategy represents the "default" mode using an operant conditioning approach in which participants were trained, in the absence of explicit instructions, to search for a target in an ambiguous context in which one of the two modes was available. The results revealed that participants behaviorally adopted singleton detection as the default mode but reported using the feature search mode. Conscious strategies did not eliminate capture. These results challenge the view that a conscious set always modulates capture, suggesting that the visual system tends to rely on stimulus salience to deploy attention.

  13. Explaining seeing? Disentangling qualia from perceptual organization.

    PubMed

    Ibáñez, Agustin; Bekinschtein, Tristan

    2010-09-01

    Visual perception and integration seem to play an essential role in our conscious phenomenology. Relatively local neural processing of a reentrant nature may explain several visual integration processes (feature binding, figure-ground segregation, object recognition, inference, competition), even without attention or cognitive control. Given the above, should the neural signatures of visual integration (via reentrant processing) be non-reportable phenomenological qualia? We argue that qualia are not required to understand this perceptual organization.

  14. Enhanced HMAX model with feedforward feature learning for multiclass categorization.

    PubMed

    Li, Yinlin; Wu, Wei; Zhang, Bo; Li, Fengfu

    2015-01-01

    In recent years, interdisciplinary research between neuroscience and computer vision has promoted development in both fields. Many biologically inspired visual models have been proposed, and among them, the Hierarchical Max-pooling model (HMAX) is a feedforward model mimicking the structures and functions of V1 through the posterior inferotemporal (PIT) layer of the primate visual cortex, which can generate a series of position- and scale-invariant features. However, it could be improved with attention modulation and memory processing, which are two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we mimic the first 100-150 ms of visual cognition to enhance the HMAX model, focusing mainly on the unsupervised feedforward feature-learning process. The main modifications are as follows: (1) To mimic the attention modulation mechanism of the V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which can support the initial feature extraction for memory processing; (2) To mimic the learning, clustering, and short-term to long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method is used to learn clusters with multiscale middle-level patches, which are taken as long-term memory; (3) Inspired by the multiple feature encoding mode of the primate visual cortex, information including color, orientation, and spatial position is encoded in different layers of the HMAX model progressively. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted, and the results on Caltech101 show that the enhanced model with a smaller memory size exhibits higher accuracy than the original HMAX model, and can also achieve better accuracy than other unsupervised feature learning methods in the multiclass categorization task.
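
    The HMAX front end referred to above is conventionally built from an S1 stage of oriented Gabor filters followed by a C1 stage of local max pooling, which yields the position-tolerant responses the record describes. A minimal sketch of those two stages is below; the filter and pooling parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor(size, theta, lam=8.0, sigma=3.5, gamma=0.5):
    # Oriented Gabor filter: the standard model of HMAX S1 simple cells
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()  # zero-mean so flat regions give no response

def s1_c1(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4), size=11, pool=8):
    # S1: convolve with oriented Gabors; C1: local max pooling over
    # neighborhoods, giving tolerance to small shifts in position
    windows = sliding_window_view(image, (size, size))  # valid-mode patches
    maps = []
    for th in thetas:
        s1 = np.abs(np.einsum('ijkl,kl->ij', windows, gabor(size, th)))
        h, w = s1.shape
        s1 = s1[: h - h % pool, : w - w % pool]
        c1 = s1.reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
        maps.append(c1)
    return np.stack(maps)  # one C1 map per orientation

c1 = s1_c1(np.random.default_rng(0).random((64, 64)))
print(c1.shape)  # (4, 6, 6)
```

    The paper's enhancements (a bottom-up saliency map at S1, iterative clustering of mid-level patches, and progressive color/orientation/position encoding) would sit on top of a front end like this.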

  15. Differentiating Emotional Processing and Attention in Psychopathy with Functional Neuroimaging

    PubMed Central

    Anderson, Nathaniel E.; Steele, Vaughn R.; Maurer, J. Michael; Rao, Vikram; Koenigs, Michael R.; Decety, Jean; Kosson, David; Calhoun, Vince; Kiehl, Kent A.

    2017-01-01

    Psychopathic individuals are often characterized by emotional processing deficits, and recent research has examined the specific contexts and cognitive mechanisms that underlie these abnormalities. Some evidence suggests that abnormal features of attention are fundamental to psychopaths’ emotional deficits, but few studies have demonstrated the neural underpinnings responsible for such effects. Here, we use functional neuroimaging to examine attention-emotion interactions among incarcerated individuals (n=120) evaluated for psychopathic traits using the Hare Psychopathy Checklist – Revised (PCL-R). Using a task designed to manipulate attention to emotional features of visual stimuli, we demonstrate effects representing implicit emotional processing, explicit emotional processing, attention-facilitated emotional processing, and vigilance for emotional content. Results confirm the importance of considering mechanisms of attention when evaluating emotional processing differences related to psychopathic traits. The affective-interpersonal features of psychopathy (PCL-R Factor 1) were associated with relatively lower emotion-dependent augmentation of activity in visual processing areas during implicit emotional processing while antisocial-lifestyle features (PCL-R Factor 2) were associated with elevated activity in the amygdala and related salience-network regions. During explicit emotional processing psychopathic traits were associated with upregulation in the medial prefrontal cortex, insula, and superior frontal regions. Isolating the impact of explicit attention to emotional content, only Factor 1 was related to upregulation of activity in the visual processing stream, which was accompanied by increased activity in the angular gyrus. These effects highlight some important mechanisms underlying abnormal features of attention and emotional processing that accompany psychopathic traits. PMID:28092055

  16. Differentiating emotional processing and attention in psychopathy with functional neuroimaging.

    PubMed

    Anderson, Nathaniel E; Steele, Vaughn R; Maurer, J Michael; Rao, Vikram; Koenigs, Michael R; Decety, Jean; Kosson, David S; Calhoun, Vince D; Kiehl, Kent A

    2017-06-01

    Individuals with psychopathy are often characterized by emotional processing deficits, and recent research has examined the specific contexts and cognitive mechanisms that underlie these abnormalities. Some evidence suggests that abnormal features of attention are fundamental to emotional deficits in persons with psychopathy, but few studies have demonstrated the neural underpinnings responsible for such effects. Here, we use functional neuroimaging to examine attention-emotion interactions among incarcerated individuals (n = 120) evaluated for psychopathic traits using the Hare Psychopathy Checklist-Revised (PCL-R). Using a task designed to manipulate attention to emotional features of visual stimuli, we demonstrate effects representing implicit emotional processing, explicit emotional processing, attention-facilitated emotional processing, and vigilance for emotional content. Results confirm the importance of considering mechanisms of attention when evaluating emotional processing differences related to psychopathic traits. The affective-interpersonal features of psychopathy (PCL-R Factor 1) were associated with relatively lower emotion-dependent augmentation of activity in visual processing areas during implicit emotional processing, while antisocial-lifestyle features (PCL-R Factor 2) were associated with elevated activity in the amygdala and related salience network regions. During explicit emotional processing, psychopathic traits were associated with upregulation in the medial prefrontal cortex, insula, and superior frontal regions. Isolating the impact of explicit attention to emotional content, only Factor 1 was related to upregulation of activity in the visual processing stream, which was accompanied by increased activity in the angular gyrus. These effects highlight some important mechanisms underlying abnormal features of attention and emotional processing that accompany psychopathic traits.

  17. Mining Videos for Features that Drive Attention

    DTIC Science & Technology

    2015-04-01

    Psychology & Neuroscience Graduate Program, University of Southern California, 3641 Watt Way, HNB 10, Los Angeles, CA 90089, USA, e-mail: itti@usc.edu … challenging question in neuroscience. Since the onset of visual experience, a human or animal begins to form a subjective percept which, depending on … been added based on neuroscience discoveries of mechanisms of vision in the brain as well as useful features based on computer vision. Figure 14.1 illus…

  18. Infants' Selective Attention to Reliable Visual Cues in the Presence of Salient Distractors

    ERIC Educational Resources Information Center

    Tummeltshammer, Kristen Swan; Mareschal, Denis; Kirkham, Natasha Z.

    2014-01-01

    With many features competing for attention in their visual environment, infants must learn to deploy attention toward informative cues while ignoring distractions. Three eye tracking experiments were conducted to investigate whether 6- and 8-month-olds (total N = 102) would shift attention away from a distractor stimulus to learn a cue-reward…

  19. Event-related brain potentials and cognitive processes related to perceptual-motor information transmission.

    PubMed

    Kopp, Bruno; Wessel, Karl

    2010-05-01

    In the present study, event-related potentials (ERPs) were recorded to investigate cognitive processes related to the partial transmission of information from stimulus recognition to response preparation. Participants classified two-dimensional visual stimuli with dimensions size and form. One feature combination was designated as the go-target, whereas the other three feature combinations served as no-go distractors. Size discriminability was manipulated across three experimental conditions. N2c and P3a amplitudes were enhanced in response to those distractors that shared the feature from the faster dimension with the target. Moreover, N2c and P3a amplitudes showed a crossover effect: Size distractors evoked more pronounced ERPs under high size discriminability, but form distractors elicited enhanced ERPs under low size discriminability. These results suggest that partial perceptual-motor transmission of information is accompanied by acts of cognitive control and by shifts of attention between the sources of conflicting information. Selection negativity findings imply adaptive allocation of visual feature-based attention across the two stimulus dimensions.

  20. Visual search deficits in amblyopia.

    PubMed

    Tsirlin, Inna; Colpa, Linda; Goltz, Herbert C; Wong, Agnes M F

    2018-04-01

    Amblyopia is a neurodevelopmental disorder defined as a reduction in visual acuity that cannot be corrected by optical means. It has been associated with low-level deficits. However, research has demonstrated a link between amblyopia and visual attention deficits in counting, tracking, and identifying objects. Visual search is a useful tool for assessing visual attention but has not been well studied in amblyopia. Here, we assessed the extent of visual search deficits in amblyopia using feature and conjunction search tasks. We compared the performance of participants with amblyopia (n = 10) to those of controls (n = 12) on both feature and conjunction search tasks using Gabor patch stimuli, varying spatial bandwidth and orientation. To account for the low-level deficits inherent in amblyopia, we measured individual contrast and crowding thresholds and monitored eye movements. The display elements were then presented at suprathreshold levels to ensure that visibility was equalized across groups. There was no performance difference between groups on feature search, indicating that our experimental design controlled successfully for low-level amblyopia deficits. In contrast, during conjunction search, median reaction times and reaction time slopes were significantly larger in participants with amblyopia compared with controls. Amblyopia differentially affects performance on conjunction visual search, a more difficult task that requires feature binding and possibly the involvement of higher-level attention processes. Deficits in visual search may affect day-to-day functioning in people with amblyopia.
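
    The "reaction time slopes" reported above are conventionally the slope of reaction time against display set size (ms per item), with steeper slopes indicating less efficient search. A minimal illustration with made-up numbers (not the study's data):

```python
import numpy as np

# Hypothetical median RTs (ms) at three display set sizes; a linear fit
# summarizes search efficiency as milliseconds added per display item.
set_sizes = np.array([4, 8, 16])
median_rt = np.array([620.0, 700.0, 860.0])
slope, intercept = np.polyfit(set_sizes, median_rt, 1)
print(round(slope, 1))  # → 20.0 ms per item
```

    On this convention, a flat slope on feature search alongside a steep slope on conjunction search is the signature pattern the study reports for participants with amblyopia.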

  1. A Static Color Discontinuity Can Capture Spatial Attention when the Target Is an Abrupt-Onset Singleton

    ERIC Educational Resources Information Center

    Burnham, Bryan R.; Neely, James H.

    2008-01-01

    C. L. Folk, R. W. Remington, and J. C. Johnston's (1992) contingent involuntary orienting hypothesis states that a salient visual feature will involuntarily capture attention only when the observer's attentional set includes similar features. In four experiments, when the target's relevant feature was its being an abruptly onset singleton,…

  2. Memory-guided attention during active viewing of edited dynamic scenes.

    PubMed

    Valuch, Christian; König, Peter; Ansorge, Ulrich

    2017-01-01

    Films, TV shows, and other edited dynamic scenes contain many cuts, which are abrupt transitions from one video shot to the next. Cuts occur within or between scenes, and often join together visually and semantically related shots. Here, we tested to what degree memory for the visual features of the precut shot facilitates shifting attention to the postcut shot. We manipulated visual similarity across cuts, and measured how this affected covert attention (Experiment 1) and overt attention (Experiments 2 and 3). In Experiments 1 and 2, participants actively viewed a target movie that randomly switched locations with a second, distractor movie at the time of the cuts. In Experiments 1 and 2, participants were able to deploy attention more rapidly and accurately to the target movie's continuation when visual similarity was high than when it was low. Experiment 3 tested whether this could be explained by stimulus-driven (bottom-up) priming by feature similarity, using one clip at screen center that was followed by two alternative continuations to the left and right. Here, even the highest similarity across cuts did not capture attention. We conclude that following cuts of high visual similarity, memory-guided attention facilitates the deployment of attention, but this effect is (top-down) dependent on the viewer's active matching of scene content across cuts.

  3. Are objects the same as groups? ERP correlates of spatial attentional guidance by irrelevant feature similarity.

    PubMed

    Kasai, Tetsuko; Moriya, Hiroki; Hirano, Shingo

    2011-07-05

    It has been proposed that the most fundamental units of attentional selection are "objects" that are grouped according to Gestalt factors such as similarity or connectedness. Previous studies using event-related potentials (ERPs) have shown that object-based attention is associated with modulations of the visual-evoked N1 component, which reflects an early cortical mechanism that is shared with spatial attention. However, these studies only examined the case of perceptually continuous objects. The present study examined the case of separate objects that are grouped according to feature similarity (color, shape) by indexing lateralized potentials at posterior sites in a sustained-attention task that involved bilateral stimulus arrays. A behavioral object effect was found only for task-relevant shape similarity. Electrophysiological results indicated that attention was guided to the task-irrelevant side of the visual field due to achromatic-color similarity in N1 (155-205 ms post-stimulus) and early N2 (210-260 ms) and due to shape similarity in early N2 and late N2 (280-400 ms) latency ranges. These results are discussed in terms of selection mechanisms and object/group representations. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Visual feature integration and focused attention: response competition from multiple distractor features.

    PubMed

    Lavie, N

    1997-05-01

    Predictions from Treisman's feature integration theory of attention were tested in a variant of the response-competition paradigm. Subjects made choice responses to particular color-shape conjunctions (e.g., a purple cross vs. a green circle) while withholding their responses to the opposite conjunctions (i.e., a purple circle vs. a green cross). The results showed that compatibility effects were based on both distractor color and shape. For unattended distractors in preknown irrelevant positions, compatibility effects were equivalent for conjunctive distractors (e.g., a purple cross and a blue triangle) and for disjunctive distractors (e.g., a purple triangle and a blue cross). Manipulation of attention to the distractors' positions resulted in larger compatibility effects from conjoined features. These results accord with Treisman's claim that correct conjunction information is unavailable under conditions of inattention, and they provide new information on response-competition effects from multiple features.

  5. Dimension- and space-based intertrial effects in visual pop-out search: modulation by task demands for focal-attentional processing.

    PubMed

    Krummenacher, Joseph; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas

    2009-03-01

    Two experiments compared reaction times (RTs) in visual search for singleton feature targets defined, variably across trials, in either the color or the orientation dimension. Experiment 1 required observers to simply discern target presence versus absence (simple-detection task); Experiment 2 required them to respond to a detection-irrelevant form attribute of the target (compound-search task). Experiment 1 revealed a marked dimensional intertrial effect of 34 ms for a target defined in a changed versus a repeated dimension, and an intertrial target distance effect, with a 4-ms increase in RTs (per unit of distance) as the separation of the current relative to the preceding target increased. Conversely, in Experiment 2, the dimension change effect was markedly reduced (11 ms), while the intertrial target distance effect was markedly increased (11 ms per unit of distance). The results suggest that dimension change/repetition effects are modulated by the amount of attentional focusing required by the task, with space-based attention altering the integration of dimension-specific feature contrast signals at the level of the overall-saliency map.

  6. Integration trumps selection in object recognition.

    PubMed

    Saarela, Toni P; Landy, Michael S

    2015-03-30

    Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several "cues" (color, luminance, texture, etc.), and humans can integrate sensory cues to improve detection and recognition [1-3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue invariance by responding to a given shape independent of the visual cue defining it [5-8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10, 11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11, 12], imaging [13-16], and single-cell and neural population recordings [17, 18]. Besides single features, attention can select whole objects [19-21]. Objects are among the suggested "units" of attention because attention to a single feature of an object causes the selection of all of its features [19-21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. Copyright © 2015 Elsevier Ltd. All rights reserved.
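
    The "near optimal" integration claimed above is typically formalized as maximum-likelihood cue combination, in which each cue's estimate is weighted by its reliability (inverse variance). A minimal sketch of that textbook model follows; it illustrates the benchmark, not the authors' analysis code.

```python
import numpy as np

def fuse(means, variances):
    """Maximum-likelihood fusion of independent cue estimates.

    Each cue i contributes an estimate means[i] with noise variance
    variances[i]; the optimal fused estimate weights cues by 1/variance.
    """
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)          # reliability weights sum to 1
    fused_mean = float(np.dot(w, means))
    fused_var = 1.0 / float(np.sum(1.0 / v))  # never exceeds any single cue's
    return fused_mean, fused_var

m, v = fuse([1.0, 2.0], [1.0, 1.0])  # → (1.5, 0.5)
```

    The key property is that the fused variance (here 0.5) is lower than either single cue's variance (1.0), which is why integrating color, texture, and luminance improves recognition relative to any one dimension alone.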

  7. Integration trumps selection in object recognition

    PubMed Central

    Saarela, Toni P.; Landy, Michael S.

    2015-01-01

    Summary Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several “cues” (color, luminance, texture etc.), and humans can integrate sensory cues to improve detection and recognition [1–3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue-invariance by responding to a given shape independent of the visual cue defining it [5–8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10,11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11,12], imaging [13–16], and single-cell and neural population recordings [17,18]. Besides single features, attention can select whole objects [19–21]. Objects are among the suggested “units” of attention because attention to a single feature of an object causes the selection of all of its features [19–21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near-optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. PMID:25802154

  8. Saccade frequency response to visual cues during gait in Parkinson's disease: the selective role of attention.

    PubMed

    Stuart, Samuel; Lord, Sue; Galna, Brook; Rochester, Lynn

    2018-04-01

    Gait impairment is a core feature of Parkinson's disease (PD) with implications for falls risk. Visual cues improve gait in PD, but the underlying mechanisms are unclear. Evidence suggests that attention and vision play an important role; however, the relative contribution from each is unclear. Measurement of visual exploration (specifically saccade frequency) during gait allows for real-time measurement of attention and vision. Understanding how visual cues influence visual exploration may allow inferences of the underlying mechanisms to response which could help to develop effective therapeutics. This study aimed to examine saccade frequency during gait in response to a visual cue in PD and older adults and investigate the roles of attention and vision in visual cue response in PD. A mobile eye-tracker measured saccade frequency during gait in 55 people with PD and 32 age-matched controls. Participants walked in a straight line with and without a visual cue (50 cm transverse lines) presented under single task and dual-task (concurrent digit span recall). Saccade frequency was reduced when walking in PD compared to controls; however, visual cues ameliorated saccadic deficit. Visual cues significantly increased saccade frequency in both PD and controls under both single task and dual-task. Attention rather than visual function was central to saccade frequency and gait response to visual cues in PD. In conclusion, this study highlights the impact of visual cues on visual exploration when walking and the important role of attention in PD. Understanding these complex features will help inform intervention development. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  9. Altered spatial profile of distraction in people with schizophrenia.

    PubMed

    Leonard, Carly J; Robinson, Benjamin M; Hahn, Britta; Luck, Steven J; Gold, James M

    2017-11-01

Attention is critical for effective processing of incoming information and has long been identified as a potential area of dysfunction in people with schizophrenia (PSZ). In the realm of visual processing, both spatial attention and feature-based attention are involved in biasing selection toward task-relevant stimuli and avoiding distraction. Evidence from multiple paradigms has suggested that PSZ may hyperfocus and have a narrower "spotlight" of spatial attention. In contrast, feature-based attention seems largely preserved, with some suggestion of increased processing of stimuli sharing the target-defining feature. In the current study, we examined the spatial profile of feature-based distraction using a task in which participants searched for a particular color target and attempted to ignore distractors that varied in distance from the target location and either matched or mismatched the target color. PSZ differed from healthy controls in terms of interference from peripheral distractors that shared the target color and were presented 200 ms before a central target. Specifically, PSZ showed an amplified gradient of spatial attention, with increased distraction from near distractors and less interference from far distractors. Moreover, consistent with hyperfocusing, individual differences in this spatial profile were correlated with positive symptoms, such that those with greater positive symptoms showed less distraction by target-colored distractors near the task-relevant location. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. What top-down task sets do for us: an ERP study on the benefits of advance preparation in visual search.

    PubMed

    Eimer, Martin; Kiss, Monika; Nicholas, Susan

    2011-12-01

    When target-defining features are specified in advance, attentional target selection in visual search is controlled by preparatory top-down task sets. We used ERP measures to study voluntary target selection in the absence of such feature-specific task sets, and to compare it to selection that is guided by advance knowledge about target features. Visual search arrays contained two different color singleton digits, and participants had to select one of these as target and report its parity. Target color was either known in advance (fixed color task) or had to be selected anew on each trial (free color-choice task). ERP correlates of spatially selective attentional target selection (N2pc) and working memory processing (SPCN) demonstrated rapid target selection and efficient exclusion of color singleton distractors from focal attention and working memory in the fixed color task. In the free color-choice task, spatially selective processing also emerged rapidly, but selection efficiency was reduced, with nontarget singleton digits capturing attention and gaining access to working memory. Results demonstrate the benefits of top-down task sets: Feature-specific advance preparation accelerates target selection, rapidly resolves attentional competition, and prevents irrelevant events from attracting attention and entering working memory.
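The N2pc component referenced above is conventionally quantified as a contralateral-minus-ipsilateral difference wave over posterior electrodes, averaged within a latency window. The following is a minimal sketch of that computation, not the authors' analysis pipeline; the arrays, electrode arrangement, and latency window are hypothetical.

```python
import numpy as np

def n2pc_difference_wave(contra, ipsi, times, window=(0.18, 0.30)):
    """N2pc-style difference wave and its mean amplitude.

    contra, ipsi : arrays of shape (n_trials, n_samples) holding single-trial
        ERPs from posterior electrodes contralateral vs. ipsilateral to the
        attended item.
    times : array of shape (n_samples,), seconds relative to array onset.
    window : latency window over which the N2pc is typically measured.
    """
    diff = contra.mean(axis=0) - ipsi.mean(axis=0)   # difference wave
    mask = (times >= window[0]) & (times <= window[1])
    return diff, diff[mask].mean()                    # wave, mean amplitude

# Synthetic example: a small contralateral negativity in the N2pc window
times = np.linspace(-0.1, 0.5, 601)
ipsi = np.zeros((20, times.size))
contra = np.zeros((20, times.size))
contra[:, (times >= 0.18) & (times <= 0.30)] = -1.5   # microvolts

wave, amp = n2pc_difference_wave(contra, ipsi, times)  # amp is negative here
```

A negative mean amplitude in the window is taken as evidence of lateralized attentional selection; the SPCN is measured analogously over a later window.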

  11. Attention in the processing of complex visual displays: detecting features and their combinations.

    PubMed

    Farell, B

    1984-02-01

The distinction between operations in visual processing that are parallel and preattentive and those that are serial and attentional receives both theoretical and empirical support. According to Treisman's feature-integration theory, independent features are available preattentively, but attention is required to veridically combine features into objects. Certain evidence supporting this theory is consistent with a different interpretation, which was tested in four experiments. The first experiment compared the detection of features and feature combinations while eliminating a factor that confounded earlier comparisons. The resulting priority of access to combinatorial information suggests that features and nonlocal combinations of features are not connected solely by a bottom-up hierarchical convergence. Causes of the disparity between the results of Experiment 1 and the results of previous research were investigated in three subsequent experiments. The results showed that of the two confounded factors, it was the difference in the mapping of alternatives onto responses, not the differing attentional demands of features and objects, that underlay the results of the previous research. The present results are thus counterexamples to the feature-integration theory. Aspects of this theory are shown to be subsumed by more general principles, which are discussed in terms of attentional processes in the detection of features, objects, and stimulus alternatives.

  12. The Impact of Salient Advertisements on Reading and Attention on Web Pages

    ERIC Educational Resources Information Center

    Simola, Jaana; Kuisma, Jarmo; Oorni, Anssi; Uusitalo, Liisa; Hyona, Jukka

    2011-01-01

    Human vision is sensitive to salient features such as motion. Therefore, animation and onset of advertisements on Websites may attract visual attention and disrupt reading. We conducted three eye tracking experiments with authentic Web pages to assess whether (a) ads are efficiently ignored, (b) ads attract overt visual attention and disrupt…

  13. Perceptual Learning Induces Persistent Attentional Capture by Nonsalient Shapes.

    PubMed

    Qu, Zhe; Hillyard, Steven A; Ding, Yulong

    2017-02-01

    Visual attention can be attracted automatically by salient simple features, but whether and how nonsalient complex stimuli such as shapes may capture attention in humans remains unclear. Here, we present strong electrophysiological evidence that a nonsalient shape presented among similar shapes can provoke a robust and persistent capture of attention as a consequence of extensive training in visual search (VS) for that shape. Strikingly, this attentional capture that followed perceptual learning (PL) was evident even when the trained shape was task-irrelevant, was presented outside the focus of top-down spatial attention, and was undetected by the observer. Moreover, this attentional capture persisted for at least 3-5 months after training had been terminated. This involuntary capture of attention was indexed by electrophysiological recordings of the N2pc component of the event-related brain potential, which was localized to ventral extrastriate visual cortex, and was highly predictive of stimulus-specific improvement in VS ability following PL. These findings provide the first evidence that nonsalient shapes can capture visual attention automatically following PL and challenge the prominent view that detection of feature conjunctions requires top-down focal attention. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. Temporal Limitations in the Effective Binding of Attended Target Attributes in the Mutual Masking of Visual Objects

    ERIC Educational Resources Information Center

    Hommuk, Karita; Bachmann, Talis

    2009-01-01

    The problem of feature binding has been examined under conditions of distributed attention or with spatially dispersed stimuli. We studied binding by asking whether selective attention to a feature of a masked object enables perceptual access to the other features of that object using conditions in which spatial attention was directed at a single…

  15. Qualitative differences in the guidance of attention during single-color and multiple-color visual search: behavioral and electrophysiological evidence.

    PubMed

    Grubert, Anna; Eimer, Martin

    2013-10-01

    To find out whether attentional target selection can be effectively guided by top-down task sets for multiple colors, we measured behavioral and ERP markers of attentional target selection in an experiment where participants had to identify color-defined target digits that were accompanied by a single gray distractor object in the opposite visual field. In the One Color task, target color was constant. In the Two Color task, targets could have one of two equally likely colors. Color-guided target selection was less efficient during multiple-color relative to single-color search, and this was reflected by slower response times and delayed N2pc components. Nontarget-color items that were presented in half of all trials captured attention and gained access to working memory when participants searched for two colors, but were excluded from attentional processing in the One Color task. Results demonstrate qualitative differences in the guidance of attentional target selection between single-color and multiple-color visual search. They suggest that top-down attentional control can be applied much more effectively when it is based on a single feature-specific attentional template. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  16. Residual attention guidance in blindsight monkeys watching complex natural scenes.

    PubMed

    Yoshida, Masatoshi; Itti, Laurent; Berg, David J; Ikeda, Takuro; Kato, Rikako; Takaura, Kana; White, Brian J; Munoz, Douglas P; Isa, Tadashi

    2012-08-07

    Patients with damage to primary visual cortex (V1) demonstrate residual performance on laboratory visual tasks despite denial of conscious seeing (blindsight) [1]. After a period of recovery, which suggests a role for plasticity [2], visual sensitivity higher than chance is observed in humans and monkeys for simple luminance-defined stimuli, grating stimuli, moving gratings, and other stimuli [3-7]. Some residual cognitive processes including bottom-up attention and spatial memory have also been demonstrated [8-10]. To date, little is known about blindsight with natural stimuli and spontaneous visual behavior. In particular, is orienting attention toward salient stimuli during free viewing still possible? We used a computational saliency map model to analyze spontaneous eye movements of monkeys with blindsight from unilateral ablation of V1. Despite general deficits in gaze allocation, monkeys were significantly attracted to salient stimuli. The contribution of orientation features to salience was nearly abolished, whereas contributions of motion, intensity, and color features were preserved. Control experiments employing laboratory stimuli confirmed the free-viewing finding that lesioned monkeys retained color sensitivity. Our results show that attention guidance over complex natural scenes is preserved in the absence of V1, thereby directly challenging theories and models that crucially depend on V1 to compute the low-level visual features that guide attention. Copyright © 2012 Elsevier Ltd. All rights reserved.
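The channel-combination step of a saliency map model of the kind used above can be sketched as a weighted sum of normalized feature channels. This is a minimal illustration, not the authors' implementation (which uses full center-surround feature pyramids); the channel names and weights are hypothetical, and setting the orientation weight to zero loosely mimics the abolished orientation contribution reported for the V1-lesioned monkeys.

```python
import numpy as np

def combine_saliency(feature_maps, weights):
    """Linearly combine min-max-normalized feature channels into one map.

    feature_maps : dict mapping channel name -> 2D conspicuity map.
    weights : dict mapping channel name -> contribution weight (0 disables).
    """
    total = np.zeros_like(next(iter(feature_maps.values())), dtype=float)
    for name, fmap in feature_maps.items():
        rng = fmap.max() - fmap.min()
        norm = (fmap - fmap.min()) / rng if rng > 0 else fmap * 0.0
        total += weights.get(name, 0.0) * norm   # weighted channel sum
    return total

# Hypothetical conspicuity maps for four feature channels
maps = {c: np.random.rand(32, 32)
        for c in ("intensity", "color", "motion", "orientation")}
intact = combine_saliency(maps, {"intensity": 1, "color": 1,
                                 "motion": 1, "orientation": 1})
lesioned = combine_saliency(maps, {"intensity": 1, "color": 1,
                                   "motion": 1, "orientation": 0})
```

Correlating gaze positions with the resulting map, channel by channel, is what lets the study attribute residual guidance to motion, intensity, and color rather than orientation.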

  17. Stimulus competition mediates the joint effects of spatial and feature-based attention

    PubMed Central

    White, Alex L.; Rolfs, Martin; Carrasco, Marisa

    2015-01-01

    Distinct attentional mechanisms enhance the sensory processing of visual stimuli that appear at task-relevant locations and have task-relevant features. We used a combination of psychophysics and computational modeling to investigate how these two types of attention—spatial and feature based—interact to modulate sensitivity when combined in one task. Observers monitored overlapping groups of dots for a target change in color saturation, which they had to localize as being in the upper or lower visual hemifield. Pre-cues indicated the target's most likely location (left/right), color (red/green), or both location and color. We measured sensitivity (d′) for every combination of the location cue and the color cue, each of which could be valid, neutral, or invalid. When three competing saturation changes occurred simultaneously with the target change, there was a clear interaction: The spatial cueing effect was strongest for the cued color, and the color cueing effect was strongest at the cued location. In a second experiment, only the target dot group changed saturation, such that stimulus competition was low. The resulting cueing effects were statistically independent and additive: The color cueing effect was equally strong at attended and unattended locations. We account for these data with a computational model in which spatial and feature-based attention independently modulate the gain of sensory responses, consistent with measurements of cortical activity. Multiple responses then compete via divisive normalization. Sufficient competition creates interactions between the two cueing effects, although the attentional systems are themselves independent. This model helps reconcile seemingly disparate behavioral and physiological findings. PMID:26473316
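Sensitivity d′ of the kind measured above is standardly computed as the difference of z-transformed hit and false-alarm rates. A minimal sketch with hypothetical trial counts (not the authors' data), using a log-linear-style correction so that rates of 0 or 1 do not produce infinite z-scores:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a 0.5 count correction."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: cueing should raise d' relative to invalid cues
d_valid = d_prime(45, 5, 5, 45)     # both location and color cues valid
d_invalid = d_prime(30, 20, 15, 35) # both cues invalid
```

Comparing d′ across the cue-validity cells is what reveals whether the spatial and color cueing effects are additive or interact.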

  18. Direct and indirect effects of attention and visual function on gait impairment in Parkinson's disease: influence of task and turning.

    PubMed

    Stuart, Samuel; Galna, Brook; Delicato, Louise S; Lord, Sue; Rochester, Lynn

    2017-07-01

Gait impairment is a core feature of Parkinson's disease (PD) which has been linked to cognitive and visual deficits, but interactions between these features are poorly understood. Monitoring saccades allows investigation of real-time cognitive and visual processes and their impact on gait when walking. This study explored: (i) saccade frequency when walking under different attentional manipulations of turning and dual-task; and (ii) direct and indirect relationships between saccades, gait impairment, vision and attention. Saccade frequency (number of fast eye movements per second) was measured during gait in 60 PD and 40 age-matched control participants using a mobile eye-tracker. Saccade frequency was significantly reduced in PD compared to controls during all conditions. However, saccade frequency increased with a turn and decreased under dual-task for both groups. Poorer attention directly related to saccade frequency, visual function and gait impairment in PD, but not controls. Saccade frequency did not directly relate to gait in PD, but did in controls. Instead, saccade frequency and visual function deficits indirectly impacted gait impairment in PD, which was underpinned by their relationship with attention. In conclusion, our results suggest a vital role for attention, with direct and indirect influences on gait impairment in PD. Attention directly impacted saccade frequency, visual function and gait impairment in PD, with implications for falls. It also underpinned the indirect impact of visual and saccadic impairment on gait. Attention therefore represents a key therapeutic target that should be considered in future research. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  19. Common capacity-limited neural mechanisms of selective attention and spatial working memory encoding

    PubMed Central

    Fusser, Fabian; Linden, David E J; Rahm, Benjamin; Hampel, Harald; Haenschel, Corinna; Mayer, Jutta S

    2011-01-01

One characteristic feature of visual working memory (WM) is its limited capacity, and selective attention has been implicated as a limiting factor. A possible reason why attention constrains the number of items that can be encoded into WM is that the two processes share limited neural resources. Functional magnetic resonance imaging (fMRI) studies have indeed demonstrated commonalities between the neural substrates of WM and attention. Here we investigated whether such overlapping activations reflect interacting neural mechanisms that could result in capacity limitations. To independently manipulate the demands on attention and WM encoding within one single task, we combined visual search and delayed discrimination of spatial locations. Participants were presented with a search array and performed easy or difficult visual search in order to encode one, three or five positions of target items into WM. Our fMRI data revealed colocalised activation for attention-demanding visual search and WM encoding in distributed posterior and frontal regions. However, further analysis yielded two patterns of results. Activity in prefrontal regions increased additively with increased demands on WM and attention, indicating regional overlap without functional interaction. Conversely, the WM load-dependent activation in visual, parietal and premotor regions was severely reduced during high attentional demand. We interpret this interaction as indicating the sites of shared capacity-limited neural resources. Our findings point to differential contributions of prefrontal and posterior regions to the common neural mechanisms that support spatial WM encoding and attention, providing new imaging evidence for attention-based models of WM encoding. PMID:21781193

  20. Visual search and attention: an overview.

    PubMed

    Davis, Elizabeth T; Palmer, John

    2004-01-01

    This special feature issue is devoted to attention and visual search. Attention is a central topic in psychology and visual search is both a versatile paradigm for the study of visual attention and a topic of study in itself. Visual search depends on sensory, perceptual, and cognitive processes. As a result, the search paradigm has been used to investigate a diverse range of phenomena. Manipulating the search task can vary the demands on attention. In turn, attention modulates visual search by selecting and limiting the information available at various levels of processing. Focusing on the intersection of attention and search provides a relatively structured window into the wide world of attentional phenomena. In particular, the effects of divided attention are illustrated by the effects of set size (the number of stimuli in a display) and the effects of selective attention are illustrated by cueing subsets of stimuli within the display. These two phenomena provide the starting point for the articles in this special issue. The articles are organized into four general topics to help structure the issues of attention and search.

  1. The Associations between Visual Attention and Facial Expression Identification in Patients with Schizophrenia.

    PubMed

    Lin, I-Mei; Fan, Sheng-Yu; Huang, Tiao-Lai; Wu, Wan-Ting; Li, Shi-Ming

    2013-12-01

Visual search is an important attention process that precedes information processing. Visual search also mediates the relationship between cognitive function (attention) and social cognition (such as facial expression identification). However, the association between visual attention and social cognition in patients with schizophrenia remains unknown. The purposes of this study were to examine the differences in visual search performance and facial expression identification between patients with schizophrenia and normal controls, and to explore the relationship between visual search performance and facial expression identification in patients with schizophrenia. Fourteen patients with schizophrenia (mean age=46.36±6.74) and 15 normal controls (mean age=40.87±9.33) participated in this study. The visual search task, including feature search and conjunction search, and the Japanese and Caucasian Facial Expressions of Emotion were administered. Patients with schizophrenia had worse visual search performance in both feature search and conjunction search than normal controls, and also had worse facial expression identification, especially for surprise and sadness. In addition, there were negative associations between visual search performance and facial expression identification in patients with schizophrenia, especially for surprise and sadness. This phenomenon was not observed in normal controls. Patients with schizophrenia who had visual search deficits showed impaired facial expression identification. Improving visual search and facial expression identification abilities may improve their social function and interpersonal relationships.

  2. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  3. A formal theory of feature binding in object perception.

    PubMed

    Ashby, F G; Prinzmetal, W; Ivry, R; Maddox, W T

    1996-01-01

    Visual objects are perceived correctly only if their features are identified and then bound together. Illusory conjunctions result when feature identification is correct but an error occurs during feature binding. A new model is proposed that assumes feature binding errors occur because of uncertainty about the location of visual features. This model accounted for data from 2 new experiments better than a model derived from A. M. Treisman and H. Schmidt's (1982) feature integration theory. The traditional method for detecting the occurrence of true illusory conjunctions is shown to be fundamentally flawed. A reexamination of 2 previous studies provided new insights into the role of attention and location information in object perception and a reinterpretation of the deficits in patients who exhibit attentional disorders.

  4. The putative visual word form area is functionally connected to the dorsal attention network.

    PubMed

    Vogel, Alecia C; Miezin, Fran M; Petersen, Steven E; Schlaggar, Bradley L

    2012-03-01

    The putative visual word form area (pVWFA) is the most consistently activated region in single word reading studies (i.e., Vigneau et al. 2006), yet its function remains a matter of debate. The pVWFA may be predominantly used in reading or it could be a more general visual processor used in reading but also in other visual tasks. Here, resting-state functional connectivity magnetic resonance imaging (rs-fcMRI) is used to characterize the functional relationships of the pVWFA to help adjudicate between these possibilities. rs-fcMRI defines relationships based on correlations in slow fluctuations of blood oxygen level-dependent activity occurring at rest. In this study, rs-fcMRI correlations show little relationship between the pVWFA and reading-related regions but a strong relationship between the pVWFA and dorsal attention regions thought to be related to spatial and feature attention. The rs-fcMRI correlations between the pVWFA and regions of the dorsal attention network increase with age and reading skill, while the correlations between the pVWFA and reading-related regions do not. These results argue the pVWFA is not used predominantly in reading but is a more general visual processor used in other visual tasks, as well as reading.

  5. The Putative Visual Word Form Area Is Functionally Connected to the Dorsal Attention Network

    PubMed Central

    Miezin, Fran M.; Petersen, Steven E.; Schlaggar, Bradley L.

    2012-01-01

    The putative visual word form area (pVWFA) is the most consistently activated region in single word reading studies (i.e., Vigneau et al. 2006), yet its function remains a matter of debate. The pVWFA may be predominantly used in reading or it could be a more general visual processor used in reading but also in other visual tasks. Here, resting-state functional connectivity magnetic resonance imaging (rs-fcMRI) is used to characterize the functional relationships of the pVWFA to help adjudicate between these possibilities. rs-fcMRI defines relationships based on correlations in slow fluctuations of blood oxygen level–dependent activity occurring at rest. In this study, rs-fcMRI correlations show little relationship between the pVWFA and reading-related regions but a strong relationship between the pVWFA and dorsal attention regions thought to be related to spatial and feature attention. The rs-fcMRI correlations between the pVWFA and regions of the dorsal attention network increase with age and reading skill, while the correlations between the pVWFA and reading-related regions do not. These results argue the pVWFA is not used predominantly in reading but is a more general visual processor used in other visual tasks, as well as reading. PMID:21690259

  6. Visual attention and flexible normalization pools

    PubMed Central

    Schwartz, Odelia; Coen-Cagli, Ruben

    2013-01-01

    Attention to a spatial location or feature in a visual scene can modulate the responses of cortical neurons and affect perceptual biases in illusions. We add attention to a cortical model of spatial context based on a well-founded account of natural scene statistics. The cortical model amounts to a generalized form of divisive normalization, in which the surround is in the normalization pool of the center target only if they are considered statistically dependent. Here we propose that attention influences this computation by accentuating the neural unit activations at the attended location, and that the amount of attentional influence of the surround on the center thus depends on whether center and surround are deemed in the same normalization pool. The resulting form of model extends a recent divisive normalization model of attention (Reynolds & Heeger, 2009). We simulate cortical surround orientation experiments with attention and show that the flexible model is suitable for capturing additional data and makes nontrivial testable predictions. PMID:23345413
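The divisive normalization account of attention that this model extends can be reduced to a toy form: attention multiplies a unit's stimulus drive before the pooled population response divides it. The sketch below is an illustrative reduction in the spirit of Reynolds and Heeger (2009), not the authors' flexible-pool model; all values are hypothetical.

```python
import numpy as np

def normalized_response(drive, attn_gain, sigma=0.1):
    """Divisive normalization with multiplicative attentional gain.

    drive : array of stimulus drives for a small population of units.
    attn_gain : array of attention factors (>1 at the attended unit).
    sigma : semi-saturation constant preventing division by zero.
    """
    excitatory = drive * attn_gain          # attention scales the drive
    return excitatory / (excitatory.sum() + sigma)  # shared pool divides it

drive = np.array([1.0, 1.0])  # center and surround units, equal drive
attended = normalized_response(drive, np.array([2.0, 1.0]))  # attend center
neutral = normalized_response(drive, np.array([1.0, 1.0]))   # no attention
```

Because the attended unit's boosted drive also enters the normalization pool, attention enhances the attended response while suppressing the unattended one, which is the interaction the flexible-pool model makes contingent on statistical dependence between center and surround.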

  7. An impaired attentional dwell time after parietal and frontal lesions related to impaired selective attention not unilateral neglect.

    PubMed

    Correani, Alessia; Humphreys, Glyn W

    2011-07-01

The attentional blink, a measure of the temporal dynamics of visual processing, has been documented to be more pronounced following brain lesions that are associated with visual neglect. This suggests that, in addition to their spatial bias in attention, neglect patients may have a prolonged dwell time for attention. Here the attentional dwell time was examined in patients with damage focused on either posterior parietal or frontal cortices. In three experiments, we show that there is an abnormally pronounced attentional dwell time, which does not differ in patients with posterior parietal and with frontal lobe lesions, and this is associated with a measure of selective attention but not with measures of spatial bias in selection. These results held both when we attempted to match patients and controls for overall differences in performance and when a single, fixed stimulus exposure was used across participants. In Experiments 1 and 2, requiring report of colour-form conjunctions, there was evidence that the patients were also impaired at temporal binding, showing errors in feature combination across stimuli and in reporting in the correct temporal order. In Experiment 3, which required only the report of features but introduced task switching, similar results were obtained. The data suggest that damage to a frontoparietal network can compromise temporal selection of visual stimuli; however, this is not necessarily related to a deficit in hemispatial visual attention but rather to impaired target selection. We discuss the implications for understanding visual selection.

  8. Visual Cortex Inspired CNN Model for Feature Construction in Text Analysis

    PubMed Central

    Fu, Hongping; Niu, Zhendong; Zhang, Chunxia; Ma, Jing; Chen, Jie

    2016-01-01

Recently, biologically inspired models have increasingly been proposed to solve problems in text analysis. Convolutional neural networks (CNN) are hierarchical artificial neural networks comprising a variety of multilayer perceptrons. According to biological research, CNN can be improved by bringing in the attention modulation and memory processing of the primate visual cortex. In this paper, we employ these properties of the primate visual cortex to improve CNN and propose a biological-mechanism-driven feature-construction based answer recommendation method (BMFC-ARM), which is used to recommend the best answer for a given question in community question answering. BMFC-ARM is an improved CNN with four channels representing questions, answers, asker information and answerer information, respectively, and mainly contains two stages: biological-mechanism-driven feature construction (BMFC) and answer ranking. BMFC imitates the attention modulation property by introducing the asker and answerer information of given questions and the similarity between them, and imitates the memory processing property by bringing in user reputation information for answerers. The feature vector for answer ranking is then constructed by fusing the asker-answerer similarities, the answerer's reputation and the corresponding vectors of question, answer, asker, and answerer. Finally, a softmax is applied at the answer-ranking stage to obtain the best answers from the feature vector. The experimental results of answer recommendation on the Stackexchange dataset show that BMFC-ARM exhibits better performance. PMID:27471460

  9. Visual Cortex Inspired CNN Model for Feature Construction in Text Analysis.

    PubMed

    Fu, Hongping; Niu, Zhendong; Zhang, Chunxia; Ma, Jing; Chen, Jie

    2016-01-01

Recently, biologically inspired models have increasingly been proposed to solve problems in text analysis. Convolutional neural networks (CNN) are hierarchical artificial neural networks comprising a variety of multilayer perceptrons. According to biological research, CNN can be improved by bringing in the attention modulation and memory processing of the primate visual cortex. In this paper, we employ these properties of the primate visual cortex to improve CNN and propose a biological-mechanism-driven feature-construction based answer recommendation method (BMFC-ARM), which is used to recommend the best answer for a given question in community question answering. BMFC-ARM is an improved CNN with four channels representing questions, answers, asker information and answerer information, respectively, and mainly contains two stages: biological-mechanism-driven feature construction (BMFC) and answer ranking. BMFC imitates the attention modulation property by introducing the asker and answerer information of given questions and the similarity between them, and imitates the memory processing property by bringing in user reputation information for answerers. The feature vector for answer ranking is then constructed by fusing the asker-answerer similarities, the answerer's reputation and the corresponding vectors of question, answer, asker, and answerer. Finally, a softmax is applied at the answer-ranking stage to obtain the best answers from the feature vector. The experimental results of answer recommendation on the Stackexchange dataset show that BMFC-ARM exhibits better performance.

  10. The Visual Hemifield Asymmetry in the Spatial Blink during Singleton Search and Feature Search

    ERIC Educational Resources Information Center

    Burnham, Bryan R.; Rozell, Cassandra A.; Kasper, Alex; Bianco, Nicole E.; Delliturri, Antony

    2011-01-01

    The present study examined a visual field asymmetry in the contingent capture of attention that was previously observed by Du and Abrams (2010). In our first experiment, color singleton distractors that matched the color of a to-be-detected target produced a stronger capture of attention when they appeared in the left visual hemifield than in the…

  11. Seeing without knowing: task relevance dissociates between visual awareness and recognition.

    PubMed

    Eitam, Baruch; Shoval, Roy; Yeshurun, Yaffa

    2015-03-01

    We demonstrate that task relevance dissociates between visual awareness and knowledge activation to create a state of seeing without knowing-visual awareness of familiar stimuli without recognizing them. We rely on the fact that in order to experience a Kanizsa illusion, participants must be aware of its inducers. While people can indicate the orientation of the illusory rectangle with great ease (signifying that they have consciously experienced the illusion's inducers), almost 30% of them could not report the inducers' color. Thus, people can see, in the sense of phenomenally experiencing, but not know, in the sense of recognizing what the object is or activating appropriate knowledge about it. Experiment 2 tests whether relevance-based selection operates within objects and shows that, contrary to the pattern of results found with features of different objects in our previous studies and replicated in Experiment 1, selection does not occur when both relevant and irrelevant features belong to the same object. We discuss these findings in relation to the existing theories of consciousness and to attention and inattentional blindness, and the role of cognitive load, object-based attention, and the use of self-reports as measures of awareness. © 2015 New York Academy of Sciences.

  12. Aging, selective attention, and feature integration.

    PubMed

    Plude, D J; Doussard-Roosevelt, J A

    1989-03-01

    This study used feature-integration theory as a means of determining the point in processing at which selective attention deficits originate. The theory posits an initial stage of processing in which features are registered in parallel, followed by a serial process in which features are conjoined to form complex stimuli. Performance of young and older adults on feature versus conjunction search is compared. Analyses of reaction times and error rates suggest that elderly adults, in addition to young adults, can capitalize on the early, parallel stage of visual information processing, and that age decrements in visual search arise in the later, serial stage of processing. Analyses of a third, unconfounded conjunction search condition reveal qualitatively similar modes of conjunction search in young and older adults. The contribution of age-related data limitations is found to be secondary to the contribution of age decrements in selective attention.
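    The parallel/serial distinction above is conventionally quantified by the slope of the reaction-time-by-set-size function. A minimal sketch (generic analysis, not this study's exact procedure):

```python
import numpy as np

def search_slope(set_sizes, rts):
    """Least-squares slope (ms per item) and intercept of the RT x set-size
    function. Near-zero slopes are the signature of parallel 'pop-out'
    feature search; steep positive slopes indicate a serial conjunction
    search, the stage where age decrements are located above."""
    slope, intercept = np.polyfit(set_sizes, rts, 1)
    return float(slope), float(intercept)
```

    A conjunction condition with RTs of 600, 700, 800, 900 ms at set sizes 4, 8, 12, 16 yields a serial-like slope of 25 ms/item, whereas near-constant RTs yield a flat, parallel-like slope.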

  13. Setting and changing feature priorities in visual short-term memory.

    PubMed

    Kalogeropoulou, Zampeta; Jagadeesh, Akshay V; Ohl, Sven; Rolfs, Martin

    2017-04-01

    Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM, protecting information that would otherwise be forgotten.
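    Guessing and report precision in such orientation-report tasks are usually separated by modeling the distribution of report errors. The summary statistic those mixture fits start from can be sketched like this (an illustration, not the authors' analysis pipeline):

```python
import math

def resultant_length(errors_deg):
    """Mean resultant length R of orientation-report errors (in degrees).
    R near 1 indicates tightly clustered, precise reports; R near 0
    indicates the uniform random guessing that valid retro-cues reduce.
    A full mixture-model fit of guess rate and precision (Zhang-&-Luck
    style) builds on exactly this circular statistic."""
    n = len(errors_deg)
    mx = sum(math.cos(math.radians(e)) for e in errors_deg) / n
    my = sum(math.sin(math.radians(e)) for e in errors_deg) / n
    return math.hypot(mx, my)
```

    Errors spread uniformly around the circle give R near 0 (pure guessing), while errors clustered near zero give R near 1 (precise reports).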

  14. Identifying Bottom-Up and Top-Down Components of Attentional Weight by Experimental Analysis and Computational Modeling

    ERIC Educational Resources Information Center

    Nordfang, Maria; Dyrholm, Mads; Bundesen, Claus

    2013-01-01

    The attentional weight of a visual object depends on the contrast of the features of the object to its local surroundings (feature contrast) and the relevance of the features to one's goals (feature relevance). We investigated this dependency in partial report experiments with briefly presented stimuli but unspeeded responses. The task was to…

  15. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  16. What's color got to do with it? The influence of color on visual attention in different categories.

    PubMed

    Frey, Hans-Peter; Honey, Christian; König, Peter

    2008-10-23

    Certain locations attract human gaze in natural visual scenes. Are there measurable features that distinguish these locations from others? While there has been extensive research on luminance-defined features, only a few studies have examined the influence of color on overt attention. In this study, we addressed this question by presenting color-calibrated stimuli and analyzing color features that are known to be relevant for the responses of LGN neurons. We recorded eye movements of 15 human subjects freely viewing colored and grayscale images of seven different categories. All images were also analyzed by the saliency map model (L. Itti, C. Koch, & E. Niebur, 1998). We find that human fixation locations differ between colored and grayscale versions of the same image much more than predicted by the saliency map. Examining the influence of various color features on overt attention, we find two extreme categories: while in rainforest images all color features are salient, none is salient in fractals. In all other categories, color features are selectively salient. This shows that the influence of color on overt attention depends on the type of image. Also, it is crucial to analyze neurophysiologically relevant color features for quantifying the influence of color on attention.
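    The core operation of saliency models like the one cited is center-surround feature contrast. A toy single-channel sketch (the real Itti-Koch model uses Gaussian pyramids over many color, intensity, and orientation channels; this only illustrates the contrast principle):

```python
import numpy as np

def box_mean(img, k):
    # mean over a (2k+1) x (2k+1) window, clipped at the image borders
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = img[max(0, y - k):y + k + 1,
                            max(0, x - k):x + k + 1].mean()
    return out

def center_surround(feature_map):
    """Toy centre-surround contrast for one feature channel:
    a fine 'center' average minus a coarse 'surround' average,
    half-wave rectified so only local feature pop-out survives."""
    return np.maximum(box_mean(feature_map, 1) - box_mean(feature_map, 3), 0.0)
```

    A single bright pixel in an otherwise uniform feature map produces a saliency peak at its location, while uniform regions produce zero response.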

  17. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search.

    PubMed

    Hout, Michael C; Goldinger, Stephen D

    2015-01-01

    When people look for things in the environment, they use target templates-mental representations of the objects they are attempting to locate-to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers' templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.

  18. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search

    PubMed Central

    Hout, Michael C.; Goldinger, Stephen D.

    2014-01-01

    When people look for things in the environment, they use target templates—mental representations of the objects they are attempting to locate—to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers’ templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search. PMID:25214306

  19. Funny money: the attentional role of monetary feedback detached from expected value.

    PubMed

    Roper, Zachary J J; Vecera, Shaun P

    2016-10-01

    Stimuli associated with monetary reward can become powerful cues that effectively capture visual attention. We examined whether such value-driven attentional capture can be induced with monetary feedback in the absence of an expected cash payout. To this end, we implemented images of U.S. dollar bills as reward feedback. Participants knew in advance that they would not receive any money based on their performance. Our reward stimuli-$5 and $20 bill images-were thus dissociated from any practical utility. Strikingly, we observed a reliable attentional capture effect for the mere images of bills. Moreover, this finding generalized to Monopoly money. In two control experiments, we found no evidence in favor of nominal or symbolic monetary value. Hence, we claim that bill images are special monetary representations, such that there are strong associations between the defining visual features of bills and reward, probably due to a lifelong learning history. Together, we show that the motivation to earn cash plays a minor role when it comes to monetary rewards, while bill-defining visual features seem to be sufficient. These findings have the potential to influence human factor applications, such as gamification, and can be extended to novel value systems, such as the electronic cash Bitcoin being developed for use in mobile banking. Finally, our procedure represents a proof of concept on how images of money can be used to conserve expenditures in the experimental context.

  20. Dimension-based attention in visual short-term memory.

    PubMed

    Pilling, Michael; Barrett, Doug J K

    2016-07-01

    We investigated how dimension-based attention influences visual short-term memory (VSTM). This was done through examining the effects of cueing a feature dimension in two perceptual comparison tasks (change detection and sameness detection). In both tasks, a memory array and a test array consisting of a number of colored shapes were presented successively, interleaved by a blank interstimulus interval (ISI). In Experiment 1 (change detection), the critical event was a feature change in one item across the memory and test arrays. In Experiment 2 (sameness detection), the critical event was the absence of a feature change in one item across the two arrays. Auditory cues indicated the feature dimension (color or shape) of the critical event with 80% validity; the cues were presented either prior to the memory array, during the ISI, or simultaneously with the test array. In Experiment 1, the cue validity influenced sensitivity only when the cue was given at the earliest position; in Experiment 2, the cue validity influenced sensitivity at all three cue positions. We attributed the greater effectiveness of top-down guidance by cues in the sameness detection task to the more active nature of the comparison process required to detect sameness events (Hyun, Woodman, Vogel, Hollingworth, & Luck, Journal of Experimental Psychology: Human Perception and Performance, 35, 1140-1160, 2009).
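    Cue-validity effects on sensitivity in detection tasks like these are standardly expressed as d'. A generic sketch of that measure (not the authors' exact analysis):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).
    Larger d' for validly cued than invalidly cued trials is the kind
    of sensitivity difference reported above."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

    Equal hit and false-alarm rates give d' = 0 (no sensitivity); a hit rate of about .69 against a false-alarm rate of about .31 gives d' of about 1.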

  1. Visual search for feature conjunctions: an fMRI study comparing alcohol-related neurodevelopmental disorder (ARND) to ADHD.

    PubMed

    O'Conaill, Carrie R; Malisza, Krisztina L; Buss, Joan L; Bolster, R Bruce; Clancy, Christine; de Gervai, Patricia Dreessen; Chudley, Albert E; Longstaffe, Sally

    2015-01-01

    Alcohol-related neurodevelopmental disorder (ARND) falls under the umbrella of fetal alcohol spectrum disorder (FASD). Diagnosis of ARND is difficult because individuals do not demonstrate the characteristic facial features associated with fetal alcohol syndrome (FAS). While attentional problems in ARND are similar to those found in attention-deficit/hyperactivity disorder (ADHD), the underlying impairment in attention pathways may be different. Functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) were conducted at 3 T. Sixty-three children aged 10 to 14 years diagnosed with ARND, ADHD, and typically developing (TD) controls performed a single-feature and a feature-conjunction visual search task. Dorsal and ventral attention pathways were activated during both attention tasks in all groups. Significantly greater activation was observed in ARND subjects during a single-feature search as compared to TD and ADHD groups, suggesting ARND subjects require greater neural recruitment to perform this simple task. ARND subjects appear unable to effectively use the very efficient automatic perceptual 'pop-out' mechanism employed by TD and ADHD groups during presentation of the disjunction array. By comparison, activation was lower in ARND compared to TD and ADHD subjects during the more difficult conjunction search task as compared to the single-feature search. Analysis of DTI data using tract-based spatial statistics (TBSS) showed areas of significantly lower fractional anisotropy (FA) and higher mean diffusivity (MD) in the right inferior longitudinal fasciculus (ILF) in ARND compared to TD subjects. Damage to the white matter of the ILF may compromise the ventral attention pathway and may require subjects to use the dorsal attention pathway, which is associated with effortful top-down processing, for tasks that should be automatic. Decreased functional activity in the right temporoparietal junction (TPJ) of ARND subjects may be due to a reduction in the white matter tract's ability to efficiently convey information critical to performance of the attention tasks. Limited activation patterns in ARND suggest problems in information processing along the ventral frontoparietal attention pathway. Poor integrity of the ILF, which connects the functional components of the ventral attention network, in ARND subjects may contribute to the attention deficits characteristic of the disorder.

  2. A top-down manner-based DCNN architecture for semantic image segmentation.

    PubMed

    Qiao, Kai; Chen, Jian; Wang, Linyuan; Zeng, Lei; Yan, Bin

    2017-01-01

    Given their powerful feature representation for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of visual mechanisms, we conclude that DCNNs operating in a purely bottom-up manner are not enough, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentations are substantially improved. We also quantitatively obtain about 2%-3% intersection-over-union (IOU) accuracy improvement on the PASCAL VOC 2011 and 2012 test sets.
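    The intuition of snapping coarse DCNN labels to superpixel boundaries can be sketched with a simple majority vote. This post-processing stand-in is deliberately simpler than the paper's architecture, which injects superpixel attention into the network itself:

```python
import numpy as np

def refine_with_superpixels(dcnn_labels, superpixels):
    """Give every pixel inside a superpixel the majority DCNN label found
    there, so segment edges follow superpixel boundaries instead of the
    coarse network output. A toy stand-in for top-down superpixel guidance."""
    refined = dcnn_labels.copy()
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        labels, counts = np.unique(dcnn_labels[mask], return_counts=True)
        refined[mask] = labels[np.argmax(counts)]  # majority vote per segment
    return refined
```

    A stray mislabeled pixel inside an otherwise uniform superpixel is corrected to the segment's majority label, which is how coarse edges get cleaned up.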

  3. Deep Visual Attention Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixation with view-free scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have made substantial improvement on human attention prediction, CNN-based attention models still need to be improved by efficiently leveraging multi-scale features. Our visual attention network is proposed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. Final saliency prediction is achieved via the cooperation of those global and local predictions. Our model is learned in a deep supervision manner, where supervision is directly fed into multi-level layers, instead of previous approaches of providing supervision only at the output layer and propagating this supervision back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches of learning multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
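    The "cooperation of global and local predictions" can be sketched as upsampling each layer's saliency map to the finest resolution and averaging. Nearest-neighbour upsampling via `np.kron` here stands in for the network's learned upsampling, and the simple average for its learned fusion; both are assumptions of this sketch:

```python
import numpy as np

def fuse_saliency(level_maps):
    """Combine per-layer saliency maps (listed coarse to fine) into a single
    prediction: upsample each map to the finest resolution, average, and
    normalise to [0, 1]. Echoes the skip-layer idea of letting deep (global)
    and shallow (local) saliency responses cooperate."""
    h, w = level_maps[-1].shape              # finest map sets the output size
    fused = np.zeros((h, w))
    for m in level_maps:
        fy, fx = h // m.shape[0], w // m.shape[1]
        fused += np.kron(m, np.ones((fy, fx)))  # nearest-neighbour upsample
    fused /= len(level_maps)
    return fused / fused.max()
```

    A coarse map and a fine map that agree on a corner location reinforce each other there, producing the fused peak.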

  4. Pupil size reflects the focus of feature-based attention.

    PubMed

    Binda, Paola; Pereverzeva, Maria; Murray, Scott O

    2014-12-15

    We measured pupil size in adult human subjects while they selectively attended to one of two surfaces, bright and dark, defined by coherently moving dots. The two surfaces were presented at the same location; therefore, subjects could select the cued surface only on the basis of its features. With no luminance change in the stimulus, we find that pupil size was smaller when the bright surface was attended and larger when the dark surface was attended: an effect of feature-based (or surface-based) attention. With the same surfaces at nonoverlapping locations, we find a similar effect of spatial attention. The pupil size modulation cannot be accounted for by differences in eye position and by other variables known to affect pupil size such as task difficulty, accommodation, or the mere anticipation (imagery) of bright/dark stimuli. We conclude that pupil size reflects not just luminance or cognitive state, but the interaction between the two: it reflects which luminance level in the visual scene is relevant for the task at hand. Copyright © 2014 the American Physiological Society.

  5. The Role of Attention in Item-Item Binding in Visual Working Memory

    ERIC Educational Resources Information Center

    Peterson, Dwight J.; Naveh-Benjamin, Moshe

    2017-01-01

    An important yet unresolved question regarding visual working memory (VWM) relates to whether or not binding processes within VWM require additional attentional resources compared with processing solely the individual components comprising these bindings. Previous findings indicate that binding of surface features (e.g., colored shapes) within VWM…

  6. Object-based attention: strength of object representation and attentional guidance.

    PubMed

    Shomstein, Sarah; Behrmann, Marlene

    2008-01-01

    Two or more features belonging to a single object are identified more quickly and more accurately than are features belonging to different objects--a finding attributed to sensory enhancement of all features belonging to an attended or selected object. However, several recent studies have suggested that this "single-object advantage" may be a product of probabilistic and configural strategic prioritizations rather than of object-based perceptual enhancement per se, challenging the underlying mechanism that is thought to give rise to object-based attention. In the present article, we further explore constraints on the mechanisms of object-based selection by examining the contribution of the strength of object representations to the single-object advantage. We manipulated factors such as exposure duration (i.e., preview time) and salience of configuration (i.e., objects). Varying preview time changes the magnitude of the object-based effect, so that if there is ample time to establish an object representation (i.e., preview time of 1,000 msec), then both probability and configuration (i.e., objects) guide attentional selection. If, however, insufficient time is provided to establish a robust object-based representation, then only probabilities guide attentional selection. Interestingly, at a short preview time of 200 msec, when the two objects were sufficiently different from each other (i.e., different colors), both configuration and probability guided attention selection. These results suggest that object-based effects can be explained both in terms of strength of object representations (established at longer exposure durations and by pictorial cues) and probabilistic contingencies in the visual environment.

  7. Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study

    ERIC Educational Resources Information Center

    Bulf, Hermann; Valenza, Eloisa

    2013-01-01

    Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…

  8. Should I Stay or Should I Go? Attentional Disengagement from Visually Unique and Unexpected Items at Fixation

    ERIC Educational Resources Information Center

    Brockmole, James R.; Boot, Walter R.

    2009-01-01

    Distinctive aspects of a scene can capture attention even when they are irrelevant to one's goals. The authors address whether visually unique, unexpected, but task-irrelevant features also tend to hold attention. Observers searched through displays in which the color of each item was irrelevant. At the start of search, all objects changed color.…

  9. The role of attention in binding visual features in working memory: evidence from cognitive ageing.

    PubMed

    Brown, Louise A; Brockmole, James R

    2010-10-01

    Two experiments were conducted to assess the costs of attentional load during a feature (colour-shape) binding task in younger and older adults. Experiment 1 showed that a demanding backwards counting task, which draws upon central executive/general attentional resources, reduced binding to a greater extent than individual feature memory, but the effect was no greater in older than in younger adults. Experiment 2 showed that presenting memory items sequentially rather than simultaneously, such that items are required to be maintained while new representations are created, selectively affects binding performance in both age groups. Although this experiment exhibited an age-related binding deficit overall, both age groups were affected by the attention manipulation to an equal extent. While a role for attentional processes in colour-shape binding was apparent across both experiments, manipulations of attention exerted equal effects in both age groups. We therefore conclude that age-related binding deficits neither emerge nor are exacerbated under conditions of high attentional load. Implications for theories of visual working memory and cognitive ageing are discussed.

  10. Reward- and attention-related biasing of sensory selection in visual cortex.

    PubMed

    Buschschulte, Antje; Boehler, Carsten N; Strumpf, Hendrik; Stoppel, Christian; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Hopf, Jens-Max

    2014-05-01

    Attention to task-relevant features leads to a biasing of sensory selection in extrastriate cortex. Features signaling reward seem to produce a similar bias, but how modulatory effects due to reward and attention relate to each other is largely unexplored. To address this issue, it is critical to separate top-down settings defining reward relevance from those defining attention. To this end, we used a visual search paradigm in which the target's definition (attention to color) was dissociated from reward relevance by delivering monetary reward on search frames where a certain task-irrelevant color was combined with the target-defining color to form the target object. We assessed the state of neural biasing for the attended and reward-relevant color by analyzing the neuromagnetic brain response to asynchronously presented irrelevant distractor probes drawn in the target-defining color, the reward-relevant color, and a completely irrelevant color as a reference. We observed that for the prospect of moderate rewards, the target-defining color but not the reward-relevant color produced a selective enhancement of the neuromagnetic response between 180 and 280 msec in ventral extrastriate visual cortex. Increasing reward prospect caused a delayed attenuation (220-250 msec) of the response to reward probes, which followed a prior (160-180 msec) response enhancement in dorsal ACC. Notably, shorter latency responses in dorsal ACC were associated with stronger attenuation in extrastriate visual cortex. Finally, an analysis of the brain response to the search frames revealed that the presence of the reward-relevant color in search distractors elicited an enhanced response that was abolished after increasing reward size. The present data together indicate that when top-down definitions of reward relevance and attention are separated, the behavioral significance of reward-associated features is still rapidly coded in higher-level cortex areas, thereby commanding effective top-down inhibitory control to counter a selection bias for those features in extrastriate visual cortex.

  11. Concurrent deployment of visual attention and response selection bottleneck in a dual-task: Electrophysiological and behavioural evidence.

    PubMed

    Reimer, Christina B; Strobach, Tilo; Schubert, Torsten

    2017-12-01

    Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features resulting in a serial search process around the items in the search display (i.e., set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP), which reflects lateralized visual attention processes. If the response selection processes in Task 1 influence the visual attention processes in Task 2, N2pc latency and amplitude would be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was concurrently deployed to response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.

  12. Attention versus consciousness in the visual brain: differences in conception, phenomenology, behavior, neuroanatomy, and physiology.

    PubMed

    Baars, B J

    1999-07-01

    A common confound between consciousness and attention makes it difficult to think clearly about recent advances in the understanding of the visual brain. Visual consciousness involves phenomenal experience of the visual world, but visual attention is more plausibly treated as a function that selects and maintains the selection of potential conscious contents, often unconsciously. In the same sense, eye movements select conscious visual events, which are not the same as conscious visual experience. According to common sense, visual experience is consciousness, and selective processes are labeled as attention. The distinction is reflected in very different behavioral measures and in very different brain anatomy and physiology. Visual consciousness tends to be associated with the "what" stream of visual feature neurons in the ventral temporal lobe. In contrast, attentional selection and maintenance are mediated by other brain regions, ranging from superior colliculi to thalamus, prefrontal cortex, and anterior cingulate. The author applied the common-sense distinction between attention and consciousness to the theoretical positions of M. I. Posner (1992, 1994) and D. LaBerge (1997, 1998) to show how it helps to clarify the evidence. He concluded that clarity of thought is served by calling a thing by its proper name.

  13. Linguistic labels, dynamic visual features, and attention in infant category learning.

    PubMed

    Deng, Wei Sophia; Sloutsky, Vladimir M

    2015-06-01

    How do words affect categorization? According to some accounts, even early in development words are category markers and are different from other features. According to other accounts, early in development words are part of the input and are akin to other features. The current study addressed this issue by examining the role of words and dynamic visual features in category learning in 8- to 12-month-old infants. Infants were familiarized with exemplars from one category in a label-defined or motion-defined condition and then tested with prototypes from the studied category and from a novel contrast category. Eye-tracking results indicated that infants exhibited better category learning in the motion-defined condition than in the label-defined condition, and their attention was more distributed among different features when there was a dynamic visual feature compared with the label-defined condition. These results provide little evidence for the idea that linguistic labels are category markers that facilitate category learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Linguistic Labels, Dynamic Visual Features, and Attention in Infant Category Learning

    PubMed Central

    Deng, Wei (Sophia); Sloutsky, Vladimir M.

    2015-01-01

    How do words affect categorization? According to some accounts, even early in development, words are category markers and are different from other features. According to other accounts, early in development, words are part of the input and are akin to other features. The current study addressed this issue by examining the role of words and dynamic visual features in category learning in 8- to 12- month infants. Infants were familiarized with exemplars from one category in a label-defined or motion-defined condition and then tested with prototypes from the studied category and from a novel contrast category. Eye tracking results indicated that infants exhibited better category learning in the motion-defined than in the label-defined condition and their attention was more distributed among different features when there was a dynamic visual feature compared to the label-defined condition. These results provide little evidence for the idea that linguistic labels are category markers that facilitate category learning. PMID:25819100

  15. Detection of emotional faces: salient physical features guide effective visual search.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2008-08-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found under both upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of the others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features, especially the smiling mouth, is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  16. The mechanisms of feature inheritance as predicted by a systems-level model of visual attention and decision making.

    PubMed

    Hamker, Fred H

    2008-07-15

    Feature inheritance provides evidence that properties of an invisible target stimulus can be attached to a following mask. We apply a systems-level model of attention and decision making to explore the influence of memory and feedback connections in feature inheritance. We find that the presence of feedback loops alone is sufficient to account for feature inheritance. Although our simulations do not cover all experimental variations and focus only on the general principle, our result is of particular interest because the model was designed for a completely different purpose than explaining feature inheritance. We suggest that feedback is an important property in visual perception and provide a description of its mechanism and its role in perception.

  17. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    PubMed Central

    Kamke, Marc R.; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945

  19. The flanker compatibility effect as a function of visual angle, attentional focus, visual transients, and perceptual load: a search for boundary conditions.

    PubMed

    Miller, J

    1991-03-01

    When subjects must respond to a relevant center letter and ignore irrelevant flanking letters, the identities of the flankers produce a response compatibility effect, indicating that they are processed semantically at least to some extent. Because this effect decreases as the separation between target and flankers increases, the effect appears to result from imperfect early selection (attenuation). In the present experiments, several features of the focused attention paradigm were examined, in order to determine whether they might produce the flanker compatibility effect by interfering with the operation of an early selective mechanism. Specifically, the effect might be produced because the paradigm requires subjects to (1) attend exclusively to stimuli within a very small visual angle, (2) maintain a long-term attentional focus on a constant display location, (3) focus attention on an empty display location, (4) exclude onset-transient flankers from semantic processing, or (5) ignore some of the few stimuli in an impoverished visual field. The results indicate that none of these task features is required for semantic processing of unattended stimuli to occur. In fact, visual angle is the only one of the task features that clearly has a strong influence on the size of the flanker compatibility effect. The invariance of the flanker compatibility effect across these conditions suggests that the mechanism for early selection rarely, if ever, completely excludes unattended stimuli from semantic analysis. In addition, it shows that selective mechanisms are relatively insensitive to several factors that might be expected to influence them, thereby supporting the view that spatial separation has a special status for visual selective attention.

  20. Visual search for features and conjunctions following declines in the useful field of view.

    PubMed

    Cosman, Joshua D; Lees, Monica N; Lee, John D; Rizzo, Matthew; Vecera, Shaun P

    2012-01-01

    BACKGROUND/STUDY CONTEXT: Typical measures for assessing the useful field of view (UFOV) involve many components of attention. The objective of the current experiment was to examine differences in visual search efficiency for older individuals with and without UFOV impairment. The authors used a computerized screening instrument to assess the useful field of view and to characterize participants as having an impaired or normal UFOV. Participants also performed two visual search tasks: a feature search (e.g., search for a green target among red distractors) or a conjunction search (e.g., a green target with a gap on its left or right side among red distractors with gaps on the left or right and green distractors with gaps on the top or bottom). Visual search performance did not differ between UFOV-impaired and unimpaired individuals when searching for a basic feature. However, search efficiency was lower for impaired individuals than for unimpaired individuals when searching for a conjunction of features. These results suggest that UFOV decline in normal aging is associated with less efficient conjunction search and that its underlying cause may be an overall decline in attentional efficiency. Because the useful field of view is a reliable predictor of driving safety, the results suggest that decline in the everyday visual behavior of older adults might arise from attentional declines.

  1. Rules infants look by: Testing the assumption of transitivity in visual salience.

    PubMed

    Kibbe, Melissa M; Kaldy, Zsuzsa; Blaser, Erik

    2018-01-01

    What drives infants' attention in complex visual scenes? Early models of infant attention suggested that the degree to which different visual features are detectable determines their attentional priority. Here, we tested this by asking whether two targets, defined by different features but each equally salient when evaluated independently, would drive attention equally when pitted head-to-head. In Experiment 1, we presented 6-month-old infants with an array of Gabor patches in which a target region varied either in color or in spatial frequency from the background. Using a forced-choice preferential-looking method, we measured how readily infants fixated the target as its featural difference from the background was parametrically increased. Then, in Experiment 2, we used these psychometric preference functions to choose values for color and spatial-frequency targets that were equally salient (preferred) and pitted them against each other within the same display. We reasoned that if salience is transitive, the stimuli should be iso-salient and infants should therefore show no systematic preference for either stimulus. On the contrary, we found that infants consistently preferred the color-defined stimulus. This suggests that computing visual salience in more complex scenes needs to include factors above and beyond local salience values.

  2. Distinct roles of the intraparietal sulcus and temporoparietal junction in attentional capture from distractor features: An individual differences approach.

    PubMed

    Painter, David R; Dux, Paul E; Mattingley, Jason B

    2015-07-01

    Setting attention for an elementary visual feature, such as color or motion, results in greater spatial attentional "capture" from items with target compared with distractor features. Thus, capture is contingent on feature-based control settings. Neuroimaging studies suggest that this contingent attentional capture involves interactions between dorsal and ventral frontoparietal networks. To examine the distinct causal influences of these networks on contingent capture, we applied continuous theta-burst stimulation (cTBS) to alter neural excitability within the dorsal intraparietal sulcus (IPS), the ventral temporoparietal junction (TPJ) and a control site, visual area MT. Participants undertook an attentional capture task before and after stimulation, in which they made speeded responses to color-defined targets that were preceded by spatial cues in the target or distractor color. Cues appeared either at the target location (valid) or at a non-target location (invalid). Reaction times were slower for targets preceded by invalid compared with valid cues, demonstrating spatial attentional capture. Cues with the target color captured attention to a greater extent than those with the distractor color, consistent with contingent capture. Effects of cTBS were not evident at the group level, but emerged instead from analyses of individual differences. Target capture magnitude was positively correlated pre- and post-stimulation for all three cortical sites, suggesting that cTBS did not influence target capture. Conversely, distractor capture was positively correlated pre- and post-stimulation of MT, but uncorrelated for IPS and TPJ, suggesting that stimulation of IPS and TPJ selectively disrupted distractor capture. Additionally, the effects of IPS stimulation were predicted by pre-stimulation attentional capture, whereas the effects of TPJ stimulation were predicted by pre-stimulation distractor suppression. The results are consistent with the existence of distinct neural circuits underlying target and distractor capture, as well as distinct roles for the IPS and TPJ. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. The cost of selective attention in category learning: Developmental differences between adults and infants

    PubMed Central

    Best, Catherine A.; Yim, Hyungwook; Sloutsky, Vladimir M.

    2013-01-01

    Selective attention plays an important role in category learning. However, immaturities of top-down attentional control during infancy coupled with successful category learning suggest that early category learning is achieved without attending selectively. Research presented here examines this possibility by focusing on category learning in infants (6–8 months old) and adults. Participants were trained on a novel visual category. Halfway through the experiment, unbeknownst to participants, the to-be-learned category switched to another category, where previously relevant features became irrelevant and previously irrelevant features became relevant. If participants attend selectively to the relevant features of the first category, they should incur a cost of selective attention immediately after the unknown category switch. Results revealed that adults demonstrated a cost, as evidenced by a decrease in accuracy and response time on test trials as well as a decrease in visual attention to newly relevant features. In contrast, infants did not demonstrate a similar cost of selective attention as adults despite evidence of learning both to-be-learned categories. Findings are discussed as supporting multiple systems of category learning and as suggesting that learning mechanisms engaged by adults may be different from those engaged by infants. PMID:23773914

  4. Frontal-parietal synchrony in elderly EEG for visual search.

    PubMed

    Phillips, Steven; Takeda, Yuji

    2010-01-01

    Aging involves selective changes in attentional control. However, its precise effect on visual attention is difficult to discern from behavioural studies alone. In this paper, we employ a recently developed phase-locking measure of synchrony as an indicator of top-down/bottom-up control of attention to assess attentional control in the elderly. Fourteen participants (63-74 years) searched for a target item (a coloured, oriented rectangular bar) among a display set of distractors. For the feature search condition, where none of the distractors shared a feature with the target, search time did not increase with display set size (two or four items). For the conjunctive search condition, where each distractor shared either a colour or an orientation feature with the target, search time increased with display size. Phase-locking analysis revealed a significant increase in high gamma-band (36-56 Hz) synchrony, indicating greater bottom-up control for feature than for conjunctive search. In view of our earlier study on younger adults (21-32 years; Phillips and Takeda, 2009), these results suggest that older participants are more likely than younger participants to use bottom-up control of attention, possibly triggered by their greater susceptibility to attentional capture. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  5. Dynamic interactions between visual working memory and saccade target selection

    PubMed Central

    Schneegans, Sebastian; Spencer, John P.; Schöner, Gregor; Hwang, Seongmin; Hollingworth, Andrew

    2014-01-01

    Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task-irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. PMID:25228628

  6. Biasing spatial attention with semantic information: an event coding approach.

    PubMed

    Amer, Tarek; Gozli, Davood G; Pratt, Jay

    2017-04-21

    We investigated the influence of conceptual processing on visual attention from the standpoint of the Theory of Event Coding (TEC). The theory makes two predictions: first, an important factor in determining the influence of event 1 on processing event 2 is whether the features of event 1 are bound into a unified representation (i.e., selection or retrieval of event 1). Second, whether processing the two events facilitates or interferes with each other should depend on the extent to which their constituent features overlap. In two experiments, participants performed a visual-attention cueing task in which the visual target (event 2) was preceded by a relevant or irrelevant explicit (e.g., "UP") or implicit (e.g., "HAPPY") spatial-conceptual cue (event 1). Consistent with TEC, we found that relevant explicit cues (which overlap featurally with the target to a greater extent) facilitated target processing at compatible locations, whereas relevant implicit cues (which overlap to a lesser extent) interfered with it. Irrelevant explicit and implicit cues, on the other hand, both facilitated target processing, presumably because they were less likely to be selected or retrieved as an integrated and unified event file. We argue that such effects, often described as "attentional cueing", are better accounted for within the event coding framework.

  7. Attentional Resources in Visual Tracking through Occlusion: The High-Beams Effect

    ERIC Educational Resources Information Center

    Flombaum, Jonathan I.; Scholl, Brian J.; Pylyshyn, Zenon W.

    2008-01-01

    A considerable amount of research has uncovered heuristics that the visual system employs to keep track of objects through periods of occlusion. Relatively little work, by comparison, has investigated the online resources that support this processing. We explored how attention is distributed when featurally identical objects become occluded during…

  8. The Relation Between Selective Attention to Television Forms and Children's Comprehension of Content.

    ERIC Educational Resources Information Center

    Calvert, Sandra L.; And Others

    1982-01-01

    Investigates the relationship between the moment-to-moment occurrence of selected visual and auditory formal features of a prosocial cartoon and two aspects of information processing (visual attention and comprehension). Subjects, 128 White kindergarten and third- to fourth-grade children, were equally distributed by sex and age and viewed the…

  9. Attending to auditory memory.

    PubMed

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system for incoming and stored information. Although objects are the primary building blocks of auditory attention, attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory-guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention-to-memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    PubMed Central

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew

    2008-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755

  11. Unconscious Familiarity-based Color-Form Binding: Evidence from Visual Extinction.

    PubMed

    Rappaport, Sarah J; Riddoch, M Jane; Chechlacz, Magda; Humphreys, Glyn W

    2016-03-01

    There is good evidence that early visual processing involves the coding of different features in independent brain regions. A major question, then, is how we see the world in an integrated manner, in which the different features are "bound" together. A standard account of this has been that feature binding depends on attention to the stimulus, which enables only the relevant features to be linked together [Treisman, A., & Gelade, G. A feature-integration theory of attention. Cognitive Psychology, 12, 97-136, 1980]. Here we test this influential idea by examining whether, in patients showing visual extinction, the processing of otherwise unconscious (extinguished) stimuli is modulated by presenting objects in their correct (familiar) color. Correctly colored objects showed reduced extinction when they had a learned color, and this color matched across the ipsi- and contralesional items (red strawberry + red tomato). In contrast, there was no reduction in extinction under the same conditions when the stimuli were colored incorrectly (blue strawberry + blue tomato; Experiment 1). The result was not due to the speeded identification of a correctly colored ipsilesional item, as there was no benefit from having correctly colored objects in different colors (red strawberry + yellow lemon; Experiment 2). There was also no benefit to extinction from presenting the correct colors in the background of each item (Experiment 3). The data suggest that learned color-form binding can reduce extinction even when color is irrelevant for the task. The result is consistent with preattentive binding of color and shape for familiar stimuli.

  12. Subliminally presented and stored objects capture spatial attention.

    PubMed

    Astle, Duncan E; Nobre, Anna C; Scerif, Gaia

    2010-03-10

    When objects disappear from view, we can still bring them to mind, at least for brief periods of time, because we can represent those objects in visual short-term memory (VSTM) (Sperling, 1960; Cowan, 2001). A defining characteristic of this representation is that it is topographic, that is, it preserves a spatial organization based on the original visual percept (Vogel and Machizawa, 2004; Astle et al., 2009; Kuo et al., 2009). Recent research has also shown that features or locations of visual items that match those being maintained in conscious VSTM automatically capture our attention (Awh and Jonides, 2001; Olivers et al., 2006; Soto et al., 2008). But do objects leave some trace that can guide spatial attention, even without participants intentionally remembering them? Furthermore, could subliminally presented objects leave a topographically arranged representation that can capture attention? We presented objects either supraliminally or subliminally and then 1 s later re-presented one of those objects in a new location, as a "probe" shape. As participants made an arbitrary perceptual judgment on the probe shape, their covert spatial attention was drawn to the original location of that shape, regardless of whether its initial presentation had been supraliminal or subliminal. We demonstrate this with neural and behavioral measures of memory-driven attentional capture. These findings reveal the existence of a topographically arranged store of "visual" objects, the content of which is beyond our explicit awareness but which nonetheless guides spatial attention.

  13. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    PubMed

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the form of images, but the results are not always useful for users. This is mainly due to two problems: (1) semantic keywords are not taken into consideration, and (2) it is not always possible to formulate a query using image features. This issue has been addressed in different domains through the development of content-based image retrieval (CBIR) systems. The expert community has focused its attention on the healthcare domain, where a great deal of visual information for medical analysis is available. This paper provides a solution called the iPixel Visual Search Engine, which involves both semantics and content in the search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implements a CBIR algorithm. Our proposal compares not only features with similar semantic meaning but also visual features. The comparisons are made in several ways: by the number of regions per image, by the maximum and minimum size of regions per image, and by the average intensity level of each region. The iPixel Visual Search Engine supports the medical community in differential diagnoses related to diseases of the breast, and it has been validated by experts in the healthcare domain, such as radiologists, as well as by experts in digital image analysis.
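    The region-based comparison criteria named in this abstract (region count, size extremes, average intensity) can be illustrated with a minimal sketch. The names below (`RegionSummary`, `feature_distance`, `rank_by_similarity`) are hypothetical; the actual iPixel implementation is not described beyond those three criteria, and a normalized L1 distance is assumed here purely for illustration.

```python
# Minimal sketch of the region-based comparison described in the abstract.
# Hypothetical names; the iPixel system's actual distance function is unknown.
from dataclasses import dataclass


@dataclass
class RegionSummary:
    num_regions: int        # number of segmented regions in the image
    min_size: float         # smallest region size (e.g., in pixels)
    max_size: float         # largest region size
    mean_intensity: float   # average intensity level across regions


def feature_distance(a: RegionSummary, b: RegionSummary) -> float:
    """Normalized L1 distance over the abstract's comparison criteria:
    region count, min/max region size, and average intensity."""
    def rel(x: float, y: float) -> float:
        denom = max(abs(x), abs(y), 1e-9)  # avoid division by zero
        return abs(x - y) / denom
    return (rel(a.num_regions, b.num_regions)
            + rel(a.min_size, b.min_size)
            + rel(a.max_size, b.max_size)
            + rel(a.mean_intensity, b.mean_intensity)) / 4.0


def rank_by_similarity(query, database):
    """Return database items ordered from most to least similar to the query."""
    return sorted(database, key=lambda item: feature_distance(query, item))
```

    In a full CBIR system these summaries would be computed by a segmentation stage and combined with the semantic (collective-intelligence) ranking; the sketch only shows the content-based half.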

  14. A parieto-medial temporal pathway for the strategic control over working memory biases in human visual attention.

    PubMed

    Soto, David; Greene, Ciara M; Kiyonaga, Anastasia; Rosenthal, Clive R; Egner, Tobias

    2012-12-05

    The contents of working memory (WM) can both aid and disrupt the goal-directed allocation of visual attention. WM benefits attention when its contents overlap with goal-relevant stimulus features, but WM leads attention astray when its contents match features of currently irrelevant stimuli. Recent behavioral data have documented that WM biases of attention may be subject to strategic cognitive control processes whereby subjects are able to either enhance or inhibit the influence of WM contents on attention. However, the neural mechanisms supporting cognitive control over WM biases on attention are presently unknown. Here, we characterize these mechanisms by combining human functional magnetic resonance imaging with a task that independently manipulates the relationship between WM cues and attention targets during visual search (with WM contents matching either search targets or distracters), as well as the predictability of this relationship (100 vs 50% predictability) to assess participants' ability to strategically enhance or inhibit WM biases on attention when WM contents reliably matched targets or distracter stimuli, respectively. We show that cues signaling predictable (> unpredictable) WM-attention relations reliably enhanced search performance, and that this strategic modulation of the interplay between WM contents and visual attention was mediated by a neuroanatomical network involving the posterior parietal cortex, the posterior cingulate, and medial temporal lobe structures, with responses in the hippocampus proper correlating with behavioral measures of strategic control of WM biases. Thus, we delineate a novel parieto-medial temporal pathway implementing cognitive control over WM biases to optimize goal-directed selection.

  15. Out of sight, out of mind: Categorization learning and normal aging.

    PubMed

    Schenk, Sabrina; Minda, John P; Lech, Robert K; Suchan, Boris

    2016-10-01

    The present combined EEG and eye tracking study examined the process of categorization learning at different age ranges and aimed to investigate to what degree categorization learning is mediated by visual attention and perceptual strategies. Seventeen young subjects and ten elderly subjects had to perform a visual categorization task with two abstract categories. Each category consisted of prototypical stimuli and an exception. The categorization of prototypical stimuli was learned very early during the experiment, while the learning of exceptions was delayed. The categorization of exceptions was accompanied by higher P150, P250 and P300 amplitudes. In contrast to younger subjects, elderly subjects had problems in the categorization of exceptions, but showed intact categorization performance for prototypical stimuli. Moreover, elderly subjects showed higher fixation rates for important stimulus features and higher P150 amplitudes, which were positively correlated with categorization performance. These results indicate that elderly subjects compensate for cognitive decline through enhanced perceptual and attentional processing of individual stimulus features. Additionally, a computational approach was applied and showed a transition away from purely abstraction-based learning to exemplar-based learning in the middle block for both groups. However, the calculated models provide a better fit for younger subjects than for elderly subjects. The current study demonstrates that human categorization learning is based on early abstraction-based processing followed by an exemplar-memorization stage. This strategy combination facilitates the learning of real-world categories with a nuanced category structure. In addition, the present study suggests that categorization learning is affected by normal aging and modulated by perceptual processing and visual attention. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. All eyes on relevance: strategic allocation of attention as a result of feature-based task demands in multiple object tracking.

    PubMed

    Brockhoff, Alisa; Huff, Markus

    2016-10-01

    Multiple object tracking (MOT) plays a fundamental role in processing and interpreting dynamic environments. Regarding the type of information utilized by the observer, recent studies reported evidence for the use of object features in an automatic, low-level manner. By introducing a novel paradigm that allowed us to combine tracking with a noninterfering top-down task, we tested whether a voluntary component can regulate the deployment of attention to task-relevant features in a selective manner. In four experiments we found conclusive evidence for a task-driven selection mechanism that guides attention during tracking: The observers were able to ignore or prioritize distinct objects. They marked the distinct (cued) object (target/distractor) more or less often than other objects of the same type (targets/distractors)-but only when they had received an identification task that required them to actively process object features (cues) during tracking. These effects are discussed with regard to existing theoretical approaches to attentive tracking, gaze-cue usability as well as attentional readiness, a term that originally stems from research on attention capture and visual search. Our findings indicate that existing theories of MOT need to be adjusted to allow for flexible top-down, voluntary processing during tracking.

  17. Perceptual grouping enhances visual plasticity.

    PubMed

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

    Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.

  18. Signal detection evidence for limited capacity in visual search

    PubMed Central

    Fencsik, David E.; Flusberg, Stephen J.; Horowitz, Todd S.; Wolfe, Jeremy M.

    2014-01-01

    The nature of capacity limits (if any) in visual search has been a topic of controversy for decades. In 30 years of work, researchers have attempted to distinguish between two broad classes of visual search models. Attention-limited models have proposed two stages of perceptual processing: an unlimited-capacity preattentive stage, and a limited-capacity selective attention stage. Conversely, noise-limited models have proposed a single, unlimited-capacity perceptual processing stage, with decision processes influenced only by stochastic noise. Here, we use signal detection methods to test a strong prediction of attention-limited models. In standard attention-limited models, performance of some searches (feature searches) should only be limited by a preattentive stage. Other search tasks (e.g., spatial configuration search for a “2” among “5”s) should be additionally limited by an attentional bottleneck. We equated average accuracies for a feature and a spatial configuration search over set sizes of 1–8 for briefly presented stimuli. The strong prediction of attention-limited models is that, given overall equivalence in performance, accuracy should be better on the spatial configuration search than on the feature search for set size 1, and worse for set size 8. We confirm this crossover interaction and show that it is problematic for at least one class of one-stage decision models. PMID:21901574
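
The accuracy comparisons this record describes rest on the standard signal-detection sensitivity measure. As a minimal sketch (the hit and false-alarm rates below are hypothetical illustrations, not the study's data), d' can be computed as the difference of z-transformed rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for illustration; chance performance (hits = false
# alarms) gives d' = 0, and larger d' means better discrimination.
print(round(d_prime(0.84, 0.16), 2))
```

Equating average d' across set sizes for the two tasks, as the authors did, allows the predicted crossover (better feature-search accuracy at large set sizes, worse at set size 1) to be tested independently of overall difficulty.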

  19. Quantitative EEG features selection in the classification of attention and response control in the children and adolescents with attention deficit hyperactivity disorder.

    PubMed

    Bashiri, Azadeh; Shahmoradi, Leila; Beigy, Hamid; Savareh, Behrouz A; Nosratabadi, Masood; N Kalhori, Sharareh R; Ghazisaeedi, Marjan

    2018-06-01

    Quantitative EEG gives valuable information in the clinical evaluation of psychological disorders. The purpose of the present study is to identify the most prominent features of quantitative electroencephalography (QEEG) that affect attention and response control parameters in children with attention deficit hyperactivity disorder. The QEEG features and the Integrated Visual and Auditory-Continuous Performance Test (IVA-CPT) of 95 attention deficit hyperactivity disorder subjects were preprocessed by the Independent Evaluation Criterion for Binary Classification. Then, the importance of selected features in the classification of desired outputs was evaluated using an artificial neural network. Findings uncovered the highest-ranking QEEG features for each IVA-CPT parameter related to attention and response control. Using the designed model could help therapists to determine the existence or absence of defects in attention and response control relying on QEEG.

  20. More insight into the interplay of response selection and visual attention in dual-tasks: masked visual search and response selection are performed in parallel.

    PubMed

    Reimer, Christina B; Schubert, Torsten

    2017-09-15

    Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common or on distinct capacity limitations. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time, that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. Task 1 was the same as in Experiments 1 and 2, respectively. In both experiments, the d' analysis revealed that response selection did not affect target detection. Overall, Experiments 1-4 indicated that neither the response selection difficulty in the auditory Task 1 (i.e., two-choice vs. four-choice) nor the type of presentation of the search display in Task 2 (i.e., not masked vs. masked) impaired parallel processing of response selection and conjunction search. We concluded that in general, response selection and visual attention (i.e., feature binding) rely on distinct capacity limitations.
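
The locus-of-slack logic in this record reduces to comparing set-size slopes across SOAs. A small sketch (all reaction times and variable names below are made-up illustrations, not values from the study) shows how an underadditive interaction would be detected:

```python
def set_size_effect(rts):
    """Search-time increase per added item (ms/item), estimated from the
    mean RTs at the smallest and largest set sizes in `rts` (a dict
    mapping set size -> mean RT in ms)."""
    sizes = sorted(rts)
    return (rts[sizes[-1]] - rts[sizes[0]]) / (sizes[-1] - sizes[0])

# Hypothetical mean RTs (ms) by set size, for illustration only.
rt_short_soa = {4: 900, 8: 1020}   # Task 2 onset soon after Task 1
rt_long_soa  = {4: 600, 8: 780}    # Task 2 effectively run alone

slope_short = set_size_effect(rt_short_soa)
slope_long = set_size_effect(rt_long_soa)

# A smaller slope at short SOA (underadditive interaction of SOA and
# set size) is the signature of search proceeding in parallel with
# Task 1 response selection; equal slopes would indicate serial stages.
print(slope_short < slope_long)
```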

  1. Visual attention shifting in autism spectrum disorders.

    PubMed

    Richard, Annette E; Lajiness-O'Neill, Renee

    2015-01-01

    Abnormal visual attention has been frequently observed in autism spectrum disorders (ASD). Abnormal shifting of visual attention is related to abnormal development of social cognition and has been identified as a key neuropsychological finding in ASD. Better characterizing attention shifting in ASD and its relationship with social functioning may help to identify new targets for intervention and improve social communication in these disorders. Thus, the current study investigated deficits in attention shifting in ASD as well as relationships between attention shifting and social communication in ASD and neurotypicals (NT). To investigate deficits in visual attention shifting in ASD, 20 ASD and 20 age- and gender-matched NT completed visual search (VS) and Navon tasks with attention-shifting demands as well as a set-shifting task. VS was a feature search task with targets defined in one of two dimensions; Navon required identification of a target letter presented at the global or local level. Psychomotor and processing speed were entered as covariates. Relationships between visual attention shifting, set shifting, and social functioning were also examined. ASD and NT showed comparable costs of shifting attention. However, psychomotor and processing speed were slower in ASD than in NT, and psychomotor and processing speed were positively correlated with attention-shifting costs on Navon and VS, respectively, for both groups. Attention shifting on VS and Navon were correlated among NT, while attention shifting on Navon was correlated with set shifting among ASD. Attention-shifting costs on Navon were positively correlated with restricted and repetitive behaviors among ASD. Relationships between attention shifting and psychomotor and processing speed, as well as relationships between measures of different aspects of visual attention shifting, suggest inefficient top-down influences over preattentive visual processing in ASD. Inefficient attention shifting may be related to restricted and repetitive behaviors in these disorders.

  2. Visual Foraging With Fingers and Eye Gaze

    PubMed Central

    Thornton, Ian M.; Smith, Irene J.; Chetverikov, Andrey; Kristjánsson, Árni

    2016-01-01

    A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints. PMID:27433323

  3. Saccadic eye movements do not disrupt the deployment of feature-based attention.

    PubMed

    Kalogeropoulou, Zampeta; Rolfs, Martin

    2017-07-01

    The tight link of saccades to covert spatial attention has been firmly established, yet their relation to other forms of visual selection remains poorly understood. Here we studied the temporal dynamics of feature-based attention (FBA) during fixation and across saccades. Participants reported the orientation (on a continuous scale) of one of two sets of spatially interspersed Gabors (black or white). We tested performance at different intervals between the onset of a colored cue (black or white, indicating which stimulus was the most probable target; red: neutral condition) and the stimulus. FBA built up after cue onset: Benefits (errors for valid vs. neutral cues), costs (invalid vs. neutral), and the overall cueing effect (valid vs. invalid) increased with the cue-stimulus interval. Critically, we also tested visual performance at different intervals after a saccade, when FBA had been fully deployed before saccade initiation. Cueing effects were evident immediately after the saccade and were predicted most accurately and most precisely by fully deployed FBA, indicating that FBA was continuous throughout saccades. Finally, a decomposition of orientation reports into target reports and random guesses confirmed continuity of report precision and guess rates across the saccade. We discuss the role of FBA in perceptual continuity across saccades.

  4. Both hand position and movement direction modulate visual attention

    PubMed Central

    Festman, Yariv; Adam, Jos J.; Pratt, Jay; Fischer, Martin H.

    2013-01-01

    The current study explored effects of continuous hand motion on the allocation of visual attention. A concurrent paradigm was used to combine visually concealed continuous hand movements with an attentionally demanding letter discrimination task. The letter probe appeared contingent upon the moving right hand passing through one of six positions. Discrimination responses were then collected via a keyboard press with the static left hand. Both the right hand's position and its movement direction systematically contributed to participants' visual sensitivity. Discrimination performance increased substantially when the right hand was distant from, but moving toward the visual probe location (replicating the far-hand effect, Festman et al., 2013). However, this effect disappeared when the probe appeared close to the static left hand, supporting the view that static and dynamic features of both hands combine in modulating pragmatic maps of attention. PMID:24098288

  5. A competitive interaction theory of attentional selection and decision making in brief, multielement displays.

    PubMed

    Smith, Philip L; Sewell, David K

    2013-07-01

    We generalize the integrated system model of Smith and Ratcliff (2009) to obtain a new theory of attentional selection in brief, multielement visual displays. The theory proposes that attentional selection occurs via competitive interactions among detectors that signal the presence of task-relevant features at particular display locations. The outcome of the competition, together with attention, determines which stimuli are selected into visual short-term memory (VSTM). Decisions about the contents of VSTM are made by a diffusion-process decision stage. The selection process is modeled by coupled systems of shunting equations, which perform gated where-on-what pathway VSTM selection. The theory provides a computational account of key findings from attention tasks with near-threshold stimuli. These are (a) the success of the MAX model of visual search and spatial cuing, (b) the distractor homogeneity effect, (c) the double-target detection deficit, (d) redundancy costs in the post-stimulus probe task, (e) the joint item and information capacity limits of VSTM, and (f) the object-based nature of attentional selection. We argue that these phenomena are all manifestations of an underlying competitive VSTM selection process, which arise as a natural consequence of our theory. PsycINFO Database Record (c) 2013 APA, all rights reserved.
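
The diffusion-process decision stage this record invokes is, in its simplest form, a noisy evidence accumulator that drifts between two response boundaries. A toy simulation of that core mechanism (parameters and names are illustrative assumptions, not values fit by the authors' model) might look like:

```python
import random

def diffusion_trial(drift, boundary=1.0, noise=1.0, dt=0.001, rng=random):
    """One trial of a simple two-boundary diffusion (Wiener) process:
    evidence x accumulates at rate `drift` plus Gaussian noise until it
    crosses +boundary (correct) or -boundary (error). Returns a tuple
    (correct?, decision time in seconds)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (x > 0, t)

rng = random.Random(1)  # fixed seed so the simulation is reproducible
trials = [diffusion_trial(1.5, rng=rng) for _ in range(500)]
accuracy = sum(c for c, _ in trials) / len(trials)
print(accuracy > 0.8)  # a positive drift yields mostly correct responses
```

In the full theory the drift rate for each decision is not fixed but is itself the output of the VSTM selection stage, so attention and stimulus competition shape both the speed and the accuracy of the resulting decisions.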

  6. Intentional attention switching in dichotic listening: exploring the efficiency of nonspatial and spatial selection.

    PubMed

    Lawo, Vera; Fels, Janina; Oberem, Josefa; Koch, Iring

    2014-10-01

    Using an auditory variant of task switching, we examined the ability to intentionally switch attention in a dichotic-listening task. In our study, participants responded selectively to one of two simultaneously presented auditory number words (spoken by a female and a male, one for each ear) by categorizing its numerical magnitude. The mapping of gender (female vs. male) and ear (left vs. right) was unpredictable. The to-be-attended feature for gender or ear, respectively, was indicated by a visual selection cue prior to auditory stimulus onset. In Experiment 1, explicitly cued switches of the relevant feature dimension (e.g., from gender to ear) and switches of the relevant feature within a dimension (e.g., from male to female) occurred in an unpredictable manner. We found large performance costs when the relevant feature switched, but switches of the relevant feature dimension incurred only small additional costs. The feature-switch costs were larger in ear-relevant than in gender-relevant trials. In Experiment 2, we replicated these findings using a simplified design (i.e., only within-dimension switches with blocked dimensions). In Experiment 3, we examined preparation effects by manipulating the cueing interval and found a preparation benefit only when ear was cued. Together, our data suggest that a large part of attentional switch costs arises from reconfiguration at the level of relevant auditory features (e.g., left vs. right) rather than feature dimensions (ear vs. gender). Additionally, our findings suggest that ear-based target selection benefits more from preparation time (i.e., time to direct attention to one ear) than gender-based target selection.

  7. Reward and attentional control in visual search.

    PubMed

    Yantis, Steven; Anderson, Brian A; Wampler, Emma K; Laurent, Patryk A

    2012-01-01

    It has long been known that the control of attention in visual search depends both on voluntary, top-down deployment according to context-specific goals, and on involuntary, stimulus-driven capture based on the physical conspicuity of perceptual objects. Recent evidence suggests that pairing target stimuli with reward can modulate the voluntary deployment of attention, but there is little evidence that reward modulates the involuntary deployment of attention to task-irrelevant distractors. We report several experiments that investigate the role of reward learning on attentional control. Each experiment involved a training phase and a test phase. In the training phase, different colors were associated with different amounts of monetary reward. In the test phase, color was not task-relevant and participants searched for a shape singleton; in most experiments no reward was delivered in the test phase. We first show that attentional capture by physically salient distractors is magnified by a previous association with reward. In subsequent experiments we demonstrate that physically inconspicuous stimuli previously associated with reward capture attention persistently during extinction--even several days after training. Furthermore, vulnerability to attentional capture by high-value stimuli is negatively correlated across individuals with working memory capacity and positively correlated with trait impulsivity. An analysis of intertrial effects reveals that value-driven attentional capture is spatially specific. Finally, when reward is delivered at test contingent on the task-relevant shape feature, recent reward history modulates value-driven attentional capture by the irrelevant color feature. The influence of learned value on attention may provide a useful model of clinical syndromes characterized by similar failures of cognitive control, including addiction, attention-deficit/hyperactivity disorder, and obesity.

  8. Reward and Attentional Control in Visual Search

    PubMed Central

    Anderson, Brian A.; Wampler, Emma K.; Laurent, Patryk A.

    2015-01-01

    It has long been known that the control of attention in visual search depends both on voluntary, top-down deployment according to context-specific goals, and on involuntary, stimulus-driven capture based on the physical conspicuity of perceptual objects. Recent evidence suggests that pairing target stimuli with reward can modulate the voluntary deployment of attention, but there is little evidence that reward modulates the involuntary deployment of attention to task-irrelevant distractors. We report several experiments that investigate the role of reward learning on attentional control. Each experiment involved a training phase and a test phase. In the training phase, different colors were associated with different amounts of monetary reward. In the test phase, color was not task-relevant and participants searched for a shape singleton; in most experiments no reward was delivered in the test phase. We first show that attentional capture by physically salient distractors is magnified by a previous association with reward. In subsequent experiments we demonstrate that physically inconspicuous stimuli previously associated with reward capture attention persistently during extinction—even several days after training. Furthermore, vulnerability to attentional capture by high-value stimuli is negatively correlated across individuals with working memory capacity and positively correlated with trait impulsivity. An analysis of intertrial effects reveals that value-driven attentional capture is spatially specific. Finally, when reward is delivered at test contingent on the task-relevant shape feature, recent reward history modulates value-driven attentional capture by the irrelevant color feature. The influence of learned value on attention may provide a useful model of clinical syndromes characterized by similar failures of cognitive control, including addiction, attention-deficit/hyperactivity disorder, and obesity. PMID:23437631

  9. The effect of spatial attention on invisible stimuli.

    PubMed

    Shin, Kilho; Stolte, Moritz; Chong, Sang Chul

    2009-10-01

    The influence of selective attention on visual processing is widespread. Recent studies have demonstrated that spatial attention can affect processing of invisible stimuli. However, it has been suggested that this effect is limited to low-level features, such as line orientations. The present experiments investigated whether spatial attention can influence both low-level (contrast threshold) and high-level (gender discrimination) adaptation, using the same method of attentional modulation for both types of stimuli. We found that spatial attention was able to increase the amount of adaptation to low- as well as to high-level invisible stimuli. These results suggest that attention can influence perceptual processes independent of visual awareness.

  10. Color priming in pop-out search depends on the relative color of the target

    PubMed Central

    Becker, Stefanie I.; Valuch, Christian; Ansorge, Ulrich

    2014-01-01

    In visual search for pop-out targets, search times are shorter when the target and non-target colors from the previous trial are repeated than when they change. This priming effect was originally attributed to a feature weighting mechanism that biases attention toward the target features, and away from the non-target features. However, more recent studies have shown that visual selection is strongly context-dependent: according to a relational account of feature priming, the target color is always encoded relative to the non-target color (e.g., as redder or greener). The present study provides a critical test of this hypothesis, by varying the colors of the search items such that either the relative color or the absolute color of the target always remained constant (or both). The results clearly show that color priming depends on the relative color of a target with respect to the non-targets but not on its absolute color value. Moreover, the observed priming effects did not change over the course of the experiment, suggesting that the visual system encodes colors in a relative manner from the start of the experiment. Taken together, these results strongly support a relational account of feature priming in visual search, and are inconsistent with the dominant feature-based views. PMID:24782795

  11. Attention has memory: priming for the size of the attentional focus.

    PubMed

    Fuggetta, Giorgio; Lanfranchi, Silvia; Campana, Gianluca

    2009-01-01

    Repeating the same target's features or spatial position, as well as repeating the same context (e.g. distractor sets) in visual search, leads to a decrease in reaction times. This modulation can occur on a trial-by-trial basis (the previous trial primes the following one), but can also occur across multiple trials (i.e. performance in the current trial can benefit from features, positions or contexts seen several trials earlier), and includes inhibition of different features, positions or contexts as well as facilitation of the same ones. Here we asked whether a similar implicit memory mechanism exists for the size of the attentional focus. By manipulating the size of the attentional focus through the repetition of search arrays with the same vs. different size, we found both facilitation for the same array size and inhibition for a different array size, as well as a progressive improvement in performance with an increasing number of repetitions of search arrays of the same size. These results show that implicit memory for the size of the attentional focus can guide visual search even in the absence of feature or position priming, or of distractor contextual effects.

  12. Neural oscillatory deficits in schizophrenia predict behavioral and neurocognitive impairments

    PubMed Central

    Martínez, Antígona; Gaspar, Pablo A.; Hillyard, Steven A.; Bickel, Stephan; Lakatos, Peter; Dias, Elisa C.; Javitt, Daniel C.

    2015-01-01

    Paying attention to visual stimuli is typically accompanied by event-related desynchronizations (ERD) of ongoing alpha (7–14 Hz) activity in visual cortex. The present study used time-frequency based analyses to investigate the role of impaired alpha ERD in visual processing deficits in schizophrenia (Sz). Subjects viewed sinusoidal gratings of high (HSF) and low (LSF) spatial frequency (SF) designed to test functioning of the parvo- vs. magnocellular pathways, respectively. Patients with Sz and healthy controls paid attention selectively to either the LSF or HSF gratings which were presented in random order. Event-related brain potentials (ERPs) were recorded to all stimuli. As in our previous study, it was found that Sz patients were selectively impaired at detecting LSF target stimuli and that ERP amplitudes to LSF stimuli were diminished, both for the early sensory-evoked components and for the attend minus unattend difference component (the Selection Negativity), which is generally regarded as a specific index of feature-selective attention. In the time-frequency domain, the differential ERP deficits to LSF stimuli were echoed in a virtually absent theta-band phase locked response to both unattended and attended LSF stimuli (along with relatively intact theta-band activity for HSF stimuli). In contrast to the theta-band evoked responses which were tightly stimulus locked, stimulus-induced desynchronizations of ongoing alpha activity were not tightly stimulus locked and were apparent only in induced power analyses. Sz patients were significantly impaired in the attention-related modulation of ongoing alpha activity for both HSF and LSF stimuli. These deficits correlated with patients’ behavioral deficits in visual information processing as well as with visually based neurocognitive deficits. These findings suggest an additional, pathway-independent, mechanism by which deficits in early visual processing contribute to overall cognitive impairment in Sz. PMID:26190988

  13. Visual working memory simultaneously guides facilitation and inhibition during visual search.

    PubMed

    Dube, Blaire; Basciano, April; Emrich, Stephen M; Al-Aidroos, Naseem

    2016-07-01

    During visual search, visual working memory (VWM) supports the guidance of attention in two ways: It stores the identity of the search target, facilitating the selection of matching stimuli in the search array, and it maintains a record of the distractors processed during search so that they can be inhibited. In two experiments, we investigated whether the full contents of VWM can be used to support both of these abilities simultaneously. In Experiment 1, participants completed a preview search task in which (a) a subset of search distractors appeared before the remainder of the search items, affording participants the opportunity to inhibit them, and (b) the search target varied from trial to trial, requiring the search target template to be maintained in VWM. We observed the established signature of VWM-based inhibition (reduced ability to ignore previewed distractors when the number of distractors exceeds VWM's capacity), suggesting that VWM can serve this role while also representing the target template. In Experiment 2, we replicated Experiment 1, but added to the search displays a singleton distractor that sometimes matched the color (a task-irrelevant feature) of the search target, to evaluate capture. We again observed the signature of VWM-based preview inhibition along with attentional capture by (and, thus, facilitation of) singletons matching the target template. These findings indicate that more than one VWM representation can bias attention at a time, and that these representations can separately affect selection through either facilitation or inhibition, placing constraints on existing models of the VWM-based guidance of attention.

  14. Spatial Attention Effects during Conscious and Nonconscious Processing of Visual Features and Objects

    ERIC Educational Resources Information Center

    Tapia, Evelina; Breitmeyer, Bruno G.; Jacob, Jane; Broyles, Elizabeth C.

    2013-01-01

    Flanker congruency effects were measured in a masked flanker task to assess the properties of spatial attention during conscious and nonconscious processing of form, color, and conjunctions of these features. We found that (1) consciously and nonconsciously processed colored shape distractors (i.e., flankers) produce flanker congruency effects;…

  15. Selection and response bias as determinants of priming of pop-out search: Revelations from diffusion modeling.

    PubMed

    Burnham, Bryan R

    2018-05-03

    During visual search, both top-down factors and bottom-up properties contribute to the guidance of visual attention, but selection history can influence attention independent of bottom-up and top-down factors. For example, priming of pop-out (PoP) is the finding that search for a singleton target is faster when the target and distractor features repeat than when those features trade roles between trials. Studies have suggested that such priming (selection history) effects on pop-out search manifest either early, by biasing the selection of the preceding target feature, or later in processing, by facilitating response and target retrieval processes. The present study was designed to examine the influence of selection history on pop-out search by introducing a speed-accuracy trade-off manipulation in a pop-out search task. Ratcliff diffusion modeling (RDM) was used to examine how selection history influenced both attentional bias and response execution processes. The results support the hypothesis that selection history biases attention toward the preceding target's features on the current trial and also influences selection of the response to the target.
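    The logic of the diffusion-model analysis above can be illustrated with a minimal sketch. The code below simulates a basic Ratcliff-style drift-diffusion process; it is a stand-in, not the fitting procedure used in the study, and the parameter values (`v`, `a`, `z`, `t0`) and the mapping of feature repetition onto a higher drift rate versus a shifted starting point are illustrative assumptions.

```python
import numpy as np

def simulate_ddm(v, a=1.0, z=0.5, t0=0.3, dt=0.001, sigma=1.0, n=2000, seed=0):
    """Simulate n diffusion trials between boundaries 0 and a.
    v: drift rate, a: boundary separation, z: relative starting point (0-1),
    t0: non-decision time. Returns (mean RT in seconds, accuracy)."""
    rng = np.random.default_rng(seed)
    sd = sigma * np.sqrt(dt)  # per-step noise standard deviation
    rts, correct = [], []
    for _ in range(n):
        x, t = z * a, 0.0
        while 0.0 < x < a:
            x += v * dt + sd * rng.standard_normal()
            t += dt
        rts.append(t + t0)
        correct.append(x >= a)  # reaching the upper boundary = correct response
    return float(np.mean(rts)), float(np.mean(correct))

# Selection-level account: repeating the target feature raises the drift rate.
rt_repeat, acc_repeat = simulate_ddm(v=2.5)
rt_switch, acc_switch = simulate_ddm(v=1.5)

# Response-level account: repetition instead shifts the starting point toward
# the correct boundary, leaving the quality of evidence (drift rate) unchanged.
rt_biased, _ = simulate_ddm(v=1.5, z=0.6)
```

    Both manipulations shorten mean RT, so mean RT alone cannot separate a selection bias from a response bias; fitting the diffusion model can, because drift-rate and starting-point changes leave different signatures in the full RT distributions and error rates.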

  16. The cost of selective attention in category learning: developmental differences between adults and infants.

    PubMed

    Best, Catherine A; Yim, Hyungwook; Sloutsky, Vladimir M

    2013-10-01

    Selective attention plays an important role in category learning. However, immaturities of top-down attentional control during infancy coupled with successful category learning suggest that early category learning is achieved without attending selectively. Research presented here examines this possibility by focusing on category learning in infants (6-8 months old) and adults. Participants were trained on a novel visual category. Halfway through the experiment, unbeknownst to participants, the to-be-learned category switched to another category, where previously relevant features became irrelevant and previously irrelevant features became relevant. If participants attend selectively to the relevant features of the first category, they should incur a cost of selective attention immediately after the unknown category switch. Results revealed that adults demonstrated a cost, as evidenced by a decrease in accuracy and response time on test trials as well as a decrease in visual attention to newly relevant features. In contrast, infants did not demonstrate a similar cost of selective attention as adults despite evidence of learning both to-be-learned categories. Findings are discussed as supporting multiple systems of category learning and as suggesting that learning mechanisms engaged by adults may be different from those engaged by infants. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Distractor-Induced Blindness: A Special Case of Contingent Attentional Capture?

    PubMed Central

    Winther, Gesche N.; Niedeggen, Michael

    2017-01-01

    The detection of a salient visual target embedded in a rapid serial visual presentation (RSVP) can be severely affected if target-like distractors are presented previously. This phenomenon, known as distractor-induced blindness (DIB), shares the prerequisites of contingent attentional capture (Folk, Remington, & Johnston, 1992). In both, target processing is transiently impaired by the presentation of distractors defined by similar features. In the present study, we investigated whether the speeded response to a target in the DIB paradigm can be described in terms of a contingent attentional capture process. In the first experiments, multiple distractors were embedded in the RSVP stream. Distractors either shared the target’s visual features (Experiment 1A) or differed from them (Experiment 1B). Congruent with hypotheses drawn from contingent attentional capture theory, response times (RTs) were exclusively impaired in conditions with target-like distractors. However, RTs were not impaired if only one single target-like distractor was presented (Experiment 2). If attentional capture directly contributed to DIB, the single distractor should be sufficient to impair target processing. In conclusion, DIB is not due to contingent attentional capture, but may rely on a central suppression process triggered by multiple distractors. PMID:28439320

  18. Encouraging top-down attention in visual search: A developmental perspective.

    PubMed

    Lookadoo, Regan; Yang, Yingying; Merrill, Edward C

    2017-10-01

    Four experiments are reported in which 60 younger children (7-8 years old), 60 older children (10-11 years old), and 60 young adults (18-25 years old) performed a conjunctive visual search task (15 per group in each experiment). The number of distractors of each feature type was unbalanced across displays to evaluate participants' ability to restrict search to the smaller subset of features. The use of top-down attention processes to restrict search was encouraged by providing external aids for identifying and maintaining attention on the smaller set. In Experiment 1, no external assistance was provided. In Experiment 2, precues and instructions were provided to focus attention on that subset. In Experiment 3, trials in which the smaller subset was represented by the same feature were presented in alternating blocks to eliminate the need to switch attention between features from trial to trial. In Experiment 4, consecutive blocks of the same subset features were presented in the first or second half of the experiment, providing additional consistency. All groups benefited from external support of top-down attention, although the pattern of improvement varied across experiments. The younger children benefited most from precues and instruction, using the subset search strategy when instructed. Furthermore, younger children benefited from blocking trials only when blocks of the same features did not alternate. Older participants benefited from the blocking of trials in both Experiments 3 and 4, but not from precues and instructions. Hence, our results revealed both malleability and limits of children's top-down control of attention.

  19. Measuring the interrelations among multiple paradigms of visual attention: an individual differences approach.

    PubMed

    Huang, Liqiang; Mo, Lei; Li, Ying

    2012-04-01

    A large part of the empirical research in the field of visual attention has focused on various concrete paradigms. However, as yet, there has been no clear demonstration of whether or not these paradigms are indeed measuring the same underlying construct. We collected a very large data set (nearly 1.3 million trials) to address this question. We tested 257 participants on nine paradigms: conjunction search, configuration search, counting, tracking, feature access, spatial pattern, response selection, visual short-term memory, and change blindness. A fairly general attention factor was identified. Some of the participants were also tested on eight other paradigms. This general attention factor was found to be correlated with intelligence, visual marking, task switching, mental rotation, and the Stroop task. On the other hand, a few paradigms that are very important in the attention literature (attentional capture, consonance-driven orienting, and inhibition of return) were found to be dissociated from this general attention factor.

  20. Top-Down Control of Visual Attention by the Prefrontal Cortex. Functional Specialization and Long-Range Interactions

    PubMed Central

    Paneri, Sofia; Gregoriou, Georgia G.

    2017-01-01

    The ability to select information that is relevant to current behavioral goals is the hallmark of voluntary attention and an essential part of our cognition. Attention tasks are a prime example for studying, at the neuronal level, how task-related information can be selectively processed in the brain while irrelevant information is filtered out. Whereas numerous studies have focused on elucidating the mechanisms of visual attention at the single neuron and population level in the visual cortices, considerably less work has been devoted to deciphering the distinct contribution of higher-order brain areas, which are known to be critical for the employment of attention. Among these areas, the prefrontal cortex (PFC) has long been considered a source of top-down signals that bias selection in early visual areas in favor of the attended features. Here, we review recent experimental data that support the role of PFC in attention. We examine the existing evidence for functional specialization within PFC and we discuss how long-range interactions between PFC subregions and posterior visual areas may be implemented in the brain and contribute to the attentional modulation of different measures of neural activity in visual cortices. PMID:29033784

  2. Measuring and modeling salience with the theory of visual attention.

    PubMed

    Krüger, Alexander; Tünnermann, Jan; Scharlau, Ingrid

    2017-08-01

    For almost three decades, the theory of visual attention (TVA) has been successful in mathematically describing and explaining a wide variety of phenomena in visual selection and recognition with high quantitative precision. Interestingly, the influence of feature contrast on attention has been included in TVA only recently, although it has been extensively studied outside the TVA framework. The present approach further develops this extension of TVA's scope by measuring and modeling salience. An empirical measure of salience is achieved by linking different (orientation and luminance) contrasts to a TVA parameter. In the modeling part, the function relating feature contrasts to salience is described mathematically and tested against alternatives by Bayesian model comparison. This model comparison reveals that the power function is an appropriate model of salience growth in the dimensions of orientation and luminance contrast. Furthermore, if contrasts from the two dimensions are combined, salience adds up additively.
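    The functional form reported above can be written down directly. The sketch below is a toy illustration of a power-law salience model with additive combination across dimensions; the `beta` and `gamma` values are invented for illustration and are not the estimates from the paper.

```python
import numpy as np

def salience(contrast, beta, gamma):
    """Power-law growth of salience with feature contrast (gamma < 1 gives
    compressive, diminishing-returns growth)."""
    return beta * np.power(contrast, gamma)

# Hypothetical parameters for the two dimensions studied.
beta_ori, gamma_ori = 1.0, 0.6   # orientation contrast
beta_lum, gamma_lum = 1.4, 0.5   # luminance contrast

s_ori = salience(0.5, beta_ori, gamma_ori)
s_lum = salience(0.3, beta_lum, gamma_lum)

# The model-comparison result: when contrasts from both dimensions are
# combined, their saliences simply add.
s_combined = s_ori + s_lum
```

    Under this form, doubling a contrast less than doubles its salience (the power function is compressive for gamma < 1), while contributions from different dimensions combine additively rather than, say, by a maximum rule.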

  3. Eye movements and attention: The role of pre-saccadic shifts of attention in perception, memory and the control of saccades

    PubMed Central

    Gersch, Timothy M.; Schnitzer, Brian S.; Dosher, Barbara A.; Kowler, Eileen

    2012-01-01

    Saccadic eye movements and perceptual attention work in a coordinated fashion to allow selection of the objects, features or regions with the greatest momentary need for limited visual processing resources. This study investigates perceptual characteristics of pre-saccadic shifts of attention during a sequence of saccades using the visual manipulations employed to study mechanisms of attention during maintained fixation. The first part of this paper reviews studies of the connections between saccades and attention, and their significance for both saccadic control and perception. The second part presents three experiments that examine the effects of pre-saccadic shifts of attention on vision during sequences of saccades. Perceptual enhancements at the saccadic goal location relative to non-goal locations were found across a range of stimulus contrasts, with either perceptual discrimination or detection tasks, with either single or multiple perceptual targets, and regardless of the presence of external noise. The results show that the preparation of saccades can evoke a variety of attentional effects, including attentionally-mediated changes in the strength of perceptual representations, selection of targets for encoding in visual memory, exclusion of external noise, or changes in the levels of internal visual noise. The visual changes evoked by saccadic planning make it possible for the visual system to effectively use saccadic eye movements to explore the visual environment. PMID:22809798

  4. Top-down dimensional weight set determines the capture of visual attention: evidence from the PCN component.

    PubMed

    Töllner, Thomas; Müller, Hermann J; Zehetleitner, Michael

    2012-07-01

    Visual search for feature singletons is slowed when a task-irrelevant, but more salient distracter singleton is concurrently presented. While there is a consensus that this distracter interference effect can be influenced by internal system settings, it remains controversial at what stage of processing this influence starts to affect visual coding. Advocates of the "stimulus-driven" view maintain that the initial sweep of visual processing is entirely driven by physical stimulus attributes and that top-down settings can bias visual processing only after selection of the most salient item. By contrast, opponents argue that top-down expectancies can alter the initial selection priority, so that focal attention is "not automatically" shifted to the location exhibiting the highest feature contrast. To precisely trace the allocation of focal attention, we analyzed the Posterior-Contralateral-Negativity (PCN) in a task in which the likelihood (expectancy) with which a distracter occurred was systematically varied. Our results show that both high (vs. low) distracter expectancy and experiencing a distracter on the previous trial speed up the timing of the target-elicited PCN. Importantly, there was no distracter-elicited PCN, indicating that participants did not shift attention to the distracter before selecting the target. This pattern unambiguously demonstrates that preattentive vision is top-down modifiable.

  5. Parameter-Based Assessment of Disturbed and Intact Components of Visual Attention in Children with Developmental Dyslexia

    ERIC Educational Resources Information Center

    Bogon, Johanna; Finke, Kathrin; Schulte-Körne, Gerd; Müller, Hermann J.; Schneider, Werner X.; Stenneken, Prisca

    2014-01-01

    People with developmental dyslexia (DD) have been shown to be impaired in tasks that require the processing of multiple visual elements in parallel. It has been suggested that this deficit originates from disturbed visual attentional functions. The parameter-based assessment of visual attention based on Bundesen's (1990) theory of visual…

  6. An insect-inspired model for visual binding II: functional analysis and visual attention.

    PubMed

    Northcutt, Brandon D; Higgins, Charles M

    2017-04-01

    We have developed a neural network model capable of performing visual binding inspired by neuronal circuitry in the optic glomeruli of flies: a brain area that lies just downstream of the optic lobes where early visual processing is performed. This visual binding model is able to detect objects in dynamic image sequences and bind together their respective characteristic visual features-such as color, motion, and orientation-by taking advantage of their common temporal fluctuations. Visual binding is represented in the form of an inhibitory weight matrix which learns over time which features originate from a given visual object. In the present work, we show that information represented implicitly in this weight matrix can be used to explicitly count the number of objects present in the visual image, to enumerate their specific visual characteristics, and even to create an enhanced image in which one particular object is emphasized over others, thus implementing a simple form of visual attention. Further, we present a detailed analysis which reveals the function and theoretical limitations of the visual binding network and in this context describe a novel network learning rule which is optimized for visual binding.
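    The idea of reading object counts out of a binding matrix can be illustrated with a much simpler correlation-based stand-in for the network's learned inhibitory weights. Everything here (channel count, random envelopes, the 0.5 threshold) is invented for illustration; the actual model learns its weight matrix with a neural learning rule rather than computing correlations directly.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500  # number of time steps

# Two hypothetical objects, each driving its own shared temporal envelope.
envelopes = rng.random((2, T))

# Six feature channels (e.g., color, motion, and orientation for each object):
# channels 0-2 follow object A, channels 3-5 follow object B, plus sensor noise.
membership = np.array([0, 0, 0, 1, 1, 1])
gains = rng.uniform(0.5, 1.0, (6, 1))
signals = envelopes[membership] * gains + 0.05 * rng.standard_normal((6, T))

# Correlation-based stand-in for the learned inhibitory weight matrix:
# channels with strong common temporal fluctuations are treated as bound;
# uncorrelated channels would end up mutually inhibitory.
corr = np.corrcoef(signals)
bound = corr > 0.5

# Count objects by grouping mutually bound channels: each distinct row of the
# thresholded matrix corresponds to one group of co-fluctuating features.
groups = {tuple(row) for row in bound}
n_objects = len(groups)
```

    With channels this clean, the thresholded matrix partitions the six feature channels into two groups, recovering the number of objects, which is the kind of information the paper shows can be read out of its inhibitory weight matrix.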

  7. What Top-Down Task Sets Do for Us: An ERP Study on the Benefits of Advance Preparation in Visual Search

    ERIC Educational Resources Information Center

    Eimer, Martin; Kiss, Monika; Nicholas, Susan

    2011-01-01

    When target-defining features are specified in advance, attentional target selection in visual search is controlled by preparatory top-down task sets. We used ERP measures to study voluntary target selection in the absence of such feature-specific task sets, and to compare it to selection that is guided by advance knowledge about target features.…

  8. Modeling human comprehension of data visualizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzen, Laura E.; Haass, Michael Joseph; Divis, Kristin Marie

    This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.

  9. The Effect of Perceptual Load on Attention-Induced Motion Blindness: The Efficiency of Selective Inhibition

    ERIC Educational Resources Information Center

    Hay, Julia L.; Milders, Maarten M.; Sahraie, Arash; Niedeggen, Michael

    2006-01-01

    Recent visual marking studies have shown that the carry-over of distractor inhibition can impair the ability of singletons to capture attention if the singleton and distractors share features. The current study extends this finding to first-order motion targets and distractors, clearly separated in time by a visual cue (the letter X). Target…

  10. Interactions between Visual Attention and Episodic Retrieval: Dissociable Contributions of Parietal Regions during Gist-Based False Recognition

    PubMed Central

    Guerin, Scott A.; Robbins, Clifford A.; Gilmore, Adrian W.; Schacter, Daniel L.

    2012-01-01

    The interaction between episodic retrieval and visual attention is relatively unexplored. Given that systems mediating attention and episodic memory appear to be segregated, and perhaps even in competition, it is unclear how visual attention is recruited during episodic retrieval. We investigated the recruitment of visual attention during the suppression of gist-based false recognition, the tendency to falsely recognize items that are similar to previously encountered items. Recruitment of visual attention was associated with activity in the dorsal attention network. The inferior parietal lobule, often implicated in episodic retrieval, tracked veridical retrieval of perceptual detail and showed reduced activity during the engagement of visual attention, consistent with a competitive relationship with the dorsal attention network. These findings suggest that the contribution of the parietal cortex to interactions between visual attention and episodic retrieval entails distinct systems that contribute to different components of the task while also suppressing each other. PMID:22998879

  11. Figure-ground organization and the emergence of proto-objects in the visual cortex.

    PubMed

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields (CRF), but in addition their responses are modulated (enhanced or suppressed) depending on the location of a 'figure' relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the CRF. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations ('proto-objects'). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objectness, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence, and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex.

  13. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    NASA Astrophysics Data System (ADS)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention properties of the human visual system. Because of spatial non-uniformity, different locations in an image are of different importance for the perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROIs). The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign a different importance to each location in the image. Still, none of these objective metrics utilizes an analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper, the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROIs were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROIs in fine quality while the rest of the image is reconstructed with low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
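    The core complaint here, that global metrics ignore where the errors fall, can be made concrete with a toy ROI-weighted error measure. This is an illustrative construction, not one of the metrics evaluated in the paper; the mask, the weight `w_roi`, and the test images are all invented.

```python
import numpy as np

def roi_weighted_mse(ref, test, roi_mask, w_roi=0.9):
    """Mean squared error with errors inside the ROI weighted w_roi and
    background errors weighted (1 - w_roi)."""
    err = (ref - test) ** 2
    w = np.where(roi_mask, w_roi, 1.0 - w_roi)
    return float((w * err).sum() / w.sum())

rng = np.random.default_rng(0)
ref = rng.random((64, 64))          # stand-in reference image
roi = np.zeros((64, 64), dtype=bool)
roi[:, :32] = True                  # hypothetical ROI: left half of the image

# Reconstruction A: accurate inside the ROI, noisy background.
a = ref + ~roi * 0.2 * rng.standard_normal((64, 64))
# Reconstruction B: noisy ROI, accurate background (same noise level).
b = ref + roi * 0.2 * rng.standard_normal((64, 64))

score_a = roi_weighted_mse(ref, a, roi)
score_b = roi_weighted_mse(ref, b, roi)
```

    The two reconstructions have essentially the same global MSE, but the ROI-weighted score clearly prefers the one that keeps the region of interest clean, which is the distinction the authors found missing from SSIM-style metrics.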

  14. Time course of spatial and feature selective attention for partly-occluded objects.

    PubMed

    Kasai, Tetsuko; Takeya, Ryuji

    2012-07-01

    Attention selects objects/groups as the most fundamental units, and this may be achieved by an attention-spreading mechanism. Previous event-related potential (ERP) studies have found that attention-spreading is reflected by a decrease in the N1 spatial attention effect. The present study tested whether the electrophysiological attention effect is associated with the perception of object unity or amodal completion through the use of partly-occluded objects. ERPs were recorded in 14 participants who were required to pay attention to their left or right visual field and to press a button for a target shape in the attended field. Bilateral stimuli were presented rapidly, and were separated, connected, or connected behind an occluder. Behavioral performance in the connected and occluded conditions was worse than that in the separated condition, indicating that attention spread over perceptual object representations after amodal completion. Consistently, the late N1 spatial attention effect (180-220 ms post-stimulus) and the early phase (230-280 ms) of feature selection effects (target N2) at contralateral sites decreased, equally for the occluded and connected conditions, while the attention effect in the early N1 latency (140-180 ms) shifted most positively for the occluded condition. These results suggest that perceptual organization processes for object recognition transiently modulate spatial and feature selection processes in the visual cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Development of a computerized visual search test.

    PubMed

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-09-01

    Visual attention and visual search are features of visual perception that are essential for attending to and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information, including the format of the test, will be described. The test was designed to provide an alternative to existing cancellation tests. Data from two pilot studies that examined some aspects of the test's validity will be reported. To date, our assessment of the test shows that it discriminates between healthy and head-injured persons. More research and development work is required to examine task performance changes in relation to task complexity. It is suggested that the conceptual design of the test is worthy of further investigation.

  16. Perceptual Grouping Enhances Visual Plasticity

    PubMed Central

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

    Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity. PMID:23301100

  17. Event-driven visual attention for the humanoid robot iCub

    PubMed Central

    Rea, Francesco; Metta, Giorgio; Bartolozzi, Chiara

    2013-01-01

    Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. Its performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend. PMID:24379753

  18. Visual search and the aging brain: discerning the effects of age-related brain volume shrinkage on alertness, feature binding, and attentional control.

    PubMed

    Müller-Oehring, Eva M; Schulte, Tilman; Rohlfing, Torsten; Pfefferbaum, Adolf; Sullivan, Edith V

    2013-01-01

    Decline in visuospatial abilities with advancing age has been attributed to a demise of bottom-up and top-down functions involving sensory processing, selective attention, and executive control. These functions may be differentially affected by age-related volume shrinkage of subcortical and cortical nodes subserving the dorsal and ventral processing streams and the corpus callosum mediating interhemispheric information exchange. Fifty-five healthy adults (25-84 years) underwent structural MRI and performed a visual search task to test perceptual and attentional demands by combining feature-conjunction searches with "gestalt" grouping and attentional cueing paradigms. Poorer conjunction, but not feature, search performance was related to older age and volume shrinkage of nodes in the dorsolateral processing stream. When displays allowed perceptual grouping through distractor homogeneity, poorer conjunction-search performance correlated with smaller ventrolateral prefrontal cortical and callosal volumes. An alerting cue attenuated age effects on conjunction search, and the alertness benefit was associated with thalamic, callosal, and temporal cortex volumes. Our results indicate that older adults can capitalize on early parallel stages of visual information processing, whereas age-related limitations arise at later serial processing stages requiring self-guided selective attention and executive control. These limitations are explained in part by age-related brain volume shrinkage and can be mitigated by external cues.

  19. Visual attention in a complex search task differs between honeybees and bumblebees.

    PubMed

    Morawetz, Linde; Spaethe, Johannes

    2012-07-15

    Mechanisms of spatial attention are used when the amount of gathered information exceeds processing capacity. Such mechanisms have been proposed in bees, but have not yet been experimentally demonstrated. We provide evidence that selective attention influences the foraging performance of two social bee species, the honeybee Apis mellifera and the bumblebee Bombus terrestris. Visual search tasks, originally developed for application in human psychology, were adapted for behavioural experiments on bees. We examined the impact of distracting visual information on search performance, which we measured as error rate and decision time. We found that bumblebees were significantly less affected by distracting objects than honeybees. Based on the results, we conclude that the search mechanism in honeybees is serial-like, whereas in bumblebees it shows the characteristics of a restricted, parallel-like search. Furthermore, the bees differed in their strategy for solving the speed-accuracy trade-off: whereas bumblebees displayed slow but correct decision-making, honeybees exhibited fast and inaccurate decision-making. We propose two neuronal mechanisms of visual information processing that account for the different responses between honeybees and bumblebees, and we correlate species-specific features of the search behaviour to differences in habitat and life history.

  20. The Diagnosticity of Color for Emotional Objects

    PubMed Central

    McMenamin, Brenton W.; Radue, Jasmine; Trask, Joanna; Huskamp, Kristin; Kersten, Daniel; Marsolek, Chad J.

    2012-01-01

    Object classification can be facilitated if simple diagnostic features can be used to determine class membership. Previous studies have found that simple shapes may be diagnostic for emotional content and automatically alter the allocation of visual attention. In the present study, we analyzed whether color is diagnostic of emotional content and tested whether emotionally diagnostic hues alter the allocation of visual attention. Reddish-yellow hues are more common in (i.e., diagnostic of) emotional images, particularly images with positive emotional content. An exogenous cueing paradigm was employed to test whether these diagnostic hues orient attention differently from other hues due to their emotional diagnosticity. In two experiments, we found that participants allocated attention differently to diagnostic hues than to non-diagnostic hues, in a pattern indicating a broadening of spatial attention when cued with diagnostic hues. Moreover, the attentional broadening effect was predicted by self-reported measures of affective style, linking the behavioral effect to emotional processes. These results confirm the existence and use of diagnostic features for the rapid detection of emotional content. PMID:24659831

  1. Perisaccadic Updating of Visual Representations and Attentional States: Linking Behavior and Neurophysiology

    PubMed Central

    Marino, Alexandria C.; Mazer, James A.

    2016-01-01

    During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron’s spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed. PMID:26903820

  2. The influence of naturalistic, directionally non-specific motion on the spatial deployment of visual attention in right-hemispheric stroke.

    PubMed

    Cazzoli, Dario; Hopfner, Simone; Preisig, Basil; Zito, Giuseppe; Vanbellingen, Tim; Jäger, Michael; Nef, Tobias; Mosimann, Urs; Bohlhalter, Stephan; Müri, René M; Nyffeler, Thomas

    2016-11-01

    An impairment of the spatial deployment of visual attention during exploration of static (i.e., motionless) stimuli is a common finding after an acute, right-hemispheric stroke. However, less is known about how these deficits: (a) are modulated through naturalistic motion (i.e., without directional, specific spatial features); and, (b) evolve in the subacute/chronic post-stroke phase. In the present study, we investigated free visual exploration in three patient groups with subacute/chronic right-hemispheric stroke and in healthy subjects. The first group included patients with left visual neglect and a left visual field defect (VFD), the second patients with a left VFD but no neglect, and the third patients without neglect or VFD. Eye movements were measured in all participants while they freely explored a traffic scene without (static condition) and with (dynamic condition) naturalistic motion, i.e., cars moving from the right or left. In the static condition, all patient groups showed similar deployment of visual exploration (i.e., as measured by the cumulative fixation duration) as compared to healthy subjects, suggesting that recovery processes took place, with normal spatial allocation of attention. However, the more demanding dynamic condition with moving cars elicited different re-distribution patterns of visual attention, quite similar to those typically observed in acute stroke. Neglect patients with VFD showed a significant decrease of visual exploration in the contralesional space, whereas patients with VFD but no neglect showed a significant increase of visual exploration in the contralesional space. No differences, as compared to healthy subjects, were found in patients without neglect or VFD. These results suggest that naturalistic motion, without directional, specific spatial features, may critically influence the spatial distribution of visual attention in subacute/chronic stroke patients. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Feature saliency and feedback information interactively impact visual category learning

    PubMed Central

    Hammer, Rubi; Sloutsky, Vladimir; Grill-Spector, Kalanit

    2015-01-01

    Visual category learning (VCL) involves detecting which features are most relevant for categorization. VCL relies on attentional learning, which enables effectively redirecting attention to an object's features most relevant for categorization, while 'filtering out' irrelevant features. When features relevant for categorization are not salient, VCL relies also on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks in which they learned to categorize novel stimuli by detecting the feature dimension relevant for categorization. Tasks varied both in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks), and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load, vs. tasks with high-information non-ambiguous feedback). We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that an increased attentional load, associated with the processing of moderately ambiguous feedback, has little effect on VCL when features are salient. In low-saliency tasks, VCL relied on slower perceptual learning; but when the feedback was highly informative, participants were able to ultimately attain the same performance as during the high-saliency VCL tasks. However, VCL was significantly compromised in the low-saliency mid-information feedback task. We suggest that such low-saliency mid-information learning scenarios are characterized by a 'cognitive loop paradox' in which two interdependent learning processes have to take place simultaneously. PMID:25745404

  4. Deployment of spatial attention towards locations in memory representations. An EEG study.

    PubMed

    Leszczyński, Marcin; Wykowska, Agnieszka; Perez-Osorio, Jairo; Müller, Hermann J

    2013-01-01

    Recalling information from visual short-term memory (VSTM) involves the same neural mechanisms as attending to an actually perceived scene. In particular, retrieval from VSTM has been associated with orienting of visual attention towards a location within a spatially-organized memory representation. However, an open question concerns whether spatial attention is also recruited during VSTM retrieval even when performing the task does not require access to the spatial coordinates of items in the memorized scene. The present study combined a visual search task with a modified, delayed central probe protocol, together with EEG analysis, to answer this question. We found a temporal contralateral negativity (TCN) elicited by a centrally presented go-signal that was spatially uninformative and featurally unrelated to the search target, and that informed participants only about which response key to press to indicate a prepared target-present vs. -absent decision. This lateralization during VSTM retrieval (TCN) provides strong evidence of a shift of attention towards the target location in the memory representation. Notably, this shift occurred even though the task required no spatial (or featural) information about the search target to be encoded, maintained, or retrieved to produce the correct response, and even though the go-signal itself specified no information about the target's location or defining feature.

  5. A novel brain-computer interface based on the rapid serial visual presentation paradigm.

    PubMed

    Acqualagna, Laura; Treder, Matthias Sebastian; Schreuder, Martijn; Blankertz, Benjamin

    2010-01-01

    Most present-day visual brain-computer interfaces (BCIs) suffer from the fact that they rely on eye movements, are slow-paced, or feature a small vocabulary. As a potential remedy, we explored a novel BCI paradigm consisting of a central rapid serial visual presentation (RSVP) of the stimuli. It has a large vocabulary and realizes a BCI system based on covert non-spatial selective visual attention. In an offline study, eight participants were presented sequences of rapid bursts of symbols. Two different speeds and two different color conditions were investigated. Robust early visual and P300 components were elicited time-locked to the presentation of the target. Offline classification revealed a mean accuracy of up to 90% for selecting the correct symbol out of 30 possibilities. The results suggest that RSVP-BCI is a promising new paradigm, also for patients with oculomotor impairments.

  6. Population Response Profiles in Early Visual Cortex Are Biased in Favor of More Valuable Stimuli

    PubMed Central

    Saproo, Sameer

    2010-01-01

    Voluntary and stimulus-driven shifts of attention can modulate the representation of behaviorally relevant stimuli in early areas of visual cortex. In turn, attended items are processed faster and more accurately, facilitating the selection of appropriate behavioral responses. Information processing is also strongly influenced by past experience, and recent studies indicate that the learned value of a stimulus can influence relatively late stages of decision making such as the process of selecting a motor response. However, the learned value of a stimulus can also influence the magnitude of cortical responses in early sensory areas such as V1 and S1. These early effects of stimulus value are presumed to improve the quality of sensory representations; however, the nature of these modulations is not clear. They could reflect nonspecific changes in response amplitude associated with changes in general arousal, or they could reflect a bias in population responses so that high-value features are represented more robustly. To examine this issue, subjects performed a two-alternative forced choice paradigm with a variable-interval payoff schedule to dynamically manipulate the relative value of two stimuli defined by their orientation (one was rotated clockwise from vertical, the other counterclockwise). Activation levels in visual cortex were monitored using functional MRI and feature-selective voxel tuning functions while subjects performed the behavioral task. The results suggest that value not only modulates the relative amplitude of responses in early areas of human visual cortex, but also sharpens the response profile across the populations of feature-selective neurons that encode the critical stimulus feature (orientation). Moreover, changes in space- or feature-based attention cannot easily explain the results because representations of both the selected and the unselected stimuli underwent a similar feature-selective modulation. This sharpening in the population response profile could theoretically improve the probability of correctly discriminating high-value stimuli from low-value alternatives. PMID:20410360

  7. Distinct roles of theta and alpha oscillations in the involuntary capture of goal-directed attention.

    PubMed

    Harris, Anthony M; Dux, Paul E; Jones, Caelyn N; Mattingley, Jason B

    2017-05-15

    Mechanisms of attention assign priority to sensory inputs on the basis of current task goals. Previous studies have shown that lateralized neural oscillations within the alpha (8-14 Hz) range are associated with the voluntary allocation of attention to the contralateral visual field. It is currently unknown, however, whether similar oscillatory signatures instantiate the involuntary capture of spatial attention by goal-relevant stimulus properties. Here we investigated the roles of theta (4-8 Hz), alpha, and beta (14-30 Hz) oscillations in human goal-directed visual attention. Across two experiments, we had participants respond to a brief target of a particular color among heterogeneously colored distractors. Prior to target onset, we cued one location with a lateralized, non-predictive cue that was either target- or non-target-colored. During the behavioral task, we recorded brain activity using electroencephalography (EEG), with the aim of analyzing cue-elicited oscillatory activity. We found that theta oscillations lateralized in response to all cues, and this lateralization was stronger if the cue matched the target color. Alpha oscillations lateralized relatively later, and only in response to target-colored cues, consistent with the capture of spatial attention. Our findings suggest that stimulus-induced changes in theta and alpha amplitude reflect task-based modulation of signals by feature-based and spatial attention, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Visual attention for a desktop virtual environment with ambient scent

    PubMed Central

    Toet, Alexander; van Schaik, Martin G.

    2013-01-01

    In the current study participants explored a desktop virtual environment (VE) representing a suburban neighborhood with signs of public disorder (neglect, vandalism, and crime), while being exposed to either room air (control group), or subliminal levels of tar (unpleasant; typically associated with burned or waste material) or freshly cut grass (pleasant; typically associated with natural or fresh material) ambient odor. They reported all signs of disorder they noticed during their walk, together with their associated emotional response. Based on recent evidence that odors reflexively direct visual attention to (either semantically or affectively) congruent visual objects, we hypothesized that participants would notice more signs of disorder in the presence of ambient tar odor (since this odor may bias attention to unpleasant and negative features), and fewer signs of disorder in the presence of ambient grass odor (since this odor may bias visual attention toward the vegetation in the environment and away from the signs of disorder). Contrary to our expectations, the results provide no indication that the presence of an ambient odor affected the participants' visual attention for signs of disorder or their emotional response. However, the paradigm used in the present study does not allow us to draw any conclusions in this respect. We conclude that a closer affective, semantic, or spatiotemporal link between the contents of a desktop VE and ambient scents may be required to effectively establish diagnostic associations that guide a user's attention. In the absence of these direct links, ambient scent may be more diagnostic for the physical environment of the observer as a whole than for the particular items in that environment (or, in this case, items represented in the VE). PMID:24324453

  9. Look at That! Video Chat and Joint Visual Attention Development among Babies and Toddlers

    ERIC Educational Resources Information Center

    McClure, Elisabeth R.; Chentsova-Dutton, Yulia E.; Holochwost, Steven J.; Parrott, W. G.; Barr, Rachel

    2018-01-01

    Although many relatives use video chat to keep in touch with toddlers, key features of adult-toddler interaction like joint visual attention (JVA) may be compromised in this context. In this study, 25 families with a child between 6 and 24 months were observed using video chat at home with geographically separated grandparents. We define two types…

  10. Eye-gaze independent EEG-based brain-computer interfaces for communication.

    PubMed

    Riccio, A; Mattia, D; Simione, L; Olivetti, M; Cincotti, F

    2012-08-01

    The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs and the last six on tactile BCIs, respectively. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users' requirements in a real-life scenario.

  11. Eye-gaze independent EEG-based brain-computer interfaces for communication

    NASA Astrophysics Data System (ADS)

    Riccio, A.; Mattia, D.; Simione, L.; Olivetti, M.; Cincotti, F.

    2012-08-01

    The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs and the last six on tactile BCIs, respectively. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users’ requirements in a real-life scenario.

  12. Feature-selective Attention in Frontoparietal Cortex: Multivoxel Codes Adjust to Prioritize Task-relevant Information.

    PubMed

    Jackson, Jade; Rich, Anina N; Williams, Mark A; Woolgar, Alexandra

    2017-02-01

    Human cognition is characterized by astounding flexibility, enabling us to select appropriate information according to the objectives of our current task. A circuit of frontal and parietal brain regions, often referred to as the frontoparietal attention network or multiple-demand (MD) regions, are believed to play a fundamental role in this flexibility. There is evidence that these regions dynamically adjust their responses to selectively process information that is currently relevant for behavior, as proposed by the "adaptive coding hypothesis" [Duncan, J. An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience, 2, 820-829, 2001]. Could this provide a neural mechanism for feature-selective attention, the process by which we preferentially process one feature of a stimulus over another? We used multivariate pattern analysis of fMRI data during a perceptually challenging categorization task to investigate whether the representation of visual object features in the MD regions flexibly adjusts according to task relevance. Participants were trained to categorize visually similar novel objects along two orthogonal stimulus dimensions (length/orientation) and performed short alternating blocks in which only one of these dimensions was relevant. We found that multivoxel patterns of activation in the MD regions encoded the task-relevant distinctions more strongly than the task-irrelevant distinctions: The MD regions discriminated between stimuli of different lengths when length was relevant and between the same objects according to orientation when orientation was relevant. The data suggest a flexible neural system that adjusts its representation of visual objects to preferentially encode stimulus features that are currently relevant for behavior, providing a neural mechanism for feature-selective attention.

  13. The Speed of Feature-Based Attention: Attentional Advantage Is Slow, but Selection Is Fast

    ERIC Educational Resources Information Center

    Huang, Liqiang

    2010-01-01

    When paying attention to a feature (e.g., red), no attentional advantage is gained in perceiving items with this feature in very brief displays. Therefore, feature-based attention seems to be slow. In previous feature-based attention studies, attention has often been measured as the difference in performance in a secondary task. In our recent work…

  14. Self-Regulation of Visual Attention and Facial Expression of Emotions in ADHD Children

    ERIC Educational Resources Information Center

    Kuhle, Hans J.; Kinkelbur, Jorg; Andes, Kerstin; Heidorn, Fridjof M.; Zeyer, Solveigh; Rautzenberg, Petra; Jansen, Fritz

    2007-01-01

    Objective: To test if visual focusing and mimic display as features of self-regulation in ADHD children show a curvilinear relation to rising methylphenidate (MPH) doses. To test if small dose steps of 2.5 mg MPH cause significant changes in behavior. And to test the relation of these features to intellectual performance, parents' ratings, and…

  15. Preattentive representation of feature conjunctions for concurrent spatially distributed auditory objects.

    PubMed

    Takegata, Rika; Brattico, Elvira; Tervaniemi, Mari; Varyagina, Olga; Näätänen, Risto; Winkler, István

    2005-09-01

    The role of attention in conjoining features of an object has been a topic of much debate. Studies using the mismatch negativity (MMN), an index of detecting acoustic deviance, suggested that the conjunctions of auditory features are preattentively represented in the brain. These studies, however, used sequentially presented sounds and thus are not directly comparable with visual studies of feature integration. Therefore, the current study presented an array of spatially distributed sounds to determine whether the auditory features of concurrent sounds are correctly conjoined without focal attention directed to the sounds. Two types of sounds differing from each other in timbre and pitch were repeatedly presented together while subjects were engaged in a visual n-back working-memory task and ignored the sounds. Occasional reversals of the frequent pitch-timbre combinations elicited MMNs of a very similar amplitude and latency irrespective of the task load. This result suggested preattentive integration of auditory features. However, performance in a subsequent target-search task with the same stimuli indicated the occurrence of illusory conjunctions. The discrepancy between the results obtained with and without focal attention suggests that illusory conjunctions may occur during voluntary access to the preattentively encoded object representations.

  16. Crowding, feature integration, and two kinds of "attention".

    PubMed

    Põder, Endel

    2006-02-21

    In a recent article, Pelli, Palomares, and Majaj (2004) suggested that feature binding is mediated by hard-wired integration fields instead of a spotlight of spatial attention (as assumed by Treisman & Gelade, 1980). Consequently, the correct conjoining of visual features can be guaranteed only when there are no other competing features within a circle with a radius of approximately 0.5E (E = eccentricity of the target object). This claim seems contradicted by an observation that we can easily see--for example, the orientation of a single blue bar within a dense array of randomly oriented red bars. In the present study, possible determinants of the extent of crowding (or feature integration) zones were analyzed with feature (color) singletons as targets. It was found that the number of distractors has a dramatic effect on crowding. With a few distractors, a normal crowding effect was observed. However, by increasing the number of distractors, the crowding effect was remarkably reduced. Similar results were observed when the target and distractors were of the same color and when only a differently colored circle indicated the target location. The results can be explained by bottom-up "attention" that facilitates the processing of information from salient locations in the visual field.

  17. Perceptual load vs. dilution: the roles of attentional focus, stimulus category, and target predictability

    PubMed Central

    Chen, Zhe; Cave, Kyle R.

    2013-01-01

Many studies have shown that increasing the number of neutral stimuli in a display decreases distractor interference. This result has been interpreted within two different frameworks: a perceptual load account, based on a reduction in spare resources, and a dilution account, based on a degradation in distractor representation and/or an increase in crosstalk between the distractor and the neutral stimuli that contain visually similar features. In four experiments, we systematically manipulated the extent of attentional focus, stimulus category, and preknowledge of the target to examine how these factors would interact with the display set size to influence the degree of distractor processing. Display set size did not affect the degree of distractor processing in all situations. Increasing the number of neutral items decreased distractor processing only when a task induced a broad attentional focus that included the neutral stimuli, when the neutral stimuli were in the same category as the target and distractor, and when the preknowledge of the target was insufficient to guide attention to the target efficiently. These results suggest that the effect of neutral stimuli on the degree of distractor processing is more complex than previously assumed. They provide new insight into the competitive interactions between bottom-up and top-down processes that govern the efficiency of visual selective attention. PMID:23761777

  19. Human attention filters for single colors.

    PubMed

    Sun, Peng; Chubb, Charles; Wright, Charles E; Sperling, George

    2016-10-25

The visual images in the eyes contain much more information than the brain can process. An important selection mechanism is feature-based attention (FBA). FBA is best described by attention filters that specify precisely the extent to which items containing attended features are selectively processed and the extent to which items that do not contain the attended features are attenuated. The centroid-judgment paradigm enables quick, precise measurements of such human perceptual attention filters, analogous to transmission measurements of photographic color filters. Subjects use a mouse to locate the centroid (the center of gravity) of a briefly displayed cloud of dots and receive precise feedback. A subset of dots is distinguished by some characteristic, such as a different color, and subjects judge the centroid of only the distinguished subset (e.g., dots of a particular color). The analysis efficiently determines the precise weight in the judged centroid of dots of every color in the display (i.e., the attention filter for the particular attended color in that context). We report 32 attention filters for single colors. Attention filters that discriminate one saturated hue from among seven other equiluminant distractor hues are extraordinarily selective, achieving attended/unattended weight ratios >20:1. Attention filters for selecting a color that differs in saturation or lightness from distractors are much less selective than attention filters for hue (given equal discriminability of the colors), and their filter selectivities are proportional to the discriminability distance of neighboring colors, whereas in the same range hue attention-filter selectivity is virtually independent of discriminability.
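The weight-per-color analysis described above can be illustrated with a simple forward model of the centroid judgment: each dot pulls the perceived centroid in proportion to the attention-filter weight of its color. This is a minimal sketch, not the authors' analysis code; the function name, dot coordinates, and weight values are hypothetical (the 25:1 weight ratio is chosen to fall in the >20:1 range reported for saturated hues).

```python
# Hypothetical forward model of the centroid-judgment paradigm.
# Each dot contributes to the perceived centroid in proportion to the
# attention-filter weight of its color (names and values are illustrative).

def weighted_centroid(dots, weights):
    """dots: list of (color, x, y); weights: attention-filter weight per color."""
    total = sum(weights[c] for c, _, _ in dots)
    cx = sum(weights[c] * x for c, x, _ in dots) / total
    cy = sum(weights[c] * y for c, _, y in dots) / total
    return cx, cy

# A highly selective hue filter: attended:unattended weight ratio of 25:1.
dots = [("red", 0.0, 0.0), ("red", 2.0, 0.0), ("blue", 10.0, 2.0)]
weights = {"red": 1.0, "blue": 0.04}
cx, cy = weighted_centroid(dots, weights)
# The distant blue distractor pulls the judged centroid only slightly
# away from the attended-red centroid at (1.0, 0.0).
```

With a perfectly selective filter (blue weight 0), the judged centroid would sit exactly at the red-dot centroid; the residual pull of the distractor is what the measured filter weight quantifies.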

  20. Subsystems of sensory attention for skilled reaching: vision for transport and pre-shaping and somatosensation for grasping, withdrawal and release.

    PubMed

    Sacrey, Lori-Ann R; Whishaw, Ian Q

    2012-06-01

    Skilled reaching is a forelimb movement in which a subject reaches for a piece of food that is placed in the mouth for eating. It is a natural movement used by many animal species and is a routine, daily activity for humans. Its prominent features include transport of the hand to a target, shaping the digits in preparation for grasping, grasping, and withdrawal of the hand to place the food in the mouth. Studies on normal human adults show that skilled reaching is mediated by at least two sensory attention processes. Hand transport to the target and hand shaping are temporally coupled with visual fixation on the target. Grasping, withdrawal, and placing the food into the mouth are associated with visual disengagement and somatosensory guidance. Studies on nonhuman animal species illustrate that shared visual and somatosensory attention likely evolved in the primate lineage. Studies on developing infants illustrate that shared attention requires both experience and maturation. Studies on subjects with Parkinson's disease and Huntington's disease illustrate that decomposition of shared attention also features compensatory visual guidance. The evolutionary, developmental, and neural control of skilled reaching suggests that associative learning processes are importantly related to normal adult attention sharing and so can be used in remediation. The economical use of sensory attention in the different phases of skilled reaching ensures efficiency in eating, reduces sensory interference between sensory reference frames, and provides efficient neural control of the advance and withdrawal components of skilled reaching movements. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Enhancing links between visual short term memory, visual attention and cognitive control processes through practice: An electrophysiological insight.

    PubMed

    Fuggetta, Giorgio; Duke, Philip A

    2017-05-01

The operation of attention on visible objects involves a sequence of cognitive processes. The current study first aimed to elucidate the effects of practice on the neural mechanisms underlying attentional processes, as measured with both behavioural and electrophysiological measures. Second, it aimed to identify any pattern in the relationship between Event-Related Potential (ERP) components which play a role in the operation of attention in vision. Twenty-seven participants took part in two recording sessions one week apart, performing an experimental paradigm which combined a match-to-sample task with a memory-guided efficient visual-search task within one trial sequence. Overall, practice decreased behavioural response times, increased accuracy, and modulated several ERP components that represent cognitive and neural processing stages. This neuromodulation through practice was also associated with an enhanced link between behavioural measures and ERP components and with an enhanced cortico-cortical interaction of functionally interconnected ERP components. Principal component analysis (PCA) of the ERP amplitude data revealed three components with different rostro-caudal topographic representations. The first component included both the centro-parietal and parieto-occipital mismatch triggered negativity (involved in integration of visual representations of the target with current task-relevant representations stored in visual working memory), which loaded with the second negative posterior-bilateral (N2pb) component, involved in categorising specific pop-out target features. The second component comprised the amplitude of the bilateral anterior P2 (related to detection of a specific pop-out feature), which loaded with the bilateral anterior N2, related to detection of conflicting features, and the fronto-central mismatch triggered negativity. The third component included the parieto-occipital N1 (related to early neural responses to the stimulus array), which loaded with the second negative posterior-contralateral (N2pc) component, mediating the process of orienting and focusing covert attention on peripheral target features. We discuss these three components as representing different neurocognitive systems, modulated with practice, within which the input selection process operates. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  2. Attention modulates visual size adaptation.

    PubMed

    Kreutzer, Sylvia; Fink, Gereon R; Weidner, Ralph

    2015-01-01

    The current study determined in healthy subjects (n = 16) whether size adaptation occurs at early, i.e., preattentive, levels of processing or whether higher cognitive processes such as attention can modulate the illusion. To investigate this issue, bottom-up stimulation was kept constant across conditions by using a single adaptation display containing both small and large adapter stimuli. Subjects' attention was directed to either the large or small adapter stimulus by means of a luminance detection task. When attention was directed toward the small as compared to the large adapter, the perceived size of the subsequent target was significantly increased. Data suggest that different size adaptation effects can be induced by one and the same stimulus depending on the current allocation of attention. This indicates that size adaptation is subject to attentional modulation. These findings are in line with previous research showing that transient as well as sustained attention modulates visual features, such as contrast sensitivity and spatial frequency, and influences adaptation in other contexts, such as motion adaptation (Alais & Blake, 1999; Lankheet & Verstraten, 1995). Based on a recently suggested model (Pooresmaeili, Arrighi, Biagi, & Morrone, 2013), according to which perceptual adaptation is based on local excitation and inhibition in V1, we conclude that guiding attention can boost these local processes in one or the other direction by increasing the weight of the attended adapter. In sum, perceptual adaptation, although reflected in changes of neural activity at early levels (as shown in the aforementioned study), is nevertheless subject to higher-order modulation.

  3. Infrared and visible image fusion with the target marked based on multi-resolution visual attention mechanisms

    NASA Astrophysics Data System (ADS)

    Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue

    2016-03-01

During traditional multi-resolution fusion of infrared and visible images, a low-contrast target may be weakened and become inconspicuous because of opposite DN values in the source images. A novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform is therefore proposed. Interesting target regions are extracted from the source images using motion features obtained from the modified attention model, and gray-level fusion of the source images is performed in the curvelet domain with rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets into the gray fusion result with appropriate pseudo-colors. Experiments show that the algorithm can effectively highlight dim targets and improve the SNR of the fused image.
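The final pseudo-color mapping step can be sketched in a few lines: fuse the gray images, then replace pixels inside detected target regions with a pseudo-color. This is an illustrative sketch only; the per-pixel max rule stands in for the paper's curvelet-domain fusion rules, and all names, values, and the mask are hypothetical.

```python
# Illustrative sketch: gray fusion (a simple per-pixel max stands in for
# the paper's curvelet-domain rules) followed by pseudo-color marking of
# detected target regions. Names and rules are hypothetical.

def fuse_gray(ir, vis):
    """Per-pixel max fusion of two equally sized gray images (2D lists)."""
    return [[max(a, b) for a, b in zip(ri, rv)] for ri, rv in zip(ir, vis)]

def mark_targets(gray, target_mask, color=(255, 0, 0)):
    """Map gray pixels to RGB; pixels under the target mask get the pseudo-color."""
    return [[color if m else (g, g, g)
             for g, m in zip(rg, rm)]
            for rg, rm in zip(gray, target_mask)]

ir = [[10, 200], [30, 40]]
vis = [[50, 20], [60, 10]]
mask = [[False, True], [False, False]]  # a dim target detected at (0, 1)
fused = fuse_gray(ir, vis)
rgb = mark_targets(fused, mask)
```

Marking the target after fusion is what keeps a dim target visible even when the gray fusion rule would have averaged it away.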

  4. Selective attention increases choice certainty in human decision making.

    PubMed

    Zizlsperger, Leopold; Sauvigny, Thomas; Haarmeier, Thomas

    2012-01-01

    Choice certainty is a probabilistic estimate of past performance and expected outcome. In perceptual decisions the degree of confidence correlates closely with choice accuracy and reaction times, suggesting an intimate relationship to objective performance. Here we show that spatial and feature-based attention increase human subjects' certainty more than accuracy in visual motion discrimination tasks. Our findings demonstrate for the first time a dissociation of choice accuracy and certainty with a significantly stronger influence of voluntary top-down attention on subjective performance measures than on objective performance. These results reveal a so far unknown mechanism of the selection process implemented by attention and suggest a unique biological valence of choice certainty beyond a faithful reflection of the decision process.

  5. Visual selective attention in body dysmorphic disorder, bulimia nervosa and healthy controls.

    PubMed

    Kollei, Ines; Horndasch, Stefanie; Erim, Yesim; Martin, Alexandra

    2017-01-01

    Cognitive behavioral models postulate that selective attention plays an important role in the maintenance of body dysmorphic disorder (BDD). It is suggested that individuals with BDD overfocus on perceived defects in their appearance, which may contribute to the excessive preoccupation with their appearance. The present study used eye tracking to examine visual selective attention in individuals with BDD (n=19), as compared to individuals with bulimia nervosa (BN) (n=21) and healthy controls (HCs) (n=21). Participants completed interviews, questionnaires, rating scales and an eye tracking task: Eye movements were recorded while participants viewed photographs of their own face and attractive as well as unattractive other faces. Eye tracking data showed that BDD and BN participants focused less on their self-rated most attractive facial part than HCs. Scanning patterns in own and other faces showed that BDD and BN participants paid as much attention to attractive as to unattractive features in their own face, whereas they focused more on attractive features in attractive other faces. HCs paid more attention to attractive features in their own face and did the same in attractive other faces. Results indicate an attentional bias in BDD and BN participants manifesting itself in a neglect of positive features compared to HCs. Perceptual retraining may be an important aspect to focus on in therapy in order to overcome the neglect of positive facial aspects. Future research should aim to disentangle attentional processes in BDD by examining the time course of attentional processing. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Temporal kinetics of prefrontal modulation of the extrastriate cortex during visual attention.

    PubMed

    Yago, Elena; Duarte, Audrey; Wong, Ting; Barceló, Francisco; Knight, Robert T

    2004-12-01

Single-unit, event-related potential (ERP), and neuroimaging studies have implicated the prefrontal cortex (PFC) in top-down control of attention and working memory. We conducted an experiment in patients with unilateral PFC damage (n = 8) to assess the temporal kinetics of PFC-extrastriate interactions during visual attention. Subjects alternated attention between the left and the right hemifields in successive runs while they detected target stimuli embedded in streams of repetitive task-irrelevant stimuli (standards). The design enabled us to examine tonic (spatial selection) and phasic (feature selection) PFC-extrastriate interactions. PFC damage impaired performance in the visual field contralateral to lesions, as manifested by both longer reaction times and higher error rates. Assessment of the extrastriate P1 ERP revealed that the PFC exerts a tonic (spatial selection) excitatory input to the ipsilateral extrastriate cortex as early as 100 msec post stimulus delivery. The PFC exerts a second phasic (feature selection) excitatory extrastriate modulation from 180 to 300 msec, as evidenced by reductions in selection negativity after damage. Finally, reductions of the N2 ERP to target stimuli support the notion that the PFC exerts a third phasic (target selection) signal necessary for successful template matching during postselection analysis of target features. The results provide electrophysiological evidence of three distinct tonic and phasic PFC inputs to the extrastriate cortex in the initial few hundred milliseconds of stimulus processing. Damage to this network appears to underlie the pervasive deficits in attention observed in patients with prefrontal lesions.

  7. Individual differences in attention strategies during detection, fine discrimination, and coarse discrimination

    PubMed Central

    Hecker, Elizabeth A.; Serences, John T.; Srinivasan, Ramesh

    2013-01-01

    Interacting with the environment requires the ability to flexibly direct attention to relevant features. We examined the degree to which individuals attend to visual features within and across Detection, Fine Discrimination, and Coarse Discrimination tasks. Electroencephalographic (EEG) responses were measured to an unattended peripheral flickering (4 or 6 Hz) grating while individuals (n = 33) attended to orientations that were offset by 0°, 10°, 20°, 30°, 40°, and 90° from the orientation of the unattended flicker. These unattended responses may be sensitive to attentional gain at the attended spatial location, since attention to features enhances early visual responses throughout the visual field. We found no significant differences in tuning curves across the three tasks in part due to individual differences in strategies. We sought to characterize individual attention strategies using hierarchical Bayesian modeling, which grouped individuals into families of curves that reflect attention to the physical target orientation (“on-channel”) or away from the target orientation (“off-channel”) or a uniform distribution of attention. The different curves were related to behavioral performance; individuals with “on-channel” curves had lower thresholds than individuals with uniform curves. Individuals with “off-channel” curves during Fine Discrimination additionally had lower thresholds than those assigned to uniform curves, highlighting the perceptual benefits of attending away from the physical target orientation during fine discriminations. Finally, we showed that a subset of individuals with optimal curves (“on-channel”) during Detection also demonstrated optimal curves (“off-channel”) during Fine Discrimination, indicating that a subset of individuals can modulate tuning optimally for detection and discrimination. PMID:23678013

  8. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task

    PubMed Central

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-01-01

When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
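The role of divisive normalization in the priority map can be caricatured in a few lines: each local response is divided by the pooled activity of its neighborhood, so an isolated moderate response can out-compete a strong response surrounded by other strong activity. This is a toy sketch under simplifying assumptions (a one-dimensional map, a three-location pool, a made-up semisaturation constant), not the published model.

```python
# Toy 1-D priority map with divisive normalization (illustrative only).

def priority_map(responses, sigma=0.1):
    """Divide each response by sigma plus the mean activity of a local pool
    (the location and its immediate neighbors)."""
    pm = []
    for i, r in enumerate(responses):
        pool = responses[max(0, i - 1):i + 2]
        pm.append(r / (sigma + sum(pool) / len(pool)))
    return pm

def locus_of_attention(responses):
    """Index of the priority-map maximum, i.e., where to look next."""
    pm = priority_map(responses)
    return max(range(len(pm)), key=pm.__getitem__)

# The raw maximum (0.9 at index 1) sits next to another strong response,
# so divisive inhibition suppresses it; the isolated 0.5 at index 4 wins.
responses = [0.2, 0.9, 0.8, 0.1, 0.5, 0.05]
locus = locus_of_attention(responses)
```

This is exactly the behavior the abstract motivates: without the normalization step, the most salient bottom-up response would always monopolize attention.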

  9. Research progress on Drosophila visual cognition in China.

    PubMed

    Guo, AiKe; Zhang, Ke; Peng, YueQin; Xi, Wang

    2010-03-01

Visual cognition, as one of the fundamental aspects of cognitive neuroscience, is generally associated with high-order brain functions in animals and humans. Drosophila, as a model organism, shares certain features of visual cognition in common with mammals at the genetic, molecular, cellular, and even higher behavioral levels. From learning and memory to decision making, Drosophila covers a broad spectrum of higher cognitive behaviors beyond what we had expected. Armed with the powerful tools of genetic manipulation available in Drosophila, an increasing number of studies have been conducted to elucidate the neural circuit mechanisms underlying these cognitive behaviors from a genes-brain-behavior perspective. The goal of this review is to integrate the most important studies on visual cognition in Drosophila carried out in mainland China during the last decade into a body of knowledge encompassing both the basic neural operations and the circuitry of higher brain function in Drosophila. Here, we consider a series of higher cognitive behaviors beyond learning and memory, such as visual pattern recognition, feature and context generalization, different feature memory traces, salience-based decision making, attention-like behavior, and cross-modal learning and memory. We discuss a possible general gain-gating mechanism implemented by the dopamine-mushroom body circuit in the fly's visual cognition. We hope that our brief review will inspire further study of visual cognition in flies, and beyond.

  10. Object detection in natural scenes: Independent effects of spatial and category-based attention.

    PubMed

    Stein, Timo; Peelen, Marius V

    2017-04-01

    Humans are remarkably efficient in detecting highly familiar object categories in natural scenes, with evidence suggesting that such object detection can be performed in the (near) absence of attention. Here we systematically explored the influences of both spatial attention and category-based attention on the accuracy of object detection in natural scenes. Manipulating both types of attention additionally allowed for addressing how these factors interact: whether the requirement for spatial attention depends on the extent to which observers are prepared to detect a specific object category-that is, on category-based attention. The results showed that the detection of targets from one category (animals or vehicles) was better than the detection of targets from two categories (animals and vehicles), demonstrating the beneficial effect of category-based attention. This effect did not depend on the semantic congruency of the target object and the background scene, indicating that observers attended to visual features diagnostic of the foreground target objects from the cued category. Importantly, in three experiments the detection of objects in scenes presented in the periphery was significantly impaired when observers simultaneously performed an attentionally demanding task at fixation, showing that spatial attention affects natural scene perception. In all experiments, the effects of category-based attention and spatial attention on object detection performance were additive rather than interactive. Finally, neither spatial nor category-based attention influenced metacognitive ability for object detection performance. These findings demonstrate that efficient object detection in natural scenes is independently facilitated by spatial and category-based attention.

  11. Global motion compensated visual attention-based video watermarking

    NASA Astrophysics Data System (ADS)

    Oakes, Matthew; Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions resulting in poor visual quality in host media. This paper proposes a unique video watermarking algorithm that offers a fine balance between imperceptibility and robustness using motion compensated wavelet-based visual attention model (VAM). The proposed VAM includes spatial cues for visual saliency as well as temporal cues. The spatial modeling uses the spatial wavelet coefficients while the temporal modeling accounts for both local and global motion to arrive at the spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm, where a two-level watermarking weighting parameter map is generated from the VAM saliency maps using the saliency model and data are embedded into the host image according to the visual attentiveness of each region. By avoiding higher strength watermarking in the visually attentive region, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms the state-of-the-art video visual attention methods in joint saliency detection and low computational complexity performance. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use the VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and a robustness similar to those of high-strength watermarking.
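The two-level weighting idea above can be sketched simply: threshold the saliency map into attentive and inattentive regions, then embed with low strength where viewers are likely to look and high strength elsewhere. The threshold and strength values below are made up for illustration, and additive embedding into generic coefficients stands in for the paper's wavelet-domain scheme.

```python
# Illustrative two-level watermark weighting from a saliency map.
# Salient (visually attentive) regions get low embedding strength to
# preserve perceived quality; inattentive regions get high strength for
# robustness. Threshold and strengths are hypothetical.

def weight_map(saliency, threshold=0.5, low=0.05, high=0.25):
    """Two-level embedding-strength map derived from per-region saliency."""
    return [low if s >= threshold else high for s in saliency]

def embed(coeffs, bits, strengths):
    """Additive embedding: shift each coefficient by +/- its local strength."""
    return [c + (a if b else -a) for c, b, a in zip(coeffs, bits, strengths)]

saliency = [0.9, 0.2, 0.6, 0.1]          # one value per region
alphas = weight_map(saliency)            # [0.05, 0.25, 0.05, 0.25]
watermarked = embed([1.0, 1.0, 1.0, 1.0], [1, 0, 1, 1], alphas)
```

The salient regions (saliency 0.9 and 0.6) are perturbed by only 0.05, while the inattentive regions absorb the stronger, more robust 0.25 perturbation.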

  12. Binding biological motion and visual features in working memory.

    PubMed

    Ding, Xiaowei; Zhao, Yangfan; Wu, Fan; Lu, Xiqian; Gao, Zaifeng; Shen, Mowei

    2015-06-01

Working memory mechanisms for binding have been examined extensively in the last decade, yet few studies have explored bindings relating to human biological motion (BM). Human BM is the most salient and biologically significant kinetic information encountered in everyday life and is stored independently from other visual features (e.g., colors). The current study explored 3 critical issues of BM-related binding in working memory: (a) how many BM binding units can be retained in working memory, (b) whether involuntary object-based binding occurs during BM binding, and (c) whether the maintenance of BM bindings in working memory requires attention above and beyond that needed to maintain the constituent dimensions. We isolated motion signals of human BM from non-BM sources by using point-light displays as the to-be-memorized BM and presenting participants with colored BM in a change detection task. We found that working memory capacity for BM-color bindings is rather low; only 1 or 2 BM-color bindings could be retained in working memory regardless of presentation manner (Experiments 1-3). Furthermore, no object-based encoding took place for colored BM stimuli regardless of the processed dimensions (Experiments 4 and 5). Central executive attention contributes to the maintenance of BM-color bindings, yet maintaining BM bindings in working memory did not require more central attention than did maintaining the constituent dimensions in working memory (Experiment 6). Overall, these results suggest that keeping BM bindings in working memory is a fairly resource-demanding process, yet central executive attention does not play a special role in this cross-module binding. (c) 2015 APA, all rights reserved.

  13. Resonant Cholinergic Dynamics in Cognitive and Motor Decision-Making: Attention, Category Learning, and Choice in Neocortex, Superior Colliculus, and Optic Tectum.

    PubMed

    Grossberg, Stephen; Palma, Jesse; Versace, Massimiliano

    2015-01-01

Freely behaving organisms need to rapidly calibrate their perceptual, cognitive, and motor decisions based on continuously changing environmental conditions. These plastic changes include sharpening or broadening of cognitive and motor attention and learning to match the behavioral demands that are imposed by changing environmental statistics. This article proposes that a shared circuit design for such flexible decision-making is used in specific cognitive and motor circuits, and that both types of circuits use acetylcholine to modulate choice selectivity. Such task-sensitive control is proposed to control thalamocortical choice of the critical features that are cognitively attended and that are incorporated through learning into prototypes of visual recognition categories. A cholinergically-modulated process of vigilance control determines if a recognition category and its attended features are abstract (low vigilance) or concrete (high vigilance). Homologous neural mechanisms of cholinergic modulation are proposed to focus attention and learn a multimodal map within the deeper layers of superior colliculus. This map enables visual, auditory, and planned movement commands to compete for attention, leading to selection of a winning position that controls where the next saccadic eye movement will go. Such map learning may be viewed as a kind of attentive motor category learning. The article hereby explicates a link between attention, learning, and cholinergic modulation during decision making within both cognitive and motor systems. Homologs between the mammalian superior colliculus and the avian optic tectum lead to predictions about how multimodal map learning may occur in the mammalian and avian brain and how such learning may be modulated by acetylcholine.

  15. Mechanisms of perceptual organization provide auto-zoom and auto-localization for attention to objects

    PubMed Central

    Mihalas, Stefan; Dong, Yi; von der Heydt, Rüdiger; Niebur, Ernst

    2011-01-01

    Visual attention is often understood as a modulatory field acting at early stages of processing, but the mechanisms that direct and fit the field to the attended object are not known. We show that a purely spatial attention field propagating downward in the neuronal network responsible for perceptual organization will be reshaped, repositioned, and sharpened to match the object's shape and scale. Key features of the model are grouping neurons integrating local features into coherent tentative objects, excitatory feedback to the same local feature neurons that caused grouping neuron activation, and inhibition between incompatible interpretations both at the local feature level and at the object representation level. PMID:21502489

  16. Spatial and object-based attention modulates broadband high-frequency responses across the human visual cortical hierarchy.

    PubMed

    Davidesco, Ido; Harel, Michal; Ramot, Michal; Kramer, Uri; Kipervasser, Svetlana; Andelman, Fani; Neufeld, Miri Y; Goelman, Gadi; Fried, Itzhak; Malach, Rafael

    2013-01-16

    One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies have yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation of both spatial and object-based attention. They were presented with composite stimuli, consisting of a small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one of the objects. We found a consistent increase in broadband high-frequency (30-90 Hz) power, but not in visual evoked potentials, associated with spatial attention starting with V1/V2 and continuing throughout the visual hierarchy. The magnitude of the attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed effects of both spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties.

  17. Fragmented Perception: Slower Space-Based but Faster Object-Based Attention in Recent-Onset Psychosis with and without Schizophrenia

    PubMed Central

    Smid, Henderikus G. O. M.; Bruggeman, Richard; Martens, Sander

    2013-01-01

    Background: Schizophrenia is associated with impairments of the perception of objects, but how this affects higher cognitive functions, whether this impairment is already present after recent onset of psychosis, and whether it is specific for schizophrenia-related psychosis, is not clear. We therefore tested the hypothesis that because schizophrenia is associated with impaired object perception, schizophrenia patients should differ in shifting attention between objects compared to healthy controls. To test this hypothesis, a task was used that allowed us to separately observe space-based and object-based covert orienting of attention. To examine whether impairment of object-based visual attention is related to higher-order cognitive functions, standard neuropsychological tests were also administered. Method: Patients with recent-onset psychosis and normal controls performed the attention task, in which space- and object-based attention shifts were induced by cue-target sequences that required reorienting of attention within an object, or reorienting attention between objects. Results: Patients with and without schizophrenia showed slower than normal spatial attention shifts, but the object-based component of attention shifts in patients was smaller than normal. Schizophrenia was specifically associated with slowed right-to-left attention shifts. Reorienting speed was significantly correlated with verbal memory scores in controls, and with visual attention scores in patients, but not with speed-of-processing scores in either group. Conclusions: Deficits of object-perception and spatial attention shifting are not only associated with schizophrenia, but are common to all psychosis patients. Schizophrenia patients only differed by having abnormally slow right-to-left visual field reorienting. Deficits of object-perception and spatial attention shifting are already present after recent onset of psychosis. Studies investigating visual spatial attention should take into account the separable effects of space-based and object-based shifting of attention. Impaired reorienting in patients was related to impaired visual attention, but not to deficits of processing speed and verbal memory. PMID:23536901

  18. Fragmented perception: slower space-based but faster object-based attention in recent-onset psychosis with and without Schizophrenia.

    PubMed

    Smid, Henderikus G O M; Bruggeman, Richard; Martens, Sander

    2013-01-01

    Schizophrenia is associated with impairments of the perception of objects, but how this affects higher cognitive functions, whether this impairment is already present after recent onset of psychosis, and whether it is specific for schizophrenia-related psychosis, is not clear. We therefore tested the hypothesis that because schizophrenia is associated with impaired object perception, schizophrenia patients should differ in shifting attention between objects compared to healthy controls. To test this hypothesis, a task was used that allowed us to separately observe space-based and object-based covert orienting of attention. To examine whether impairment of object-based visual attention is related to higher-order cognitive functions, standard neuropsychological tests were also administered. Patients with recent-onset psychosis and normal controls performed the attention task, in which space- and object-based attention shifts were induced by cue-target sequences that required reorienting of attention within an object, or reorienting attention between objects. Patients with and without schizophrenia showed slower than normal spatial attention shifts, but the object-based component of attention shifts in patients was smaller than normal. Schizophrenia was specifically associated with slowed right-to-left attention shifts. Reorienting speed was significantly correlated with verbal memory scores in controls, and with visual attention scores in patients, but not with speed-of-processing scores in either group. Deficits of object-perception and spatial attention shifting are not only associated with schizophrenia, but are common to all psychosis patients. Schizophrenia patients only differed by having abnormally slow right-to-left visual field reorienting. Deficits of object-perception and spatial attention shifting are already present after recent onset of psychosis. Studies investigating visual spatial attention should take into account the separable effects of space-based and object-based shifting of attention. Impaired reorienting in patients was related to impaired visual attention, but not to deficits of processing speed and verbal memory.

  19. Encoding of Spatial Attention by Primate Prefrontal Cortex Neuronal Ensembles

    PubMed Central

    Treue, Stefan

    2018-01-01

    Single neurons in the primate lateral prefrontal cortex (LPFC) encode information about the allocation of visual attention and the features of visual stimuli. However, how this compares to the performance of neuronal ensembles at encoding the same information is poorly understood. Here, we recorded the responses of neuronal ensembles in the LPFC of two macaque monkeys while they performed a task that required attending to one of two moving random dot patterns positioned in different hemifields and ignoring the other pattern. We found single units selective for the location of the attended stimulus as well as for its motion direction. To determine the coding of both variables in the population of recorded units, we used a linear classifier and progressively built neuronal ensembles by iteratively adding units according to their individual performance (best single units), or by iteratively adding units based on their contribution to the ensemble performance (best ensemble). For both methods, ensembles of relatively small sizes (n < 60) yielded substantially higher decoding performance relative to individual single units. However, the decoder reached similar performance using fewer neurons with the best ensemble building method compared with the best single units method. Our results indicate that neuronal ensembles within the LPFC encode more information about the attended spatial and nonspatial features of visual stimuli than individual neurons. They further suggest that efficient coding of attention can be achieved by relatively small neuronal ensembles characterized by a certain relationship between signal and noise correlation structures. PMID:29568798
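
    The two ensemble-building procedures described in this record (ranking units by single-unit decoding accuracy versus greedily adding the unit that most improves ensemble decoding) can be sketched on simulated data. This is an illustrative toy model, not the authors' analysis: the simulated units, signal strengths, and the nearest-class-mean decoder are all assumptions made here for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (all parameters hypothetical): 200 trials, 30 "units";
# the label y is the attended hemifield (0 = left, 1 = right).
n_trials, n_units = 200, 30
y = rng.integers(0, 2, n_trials)
signal = rng.uniform(0.0, 1.5, n_units)   # per-unit attention signal strength
X = rng.normal(0.0, 1.0, (n_trials, n_units)) + np.outer(y - 0.5, signal)

def accuracy(cols):
    """Decoding accuracy of a nearest-class-mean classifier on the chosen units."""
    Z = X[:, cols]
    m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    pred = np.linalg.norm(Z - m1, axis=1) < np.linalg.norm(Z - m0, axis=1)
    return float((pred == y).mean())

# "Best single units": rank units by their individual decoding accuracy.
order = sorted(range(n_units), key=lambda u: accuracy([u]), reverse=True)
acc_single10 = accuracy(order[:10])

# "Best ensemble": greedily add the unit that most improves the ensemble.
chosen, remaining = [], set(range(n_units))
while len(chosen) < 10:
    best = max(remaining, key=lambda u: accuracy(chosen + [u]))
    chosen.append(best)
    remaining.remove(best)
acc_greedy10 = accuracy(chosen)
```

    On toy data like this, both 10-unit ensembles decode the attended hemifield well above chance, and the greedy criterion typically reaches a given accuracy with fewer units, mirroring the qualitative pattern the abstract reports.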

  20. Adults with dyslexia demonstrate space-based and object-based covert attention deficits: shifting attention to the periphery and shifting attention between objects in the left visual field.

    PubMed

    Buchholz, Judy; Aimola Davies, Anne

    2005-02-01

    Performance on a covert visual attention task is compared between a group of adults with developmental dyslexia (specifically phonological difficulties) and a group of age- and IQ-matched controls. The group with dyslexia were generally slower to detect validly-cued targets. Costs of shifting attention toward the periphery when the target was invalidly cued were significantly higher for the group with dyslexia, while costs associated with shifts toward the fovea tended to be lower. Higher costs were also shown by the group with dyslexia for up-down shifts of attention in the periphery. A visual field processing difference was found, in that the group with dyslexia showed higher costs associated with shifting attention between objects in the left visual field. These findings indicate that these adults with dyslexia have difficulty in both the space-based and the object-based components of covert visual attention, particularly for stimuli located in the periphery.

  1. Probability and the changing shape of response distributions for orientation.

    PubMed

    Anderson, Britt

    2014-11-18

    Spatial attention and feature-based attention are regarded as two independent mechanisms for biasing the processing of sensory stimuli. Feature attention is held to be a spatially invariant mechanism that advantages a single feature per sensory dimension. In contrast to the prediction of location independence, I found that participants were able to report the orientation of a briefly presented visual grating better for targets defined by high probability conjunctions of features and locations even when orientations and locations were individually uniform. The advantage for high-probability conjunctions was accompanied by changes in the shape of the response distributions. High-probability conjunctions had error distributions that were not normally distributed but demonstrated increased kurtosis. The increase in kurtosis could be explained as a change in the variances of the component tuning functions that comprise a population mixture. By changing the mixture distribution of orientation-tuned neurons, it is possible to change the shape of the discrimination function. This prompts the suggestion that attention may not "increase" the quality of perceptual processing in an absolute sense but rather prioritizes some stimuli over others. This results in an increased number of highly accurate responses to probable targets and, simultaneously, an increase in the number of very inaccurate responses. © 2014 ARVO.
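
    The abstract's claim that changing the variances of a population mixture changes the shape of the response-error distribution can be checked numerically: a mixture of Gaussian components with unequal widths is leptokurtic (excess kurtosis above zero) even though each component is Gaussian. A minimal sketch, with hypothetical component widths and mixing weight:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Baseline: errors drawn from a single Gaussian tuning width.
single = rng.normal(0, 10, n)

# Mixture: mostly narrow (high-precision) responses plus a broad component,
# as when attention reweights the variances of a population mixture.
narrow = rng.normal(0, 4, n)
broad = rng.normal(0, 25, n)
mix = np.where(rng.random(n) < 0.8, narrow, broad)

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for a Gaussian, positive for heavy tails."""
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

k_single = excess_kurtosis(single)   # near 0
k_mix = excess_kurtosis(mix)         # clearly positive
```

    The single-width errors stay near zero excess kurtosis while the mixture shows strongly positive kurtosis: more highly accurate responses and, simultaneously, more very inaccurate ones, as the abstract describes.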

  2. All Set! Evidence of Simultaneous Attentional Control Settings for Multiple Target Colors

    ERIC Educational Resources Information Center

    Irons, Jessica L.; Folk, Charles L.; Remington, Roger W.

    2012-01-01

    Although models of visual search have often assumed that attention can only be set for a single feature or property at a time, recent studies have suggested that it may be possible to maintain more than one attentional control setting. The aim of the present study was to investigate whether spatial attention could be guided by multiple attentional…

  3. Task Demands Control Acquisition and Storage of Visual Information

    ERIC Educational Resources Information Center

    Droll, Jason A.; Hayhoe, Mary M.; Triesch, Jochen; Sullivan, Brian T.

    2005-01-01

    Attention and working memory limitations set strict limits on visual representations, yet researchers have little appreciation of how these limits constrain the acquisition of information in ongoing visually guided behavior. Subjects performed a brick sorting task in a virtual environment. A change was made to 1 of the features of the brick being…

  4. Oculomotor guidance and capture by irrelevant faces.

    PubMed

    Devue, Christel; Belopolsky, Artem V; Theeuwes, Jan

    2012-01-01

    Even though it is generally agreed that face stimuli constitute a special class of stimuli, which are treated preferentially by our visual system, it remains unclear whether faces can capture attention in a stimulus-driven manner. Moreover, there is a long-standing debate regarding the mechanism underlying the preferential bias of selecting faces. Some claim that faces constitute a set of special low-level features to which our visual system is tuned; others claim that the visual system is capable of extracting the meaning of faces very rapidly, driving attentional selection. Those debates continue because many studies contain methodological peculiarities and manipulations that prevent a definitive conclusion. Here, we present a new visual search task in which observers had to make a saccade to a uniquely colored circle while completely irrelevant objects were also present in the visual field. The results indicate that faces capture and guide the eyes more than other animated objects and that our visual system is not only tuned to the low-level features that make up a face but also to its meaning.

  5. ERP correlates of anticipatory attention: spatial and non-spatial specificity and relation to subsequent selective attention.

    PubMed

    Dale, Corby L; Simpson, Gregory V; Foxe, John J; Luks, Tracy L; Worden, Michael S

    2008-06-01

    Brain-based models of visual attention hypothesize that attention-related benefits afforded to imperative stimuli occur via enhancement of neural activity associated with relevant spatial and non-spatial features. When relevant information is available in advance of a stimulus, anticipatory deployment processes are likely to facilitate allocation of attention to stimulus properties prior to its arrival. The current study recorded EEG from humans during a centrally-cued covert attention task. Cues indicated relevance of left or right visual field locations for an upcoming motion or orientation discrimination. During a 1 s delay between cue and S2, multiple attention-related events occurred at frontal, parietal and occipital electrode sites. Differences in anticipatory activity associated with the non-spatial task properties were found late in the delay, while spatially-specific modulation of activity occurred during both early and late periods and continued during S2 processing. The magnitude of anticipatory activity preceding the S2 at frontal scalp sites (and not occipital) was predictive of the magnitude of subsequent selective attention effects on the S2 event-related potentials observed at occipital electrodes. Results support the existence of multiple anticipatory attention-related processes, some with differing specificity for spatial and non-spatial task properties, and the hypothesis that levels of activity in anterior areas are important for effective control of subsequent S2 selective attention.

  6. Early and late selection processes have separable influences on the neural substrates of attention.

    PubMed

    Drisdelle, Brandi Lee; Jolicoeur, Pierre

    2018-05-01

    To improve our understanding of the mechanisms of target selection, we examined how the spatial separation of salient items and their similarity to a pre-defined target interact using lateralised electrophysiological correlates of visual spatial attention (N2pc component) and visual short-term memory (VSTM; SPCN component). Using these features of target selection, we sought to expand on previous work proposing a model of early and late selection, where the N2pc is suggested to reflect the selection probability of visual stimuli (Aubin and Jolicoeur, 2016). The authors suggested that early-selection processes could be enhanced when items are adjacent. In the present work, the stimuli were short oriented lines, all of which were grey except for two that were blue and hence salient. A decrease in N2pc amplitude with decreasing spatial separation between salient items was observed. The N2pc increased in amplitude with increasing similarity of salient distractors to the target template, but only in target-absent trials. There was no interaction between these two factors, suggesting that separable attentional mechanisms influenced the N2pc. The findings suggest that selection is initially based on easily-distinguished attributes (i.e., both blue items) followed by a later identification-based process (if necessary), which depends on feature similarity to a target template. For the SPCN component, the results were in line with previous work: for target-present trials, an increase in similarity of salient distractors was associated with an increase in SPCN amplitude, suggesting more information was maintained in VSTM. In sum, results suggest there is a need for further inspection of salient distractors when they are similar to the target, increasing the need for focal attention, demonstrated by an increase in N2pc amplitude, followed by a higher probability of transfer to VSTM, demonstrated by an increase in SPCN amplitude. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Feature Integration Theory Revisited: Dissociating Feature Detection and Attentional Guidance in Visual Search

    ERIC Educational Resources Information Center

    Chan, Louis K. H.; Hayward, William G.

    2009-01-01

    In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed…

  8. Diagnostic Features of Emotional Expressions Are Processed Preferentially

    PubMed Central

    Scheller, Elisa; Büchel, Christian; Gamer, Matthias

    2012-01-01

    Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this aim, fearful, happy and neutral faces were presented to healthy individuals in two experiments while measuring eye movements. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were either presented for 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more pronouncedly when fearful or neutral faces were shown whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. This mechanism might crucially depend on amygdala functioning and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders. PMID:22848607

  9. Diagnostic features of emotional expressions are processed preferentially.

    PubMed

    Scheller, Elisa; Büchel, Christian; Gamer, Matthias

    2012-01-01

    Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this aim, fearful, happy and neutral faces were presented to healthy individuals in two experiments while measuring eye movements. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were either presented for 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more pronouncedly when fearful or neutral faces were shown whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. This mechanism might crucially depend on amygdala functioning and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders.

  10. An integrated measure of display clutter based on feature content, user knowledge and attention allocation factors.

    PubMed

    Pankok, Carl; Kaber, David B

    2018-05-01

    Existing measures of display clutter in the literature generally exhibit weak correlations with task performance, which limits their utility in safety-critical domains. A literature review led to formulation of an integrated display data- and user knowledge-driven measure of display clutter. A driving simulation experiment was conducted in which participants were asked to search 'high' and 'low' clutter displays for navigation information. Data-driven measures and subjective perceptions of clutter were collected along with patterns of visual attention allocation and driving performance responses during time periods in which participants searched the navigation display for information. The new integrated measure was more strongly correlated with driving performance than other, previously developed measures of clutter, particularly in the case of low-clutter displays. Integrating display data and user knowledge factors with patterns of visual attention allocation shows promise for measuring display clutter and correlation with task performance, particularly for low-clutter displays. Practitioner Summary: A novel measure of display clutter was formulated, accounting for display data content, user knowledge states and patterns of visual attention allocation. The measure was evaluated in terms of correlations with driver performance in a safety-critical driving simulation study. The measure exhibited stronger correlations with task performance than previously defined measures.

  11. Perceptual grouping and attention in visual search for features and for objects.

    PubMed

    Treisman, A

    1982-04-01

    This article explores the effects of perceptual grouping on search for targets defined by separate features or by a conjunction of features. Treisman and Gelade proposed a feature-integration theory of attention, which claims that in the absence of prior knowledge, the separable features of objects are correctly combined only when focused attention is directed to each item in turn. If items are preattentively grouped, however, attention may be directed to groups rather than to single items whenever no recombination of features within a group could generate an illusory target. This prediction is confirmed: In search for conjunctions, subjects appear to scan serially between groups rather than items. The scanning rate shows little effect of the spatial density of distractors, suggesting that it reflects serial fixations of attention rather than eye movements. Search for features, on the other hand, appears to be independent of perceptual grouping, suggesting that features are detected preattentively. A conjunction target can be camouflaged at the preattentive level by placing it at the boundary between two adjacent groups, each of which shares one of its features. This suggests that preattentive grouping creates separate feature maps within each separable dimension rather than one global configuration.

  12. Indexing and the object concept: developing 'what' and 'where' systems.

    PubMed

    Leslie, A M; Xu, F; Tremoulet, P D; Scholl, B J

    1998-01-01

    The study of object cognition over the past 25 years has proceeded in two largely non-interacting camps. One camp has studied object-based visual attention in adults, while the other has studied the object concept in infants. We briefly review both sets of literature and distill from the adult research a theoretical model that we apply to findings from the infant studies. The key notion in our model of object representation is the 'sticky' index, a mechanism of selective attention that points at a physical object in a location. An object index does not represent any of the properties of the entity at which it points. However, once an index is pointing to an object, the properties of that object can be examined and featural information can be associated with, or 'bound' to, its index. The distinction between indexing and feature binding underwrites the distinction between object individuation and object identification, a distinction that turns out to be crucial in both the adult attention and the infant object-concept literature. By developing the indexing model, we draw together two disparate sets of literature and suggest new ways to study object-based attention in infancy.

  13. Automatic capture of attention by conceptually generated working memory templates.

    PubMed

    Sun, Sol Z; Shen, Jenny; Shaw, Mark; Cant, Jonathan S; Ferber, Susanne

    2015-08-01

    Many theories of attention propose that the contents of working memory (WM) can act as an attentional template, which biases processing in favor of perceptually similar inputs. While support has been found for this claim, it is unclear how attentional templates are generated when searching real-world environments. We hypothesized that in naturalistic settings, attentional templates are commonly generated from conceptual knowledge, an idea consistent with sensorimotor models of knowledge representation. Participants performed a visual search task in the delay period of a WM task, where the item in memory was either a colored disk or a word associated with a color concept (e.g., "Rose," associated with red). During search, we manipulated whether a singleton distractor in the array matched the contents of WM. Overall, we found that search times were impaired in the presence of a memory-matching distractor. Furthermore, the degree of impairment did not differ based on the contents of WM. Put differently, regardless of whether participants were maintaining a perceptually colored disk identical to the singleton distractor, or whether they were simply maintaining a word associated with the color of the distractor, the magnitude of attentional capture was the same. Our results suggest that attentional templates can be generated from conceptual knowledge, in the physical absence of the visual feature.

  14. An investigation of visual selection priority of objects with texture and crossed and uncrossed disparities

    NASA Astrophysics Data System (ADS)

    Khaustova, Dar'ya; Fournier, Jérôme; Wyckens, Emmanuel; Le Meur, Olivier

    2014-02-01

    The aim of this research is to understand the difference in visual attention to 2D and 3D content depending on texture and amount of depth. Two experiments were conducted using an eye-tracker and a 3DTV display. Collected fixation data were used to build saliency maps and to analyze the differences between 2D and 3D conditions. In the first experiment, 51 observers participated in the test. Using scenes that contained objects with crossed disparity, it was discovered that such objects are the most salient, even if observers experience discomfort due to the high level of disparity. The goal of the second experiment was to determine whether depth is a determinative factor for visual attention. During the experiment, 28 observers watched the scenes that contained objects with crossed and uncrossed disparities. We evaluated features influencing the saliency of the objects in stereoscopic conditions by using contents with low-level visual features. With univariate tests of significance (MANOVA), it was detected that texture is more important than depth for the selection of objects. Objects with crossed disparity are significantly more important for selection processes when compared to 2D. However, objects with uncrossed disparity have the same influence on visual attention as 2D objects. Analysis of eye movements indicated that there is no difference in saccade length. Fixation durations were significantly higher in stereoscopic conditions for low-level stimuli than in 2D. We believe that these experiments can help to refine existing models of visual attention for 3D content.
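
    The step this record describes of turning collected fixation data into a saliency map is commonly an accumulate-and-smooth operation: count fixations on a pixel grid, then blur with a Gaussian. A minimal sketch under assumed parameters (the grid size, smoothing width, and sample fixations below are hypothetical, not taken from the study):

```python
import numpy as np

def fixation_saliency_map(fixations, shape, sigma=2.0):
    """Accumulate (x, y) fixations into a grid and Gaussian-smooth it,
    returning a map normalized to a peak of 1 (illustrative sketch)."""
    h, w = shape
    grid = np.zeros((h, w))
    for x, y in fixations:
        grid[int(y), int(x)] += 1.0
    # Separable Gaussian smoothing with a small truncated kernel.
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    grid = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, grid)
    grid = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, grid)
    return grid / grid.max() if grid.max() > 0 else grid

# Example: fixations clustered on a "crossed-disparity" object near (x=20, y=10).
fixes = [(20 + dx, 10 + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
smap = fixation_saliency_map(fixes, (32, 48))
```

    Comparing such maps between 2D and 3D viewing conditions (e.g., by correlating them) is one standard way to quantify the attention differences the abstract analyzes.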

  15. Effects of ensemble and summary displays on interpretations of geospatial uncertainty data.

    PubMed

    Padilla, Lace M; Ruginski, Ian T; Creem-Regehr, Sarah H

    2017-01-01

    Ensemble and summary displays are two widely used methods to represent visual-spatial uncertainty; however, there is disagreement about which is the most effective technique to communicate uncertainty to the general public. Visualization scientists create ensemble displays by plotting multiple data points on the same Cartesian coordinate plane. Despite their use in scientific practice, it is more common in public presentations to use visualizations of summary displays, which scientists create by plotting statistical parameters of the ensemble members. While prior work has demonstrated that viewers make different decisions when viewing summary and ensemble displays, it is unclear what components of the displays lead to diverging judgments. This study aims to compare the salience of visual features - or visual elements that attract bottom-up attention - as one possible source of diverging judgments made with ensemble and summary displays in the context of hurricane track forecasts. We report that salient visual features of both ensemble and summary displays influence participant judgment. Specifically, we find that salient features of summary displays of geospatial uncertainty can be misunderstood as displaying size information. Further, salient features of ensemble displays evoke judgments that are indicative of accurate interpretations of the underlying probability distribution of the ensemble data. However, when participants use ensemble displays to make point-based judgments, they may overweight individual ensemble members in their decision-making process. We propose that ensemble displays are a promising alternative to summary displays in a geospatial context but that decisions about visualization methods should be informed by the viewer's task.

  16. Visual Complexity Attenuates Emotional Processing in Psychopathy: Implications for Fear-Potentiated Startle Deficits

    PubMed Central

    Sadeh, Naomi; Verona, Edelyn

    2012-01-01

    A long-standing debate is the extent to which psychopathy is characterized by a fundamental deficit in attention or emotion. We tested the hypothesis that the interplay of emotional and attentional systems is critical for understanding processing deficits in psychopathy. Sixty-three offenders were assessed using the Psychopathy Checklist: Screening Version. Event-related brain potentials (ERPs) and fear-potentiated startle (FPS) were collected while participants viewed pictures selected to disentangle an existing confound between perceptual complexity and emotional content in the pictures typically used to study fear deficits in psychopathy. As predicted, picture complexity moderated emotional processing deficits. Specifically, the affective-interpersonal features of psychopathy were associated with greater allocation of attentional resources to processing emotional stimuli at initial perception (visual N1), but only when picture stimuli were visually complex. Despite this, results for the late positive potential indicated that emotional pictures were less attentionally engaging and held less motivational significance for individuals high in affective-interpersonal traits. This deficient negative emotional processing was observed later in their reduced defensive fear reactivity (FPS) to high-complexity unpleasant pictures. In contrast, the impulsive-antisocial features of psychopathy were associated with decreased sensitivity to picture complexity (visual N1) and unrelated to emotional processing as assessed by ERP and FPS. These findings are the first to demonstrate that picture complexity moderates FPS deficits and implicate the interplay of attention and emotional systems as deficient in psychopathy. PMID:22187225

  17. The perception of naturalness correlates with low-level visual features of environmental scenes.

    PubMed

    Berman, Marc G; Hout, Michael C; Kardan, Omid; Hunter, MaryCarol R; Yourganov, Grigori; Henderson, John M; Hanayik, Taylor; Karimi, Hossein; Jonides, John

    2014-01-01

    Previous research has shown that interacting with natural environments vs. more urban or built environments can have salubrious psychological effects, such as improvements in attention and memory. Even viewing pictures of nature vs. pictures of built environments can produce similar effects. A major question is: What is it about natural environments that produces these benefits? Problematically, there are many differing qualities between natural and urban environments, making it difficult to narrow down the dimensions of nature that may lead to these benefits. In this study, we set out to uncover visual features that related to individuals' perceptions of naturalness in images. We quantified naturalness in two ways: first, implicitly using a multidimensional scaling analysis and second, explicitly with direct naturalness ratings. The features most related to perceptions of naturalness were the density of contrast changes in the scene, the density of straight lines in the scene, the average color saturation in the scene, and the average hue diversity in the scene. We then trained a machine-learning algorithm to predict whether a scene was perceived as being natural or not based on these low-level visual features, and we could do so with 81% accuracy. As such, we were able to reliably predict subjective perceptions of naturalness with objective low-level visual features. Our results can be used in future studies to determine if these features, which are related to naturalness, may also lead to the benefits attained from interacting with nature.
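
    Hypothetical proxies for these four features can be computed directly from an RGB image; the thresholds and definitions below are assumptions for illustration, not the study's exact feature pipeline.

```python
import numpy as np

def naturalness_features(rgb):
    """Proxy versions of the four low-level cues linked to perceived
    naturalness: density of contrast changes, density of straight edges,
    mean color saturation, and hue diversity (hue-histogram entropy).
    rgb: float array in [0, 1] with shape (H, W, 3)."""
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    edge = mag > 0.1
    contrast_density = edge.mean()                 # density of contrast changes
    ang = np.arctan2(gy, gx)[edge]
    # crude straightness proxy: strong edges oriented near horizontal/vertical
    straight_density = float(
        (np.minimum(np.abs(np.sin(ang)), np.abs(np.cos(ang))) < 0.1).mean()
    ) if ang.size else 0.0
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0).mean()
    hue = np.arctan2(np.sqrt(3.0) * (rgb[..., 1] - rgb[..., 2]),
                     2.0 * rgb[..., 0] - rgb[..., 1] - rgb[..., 2])
    p = np.histogram(hue, bins=16, range=(-np.pi, np.pi))[0] / hue.size
    hue_diversity = -np.sum(p * np.log2(p + 1e-12))
    return np.array([contrast_density, straight_density,
                     saturation, hue_diversity])
```

    A classifier trained on feature vectors of this kind (one per scene, labeled natural/built) would implement the prediction step the abstract describes.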

  18. The control of attentional target selection in a colour/colour conjunction task.

    PubMed

    Berggren, Nick; Eimer, Martin

    2016-11-01

    To investigate the time course of attentional object selection processes in visual search tasks where targets are defined by a combination of features from the same dimension, we measured the N2pc component as an electrophysiological marker of attentional object selection during colour/colour conjunction search. In Experiment 1, participants searched for targets defined by a combination of two colours, while ignoring distractor objects that matched only one of these colours. Reliable N2pc components were triggered by targets and also by partially matching distractors, even when these distractors were accompanied by a target in the same display. The target N2pc was initially equal in size to the sum of the two N2pc components to the two different types of partially matching distractors and became superadditive from approximately 250 ms after search display onset. Experiment 2 demonstrated that the superadditivity of the target N2pc was not due to a selective disengagement of attention from task-irrelevant partially matching distractors. These results indicate that attention was initially deployed separately and in parallel to all target-matching colours, before attentional allocation processes became sensitive to the presence of both matching colours within the same object. They suggest that attention can be controlled simultaneously and independently by multiple features from the same dimension and that feature-guided attentional selection processes operate in parallel for different target-matching objects in the visual field.

  19. Human attention filters for single colors

    PubMed Central

    Sun, Peng; Chubb, Charles; Wright, Charles E.; Sperling, George

    2016-01-01

    The visual images in the eyes contain much more information than the brain can process. An important selection mechanism is feature-based attention (FBA). FBA is best described by attention filters that specify precisely the extent to which items containing attended features are selectively processed and the extent to which items that do not contain the attended features are attenuated. The centroid-judgment paradigm enables quick, precise measurements of such human perceptual attention filters, analogous to transmission measurements of photographic color filters. Subjects use a mouse to locate the centroid—the center of gravity—of a briefly displayed cloud of dots and receive precise feedback. A subset of dots is distinguished by some characteristic, such as a different color, and subjects judge the centroid of only the distinguished subset (e.g., dots of a particular color). The analysis efficiently determines the precise weight in the judged centroid of dots of every color in the display (i.e., the attention filter for the particular attended color in that context). We report 32 attention filters for single colors. Attention filters that discriminate one saturated hue from among seven other equiluminant distractor hues are extraordinarily selective, achieving attended/unattended weight ratios >20:1. Attention filters for selecting a color that differs in saturation or lightness from distractors are much less selective than attention filters for hue (given equal discriminability of the colors), and their filter selectivities are proportional to the discriminability distance of neighboring colors, whereas in the same range hue attention-filter selectivity is virtually independent of discriminability. PMID:27791040
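
    The weight-recovery logic of the centroid paradigm can be sketched as a regression of reported centroids on the per-color mean dot positions; the filter weights, trial counts, and noise level below are invented for illustration, not the study's data or analysis code.

```python
import numpy as np

# Simulate an observer whose reported centroid is a weighted average of
# dot positions (attended color weighted most), then recover the
# attention-filter weights by least squares across trials.
rng = np.random.default_rng(0)
n_colors, n_trials, dots_per_color = 4, 500, 3
true_w = np.array([1.0, 0.1, 0.05, 0.05])      # attend color 0; others leak in
true_w = true_w / true_w.sum()

X = np.zeros((n_trials, n_colors))             # per-trial mean x of each color
y = np.zeros(n_trials)                         # reported centroid (x only)
for t in range(n_trials):
    pos = rng.uniform(-1, 1, size=(n_colors, dots_per_color))
    X[t] = pos.mean(axis=1)
    y[t] = X[t] @ true_w + rng.normal(0, 0.01)  # weighted centroid + noise

w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
w_hat = w_hat / w_hat.sum()                    # estimated filter weights
```

    The attended/unattended weight ratios reported in the abstract correspond to ratios among the entries of `w_hat` estimated this way.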

  20. The method for detecting small lesions in medical image based on sliding window

    NASA Astrophysics Data System (ADS)

    Han, Guilai; Jiao, Yuan

    2016-10-01

    Current research on computer-aided diagnosis typically proceeds by segmenting sample images, extracting visual features, learning a classification model, and then using that model to classify and judge the inspected images. However, this approach is computationally expensive and slow. Moreover, because medical images usually have low contrast, traditional image segmentation methods often fail completely when applied to them. To locate regions of interest as quickly as possible and improve detection speed, this work introduces the currently popular visual attention model into small-lesion detection. The Itti model, however, was designed mainly for natural images, and its performance is not ideal on medical images, which are usually grayscale. In particular, in the early stages of some cancers, the lesion is not the most salient region of the whole image and can be very difficult to find, even though it is prominent within its local area. This paper therefore proposes a visual attention mechanism based on a sliding window, using the window to compute the saliency of each local area. Combining the characteristics of lesions, the features of gray level, entropy, corners, and edges are selected to generate a saliency map, from which the salient region is segmented and classified. This method reduces the difficulty of image segmentation, improves the detection accuracy of small lesions, and is of great significance for the early discovery, diagnosis, and treatment of cancers.
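
    A minimal sketch of the sliding-window idea: score each window by how far its local statistics deviate from the global image statistics. The features here (mean gray level, intensity entropy, edge density) and the window size are simplified stand-ins for the paper's gray/entropy/corner/edge features.

```python
import numpy as np

def window_saliency(img, win=8):
    """Sliding-window saliency for a 2-D gray image in [0, 1]: each
    non-overlapping win x win window is scored by the distance between
    its local feature vector and the global feature vector."""
    gy, gx = np.gradient(img)
    edges = np.hypot(gx, gy)

    def entropy(patch):
        hist, _ = np.histogram(patch, bins=16, range=(0, 1))
        p = hist / hist.sum()
        return -np.sum(p * np.log2(p + 1e-12))

    g_stats = np.array([img.mean(), entropy(img), edges.mean()])
    h, w = img.shape
    sal = np.zeros((h // win, w // win))
    for i in range(h // win):
        for j in range(w // win):
            patch = img[i * win:(i + 1) * win, j * win:(j + 1) * win]
            e = edges[i * win:(i + 1) * win, j * win:(j + 1) * win]
            local = np.array([patch.mean(), entropy(patch), e.mean()])
            sal[i, j] = np.linalg.norm(local - g_stats)  # deviation = salience
    return sal
```

    A small, locally prominent lesion scores highly even when it is not the globally most salient structure, which is the failure mode of whole-image models the abstract describes.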

  1. Typical visual search performance and atypical gaze behaviors in response to faces in Williams syndrome.

    PubMed

    Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho

    2016-01-01

    Evidence indicates that individuals with Williams syndrome (WS) exhibit atypical attentional characteristics when viewing faces. However, the dynamics of visual attention captured by faces remain unclear, especially when explicit attentional forces are present. To clarify this, we introduced a visual search paradigm and assessed how the relative strength of visual attention captured by a face and by explicit attentional control changes as search progresses. Participants (WS and controls) searched for a target (butterfly) within an array of distractors, which sometimes contained an upright face. We analyzed reaction time and the location of the first fixation, which reflect the attentional profile at the initial stage, as well as fixation durations, which represent aspects of attention at later stages of visual search. The strength of visual attention captured by faces and of explicit attentional control (toward the butterfly) was characterized by the frequency of first fixations on a face or butterfly and by the duration of face or butterfly fixations. Although reaction time was longer in all groups when faces were present, and visual attention was not dominated by faces in any group during the initial stages of the search, attention to faces dominated in the WS group during the later search stages when faces were present. Furthermore, for the WS group, reaction time correlated with eye-movement measures at different stages of searching: longer reaction times were associated with longer face fixations at both the initial and later stages of searching, whereas shorter reaction times were associated with longer butterfly fixations. The relative strength of attention captured by faces in people with WS is thus not observed at the initial stage of searching but becomes dominant as the search progresses. Moreover, although behavioral responses are associated with some aspects of eye movements, they are not as sensitive as eye-movement measurements themselves at detecting atypical attentional characteristics in people with WS.

  2. Influence of audio triggered emotional attention on video perception

    NASA Astrophysics Data System (ADS)

    Torres, Freddy; Kalva, Hari

    2014-02-01

    Perceptual video coding methods attempt to improve compression efficiency by discarding visual information not perceived by end users. Most current approaches to perceptual video coding use only visual features, ignoring the auditory component. Many psychophysical studies have demonstrated that auditory stimuli affect our visual perception. In this paper we present our study of audio-triggered emotional attention and its applicability to perceptual video coding. Experiments with movie clips show that the reaction time to detect video compression artifacts was longer when video was presented with the audio information. The results reported are statistically significant with p=0.024.

  3. Working memory capacity accounts for the ability to switch between object-based and location-based allocation of visual attention.

    PubMed

    Bleckley, M Kathryn; Foster, Jeffrey L; Engle, Randall W

    2015-04-01

    Bleckley, Durso, Crutchfield, Engle, and Khanna (Psychonomic Bulletin & Review, 10, 884-889, 2003) found that visual attention allocation differed between groups high or low in working memory capacity (WMC). High-span, but not low-span, subjects showed an invalid-cue cost during a letter localization task in which the letter appeared closer to fixation than the cue, but not when the letter appeared farther from fixation than the cue. This suggests that low-spans allocated attention as a spotlight, whereas high-spans allocated their attention to objects. In this study, we tested whether utilizing object-based visual attention is a resource-limited process that is difficult for low-span individuals. In the first experiment, we tested the use of object-based versus location-based attention with high- and low-span subjects, with half of the subjects completing a demanding secondary load task. Under load, high-spans were no longer able to use object-based visual attention. A second experiment supported the hypothesis that these differences in allocation were due to high-spans using object-based allocation, whereas low-spans used location-based allocation.

  4. Simultaneous selection by object-based attention in visual and frontal cortex

    PubMed Central

    Pooresmaeili, Arezoo; Poort, Jasper; Roelfsema, Pieter R.

    2014-01-01

    Models of visual attention hold that top-down signals from frontal cortex influence information processing in visual cortex. It is unknown whether situations exist in which visual cortex actively participates in attentional selection. To investigate this question, we simultaneously recorded neuronal activity in the frontal eye fields (FEF) and primary visual cortex (V1) during a curve-tracing task in which attention shifts are object-based. We found that accurate performance was associated with similar latencies of attentional selection in both areas and that the latency in both areas increased if the task was made more difficult. The amplitude of the attentional signals in V1 saturated early during a trial, whereas these selection signals kept increasing for a longer time in FEF, until the moment of an eye movement, as if FEF integrated attentional signals present in early visual cortex. In erroneous trials, we observed an interareal latency difference because FEF selected the wrong curve before V1 and imposed its erroneous decision onto visual cortex. The neuronal activity in visual and frontal cortices was correlated across trials, and this trial-to-trial coupling was strongest for the attended curve. These results imply that selective attention relies on reciprocal interactions within a large network of areas that includes V1 and FEF. PMID:24711379

  5. Attention mechanisms in visual search -- an fMRI study.

    PubMed

    Leonards, U; Sunaert, S; Van Hecke, P; Orban, G A

    2000-01-01

    The human visual system is usually confronted with many different objects at a time, with only some of them reaching consciousness. Reaction-time studies have revealed two different strategies by which objects are selected for further processing: an automatic, efficient search process, and a conscious, so-called inefficient search [Treisman, A. (1991). Search, similarity, and integration of features between and within dimensions. Journal of Experimental Psychology: Human Perception and Performance, 17, 652-676; Treisman, A., & Gelade, G. (1980). A feature integration theory of attention. Cognitive Psychology, 12, 97-136; Wolfe, J. M. (1996). Visual search. In H. Pashler (Ed.), Attention. London: University College London Press]. Two different theories have been proposed to account for these search processes. Parallel theories presume that both types of search are handled by a single mechanism that is modulated by attentional and computational demands. Serial theories, in contrast, propose that parallel processing may underlie efficient search, but that inefficient search requires an additional serial mechanism, an attentional "spotlight" (Treisman, A., 1991) that successively shifts attention to different locations in the visual field. Using functional magnetic resonance imaging (fMRI), we show that the cerebral networks involved in efficient and inefficient search overlap almost completely. Only the superior frontal region, known to be involved in working memory [Courtney, S. M., Petit, L., Maisog, J. M., Ungerleider, L. G., & Haxby, J. V. (1998). An area specialized for spatial working memory in human frontal cortex. Science, 279, 1347-1351] and distinct from the frontal eye fields, which control spatial shifts of attention, was specifically involved in inefficient search. Activity modulations correlated best with subjects' behavior in the extrastriate cortical areas, where the amount of activity depended on the number of distracting elements in the display. Such a correlation was not observed in the parietal and frontal regions usually assumed to be involved in spatial attention processing. These results can be interpreted in two ways. The most likely is that visual search does not require serial processing; otherwise, we must assume the existence of a serial searchlight that operates in the extrastriate cortex but differs from the visuospatial shifts of attention involving the parietal and frontal regions.

  6. A Feedback Model of Attention Explains the Diverse Effects of Attention on Neural Firing Rates and Receptive Field Structure.

    PubMed

    Miconi, Thomas; VanRullen, Rufin

    2016-02-01

    Visual attention has many effects on neural responses, producing complex changes in firing rates, as well as modifying the structure and size of receptive fields, both in topological and feature space. Several existing models of attention suggest that these effects arise from selective modulation of neural inputs. However, anatomical and physiological observations suggest that attentional modulation targets higher levels of the visual system (such as V4 or MT) rather than input areas (such as V1). Here we propose a simple mechanism that explains how a top-down attentional modulation, falling on higher visual areas, can produce the observed effects of attention on neural responses. Our model requires only the existence of modulatory feedback connections between areas, and short-range lateral inhibition within each area. Feedback connections redistribute the top-down modulation to lower areas, which in turn alters the inputs of other higher-area cells, including those that did not receive the initial modulation. This produces firing rate modulations and receptive field shifts. Simultaneously, short-range lateral inhibition between neighboring cells produces competitive effects that are automatically scaled to receptive field size in any given area. Our model reproduces the observed attentional effects on response rates (response gain, input gain, biased competition automatically scaled to receptive field size) and receptive field structure (shifts and resizing of receptive fields both spatially and in complex feature space), without modifying model parameters. Our model also makes the novel prediction that attentional effects on response curves should shift from response gain to contrast gain as the spatial focus of attention drifts away from the studied cell.
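
    The proposed mechanism (top-down modulation of a higher area, redistributed downward through modulatory feedback, combined with short-range lateral inhibition) can be caricatured in a toy firing-rate model. All units, gains, and time constants below are invented for illustration and are not the paper's model.

```python
import numpy as np

def simulate(attn_gain=1.0, steps=200, dt=0.1):
    """Toy two-area rate model: three V1-like units drive one higher-area
    unit; top-down attention excites the higher unit, whose modulatory
    feedback multiplicatively scales V1 input, while lateral inhibition
    normalizes V1 activity. Returns steady-state V1 rates."""
    stim = np.array([1.0, 0.5, 0.2])   # feedforward drive to the V1 units
    v1 = np.zeros(3)
    high = 0.0
    for _ in range(steps):
        fb = 1.0 + 0.5 * high          # modulatory feedback gain
        inhib = 0.3 * v1.sum()         # short-range lateral inhibition
        v1 += dt * (-v1 + np.maximum(fb * stim - inhib, 0.0))
        high += dt * (-high + attn_gain * v1.sum())
    return v1

base = simulate(attn_gain=0.0)       # no top-down attention
attended = simulate(attn_gain=0.5)   # attention falls on the higher area
```

    Even in this caricature, modulation applied only at the higher level changes lower-area rates through the feedback loop, which is the core claim of the model.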

  7. Top-down modulation of visual and auditory cortical processing in aging.

    PubMed

    Guerreiro, Maria J S; Eck, Judith; Moerel, Michelle; Evers, Elisabeth A T; Van Gerven, Pascal W M

    2015-02-01

    Age-related cognitive decline has been accounted for by an age-related deficit in top-down attentional modulation of sensory cortical processing. In light of recent behavioral findings showing that age-related differences in selective attention are modality dependent, our goal was to investigate the role of sensory modality in age-related differences in top-down modulation of sensory cortical processing. This question was addressed by testing younger and older individuals in several memory tasks while undergoing fMRI. Throughout these tasks, perceptual features were kept constant while attentional instructions were varied, allowing us to devise all combinations of relevant and irrelevant, visual and auditory information. We found no top-down modulation of auditory sensory cortical processing in either age group. In contrast, we found top-down modulation of visual cortical processing in both age groups, and this effect did not differ between age groups. That is, older adults enhanced cortical processing of relevant visual information and suppressed cortical processing of visual distractors during auditory attention to the same extent as younger adults. The present results indicate that older adults are capable of suppressing irrelevant visual information in the context of cross-modal auditory attention, and thereby challenge the view that age-related attentional and cognitive decline is due to a general deficit in the ability to suppress irrelevant information. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation

    PubMed Central

    Lusk, Laina G.; Mitchel, Aaron D.

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959

  9. Characteristics of covert and overt visual orienting: Evidence from attentional and oculomotor capture

    NASA Technical Reports Server (NTRS)

    Wu, Shu-Chieh; Remington, Roger W.

    2003-01-01

    Five visual search experiments found oculomotor and attentional capture consistent with predictions of contingent orienting, contrary to claims that oculomotor capture is purely stimulus driven. Separate saccade and attend-only conditions contained a color target appearing either singly, with an onset or color distractor, or both. In singleton mode, onsets produced oculomotor and attentional capture. In feature mode, capture was absent or greatly reduced, providing evidence for top-down modulation of both types of capture. Although attentional capture by color distractors was present throughout, oculomotor capture by color occurred only when accompanied by transient change, providing evidence for a dissociation between oculomotor and attentional capture. Oculomotor and attentional capture appear to be mediated by top-down attentional control settings, but transient change may be necessary for oculomotor capture. ((c) 2003 APA, all rights reserved).

  10. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. The guidance of spatial attention during visual search for color combinations and color configurations.

    PubMed

    Berggren, Nick; Eimer, Martin

    2016-09-01

    Representations of target-defining features (attentional templates) guide the selection of target objects in visual search. We used behavioral and electrophysiological measures to investigate how such search templates control the allocation of attention in search tasks where targets are defined by the combination of 2 colors or by a specific spatial configuration of these colors. Target displays were preceded by spatially uninformative cue displays that contained items in 1 or both target-defining colors. Experiments 1 and 2 demonstrated that, during search for color combinations, attention is initially allocated independently and in parallel to all objects with target-matching colors, but is then rapidly withdrawn from objects that only have 1 of the 2 target colors. In Experiment 3, targets were defined by a particular spatial configuration of 2 colors, and could be accompanied by nontarget objects with a different configuration of the same colors. Attentional guidance processes were unable to distinguish between these 2 types of objects. Both attracted attention equally when they appeared in a cue display, and both received parallel focal-attentional processing and were encoded into working memory when they were presented in the same target display. Results demonstrate that attention can be guided simultaneously by multiple features from the same dimension, but that these guidance processes have no access to the spatial-configural properties of target objects. They suggest that attentional templates do not represent target objects in an integrated pictorial fashion, but contain separate representations of target-defining features. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. Evaluating comparative and equality judgments in contrast perception: attention alters appearance.

    PubMed

    Anton-Erxleben, Katharina; Abrams, Jared; Carrasco, Marisa

    2010-09-09

    Covert attention not only improves performance in many visual tasks but also modulates the appearance of several visual features. Studies on attention and appearance have assessed subjective appearance using a task contingent upon a comparative judgment (e.g., M. Carrasco, S. Ling, & S. Read, 2004). Recently, K. A. Schneider and M. Komlos (2008) questioned the validity of those results because they did not find a significant effect of attention on contrast appearance using an equality task. They claim that such equality judgments are bias-free whereas comparative judgments are bias-prone and propose an alternative interpretation of the previous findings based on a decision bias. However, to date there is no empirical support for the superiority of the equality procedure. Here, we compare biases and sensitivity to shifts in perceived contrast of both paradigms. We measured contrast appearance using both a comparative and an equality judgment. Observers judged the contrasts of two simultaneously presented stimuli, while either the contrast of one stimulus was physically incremented (Experiments 1 and 2) or exogenous attention was drawn to it (Experiments 3 and 4). We demonstrate several methodological limitations of the equality paradigm. Nevertheless, both paradigms capture shifts in the point of subjective equality (PSE) due to physical and perceived changes in contrast and show that attention enhances apparent contrast.

  13. Space-based visual attention: a marker of immature selective attention in toddlers?

    PubMed

    Rivière, James; Brisson, Julie

    2014-11-01

    Various studies suggested that attentional difficulties cause toddlers' failure in some spatial search tasks. However, attention is not a unitary construct and this study investigated two attentional mechanisms: location selection (space-based attention) and object selection (object-based attention). We investigated how toddlers' attention is distributed in the visual field during a manual search task for objects moving out of sight, namely the moving boxes task. Results show that 2.5-year-olds who failed this task allocated more attention to the location of the relevant object than to the object itself. These findings suggest that in some manual search tasks the primacy of space-based attention over object-based attention could be a marker of immature selective attention in toddlers. © 2014 Wiley Periodicals, Inc.

  14. Serial vs. parallel models of attention in visual search: accounting for benchmark RT-distributions.

    PubMed

    Moran, Rani; Zehetleitner, Michael; Liesefeld, Heinrich René; Müller, Hermann J; Usher, Marius

    2016-10-01

    Visual search is central to the investigation of selective visual attention. Classical theories propose that items are identified by serially deploying focal attention to their locations. While this accounts for set-size effects over a continuum of task difficulties, it has been suggested that parallel models can account for such effects equally well. We compared the serial Competitive Guided Search model with a parallel model in their ability to account for RT distributions and error rates from a large visual search dataset featuring three classical search tasks: 1) a spatial configuration search (2 vs. 5); 2) a feature-conjunction search; and 3) a unique feature search (Wolfe, Palmer, & Horowitz, Vision Research, 50(14), 1304-1311, 2010). In the parallel model, each item is represented by a diffusion to two boundaries (target-present/absent); the search corresponds to a parallel race between these diffusors. The parallel model was highly flexible in that it allowed both a parametric range of capacity limitations and set-size adjustments of identification boundaries. Furthermore, a quit unit allowed for a continuum of search-quitting policies when the target is not found, with "single-item inspection" and exhaustive searches comprising its extremes. The serial model was found to be superior to the parallel model, even before penalizing the parallel model for its increased complexity. We discuss the implications of the results and the need for future studies to resolve the debate.
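
    The parallel account described above can be sketched in a few lines: each item runs an independent diffusion to a "target" or "nontarget" boundary, and the display-level response is a race over those finishes. All parameters here (drift, boundary, noise) are illustrative placeholders, not the fitted values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffuse(drift, boundary=1.0, noise=1.0, dt=0.01, max_t=10.0):
    """One item's identification: a diffusion to two boundaries.
    Returns (decision, time); +1 = 'target', -1 = 'nontarget'."""
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else -1), t

def parallel_search(n_items, target_present):
    """Unlimited-capacity parallel race: respond 'present' as soon as
    any item reaches its target boundary; respond 'absent' only after
    an exhaustive scan in which every item finishes as nontarget."""
    drifts = [-0.8] * n_items        # nontargets drift toward 'nontarget'
    if target_present:
        drifts[0] = 0.8              # the target drifts toward 'target'
    finishes = [diffuse(d) for d in drifts]
    hit_times = [t for decision, t in finishes if decision == 1]
    if hit_times:
        return "present", min(hit_times)
    return "absent", max(t for _, t in finishes)

response, rt = parallel_search(n_items=5, target_present=True)
```

    Capacity limits could be modeled by dividing a fixed drift budget across items, and the quit unit by allowing termination before the exhaustive scan completes.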

  15. The effects of visual stimulation and selective visual attention on rhythmic neuronal synchronization in macaque area V4.

    PubMed

    Fries, Pascal; Womelsdorf, Thilo; Oostenveld, Robert; Desimone, Robert

    2008-04-30

    Selective attention lends relevant sensory input priority access to higher-level brain areas and ultimately to behavior. Recent studies have suggested that those neurons in visual areas that are activated by an attended stimulus engage in enhanced gamma-band (30-70 Hz) synchronization compared with neurons activated by a distracter. Such precise synchronization could enhance the postsynaptic impact of cells carrying behaviorally relevant information. Previous studies have used the local field potential (LFP) power spectrum or spike-LFP coherence (SFC) to indirectly estimate spike synchronization. Here, we directly demonstrate zero-phase gamma-band coherence among spike trains of V4 neurons. This synchronization was particularly evident during visual stimulation and enhanced by selective attention, thus confirming the pattern inferred from LFP power and SFC. We therefore investigated the time course of LFP gamma-band power and found rapid dynamics consistent with interactions of top-down spatial and feature attention with bottom-up saliency. In addition to the modulation of synchronization during visual stimulation, selective attention significantly changed the prestimulus pattern of synchronization. Attention inside the receptive field of the recorded neuronal population enhanced gamma-band synchronization and strongly reduced alpha-band (9-11 Hz) synchronization in the prestimulus period. These results lend further support for a functional role of rhythmic neuronal synchronization in attentional stimulus selection.

  16. Attention Alters Perceived Attractiveness.

    PubMed

    Störmer, Viola S; Alvarez, George A

    2016-04-01

    Can attention alter the impression of a face? Previous studies showed that attention modulates the appearance of lower-level visual features. For instance, attention can make a simple stimulus appear to have higher contrast than it actually does. We tested whether attention can also alter the perception of a higher-order property, namely facial attractiveness. We asked participants to judge the relative attractiveness of two faces after summoning their attention to one of the faces using a briefly presented visual cue. Across trials, participants judged the attended face to be more attractive than the same face when it was unattended. This effect was not due to decision or response biases, but rather was due to changes in perceptual processing of the faces. These results show that attention alters perceived facial attractiveness, and broadly demonstrate that attention can influence higher-level perception and may affect people's initial impressions of one another. © The Author(s) 2016.

  17. The Control of Single-color and Multiple-color Visual Search by Attentional Templates in Working Memory and in Long-term Memory.

    PubMed

    Grubert, Anna; Carlisle, Nancy B; Eimer, Martin

    2016-12-01

    The question whether target selection in visual search can be effectively controlled by simultaneous attentional templates for multiple features is still under dispute. We investigated whether multiple-color attentional guidance is possible when target colors remain constant and can thus be represented in long-term memory but not when they change frequently and have to be held in working memory. Participants searched for one, two, or three possible target colors that were specified by cue displays at the start of each trial. In constant-color blocks, the same colors remained task-relevant throughout. In variable-color blocks, target colors changed between trials. The contralateral delay activity (CDA) to cue displays increased in amplitude as a function of color memory load in variable-color blocks, which indicates that cued target colors were held in working memory. In constant-color blocks, the CDA was much smaller, suggesting that color representations were primarily stored in long-term memory. N2pc components to targets were measured as a marker of attentional target selection. Target N2pcs were attenuated and delayed during multiple-color search, demonstrating less efficient attentional deployment to color-defined target objects relative to single-color search. Importantly, these costs were the same in constant-color and variable-color blocks. These results demonstrate that attentional guidance by multiple-feature as compared with single-feature templates is less efficient both when target features remain constant and can be represented in long-term memory and when they change across trials and therefore have to be maintained in working memory.

  18. The effects of visual search efficiency on object-based attention

    PubMed Central

    Rosen, Maya; Cutrone, Elizabeth; Behrmann, Marlene

    2017-01-01

    The attentional prioritization hypothesis of object-based attention (Shomstein & Yantis in Perception & Psychophysics, 64, 41–51, 2002) suggests a two-stage selection process comprising an automatic spatial gradient and flexible strategic (prioritization) selection. The combined attentional priorities of these two stages of object-based selection determine the order in which participants will search the display for the presence of a target. The strategic process has often been likened to a prioritized visual search. By modifying the double-rectangle cueing paradigm (Egly, Driver, & Rafal in Journal of Experimental Psychology: General, 123, 161–177, 1994) and placing it in the context of a larger-scale visual search, we examined how the prioritization search is affected by search efficiency. By probing both targets located on the cued object and targets external to the cued object, we found that the attentional priority surrounding a selected object is strongly modulated by search mode. However, the ordering of the prioritization search is unaffected by search mode. The data also provide evidence that standard spatial visual search and object-based prioritization search may rely on distinct mechanisms. These results provide insight into the interactions between the mode of visual search and object-based selection, and help define the modulatory consequences of search efficiency for object-based attention. PMID:25832192

  19. Relationship between visual binding, reentry and awareness.

    PubMed

    Koivisto, Mika; Silvanto, Juha

    2011-12-01

    Visual feature binding has been suggested to depend on reentrant processing. We addressed the relationship between binding, reentry, and visual awareness by asking the participants to discriminate the color and orientation of a colored bar (presented either alone or simultaneously with a white distractor bar) and to report their phenomenal awareness of the target features. The success of reentry was manipulated with object substitution masking and backward masking. The results showed that late reentrant processes are necessary for successful binding but not for phenomenal awareness of the bound features. Binding errors were accompanied by phenomenal awareness of the misbound feature conjunctions, demonstrating that they were experienced as real properties of the stimuli (i.e., illusory conjunctions). Our results suggest that early preattentive binding and local recurrent processing enable features to reach phenomenal awareness, while later attention-related reentrant iterations modulate the way in which the features are bound and experienced in awareness. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. Mastering algebra retrains the visual system to perceive hierarchical structure in equations.

    PubMed

    Marghetis, Tyler; Landy, David; Goldstone, Robert L

    2016-01-01

    Formal mathematics is a paragon of abstractness. It thus seems natural to assume that the mathematical expert should rely more on symbolic or conceptual processes, and less on perception and action. We argue instead that mathematical proficiency relies on perceptual systems that have been retrained to implement mathematical skills. Specifically, we investigated whether the visual system (in particular, object-based attention) is retrained so that parsing algebraic expressions and evaluating algebraic validity are accomplished by visual processing. Object-based attention occurs when the visual system organizes the world into discrete objects, which then guide the deployment of attention. One classic signature of object-based attention is better perceptual discrimination within, rather than between, visual objects. The current study reports that object-based attention occurs not only for simple shapes but also for symbolic mathematical elements within algebraic expressions, but only among individuals who have mastered the hierarchical syntax of algebra. Moreover, among these individuals, increased object-based attention within algebraic expressions is associated with a better ability to evaluate algebraic validity. These results suggest that, in mastering the rules of algebra, people retrain their visual system to represent and evaluate abstract mathematical structure. We thus argue that algebraic expertise involves the regimentation and reuse of evolutionarily ancient perceptual processes. Our findings implicate the visual system as central to learning and reasoning in mathematics, leading us to favor educational approaches to mathematics and related STEM fields that encourage students to adapt, not abandon, their use of perception.

  1. Should I stay or should I go? Attentional disengagement from visually unique and unexpected items at fixation.

    PubMed

    Brockmole, James R; Boot, Walter R

    2009-06-01

    Distinctive aspects of a scene can capture attention even when they are irrelevant to one's goals. The authors address whether visually unique, unexpected, but task-irrelevant features also tend to hold attention. Observers searched through displays in which the color of each item was irrelevant. At the start of search, all objects changed color. Critically, the foveated item changed to an unexpected color (it was novel), became a color singleton (it was unique), or both. Saccade latency revealed the time required to disengage overt attention from this object. Singletons resulted in longer latencies, but only if they were unexpected. Conversely, unexpected items only delayed disengagement if they were singletons. Thus, the time spent overtly attending to an object is determined, at least in part, by task-irrelevant stimulus properties, but this depends on the confluence of expectation and visual salience. (c) 2009 APA, all rights reserved.

  2. TVA-based assessment of visual attentional functions in developmental dyslexia

    PubMed Central

    Bogon, Johanna; Finke, Kathrin; Stenneken, Prisca

    2014-01-01

    There is an ongoing debate whether an impairment of visual attentional functions constitutes an additional or even an isolated deficit of developmental dyslexia (DD). In particular, performance in tasks that require the processing of multiple visual elements in parallel has been reported to be impaired in DD. We review studies that used parameter-based assessment for identifying and quantifying impaired aspect(s) of visual attention that underlie this multi-element processing deficit in DD. These studies used the mathematical framework provided by the “theory of visual attention” (Bundesen, 1990) to derive quantitative measures of general attentional resources and attentional weighting aspects on the basis of behavioral performance in whole- and partial-report tasks. Based on parameter estimates in children and adults with DD, the reviewed studies support a slowed perceptual processing speed as an underlying primary deficit in DD. Moreover, a reduction in visual short term memory storage capacity seems to present a modulating component, contributing to difficulties in written language processing. Furthermore, comparing the spatial distributions of attentional weights in children and adults suggests that having limited reading and writing skills might impair the development of the slight leftward bias that is typical of unimpaired adult readers. PMID:25360129

  3. Model-Free Estimation of Tuning Curves and Their Attentional Modulation, Based on Sparse and Noisy Data.

    PubMed

    Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian

    2016-01-01

    Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g., a Gaussian or another bell-shaped curve, to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to determine reliably the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need of fitting any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific. Based on these proofs-of-concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches.
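
    One model-free feature extraction of the kind described can be sketched directly: estimate a neuron's preferred direction as the angle of the response-weighted circular mean of the sampled directions, with no parametric curve fit. The data below are invented for illustration:

```python
import numpy as np

def preferred_direction(angles_deg, responses):
    """Model-free preferred-direction estimate: the angle of the
    response-weighted vector sum over the sampled stimulus directions."""
    a = np.deg2rad(np.asarray(angles_deg, dtype=float))
    r = np.asarray(responses, dtype=float)
    x, y = np.sum(r * np.cos(a)), np.sum(r * np.sin(a))
    return float(np.rad2deg(np.arctan2(y, x)) % 360.0)

# Hypothetical noisy responses (spikes/s) at 8 motion directions,
# with a peak near 90 degrees.
angles    = [0, 45, 90, 135, 180, 225, 270, 315]
responses = [4.0, 12.0, 25.0, 14.0, 5.0, 2.0, 1.0, 2.0]

pd = preferred_direction(angles, responses)  # close to 90 degrees
```

    An attentional gain modulation could likewise be quantified model-free, e.g., as the ratio of peak responses between attended and unattended conditions.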

  5. Adults with Dyslexia Demonstrate Space-Based and Object-Based Covert Attention Deficits: Shifting Attention to the Periphery and Shifting Attention between Objects in the Left Visual Field

    ERIC Educational Resources Information Center

    Buchholz, J.; Davies, A.A.

    2005-01-01

    Performance on a covert visual attention task is compared between a group of adults with developmental dyslexia (specifically phonological difficulties) and a group of age and IQ matched controls. The group with dyslexia were generally slower to detect validly-cued targets. Costs of shifting attention toward the periphery when the target was…

  6. The Role of Visual Noise in Influencing Mental Load and Fatigue in a Steady-State Motion Visual Evoked Potential-Based Brain-Computer Interface.

    PubMed

    Xie, Jun; Xu, Guanghua; Luo, Ailing; Li, Min; Zhang, Sicong; Han, Chengcheng; Yan, Wenqiang

    2017-08-14

    As a spatial selective attention-based brain-computer interface (BCI) paradigm, the steady-state visual evoked potential (SSVEP) BCI has the advantages of a high information transfer rate, high tolerance to artifacts, and robust performance across users. However, its benefits come at the cost of the mental load and fatigue that occur during concentration on the visual stimuli. Noise, a ubiquitous random perturbation, may be exploited by the human visual system to enhance higher-level brain functions. In this study, a novel steady-state motion visual evoked potential (SSMVEP, i.e., one kind of SSVEP)-based BCI paradigm with spatiotemporal visual noise was used to investigate the influence of noise on the compensation of mental load and fatigue deterioration during prolonged attention tasks. Changes in α, θ, and θ+α powers, the θ/α ratio, and electroencephalography (EEG) properties of amplitude, signal-to-noise ratio (SNR), and online accuracy were used to evaluate mental load and fatigue. We showed that presenting a moderate visual noise to participants could reliably alleviate the mental load and fatigue during online operation of a visual BCI that places demands on attentional processes. This demonstrates that noise could provide a superior solution for implementing visual attention-based BCI applications.
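
    The θ/α-type measures mentioned above reduce to band-power estimates from the EEG spectrum. A minimal numpy-only sketch; the band limits and the synthetic "EEG" are illustrative, not the study's recordings:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean spectral power of x in the [lo, hi) Hz band, from a
    simple periodogram (|FFT|^2 / N)."""
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= lo) & (freqs < hi)
    return float(power[band].mean())

fs = 250                                 # assumed sampling rate (Hz)
t = np.arange(0, 10, 1.0 / fs)
# Synthetic signal: a dominant 10 Hz (alpha-band) rhythm plus noise.
eeg = np.sin(2 * np.pi * 10 * t) \
    + 0.1 * np.random.default_rng(1).standard_normal(t.size)

alpha = band_power(eeg, fs, 8, 13)
theta = band_power(eeg, fs, 4, 8)
theta_alpha_ratio = theta / alpha        # rising values are commonly
                                         # taken to track fatigue
```

    A production pipeline would use Welch averaging over windowed epochs rather than a single periodogram, but the band-ratio logic is identical.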

  7. Aging and Visual Attention

    PubMed Central

    Madden, David J.

    2007-01-01

    Older adults are often slower and less accurate than are younger adults in performing visual-search tasks, suggesting an age-related decline in attentional functioning. Age-related decline in attention, however, is not entirely pervasive. Visual search that is based on the observer’s expectations (i.e., top-down attention) is relatively preserved as a function of adult age. Neuroimaging research suggests that age-related decline occurs in the structure and function of brain regions mediating the visual sensory input, whereas activation of regions in the frontal and parietal lobes is often greater for older adults than for younger adults. This increased activation may represent an age-related increase in the role of top-down attention during visual tasks. To obtain a more complete account of age-related decline and preservation of visual attention, current research is beginning to explore the relation of neuroimaging measures of brain structure and function to behavioral measures of visual attention. PMID:18080001

  8. A Componential Analysis of Visual Attention in Children With ADHD.

    PubMed

    McAvinue, Laura P; Vangkilde, Signe; Johnson, Katherine A; Habekost, Thomas; Kyllingsbæk, Søren; Bundesen, Claus; Robertson, Ian H

    2015-10-01

    Inattentive behaviour is a defining characteristic of ADHD. Researchers have wondered about the nature of the attentional deficit underlying these symptoms. The primary purpose of the current study was to examine this attentional deficit using a novel paradigm based upon the Theory of Visual Attention (TVA). The TVA paradigm enabled a componential analysis of visual attention through the use of a mathematical model to estimate parameters relating to attentional selectivity and capacity. Children's ability to sustain attention was also assessed using the Sustained Attention to Response Task. The sample included a comparison between 25 children with ADHD and 25 control children aged 9-13. Children with ADHD had significantly impaired sustained attention and visual processing speed but intact attentional selectivity, perceptual threshold and visual short-term memory capacity. The results of this study lend support to the notion of differential impairment of attentional functions in children with ADHD. © 2012 SAGE Publications.
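
    The TVA model behind this paradigm is fully quantitative: the rate at which object x is categorized as i is v(x, i) = η(x, i) · β_i · w_x / Σ_z w_z, and encoding into visual short-term memory is an exponential race against a perceptual threshold t0. A sketch with hypothetical parameter values (not estimates from this study):

```python
import math

def tva_rate(eta, beta, w, all_weights):
    """TVA rate equation: v(x, i) = eta(x, i) * beta_i * w_x / sum_z w_z."""
    return eta * beta * w / sum(all_weights)

def p_encoded(v, t_ms, t0_ms):
    """Probability that an object is encoded into VSTM by time t_ms,
    given processing rate v (items/s) and perceptual threshold t0_ms."""
    if t_ms <= t0_ms:
        return 0.0
    return 1.0 - math.exp(-v * (t_ms - t0_ms) / 1000.0)

# Hypothetical display: two objects competing for processing capacity.
weights = [0.7, 0.3]      # attentional weights w_x
v = tva_rate(eta=40.0, beta=0.8, w=weights[0], all_weights=weights)
p = p_encoded(v, t_ms=200.0, t0_ms=20.0)   # high, but below 1
```

    Parameter-based assessment inverts this logic: accuracy across exposure durations is used to estimate C (total rate), t0, and the weights for each participant.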

  9. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a modulatory role in the audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved.

  10. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    PubMed

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

    Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of facial expressions of surprise, disgust, fear, and happiness, as well as neutral faces, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces, indicating that more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Age-related decline in bottom-up processing and selective attention in the very old.

    PubMed

    Zhuravleva, Tatyana Y; Alperin, Brittany R; Haring, Anna E; Rentz, Dorene M; Holcomb, Philip J; Daffner, Kirk R

    2014-06-01

    Previous research demonstrating age-related deficits in selective attention has not included old-old adults, an increasingly important group to study. The current investigation compared event-related potentials in 15 young-old (65-79 years old) and 23 old-old (80-99 years old) subjects during a color-selective attention task. Subjects responded to target letters in a specified color (Attend) while ignoring letters in a different color (Ignore) under both low and high loads. There were no group differences in visual acuity, accuracy, reaction time, or latency of early event-related potential components. The old-old group showed a disruption in bottom-up processing, indexed by a substantially diminished posterior N1 (smaller amplitude). They also demonstrated markedly decreased modulation of bottom-up processing based on selected visual features, indexed by the posterior selection negativity (SN), with similar attenuation under both loads. In contrast, there were no group differences in frontally mediated attentional selection, measured by the anterior selection positivity (SP). There was a robust inverse relationship between the size of the SN and SP (the smaller the SN, the larger the SP), which may represent an anteriorly supported compensatory mechanism. In the absence of a decline in top-down modulation indexed by the SP, the diminished SN may reflect age-related degradation of early bottom-up visual processing in old-old adults.
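
    The SN and SP components discussed above are difference-wave measures: the attend-minus-ignore ERP averaged over a component-specific time window and electrode site. A sketch with a synthetic difference wave; the 150-300 ms window and the waveforms are illustrative assumptions, not this study's data:

```python
import numpy as np

def mean_difference_amplitude(erp_attend, erp_ignore, times_ms, window=(150, 300)):
    """Mean amplitude of the attend-minus-ignore difference wave in a
    time window; at posterior sites a negative value indexes the SN,
    at anterior sites a positive value indexes the SP."""
    diff = np.asarray(erp_attend, dtype=float) - np.asarray(erp_ignore, dtype=float)
    t = np.asarray(times_ms, dtype=float)
    in_win = (t >= window[0]) & (t <= window[1])
    return float(diff[in_win].mean())

# Synthetic posterior ERPs (microvolts), sampled every 2 ms: the
# attended waveform carries an extra negativity from 150-300 ms.
times = np.arange(0, 500, 2.0)
attend = np.where((times >= 150) & (times <= 300), -2.0, 0.0)
ignore = np.zeros_like(times)

sn = mean_difference_amplitude(attend, ignore, times)  # -2.0 microvolts
```

    The reported SN-SP trade-off would then be a negative correlation between these two amplitudes across subjects.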

  12. Finding regions of interest in pathological images: an attentional model approach

    NASA Astrophysics Data System (ADS)

    Gómez, Francisco; Villalón, Julio; Gutierrez, Ricardo; Romero, Eduardo

    2009-02-01

    This paper introduces an automated method for finding diagnostic regions-of-interest (RoIs) in histopathological images. This method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, which consists in a coarse search for tissue structures at a "low zoom" to separate the image into relevant regions. The pathologist's cognitive performance depends on inherent image visual cues (bottom-up information) and on acquired clinical knowledge (top-down mechanisms). Our pathologist's visual attention model integrates these two components. The selected bottom-up information includes local low level features such as intensity, color, orientation and texture information. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm, inspired by psychological grouping theories. The algorithm parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating a unique index for each of the low level characteristics inside the region. Relevancy is estimated as a simple average of these indexes. Finally, a binary decision rule defines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually-weighted evaluation criterion, finding a quality gain of 3 dB when compared with a classical bottom-up model of attention.
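
    The integration step described above (per-feature indexes averaged into a relevancy score, followed by a binary decision rule) can be sketched directly. The region names, index values, and threshold below are hypothetical:

```python
import numpy as np

def relevancy(feature_indexes):
    """Relevancy of a segmented region: the simple average of its
    per-feature indexes (intensity, color, orientation, texture),
    each assumed to be normalized to [0, 1]."""
    return float(np.mean(feature_indexes))

def select_rois(regions, threshold=0.5):
    """Binary decision rule: a region is 'of interest' when its
    relevancy exceeds the threshold (threshold value is illustrative)."""
    return [name for name, idx in regions.items() if relevancy(idx) > threshold]

# Hypothetical regions with [intensity, color, orientation, texture] indexes.
regions = {
    "background stroma": [0.2, 0.1, 0.3, 0.2],
    "follicle":          [0.8, 0.7, 0.6, 0.9],
}
rois = select_rois(regions)   # ['follicle']
```

    In the full method, the per-feature indexes come from saliency maps computed inside each oversegmented region, and the threshold is tuned against expert annotations.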

  13. Looking is buying. How visual attention and choice are affected by consumer preferences and properties of the supermarket shelf.

    PubMed

    Gidlöf, Kerstin; Anikin, Andrey; Lingonblad, Martin; Wallin, Annika

    2017-09-01

    There is a battle in the supermarket aisle, a battle between what the consumer wants and what the retailer and others want her to see, and subsequently to buy. Product packages and displays contain a number of features and attributes tailored to catch consumers' attention. These are what we call external factors, comprising the visual saliency, the number of facings, and the placement of each product. But a consumer also brings with her a number of goals and interests related to the products and their attributes. These are important internal factors, including brand preferences, price sensitivity, and dietary inclinations. We fit mobile eye trackers to consumers visiting real-life supermarkets in order to investigate to what extent external and internal factors affect consumers' visual attention and purchases. Both external and internal factors influenced what products consumers looked at, with a strong positive interaction between visual saliency and consumer preferences. Consumers appear to take advantage of visual saliency in their decision making, using their knowledge about products' appearance to guide their visual attention towards those that fit their preferences. When it comes to actual purchases, however, visual attention was by far the most important predictor, even after controlling for all other internal and external factors. In other words, the very act of looking longer or repeatedly at a package, for any reason, makes it more likely that this product will be bought. Visual attention is thus crucial for understanding consumer behaviour, even in the cluttered supermarket environment, but it cannot be captured by measurements of visual saliency alone. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Eye movement assessment of selective attentional capture by emotional pictures.

    PubMed

    Nummenmaa, Lauri; Hyönä, Jukka; Calvo, Manuel G

    2006-05-01

    The eye-tracking method was used to assess attentional orienting to and engagement on emotional visual scenes. In Experiment 1, unpleasant, neutral, or pleasant target pictures were presented simultaneously with neutral control pictures in peripheral vision, with instructions to compare the pleasantness of the pictures. The probability of first fixating an emotional picture and the frequency of subsequent fixations were greater than those for neutral pictures. In Experiment 2, participants were instructed to avoid looking at the emotional pictures, but these were still more likely to be fixated first and were gazed at longer during first-pass viewing than neutral pictures. Low-level visual features cannot explain the results. It is concluded that overt visual attention is captured by both unpleasant and pleasant emotional content. (c) 2006 APA, all rights reserved.

  15. Viewing the dynamics and control of visual attention through the lens of electrophysiology

    PubMed Central

    Woodman, Geoffrey F.

    2013-01-01

    How we find what we are looking for in complex visual scenes is a seemingly simple ability that has taken half a century to unravel. The first study to use the term visual search showed that as the number of objects in a complex scene increases, observers’ reaction times increase proportionally (Green and Anderson, 1956). This observation suggests that our ability to process the objects in the scenes is limited in capacity. However, if it is known that the target will have a certain feature attribute, for example, that it will be red, then only an increase in the number of red items increases reaction time. This observation suggests that we can control which visual inputs receive the benefit of our limited capacity to recognize the objects, such as those defined by the color red, as the items we seek. The nature of the mechanisms that underlie these basic phenomena in the visual search literature has been more difficult to determine definitively. In this paper, I discuss how electrophysiological methods have provided us with the necessary tools to understand the nature of the mechanisms that give rise to the effects observed in the first visual search paper. I begin by describing how recordings of event-related potentials from humans and nonhuman primates have shown us how attention is deployed to possible target items in complex visual scenes. Then, I will discuss how event-related potential experiments have allowed us to directly measure the memory representations that are used to guide these deployments of attention to items with target-defining features. PMID:23357579

  16. Threat as a feature in visual semantic object memory.

    PubMed

    Calley, Clifford S; Motes, Michael A; Chiang, H-Sheng; Buhl, Virginia; Spence, Jeffrey S; Abdi, Hervé; Anand, Raksha; Maguire, Mandy; Estevez, Leonardo; Briggs, Richard; Freeman, Thomas; Kraut, Michael A; Hart, John

    2013-08-01

    Threatening stimuli have been found to modulate visual processes related to perception and attention. The present functional magnetic resonance imaging (fMRI) study investigated whether threat modulates visual object recognition of man-made and naturally occurring categories of stimuli. Compared with nonthreatening pictures, threatening pictures of real items elicited larger fMRI BOLD signal changes in medial visual cortices extending inferiorly into the temporo-occipital (TO) "what" pathways. This region showed greater signal changes for threatening items than for nonthreatening items from both the naturally occurring and man-made superordinate stimulus categories, demonstrating a featural component to these visual processing areas. Two additional loci of signal changes within more lateral inferior TO areas (bilateral BA18 and 19 as well as the right ventral temporal lobe) were detected for a category-feature interaction, with stronger responses to man-made (category) threatening (feature) stimuli than to natural threats. The findings are discussed in terms of efficient or rapid visual recognition of groups of items whose detection confers a survival advantage. Copyright © 2012 Wiley Periodicals, Inc.

  17. Is the Binding of Visual Features in Working Memory Resource-Demanding?

    ERIC Educational Resources Information Center

    Allen, Richard J.; Baddeley, Alan D.; Hitch, Graham J.

    2006-01-01

    The episodic buffer component of working memory is assumed to play a role in the binding of features into chunks. A series of experiments compared memory for arrays of colors or shapes with memory for bound combinations of these features. Demanding concurrent verbal tasks were used to investigate the role of general attentional processes,…

  18. Reappraising Abstract Paintings after Exposure to Background Information

    PubMed Central

    Park, Seongmin A.; Yun, Kyongsik; Jeong, Jaeseung

    2015-01-01

    Can knowledge help viewers when they appreciate an artwork? Experts’ judgments of the aesthetic value of a painting often differ from the estimates of naïve viewers, and this phenomenon is especially pronounced in the aesthetic judgment of abstract paintings. We compared the changes in aesthetic judgments of naïve viewers while they were progressively exposed to five pieces of background information. The participants were asked to report their aesthetic judgments of a given painting after each piece of information was presented. We found that commentaries by the artist and a critic significantly increased the subjective aesthetic ratings. Does knowledge enable experts to attend to the visual features in a painting and to link it to the evaluative conventions, thus potentially causing different aesthetic judgments? To investigate whether a specific pattern of attention is essential for the knowledge-based appreciation, we tracked the eye movements of subjects while viewing a painting with a commentary by the artist and with a commentary by a critic. We observed that critics’ commentaries directed the viewers’ attention to the visual components that were highly relevant to the presented commentary. However, attention to specific features of a painting was not necessary for increasing the subjective aesthetic judgment when the artists’ commentary was presented. Our results suggest that at least two different cognitive mechanisms may be involved in knowledge-guided aesthetic judgments while viewers reappraise a painting. PMID:25945789

  19. Cross-Modal Decoding of Neural Patterns Associated with Working Memory: Evidence for Attention-Based Accounts of Working Memory

    PubMed Central

    Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica

    2016-01-01

    Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high–low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. PMID:25146374

  20. Cognitive load reducing in destination decision system

    NASA Astrophysics Data System (ADS)

    Wu, Chunhua; Wang, Cong; Jiang, Qien; Wang, Jian; Chen, Hong

    2007-12-01

    With limited cognitive resources, the quantity of information a person can process is bounded; if this limit is exceeded, the whole cognitive process, and hence the final decision, is affected. We investigate two effective ways to reduce cognitive load: cutting down the number of alternatives, and directing the user to allocate limited attention resources based on selective visual attention theory. Decision-making is such a complex process that people usually have difficulty expressing their requirements completely. This paper puts forward an effective method for capturing a user's hidden requirements; with more requirements captured, the destination decision system can filter out more inappropriate alternatives. Different pieces of information have different utility, and if high-utility information attracts attention easily, the decision can be made more easily. After analyzing current selective visual attention theory, this paper also proposes a new presentation style based on the user's visual attention. This model arranges the presentation of information according to the movement of the sightline, so that the user can concentrate limited attention resources on the important information. Capturing hidden requirements and presenting information according to selective visual attention are effective ways to reduce cognitive load.

  1. Sustained selective attention to competing amplitude-modulations in human auditory cortex.

    PubMed

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role in identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims to dissociate the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.

  2. Sustained Selective Attention to Competing Amplitude-Modulations in Human Auditory Cortex

    PubMed Central

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role in identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims to dissociate the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control. PMID:25259525

  3. PQSM-based RR and NR video quality metrics

    NASA Astrophysics Data System (ADS)

    Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu

    2003-06-01

    This paper presents a new and general concept, the PQSM (Perceptual Quality Significance Map), to be used in measuring visual distortion. It exploits the selectivity of the HVS (Human Visual System), which pays more attention to certain areas/regions of a visual signal due to one or more of the following factors: salient features in the image/video, cues from domain knowledge, and association of other media (e.g., speech or audio). The PQSM is an array whose elements represent the relative perceptual-quality significance levels of the corresponding areas/regions of an image or video. Due to its generality, the PQSM can be incorporated into any visual distortion metric: to improve the effectiveness and/or efficiency of perceptual metrics, or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show that the scheme can improve the performance of current image/video distortion metrics.
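    The core idea, weighting a distortion measure by a per-region significance map before pooling, can be sketched as below. This is a minimal illustration assuming squared-error distortion and PSNR-style pooling; it is not the paper's three-stage implementation:

```python
import math

# Sketch (assumed form, not the paper's code): weight per-pixel squared
# error by a significance map before pooling, so perceptually important
# regions contribute more to the final quality score.
def pqsm_weighted_psnr(ref, dist, pqsm, peak=255.0):
    """ref, dist: 2-D lists of pixel values; pqsm: same-shape nonnegative
    significance weights. Returns a weighted PSNR in dB."""
    num = den = 0.0
    for r_row, d_row, w_row in zip(ref, dist, pqsm):
        for r, d, w in zip(r_row, d_row, w_row):
            num += w * (r - d) ** 2
            den += w
    wmse = num / den
    return 10.0 * math.log10(peak ** 2 / wmse)
```

    With a uniform map this reduces to ordinary PSNR; concentrating weight on a degraded salient region lowers the score relative to the uniform pooling.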

  4. Normal aging delays and compromises early multifocal visual attention during object tracking.

    PubMed

    Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman

    2013-02-01

    Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.

  5. A methodology for coupling a visual enhancement device to human visual attention

    NASA Astrophysics Data System (ADS)

    Todorovic, Aleksandar; Black, John A., Jr.; Panchanathan, Sethuraman

    2009-02-01

    The Human Variation Model views disability as simply "an extension of the natural physical, social, and cultural variability of mankind." Given this human variation, it can be difficult to distinguish between a prosthetic device such as a pair of glasses (which extends limited visual abilities into the "normal" range) and a visual enhancement device such as a pair of binoculars (which extends visual abilities beyond the "normal" range). Indeed, there is no inherent reason why the design of visual prosthetic devices should be limited to just providing "normal" vision. One obvious enhancement to human vision would be the ability to visually "zoom" in on objects that are of particular interest to the viewer. Indeed, it could be argued that humans already have a limited zoom capability, which is provided by their high-resolution foveal vision. However, humans still find additional zooming useful, as evidenced by their purchases of binoculars equipped with mechanized zoom features. The fact that these zoom features are manually controlled raises two questions: (1) Could a visual enhancement device be developed to monitor attention and control visual zoom automatically? (2) If such a device were developed, would its use be experienced by users as a simple extension of their natural vision? This paper details the results of work with two research platforms called the Remote Visual Explorer (ReVEx) and the Interactive Visual Explorer (InVEx) that were developed specifically to answer these two questions.

  6. Capacity for visual features in mental rotation

    PubMed Central

    Xu, Yangqing; Franconeri, Steven L.

    2015-01-01

    Although mental rotation is a core component of scientific reasoning, we still know little about its underlying mechanism. For instance, how much visual information can we rotate at once? Participants rotated a simple multi-part shape, requiring them to maintain attachments between features and moving parts. The capacity of this aspect of mental rotation was strikingly low: only one feature could remain attached to one part. Behavioral and eye-tracking data showed that this single feature remained ‘glued’ via a singular focus of attention, typically on the object’s top. We argue that the architecture of the human visual system is not suited for keeping multiple features attached to multiple parts during mental rotation. Such measurement of the capacity limits may prove to be a critical step in dissecting the suite of visuospatial tools involved in mental rotation, leading to insights for improvement of pedagogy in science education contexts. PMID:26174781

  7. Capacity for Visual Features in Mental Rotation.

    PubMed

    Xu, Yangqing; Franconeri, Steven L

    2015-08-01

    Although mental rotation is a core component of scientific reasoning, little is known about its underlying mechanisms. For instance, how much visual information can someone rotate at once? We asked participants to rotate a simple multipart shape, requiring them to maintain attachments between features and moving parts. The capacity of this aspect of mental rotation was strikingly low: Only one feature could remain attached to one part. Behavioral and eye-tracking data showed that this single feature remained "glued" via a singular focus of attention, typically on the object's top. We argue that the architecture of the human visual system is not suited for keeping multiple features attached to multiple parts during mental rotation. Such measurement of capacity limits may prove to be a critical step in dissecting the suite of visuospatial tools involved in mental rotation, leading to insights for improvement of pedagogy in science-education contexts. © The Author(s) 2015.

  8. The Temporal Dynamics of Visual Search: Evidence for Parallel Processing in Feature and Conjunction Searches

    PubMed Central

    McElree, Brian; Carrasco, Marisa

    2012-01-01

    Feature and conjunction searches have been argued to delineate parallel and serial operations in visual processing. The authors evaluated this claim by examining the temporal dynamics of the detection of features and conjunctions. The 1st experiment used a reaction time (RT) task to replicate standard mean RT patterns and to examine the shapes of the RT distributions. The 2nd experiment used the response-signal speed–accuracy trade-off (SAT) procedure to measure discrimination (asymptotic detection accuracy) and detection speed (processing dynamics). Set size affected discrimination in both feature and conjunction searches but affected detection speed only in the latter. Fits of models to the SAT data that included a serial component overpredicted the magnitude of the observed dynamics differences. The authors concluded that both features and conjunctions are detected in parallel. Implications for the role of attention in visual processing are discussed. PMID:10641310
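    Response-signal SAT data of this kind are conventionally fit with a shifted exponential approach to an asymptote, where the asymptote captures discrimination and the rate and intercept capture processing dynamics. A minimal sketch of that standard function (a generic parameterization, not necessarily the authors' exact fitting model):

```python
import math

# Standard shifted-exponential SAT function:
#   d'(t) = lambda * (1 - exp(-beta * (t - delta)))  for t > delta, else 0
# lam: asymptotic accuracy (discrimination); beta: rate of rise;
# delta: intercept, the earliest time accuracy departs from chance.
def sat_dprime(t, lam, beta, delta):
    if t <= delta:
        return 0.0
    return lam * (1.0 - math.exp(-beta * (t - delta)))
```

    Under this parameterization, a set-size effect on discrimination shows up in lam, whereas a set-size effect on detection speed shows up in beta and/or delta.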

  9. The stage of priming: are intertrial repetition effects attentional or decisional?

    PubMed

    Becker, Stefanie I

    2008-02-01

    In a visual search task, reaction times to a target are shorter when its features are repeated than when they switch. The present study investigated whether these priming effects affect the attentional stage of target selection, as proposed by the priming of pop-out account, or whether they modulate performance at a later, post-selectional stage, as claimed by the episodic retrieval view. Second, to test whether priming affects only the target-defining feature or can apply holistically to all target features, two presentation conditions were used that promoted either encoding of only the target-defining feature or holistic encoding of all target features. Results from four eye-tracking experiments involving a size and colour singleton target showed, first, that priming modulates selectional processes concerned with guiding attention. Second, there were traces of holistic priming effects, which, however, were modulated not by the displays but by expectation and task difficulty.

  10. Visual attention capacity: a review of TVA-based patient studies.

    PubMed

    Habekost, Thomas; Starrfelt, Randi

    2009-02-01

    Psychophysical studies have identified two distinct limitations of visual attention capacity: processing speed and apprehension span. Using a simple test, these cognitive factors can be analyzed by Bundesen's Theory of Visual Attention (TVA). The method has strong specificity and sensitivity, and measurements are highly reliable. As the method is theoretically founded, it also has high validity. TVA-based assessment has recently been used to investigate a broad range of neuropsychological and neurological conditions. We present the method, including the experimental paradigm and practical guidelines to patient testing, and review existing TVA-based patient studies organized by lesion anatomy. Lesions in three anatomical regions affect visual capacity: The parietal lobes, frontal cortex and basal ganglia, and extrastriate cortex. Visual capacity thus depends on large, bilaterally distributed anatomical networks that include several regions outside the visual system. The two visual capacity parameters are functionally separable, but seem to rely on largely overlapping brain areas.
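    The two TVA capacity parameters can be illustrated with a deliberately crude sketch: a total processing rate C (items/s) shared among the display items, and a span K capping how many items enter visual short-term memory. This toy approximation ignores the race dynamics for the K storage slots and is not Bundesen's full model; the parameter values below are arbitrary:

```python
import math

# Toy illustration of TVA's two capacity limits (assumed simplification):
# each of n items finishes encoding by time t with probability
# 1 - exp(-(C/n) * t), and at most K encoded items can be retained.
def expected_report(n_items, C, K, t):
    p_done = 1.0 - math.exp(-(C / n_items) * t)  # per-item completion prob.
    expected_encoded = n_items * p_done
    return min(float(K), expected_encoded)       # span K caps retention
```

    With long exposures the span K is the binding limit; with brief exposures the processing rate C is, which is why the two parameters are separable in whole- and partial-report data.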

  11. Colour expectations during object perception are associated with early and late modulations of electrophysiological activity.

    PubMed

    Stojanoski, Bobby Boge; Niemeier, Matthias

    2015-10-01

    It is well known that visual expectation and attention modulate object perception. Yet, the mechanisms underlying these top-down influences are not completely understood. Event-related potentials (ERPs) indicate late contributions of expectations to object processing around the P2 or N2. This is true independent of whether people expect objects (vs. no objects) or specific shapes, hence when expectations pertain to complex visual features. However, object perception can also benefit from expecting colour information, which can facilitate figure/ground segregation. Studies on attention to colour show attention-sensitive modulations of the P1, but are limited to simple transient detection paradigms. The aim of the current study was to examine whether expecting simple features (colour information) during challenging object perception tasks produce early or late ERP modulations. We told participants to expect an object defined by predominantly black or white lines that were embedded in random arrays of distractor lines and then asked them to report the object's shape. Performance was better when colour expectations were met. ERPs revealed early and late phases of modulation. An early modulation at the P1/N1 transition arguably reflected earlier stages of object processing. Later modulations, at the P3, could be consistent with decisional processes. These results provide novel insights into feature-specific contributions of visual expectations to object perception.

  12. Online decoding of object-based attention using real-time fMRI.

    PubMed

    Niazi, Adnan M; van den Broek, Philip L C; Klanke, Stefan; Barth, Markus; Poel, Mannes; Desain, Peter; van Gerven, Marcel A J

    2014-01-01

    Visual attention is used to selectively filter relevant information depending on current task demands and goals. Visual attention is called object-based attention when it is directed to coherent forms or objects in the visual field. This study used real-time functional magnetic resonance imaging for moment-to-moment decoding of attention to spatially overlapped objects belonging to two different object categories. First, a whole-brain classifier was trained on pictures of faces and places. Subjects then saw transparently overlapped pictures of a face and a place, and attended to only one of them while ignoring the other. The category of the attended object, face or place, was decoded on a scan-by-scan basis using the previously trained decoder. The decoder performed at 77.6% accuracy, indicating that, despite competing bottom-up sensory input, object-based visual attention biased neural patterns towards those of the attended object. Furthermore, a comparison between different classification approaches indicated that the representation of faces and places is distributed rather than focal. This implies that real-time decoding of object-based attention requires a multivariate decoding approach that can detect these distributed patterns of cortical activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
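    The train-offline, decode-online loop described above can be sketched schematically. The nearest-class-mean decoder below is a hypothetical stand-in for the study's whole-brain classifier, and the two-voxel patterns are toy data:

```python
# Schematic of scan-by-scan attention decoding (toy stand-in decoder).
class MeanDecoder:
    """Nearest class-mean classifier over voxel patterns."""
    def fit(self, patterns, labels):
        self.means = {}
        for lab in set(labels):
            rows = [p for p, l in zip(patterns, labels) if l == lab]
            # Per-voxel mean across this class's training patterns.
            self.means[lab] = [sum(v) / len(rows) for v in zip(*rows)]
        return self

    def predict(self, pattern):
        def sqdist(mean):
            return sum((a - b) ** 2 for a, b in zip(pattern, mean))
        return min(self.means, key=lambda lab: sqdist(self.means[lab]))

# Train on localizer scans of faces vs. places (toy 2-voxel patterns)...
decoder = MeanDecoder().fit(
    [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.2, 0.9]],
    ["face", "face", "place", "place"])
# ...then decode each incoming scan as it arrives.
attended = [decoder.predict(scan) for scan in ([0.8, 0.2], [0.1, 1.0])]
```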

  13. Associative cueing of attention through implicit feature-location binding.

    PubMed

    Girardi, Giovanna; Nico, Daniele

    2017-09-01

    In order to assess associative learning between two task-irrelevant features in cueing spatial attention, we devised a task in which participants had to make an identity comparison between two sequential visual stimuli. Unbeknownst to them, the location of the second stimulus could be predicted by the colour of the first or by a concurrent sound. Although unnecessary for performing the identity-matching judgment, these predictive features thus provided an arbitrary association favouring spatial anticipation of the second stimulus. A significant advantage was found, with faster responses at predicted compared to non-predicted locations. The results clearly demonstrated associative cueing of attention via a second-order arbitrary feature/location association, but with a substantial discrepancy depending on the sensory modality of the predictive feature. With colour as the predictive feature, significant advantages emerged only after the completion of three blocks of trials. In contrast, sound affected responses from the first block of trials, and significant advantages were manifest from the beginning of the second. The possible mechanisms underlying the associative cueing of attention in both conditions are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Visual Attention to Radar Displays

    NASA Technical Reports Server (NTRS)

    Moray, N.; Richards, M.; Brophy, C.

    1984-01-01

    A model is described which predicts the allocation of attention to the features of a radar display. It uses the growth of uncertainty and the probability of near collision to call the eye to a feature of the display. The main source of uncertainty is forgetting following a fixation, which is modelled as a two dimensional diffusion process. The model was used to predict information overload in intercept controllers, and preliminary validation obtained by recording eye movements of intercept controllers in simulated and live (practice) interception.
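    The model's selection rule, as described, can be caricatured as follows: positional uncertainty about each display feature grows with time since it was last fixated (the 2-D diffusion), and the eye is called to the feature whose uncertainty, weighted by near-collision probability, is largest. The linear growth and multiplicative combination below are illustrative assumptions, not the paper's exact equations:

```python
# Toy version of the attention-allocation rule (assumed form): fixate the
# display feature with the highest uncertainty x collision-risk priority.
def next_fixation(time_since_fix, collision_prob, diffusion=1.0):
    """Both arguments are dicts keyed by display feature; uncertainty is
    taken to grow linearly with time since the last fixation."""
    priority = {f: diffusion * time_since_fix[f] * collision_prob[f]
                for f in time_since_fix}
    return max(priority, key=priority.get)
```

    A recently fixated track with a high near-collision probability can thus outrank a stale track with a negligible one, and vice versa.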

  15. Rapid feature-driven changes in the attentional window.

    PubMed

    Leonard, Carly J; Lopez-Calderon, Javier; Kreither, Johanna; Luck, Steven J

    2013-07-01

    Spatial attention must adjust around an object of interest in a manner that reflects the object's size on the retina as well as the proximity of distracting objects, a process often guided by nonspatial features. This study used ERPs to investigate how quickly the size of this type of "attentional window" can adjust around a fixated target object defined by its color and whether this variety of attention influences the feedforward flow of subsequent information through the visual system. The task involved attending either to a circular region at fixation or to a surrounding annulus region, depending on which region contained an attended color. The region containing the attended color varied randomly from trial to trial, so the spatial distribution of attention had to be adjusted on each trial. We measured the initial sensory ERP response elicited by an irrelevant probe stimulus that appeared in one of the two regions at different times after task display onset. This allowed us to measure the amount of time required to adjust spatial attention on the basis of the location of the task-relevant feature. We found that the probe-elicited sensory response was larger when the probe occurred within the region of the attended dots, and this effect required a delay of approximately 175 msec between the onset of the task display and the onset of the probe. Thus, the window of attention is rapidly adjusted around the point of fixation in a manner that reflects the spatial extent of a task-relevant stimulus, leading to changes in the feedforward flow of subsequent information through the visual system.

  16. The Role of Early Visual Attention in Social Development

    ERIC Educational Resources Information Center

    Wagner, Jennifer B.; Luyster, Rhiannon J.; Yim, Jung Yeon; Tager-Flusberg, Helen; Nelson, Charles A.

    2013-01-01

    Faces convey important information about the social environment, and even very young infants are preferentially attentive to face-like over non-face stimuli. Eye-tracking studies have allowed researchers to examine which features of faces infants find most salient across development, and the present study examined scanning of familiar (i.e.,…

  17. Invariant visual object recognition: a model, with lighting invariance.

    PubMed

    Rolls, Edmund T; Stringer, Simon M

    2006-01-01

    How are invariant representations of objects formed in the visual cortex? We describe a neurophysiological and computational approach which focusses on a feature hierarchy model in which invariant representations can be built by self-organizing learning based on the statistics of the visual input. The model can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Transformation learning. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and, as we show in this paper, lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene.
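
    The temporal-continuity mechanism mentioned above can be illustrated with a minimal sketch of a Földiák-style trace learning rule. This is our own toy illustration of the general idea, not the Rolls and Stringer implementation; the function name and parameter values are ours. A short-term memory trace of postsynaptic activity bridges successive views, so that different transforms of the same object, presented close in time, strengthen the same weight vector.

    ```python
    import numpy as np

    def trace_update(w, x, y_trace_prev, y, eta=0.5, alpha=0.1):
        """One step of a Foldiak-style trace learning rule (illustrative sketch).

        y_trace_prev -- postsynaptic activity trace carried over from the previous view
        y            -- current postsynaptic firing
        The trace mixes past and present activity, so views of one object that
        appear in temporal succession reinforce the same synaptic weights.
        """
        y_trace = (1.0 - eta) * y_trace_prev + eta * y   # short-term memory trace
        w = w + alpha * y_trace * x                      # Hebbian update gated by the trace
        return w / np.linalg.norm(w), y_trace            # keep the weight vector normalized
    ```

    Calling this repeatedly over a sequence of transformed views of an object would associate those views with one output unit, which is the essence of the invariance learning described in the abstract.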

  18. Target--distractor separation and feature integration in visual attention to letters.

    PubMed

    Driver, J; Baylis, G C

    1991-04-01

    The interference produced by distractor letters diminishes with increasing distance from a target letter, as if the distractors fall outside an attentional spotlight focussed on the target (Eriksen and Eriksen 1974). We examine Hagenaar and Van der Heijden's (1986) claim that this distance effect is an acuity artefact. Feature integration theory (Treisman 1986) predicts that even when acuity is controlled for, distance effects should be found when interference is produced by conjoined distractor features (e.g. letter-identities), but not when interference arises from isolated distractor features (e.g. letter-strokes). The opposite pattern of results is found. A model is proposed in which both letter-strokes and letter-identities are derived in parallel. The location of letter-strokes can also be coded in parallel, but locating letter-identities may require the operation of attention.

  19. (C)overt attention and visual speller design in an ERP-based brain-computer interface.

    PubMed

    Treder, Matthias S; Blankertz, Benjamin

    2010-05-28

    In a visual oddball paradigm, attention to an event usually modulates the event-related potential (ERP). An ERP-based brain-computer interface (BCI) exploits this neural mechanism for communication. Hitherto, it was unclear to what extent the accuracy of such a BCI requires eye movements (overt attention) or whether it is also feasible for targets in the visual periphery (covert attention). Also unclear was how the visual design of the BCI can be improved to meet peculiarities of peripheral vision such as low spatial acuity and crowding. Healthy participants (N = 13) performed a copy-spelling task wherein they had to count target intensifications. EEG and eye movements were recorded concurrently. First, (c)overt attention was investigated by way of a target fixation condition and a central fixation condition. In the latter, participants had to fixate a dot in the center of the screen and allocate their attention to a target in the visual periphery. Second, the effect of visual speller layout was investigated by comparing the symbol Matrix to an ERP-based Hex-o-Spell, a two-level speller consisting of six discs arranged on an invisible hexagon. We assessed counting errors, ERP amplitudes, and offline classification performance. There is an advantage (i.e., fewer errors, larger ERP amplitude modulation, better classification) of overt attention over covert attention, and there is also an advantage of the Hex-o-Spell over the Matrix. Using overt attention, P1, N1, P2, N2, and P3 components are enhanced by attention. Using covert attention, only N2 and P3 are enhanced for both spellers, and N1 and P2 are modulated when using the Hex-o-Spell but not when using the Matrix. Consequently, classifiers rely mainly on early evoked potentials in overt attention and on later cognitive components in covert attention. Both overt and covert attention can be used to drive an ERP-based BCI, but performance is markedly lower for covert attention. 
The Hex-o-Spell outperforms the Matrix, especially when eye movements are not permitted, illustrating that performance can be increased if one accounts for peculiarities of peripheral vision.

  20. (C)overt attention and visual speller design in an ERP-based brain-computer interface

    PubMed Central

    2010-01-01

    Background In a visual oddball paradigm, attention to an event usually modulates the event-related potential (ERP). An ERP-based brain-computer interface (BCI) exploits this neural mechanism for communication. Hitherto, it was unclear to what extent the accuracy of such a BCI requires eye movements (overt attention) or whether it is also feasible for targets in the visual periphery (covert attention). Also unclear was how the visual design of the BCI can be improved to meet peculiarities of peripheral vision such as low spatial acuity and crowding. Method Healthy participants (N = 13) performed a copy-spelling task wherein they had to count target intensifications. EEG and eye movements were recorded concurrently. First, (c)overt attention was investigated by way of a target fixation condition and a central fixation condition. In the latter, participants had to fixate a dot in the center of the screen and allocate their attention to a target in the visual periphery. Second, the effect of visual speller layout was investigated by comparing the symbol Matrix to an ERP-based Hex-o-Spell, a two-level speller consisting of six discs arranged on an invisible hexagon. Results We assessed counting errors, ERP amplitudes, and offline classification performance. There is an advantage (i.e., fewer errors, larger ERP amplitude modulation, better classification) of overt attention over covert attention, and there is also an advantage of the Hex-o-Spell over the Matrix. Using overt attention, P1, N1, P2, N2, and P3 components are enhanced by attention. Using covert attention, only N2 and P3 are enhanced for both spellers, and N1 and P2 are modulated when using the Hex-o-Spell but not when using the Matrix. Consequently, classifiers rely mainly on early evoked potentials in overt attention and on later cognitive components in covert attention. 
Conclusions Both overt and covert attention can be used to drive an ERP-based BCI, but performance is markedly lower for covert attention. The Hex-o-Spell outperforms the Matrix, especially when eye movements are not permitted, illustrating that performance can be increased if one accounts for peculiarities of peripheral vision. PMID:20509913

  1. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    NASA Astrophysics Data System (ADS)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. For emotion recognition, little attention has been paid so far to physiological signals compared with audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, and multiscale entropy, is proposed in order to search for the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is demonstrated by emotion recognition results.
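
    As a concrete illustration of the kind of per-channel time-domain statistics such a system extracts before classification, here is a minimal sketch. The feature choices and names below are ours, not the paper's; the actual study also draws on frequency, entropy, geometric, subband, and multiscale-entropy features.

    ```python
    import numpy as np

    def time_domain_features(signal):
        """Simple statistical features of a 1-D biosignal segment
        (e.g., one skin-conductivity channel); several channels' features
        would be concatenated into a vector for a multiclass classifier."""
        signal = np.asarray(signal, dtype=float)
        diff = np.diff(signal)
        return {
            "mean": float(signal.mean()),
            "std": float(signal.std()),
            "rms": float(np.sqrt(np.mean(signal ** 2))),
            "mean_abs_first_diff": float(np.abs(diff).mean()),  # crude activity measure
        }
    ```

    In a pipeline like the one described, features of this sort from all four sensor channels would be pooled and fed to a feature-based multiclass classifier.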

  2. Iconic memory requires attention

    PubMed Central

    Persuh, Marjan; Genzer, Boris; Melara, Robert D.

    2012-01-01

    Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features. PMID:22586389

  3. Iconic memory requires attention.

    PubMed

    Persuh, Marjan; Genzer, Boris; Melara, Robert D

    2012-01-01

    Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features.

  4. Contextual cueing of pop-out visual search: when context guides the deployment of attention.

    PubMed

    Geyer, Thomas; Zehetleitner, Michael; Müller, Hermann J

    2010-05-01

    Visual context information can guide attention in demanding (i.e., inefficient) search tasks. When participants are repeatedly presented with identically arranged ('repeated') displays, reaction times are faster relative to newly composed ('non-repeated') displays. The present article examines whether this 'contextual cueing' effect also operates in simple (i.e., efficient) search tasks and, if so, whether it influences target, rather than response, selection. The results were that singleton-feature targets were detected faster when the search items were presented in repeated, rather than non-repeated, arrangements. Importantly, repeated, relative to novel, displays also led to an increase in signal detection accuracy. Thus, contextual cueing can expedite the selection of pop-out targets, most likely by enhancing feature contrast signals at the overall-salience computation stage.

  5. TMS over the right precuneus reduces the bilateral field advantage in visual short term memory capacity.

    PubMed

    Kraft, Antje; Dyrholm, Mads; Kehrer, Stefanie; Kaufmann, Christian; Bruening, Jovita; Kathmann, Norbert; Bundesen, Claus; Irlbacher, Kerstin; Brandt, Stephan A

    2015-01-01

    Several studies have demonstrated a bilateral field advantage (BFA) in early visual attentional processing, that is, enhanced visual processing when stimuli are spread across both visual hemifields. The results are reminiscent of a hemispheric resource model of parallel visual attentional processing, suggesting more attentional resources at an early level of visual processing for bilateral displays [e.g. Sereno AB, Kosslyn SM. Discrimination within and between hemifields: a new constraint on theories of attention. Neuropsychologia 1991;29(7):659-75.]. Several studies have shown that the BFA extends beyond early stages of visual attentional processing, demonstrating that visual short term memory (VSTM) capacity is higher when stimuli are distributed bilaterally rather than unilaterally. Here we examine whether hemisphere-specific resources are also evident at later stages of visual attentional processing. Based on the Theory of Visual Attention (TVA) [Bundesen C. A theory of visual attention. Psychol Rev 1990;97(4):523-47.] we used a whole-report paradigm that allows us to investigate variability in visual attention capacity for unilateral and bilateral displays during navigated repetitive transcranial magnetic stimulation (rTMS) of the precuneus region. A robust BFA in VSTM storage capacity was apparent after rTMS over the left precuneus and in the control condition without rTMS. In contrast, the BFA diminished with rTMS over the right precuneus. This finding indicates that the right precuneus plays a causal role in VSTM capacity, particularly for bilateral visual displays. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Feature-based attention is functionally distinct from relation-based attention: The double dissociation between color-based capture and color-relation-based capture of attention.

    PubMed

    Du, Feng; Jiao, Jun

    2016-04-01

    The present study used a spatial blink task and a cuing task to examine the boundary between feature-based capture and relation-based capture. Feature-based capture occurs when distractors match the target feature such as target color. The occurrence of relation-based capture is contingent upon the feature relation between target and distractor (e.g., color relation). The results show that color distractors that match the target-nontarget color relation do not consistently capture attention when they appear outside of the attentional window, but distractors appearing outside the attentional window that match the target color consistently capture attention. In contrast, color distractors that best match the target-nontarget color relation but not the target color, are more likely to capture attention when they appear within the attentional window. Consistently, color cues that match the target-nontarget color relation produce a cuing effect when they appear within the attentional window, while target-color matched cues do not. Such a double dissociation between color-based capture and color-relation-based capture indicates functionally distinct mechanisms for these 2 types of attentional selection. This also indicates that the spatial blink task and the uninformative cuing task are measuring distinctive aspects of involuntary attention. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Modeling the Evolution of Beliefs Using an Attentional Focus Mechanism

    PubMed Central

    Marković, Dimitrije; Gläscher, Jan; Bossaerts, Peter; O’Doherty, John; Kiebel, Stefan J.

    2015-01-01

    For making decisions in everyday life we often have first to infer the set of environmental features that are relevant for the current task. Here we investigated the computational mechanisms underlying the evolution of beliefs about the relevance of environmental features in a dynamical and noisy environment. For this purpose we designed a probabilistic Wisconsin card sorting task (WCST) with belief solicitation, in which subjects were presented with stimuli composed of multiple visual features. At each moment in time a particular feature was relevant for obtaining reward, and participants had to infer which feature was relevant and report their beliefs accordingly. To test the hypothesis that attentional focus modulates the belief update process, we derived and fitted several probabilistic and non-probabilistic behavioral models, which either incorporate a dynamical model of attentional focus, in the form of a hierarchical winner-take-all neuronal network, or a diffusive model, without attention-like features. We used Bayesian model selection to identify the most likely generative model of subjects’ behavior and found that attention-like features in the behavioral model are essential for explaining subjects’ responses. Furthermore, we demonstrate a method for integrating both connectionist and Bayesian models of decision making within a single framework that allowed us to infer hidden belief processes of human subjects. PMID:26495984
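
    The hierarchical winner-take-all dynamics posited for attentional focus can be sketched as a simple competitive network with self-excitation and shared global inhibition. This is a toy illustration under our own parameter choices, not the authors' fitted behavioral model: the most active unit (the currently attended feature) suppresses its competitors until it dominates.

    ```python
    import numpy as np

    def winner_take_all(activations, excitation=0.1, inhibition=0.2, steps=200):
        """Iterate self-excitation plus global inhibition until one unit
        (the attended feature) dominates; returns normalized activations."""
        a = np.asarray(activations, dtype=float)
        for _ in range(steps):
            a = a + excitation * (a - inhibition * a.sum())  # compete via shared inhibition
            a = np.clip(a, 0.0, None)                        # firing rates stay non-negative
        return a / a.sum()
    ```

    In a belief-update model of the kind described, the output of such a competition would weight how strongly each environmental feature's prediction error drives the evolving belief about relevance.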

  8. Using Saliency-Weighted Disparity Statistics for Objective Visual Comfort Assessment of Stereoscopic Images

    NASA Astrophysics Data System (ADS)

    Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing

    2016-06-01

    Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in 3D quality of experience research field. Although the subjective assessment given by human observers is known as the most reliable way to evaluate the experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we first construct an adaptive 3D visual saliency detection model to derive saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics are computed and combined to form a single feature vector to represent a stereoscopic image in terms of visual comfort. In the second stage, a high dimensional feature vector is fused into a single visual comfort score by performing random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
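
    The first stage — saliency-weighted disparity statistics — can be sketched as follows. This is a simplified illustration under our own feature choices; the paper's actual saliency model and feature set differ. Weighting disparity statistics by saliency lets regions that are likely to be attended dominate the comfort features.

    ```python
    import numpy as np

    def saliency_weighted_disparity_stats(disparity, saliency):
        """Disparity statistics weighted by a visual-saliency map, so that
        salient (likely attended) regions dominate the comfort features."""
        d = np.asarray(disparity, dtype=float).ravel()
        w = np.asarray(saliency, dtype=float).ravel()
        w = w / w.sum()                        # normalize saliency into weights
        mean = float((w * d).sum())            # saliency-weighted mean disparity
        var = float((w * (d - mean) ** 2).sum())
        salient = w >= w.mean()                # crude "salient region" mask
        return {"mean": mean,
                "std": var ** 0.5,
                "max_abs": float(np.abs(d[salient]).max())}  # extreme disparity where attended
    ```

    In the second stage, a feature vector built from statistics like these would be fused into a single comfort score by a regressor such as a random forest.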

  9. On the generality of the displaywide contingent orienting hypothesis: can a visual onset capture attention without top-down control settings for displaywide onset?

    PubMed

    Yeh, Su-Ling; Liao, Hsin-I

    2010-10-01

    The contingent orienting hypothesis (Folk, Remington, & Johnston, 1992) states that attentional capture is contingent on top-down control settings induced by task demands. Past studies supporting this hypothesis have identified three kinds of top-down control settings: for target-specific features, for the strategy to search for a singleton, and for visual features in the target display as a whole. Previously, we have found stimulus-driven capture by onset that was not contingent on the first two kinds of settings (Yeh & Liao, 2008). The current study aims to test the third kind: the displaywide contingent orienting hypothesis (Gibson & Kelsey, 1998). Specifically, we ask whether an onset stimulus can still capture attention in the spatial cueing paradigm when attentional control settings for the displaywide onset of the target are excluded by making all letters in the target display emerge from placeholders. Results show that a preceding uninformative onset cue still captured attention to its location in a stimulus-driven fashion, whereas a color cue captured attention only when it was contingent on the setting for displaywide color. These results raise doubts as to the generality of the displaywide contingent orienting hypothesis and help delineate the boundary conditions on this hypothesis. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. View-Based Models of 3D Object Recognition and Class-Specific Invariance

    DTIC Science & Technology

    1994-04-01

    [Garbled OCR excerpt; recoverable content follows.] The report discusses view-invariant features underlying recognition of geon-like components (see Edelman, 1991, and Biederman, 1987) and a weighted view-matching distance of the form (x - t_a)^T W^T W (x - t_a). Cited works include Biederman's recognition-by-components theory of human image understanding (Psychol. Review, 94:115-147, 1987) and a neural model of visual attention and invariant pattern recognition by Olshausen, Anderson, and Van Essen.

  11. Feature-Based Attention in Early Vision for the Modulation of Figure–Ground Segregation

    PubMed Central

    Wagatsuma, Nobuhiko; Oki, Megumi; Sakai, Ko

    2013-01-01

    We investigated psychophysically whether feature-based attention modulates the perception of figure–ground (F–G) segregation and, based on the results, we investigated computationally the neural mechanisms underlying attention modulation. In the psychophysical experiments, the attention of participants was drawn to a specific motion direction and they were then asked to judge the side of the figure in an ambiguous figure with surfaces consisting of distinct motion directions. The results of these experiments showed that the surface consisting of the attended direction of motion was more frequently observed as figure, with a degree comparable to that of spatial attention (Wagatsuma et al., 2008). These experiments also showed that perception was dependent on the distribution of feature contrast, specifically the motion direction differences. These results led us to hypothesize that feature-based attention functions in a framework similar to that of spatial attention. We proposed a V1–V2 model in which feature-based attention modulates the contrast of low-level features in V1, and this modulation of contrast directly changes the surround modulation of border-ownership-selective cells in V2; thus, perception of F–G is biased. The model exhibited good agreement with human perception in the magnitude of attention modulation and its invariance among stimuli. These results indicate that early-level features that are modified by feature-based attention alter subsequent processing along the afferent pathway, and that such modification can even change the perception of an object. PMID:23515841

  12. Feature-based attention in early vision for the modulation of figure-ground segregation.

    PubMed

    Wagatsuma, Nobuhiko; Oki, Megumi; Sakai, Ko

    2013-01-01

    We investigated psychophysically whether feature-based attention modulates the perception of figure-ground (F-G) segregation and, based on the results, we investigated computationally the neural mechanisms underlying attention modulation. In the psychophysical experiments, the attention of participants was drawn to a specific motion direction and they were then asked to judge the side of the figure in an ambiguous figure with surfaces consisting of distinct motion directions. The results of these experiments showed that the surface consisting of the attended direction of motion was more frequently observed as figure, with a degree comparable to that of spatial attention (Wagatsuma et al., 2008). These experiments also showed that perception was dependent on the distribution of feature contrast, specifically the motion direction differences. These results led us to hypothesize that feature-based attention functions in a framework similar to that of spatial attention. We proposed a V1-V2 model in which feature-based attention modulates the contrast of low-level features in V1, and this modulation of contrast directly changes the surround modulation of border-ownership-selective cells in V2; thus, perception of F-G is biased. The model exhibited good agreement with human perception in the magnitude of attention modulation and its invariance among stimuli. These results indicate that early-level features that are modified by feature-based attention alter subsequent processing along the afferent pathway, and that such modification can even change the perception of an object.

  13. Cross-Modal Decoding of Neural Patterns Associated with Working Memory: Evidence for Attention-Based Accounts of Working Memory.

    PubMed

    Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica

    2016-01-01

    Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high-low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. How the deployment of attention determines what we see

    PubMed Central

    Treisman, Anne

    2007-01-01

    Attention is a tool to adapt what we see to our current needs. It can be focused narrowly on a single object or spread over several or distributed over the scene as a whole. In addition to increasing or decreasing the number of attended objects, these different deployments may have different effects on what we see. This chapter describes some research both on focused attention and its use in binding features, and on distributed attention and the kinds of information we gain and lose with the attention window opened wide. One kind of processing that we suggest occurs automatically with distributed attention results in a statistical description of sets of similar objects. Another gives the gist of the scene, which may be inferred from sets of features registered in parallel. Flexible use of these different modes of attention allows us to reconcile sharp capacity limits with a richer understanding of the visual scene. PMID:17387378

  15. Cognitive Control Network Contributions to Memory-Guided Visual Attention

    PubMed Central

    Rosen, Maya L.; Stern, Chantal E.; Michalka, Samantha W.; Devaney, Kathryn J.; Somers, David C.

    2016-01-01

    Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. PMID:25750253

  16. Clinical Features of Auditory Hallucinations in Patients With Dementia With Lewy Bodies: A Soundtrack of Visual Hallucinations.

    PubMed

    Tsunoda, Naoko; Hashimoto, Mamoru; Ishikawa, Tomohisa; Fukuhara, Ryuji; Yuki, Seiji; Tanaka, Hibiki; Hatada, Yutaka; Miyagawa, Yusuke; Ikeda, Manabu

    2018-05-08

    Auditory hallucinations are an important symptom for diagnosing dementia with Lewy bodies (DLB), yet they have received less attention than visual hallucinations. We investigated the clinical features of auditory hallucinations and the possible mechanisms by which they arise in patients with DLB. We recruited 124 consecutive patients with probable DLB (diagnosis based on the DLB International Workshop 2005 criteria; study period: June 2007-January 2015) from the dementia referral center of Kumamoto University Hospital. We used the Neuropsychiatric Inventory to assess the presence of auditory hallucinations, visual hallucinations, and other neuropsychiatric symptoms. We reviewed all available clinical records of patients with auditory hallucinations to assess their clinical features. We performed multiple logistic regression analysis to identify significant independent predictors of auditory hallucinations. Of the 124 patients, 44 (35.5%) had auditory hallucinations and 75 (60.5%) had visual hallucinations. The majority of patients (90.9%) with auditory hallucinations also had visual hallucinations. Auditory hallucinations consisted mostly of human voices, and 90% of patients described them as like hearing a soundtrack of the scene. Multiple logistic regression showed that the presence of auditory hallucinations was significantly associated with female sex (P = .04) and hearing impairment (P = .004). The analysis also revealed independent correlations between the presence of auditory hallucinations and visual hallucinations (P < .001), phantom boarder delusions (P = .001), and depression (P = .038). Auditory hallucinations are common neuropsychiatric symptoms in DLB and usually appear as a background soundtrack accompanying visual hallucinations. Auditory hallucinations in patients with DLB are more likely to occur in women and those with impaired hearing, depression, delusions, or visual hallucinations. © Copyright 2018 Physicians Postgraduate Press, Inc.

  17. Short-term plasticity in auditory cognition.

    PubMed

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  18. Modeling selective attention using a neuromorphic analog VLSI device.

    PubMed

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  19. A neural model of the temporal dynamics of figure-ground segregation in motion perception.

    PubMed

    Raudies, Florian; Neumann, Heiko

    2010-03-01

How does the visual system manage to segment a visual scene into surfaces and objects and attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene is achieved by processing at different levels of the visual cortical hierarchy. According to this, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes which bind together simple features into fragments of increasingly complex configurations at different levels in the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal episodes have been observed in the activation pattern of neurons as low as in area V1. Here, we present a neural network model of motion detection, figure-ground segregation and attentive selection that explains these response patterns in a unifying framework. Based on known principles of functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different and hierarchically organized stages in the dorsal pathway. Visual shapes that are defined by boundaries, which were generated from juxtaposed opponent motions, are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback, while mutual interactions enable the communication between motion and form representations. Selective attention is devoted to shape representations by sending modulating feedback signals from higher levels (working memory) to intermediate levels to enhance their responses. 
Areas in the motion and form pathway are coupled through top-down feedback with V1 cells at the bottom end of the hierarchy. We propose that the different temporal episodes in the response pattern of V1 cells, as recorded in recent experiments, reflect the strength of modulating feedback signals. This feedback results from the consolidated shape representations from coherent motion patterns and the attentive modulation of responses along the cortical hierarchy. The model makes testable predictions concerning the duration and delay of the temporal episodes of V1 cell responses as well as their response variations that were caused by modulating feedback signals. Copyright 2009 Elsevier Ltd. All rights reserved.

  20. Selection-for-action in visual search.

    PubMed

    Hannus, Aave; Cornelissen, Frans W; Lindemann, Oliver; Bekkering, Harold

    2005-01-01

Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. In Experiment 1 we asked whether the effect is capacity demanding, and therefore manipulated the set size of the display. The results indicated a clear cognitive processing capacity requirement: the magnitude of the effect decreased for larger set sizes. Consequently, in Experiment 2, we investigated whether the enhancement occurs only at the level of the behaviorally relevant feature or at a level common to different features, by manipulating the discriminability of the behaviorally neutral feature (color). Again, the results showed that this manipulation influenced the action enhancement of the behaviorally relevant feature. In particular, the effect of the color manipulation on the action enhancement suggests that the action effect is more likely to bias the competition between different visual features than to enhance the processing of the relevant feature alone. We offer a theoretical account that integrates the action-intention effect within the biased competition model of visual selective attention.

  1. Visual Prediction Error Spreads Across Object Features in Human Visual Cortex

    PubMed Central

    Summerfield, Christopher; Egner, Tobias

    2016-01-01

    Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might “spread” from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision. SIGNIFICANCE STATEMENT We address a key question in predictive visual cognition: how does the brain combine multiple concurrent expectations for different features of a single object such as its color and motion trajectory? 
By combining a behavioral protocol that independently varies expectation of (and attention to) multiple object features with computational modeling and fMRI, we demonstrate that behavior and fMRI activity patterns in visual cortex are best accounted for by a model in which prediction error in one object feature spreads to other object features. These results demonstrate how predictive vision forms object-level expectations out of multiple independent features. PMID:27810936

  2. Theory of Visual Attention (TVA) applied to mice in the 5-choice serial reaction time task.

    PubMed

    Fitzpatrick, C M; Caballero-Puntiverio, M; Gether, U; Habekost, T; Bundesen, C; Vangkilde, S; Woldbye, D P D; Andreasen, J T; Petersen, A

    2017-03-01

    The 5-choice serial reaction time task (5-CSRTT) is widely used to measure rodent attentional functions. In humans, many attention studies in healthy and clinical populations have used testing based on Bundesen's Theory of Visual Attention (TVA) to estimate visual processing speeds and other parameters of attentional capacity. We aimed to bridge these research fields by modifying the 5-CSRTT's design and by mathematically modelling data to derive attentional parameters analogous to human TVA-based measures. C57BL/6 mice were tested in two 1-h sessions on consecutive days with a version of the 5-CSRTT where stimulus duration (SD) probe length was varied based on information from previous TVA studies. Thereafter, a scopolamine hydrobromide (HBr; 0.125 or 0.25 mg/kg) pharmacological challenge was undertaken, using a Latin square design. Mean score values were modelled using a new three-parameter version of TVA to obtain estimates of visual processing speeds, visual thresholds and motor response baselines in each mouse. The parameter estimates for each animal were reliable across sessions, showing that the data were stable enough to support analysis on an individual level. Scopolamine HBr dose-dependently reduced 5-CSRTT attentional performance while also increasing reward collection latency at the highest dose. Upon TVA modelling, scopolamine HBr significantly reduced visual processing speed at both doses, while having less pronounced effects on visual thresholds and motor response baselines. This study shows for the first time how 5-CSRTT performance in mice can be mathematically modelled to yield estimates of attentional capacity that are directly comparable to estimates from human studies.
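The abstract does not give the model equations, but TVA-style analyses commonly describe accuracy as a function of stimulus duration using a processing rate, a perceptual threshold, and a guessing baseline. The sketch below is a hedged illustration of that general functional form and of a coarse fitting procedure — the parameter names, ranges, and curve are our assumptions, not the authors' exact three-parameter model:

```python
import numpy as np

def tva_curve(t, v, t0, g):
    """Illustrative TVA-style psychometric function: guessing baseline g
    plus exponential evidence accrual at rate v above threshold t0 (s)."""
    return g + (1 - g) * (1 - np.exp(-v * np.clip(t - t0, 0, None)))

def fit_tva(durations, scores):
    """Coarse grid-search least-squares fit over assumed parameter ranges."""
    best = None
    for v in np.linspace(1, 60, 60):            # processing rate (1/s)
        for t0 in np.linspace(0, 0.05, 26):     # perceptual threshold (s)
            for g in np.linspace(0, 0.5, 26):   # guessing baseline
                err = np.sum((tva_curve(durations, v, t0, g) - scores) ** 2)
                if best is None or err < best[0]:
                    best = (err, v, t0, g)
    return best[1:]  # (v, t0, g)
```

Fitting such a curve per animal is what makes the estimates comparable across sessions and to human TVA studies, since each parameter isolates one component of performance.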

  3. Motion-seeded object-based attention for dynamic visual imagery

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Kim, Kyungnam

    2017-05-01

This paper describes a novel system that finds and segments "objects of interest" in dynamic imagery (video). The system (1) processes each frame using an advanced motion algorithm that pulls out regions exhibiting anomalous motion, and (2) extracts the boundary of each object of interest using a biologically inspired segmentation algorithm based on feature contours. It uses a series of modular, parallel algorithms, which allows many complicated operations to be carried out very quickly, and it can serve as a front end to a larger system that includes object recognition and scene understanding modules. Using this method, we show 90% accuracy with fewer than 0.1 false positives per frame of video, a significant improvement over detection using a baseline attention algorithm.
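Reporting figures like "90% accuracy with fewer than 0.1 false positives per frame" requires matching detections against ground truth frame by frame. One conventional way to score a frame is greedy matching by intersection-over-union; the IoU threshold and matching rule below are our assumptions for illustration, not details from the paper:

```python
import numpy as np

def frame_stats(det_boxes, gt_boxes, iou_thresh=0.5):
    """Score one frame: greedily match detections (x1, y1, x2, y2) to
    ground-truth boxes by IoU; return (hits, false positives, misses)."""
    def iou(a, b):
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        iy = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = ix * iy
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        return inter / union if union else 0.0

    unmatched_gt = list(gt_boxes)
    tp = fp = 0
    for d in det_boxes:
        scores = [iou(d, g) for g in unmatched_gt]
        if scores and max(scores) >= iou_thresh:
            unmatched_gt.pop(int(np.argmax(scores)))  # consume matched truth
            tp += 1
        else:
            fp += 1
    return tp, fp, len(unmatched_gt)
```

Summing these counts over a video yields overall accuracy and the false-positives-per-frame rate quoted above.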

  4. The Neural Basis of Selective Attention

    PubMed Central

    Yantis, Steven

    2009-01-01

    Selective attention is an intrinsic component of perceptual representation in a visual system that is hierarchically organized. Modulatory signals originate in brain regions that represent behavioral goals; these signals specify which perceptual objects are to be represented by sensory neurons that are subject to contextual modulation. Attention can be deployed to spatial locations, features, or objects, and corresponding modulatory signals must be targeted within these domains. Open questions include how nonspatial perceptual domains are modulated by attention and how abstract goals are transformed into targeted modulatory signals. PMID:19444327

  5. An evaluation of attention models for use in SLAM

    NASA Astrophysics Data System (ADS)

    Dodge, Samuel; Karam, Lina

    2013-12-01

In this paper we study the application of visual saliency models to the simultaneous localization and mapping (SLAM) problem. We consider visual SLAM, where the location of the camera and a map of the environment can be generated using images from a single moving camera. In visual SLAM, the interest point detector is of key importance. This detector must be invariant to certain image transformations so that features can be matched across different frames. Recent work has used a model of human visual attention to detect interest points; however, it is unclear which attention model is best suited for this purpose. To this aim, we compare the performance of interest points from four saliency models (Itti, GBVS, RARE, and AWS) with the performance of four traditional interest point detectors (Harris, Shi-Tomasi, SIFT, and FAST). We evaluate these detectors under several different types of image transformation and find that the Itti saliency model, in general, achieves the best performance in terms of keypoint repeatability.
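Keypoint repeatability, the evaluation criterion used here, is typically computed by mapping the reference detections through the known image transformation and counting how many land near a detection in the transformed image. A minimal sketch of that metric (the 2-pixel tolerance is an assumption; the paper's exact protocol is not specified in the abstract):

```python
import numpy as np

def repeatability(kp_ref, kp_warped, H, eps=2.0):
    """Fraction of reference keypoints re-detected after a known transform.

    kp_ref, kp_warped: (N, 2) arrays of (x, y) detections in each image.
    H: 3x3 homography mapping reference coordinates into the warped image.
    """
    # project reference keypoints into the warped image's frame
    pts = np.c_[kp_ref, np.ones(len(kp_ref))] @ H.T
    pts = pts[:, :2] / pts[:, 2:3]
    # a reference point "repeats" if any warped detection lies within eps
    d = np.linalg.norm(pts[:, None, :] - kp_warped[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= eps))
```

A detector that is invariant to the applied transformation scores near 1.0; scores drop as detections fail to reappear at corresponding locations.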

  6. Signal enhancement, not active suppression, follows the contingent capture of visual attention.

    PubMed

    Livingstone, Ashley C; Christie, Gregory J; Wright, Richard D; McDonald, John J

    2017-02-01

    Irrelevant visual cues capture attention when they possess a task-relevant feature. Electrophysiologically, this contingent capture of attention is evidenced by the N2pc component of the visual event-related potential (ERP) and an enlarged ERP positivity over the occipital hemisphere contralateral to the cued location. The N2pc reflects an early stage of attentional selection, but presently it is unclear what the contralateral ERP positivity reflects. One hypothesis is that it reflects the perceptual enhancement of the cued search-array item; another hypothesis is that it is time-locked to the preceding cue display and reflects active suppression of the cue itself. Here, we varied the time interval between a cue display and a subsequent target display to evaluate these competing hypotheses. The results demonstrated that the contralateral ERP positivity is tightly time-locked to the appearance of the search display rather than the cue display, thereby supporting the perceptual enhancement hypothesis and disconfirming the cue-suppression hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. An online EEG BCI based on covert visuospatial attention in absence of exogenous stimulation

    NASA Astrophysics Data System (ADS)

    Tonin, L.; Leeb, R.; Sobolewski, A.; Millán, J. del R.

    2013-10-01

    Objective. In this work we present—for the first time—the online operation of an electroencephalogram (EEG) brain-computer interface (BCI) system based on covert visuospatial attention (CVSA), without relying on any evoked responses. Electrophysiological correlates of pure top-down CVSA have only recently been proposed as a control signal for BCI. Such systems are expected to share the ease of use of stimulus-driven BCIs (e.g. P300, steady state visually evoked potential) with the autonomy afforded by decoding voluntary modulations of ongoing activity (e.g. motor imagery). Approach. Eight healthy subjects participated in the study. EEG signals were acquired with an active 64-channel system. The classification method was based on a time-dependent approach tuned to capture the most discriminant spectral features of the temporal evolution of attentional processes. The system was used by all subjects over two days without retraining, to verify its robustness and reliability. Main results. We report a mean online accuracy across the group of 70.6 ± 1.5%, and 88.8 ± 5.8% for the best subject. Half of the participants produced stable features over the entire duration of the study. Additionally, we explain drops in performance in subjects showing stable features in terms of known electrophysiological correlates of fatigue, suggesting the prospect of online monitoring of mental states in BCI systems. Significance. This work represents the first demonstration of the feasibility of an online EEG BCI based on CVSA. The results achieved suggest the CVSA BCI as a promising alternative to standard BCI modalities.

  8. Mapping feature-sensitivity and attentional modulation in human auditory cortex with functional magnetic resonance imaging

    PubMed Central

    Paltoglou, Aspasia E; Sumner, Christian J; Hall, Deborah A

    2011-01-01

    Feature-specific enhancement refers to the process by which selectively attending to a particular stimulus feature specifically increases the response in the same region of the brain that codes that stimulus property. Whereas there are many demonstrations of this mechanism in the visual system, the evidence is less clear in the auditory system. The present functional magnetic resonance imaging (fMRI) study examined this process for two complex sound features, namely frequency modulation (FM) and spatial motion. The experimental design enabled us to investigate whether selectively attending to FM and spatial motion enhanced activity in those auditory cortical areas that were sensitive to the two features. To control for attentional effort, the difficulty of the target-detection tasks was matched as closely as possible within listeners. Locations of FM-related and motion-related activation were broadly compatible with previous research. The results also confirmed a general enhancement across the auditory cortex when either feature was being attended to, as compared with passive listening. The feature-specific effects of selective attention revealed the novel finding of enhancement for the nonspatial (FM) feature, but not for the spatial (motion) feature. However, attention to spatial features also recruited several areas outside the auditory cortex. Further analyses led us to conclude that feature-specific effects of selective attention are not statistically robust, and appear to be sensitive to the choice of fMRI experimental design and localizer contrast. PMID:21447093

  9. Comparing visual search and eye movements in bilinguals and monolinguals

    PubMed Central

    Hout, Michael C.; Walenchok, Stephen C.; Azuma, Tamiko; Goldinger, Stephen D.

    2017-01-01

    Recent research has suggested that bilinguals show advantages over monolinguals in visual search tasks, although these findings have been derived from global behavioral measures of accuracy and response times. In the present study we sought to explore the bilingual advantage by using more sensitive eyetracking techniques across three visual search experiments. These spatially and temporally fine-grained measures allowed us to carefully investigate any nuanced attentional differences between bilinguals and monolinguals. Bilingual and monolingual participants completed visual search tasks that varied in difficulty. The experiments required participants to make careful discriminations in order to detect target Landolt Cs among similar distractors. In Experiment 1, participants performed both feature and conjunction search. In Experiments 2 and 3, participants performed visual search while making different types of speeded discriminations, after either locating the target or mentally updating a constantly changing target. The results across all experiments revealed that bilinguals and monolinguals were equally efficient at guiding attention and generating responses. These findings suggest that the bilingual advantage does not reflect a general benefit in attentional guidance, but could reflect more efficient guidance only under specific task demands. PMID:28508116

  10. Bottom-Up Guidance in Visual Search for Conjunctions

    ERIC Educational Resources Information Center

    Proulx, Michael J.

    2007-01-01

    Understanding the relative role of top-down and bottom-up guidance is crucial for models of visual search. Previous studies have addressed the role of top-down and bottom-up processes in search for a conjunction of features but with inconsistent results. Here, the author used an attentional capture method to address the role of top-down and…

  11. Inter-area correlations in the ventral visual pathway reflect feature integration

    PubMed Central

    Freeman, Jeremy; Donner, Tobias H.; Heeger, David J.

    2011-01-01

    During object perception, the brain integrates simple features into representations of complex objects. A perceptual phenomenon known as visual crowding selectively interferes with this process. Here, we use crowding to characterize a neural correlate of feature integration. Cortical activity was measured with functional magnetic resonance imaging, simultaneously in multiple areas of the ventral visual pathway (V1–V4 and the visual word form area, VWFA, which responds preferentially to familiar letters), while human subjects viewed crowded and uncrowded letters. Temporal correlations between cortical areas were lower for crowded letters than for uncrowded letters, especially between V1 and VWFA. These differences in correlation were retinotopically specific, and persisted when attention was diverted from the letters. But correlation differences were not evident when we substituted the letters with grating patches that were not crowded under our stimulus conditions. We conclude that inter-area correlations reflect feature integration and are disrupted by crowding. We propose that crowding may perturb the transformations between neural representations along the ventral pathway that underlie the integration of features into objects. PMID:21521832

  12. The Role of Visual Working Memory in Attentive Tracking of Unique Objects

    ERIC Educational Resources Information Center

    Makovski, Tal; Jiang, Yuhong V.

    2009-01-01

    When tracking moving objects in space humans usually attend to the objects' spatial locations and update this information over time. To what extent do surface features assist attentive tracking? In this study we asked participants to track identical or uniquely colored objects. Tracking was enhanced when objects were unique in color. The benefit…

  13. Common Cognitive Deficits in Children with Attention-Deficit/Hyperactivity Disorder and Autism: Working Memory and Visual-Motor Integration

    ERIC Educational Resources Information Center

    Englund, Julia A.; Decker, Scott L.; Allen, Ryan A.; Roberts, Alycia M.

    2014-01-01

    Cognitive deficits in working memory (WM) are characteristic features of Attention-Deficit/Hyperactivity Disorder (ADHD) and autism. However, few studies have investigated cognitive deficits using a wide range of cognitive measures. We compared children with ADHD ("n" = 49) and autism ("n" = 33) with a demographically matched…

  14. Influence of inter-item symmetry in visual search.

    PubMed

    Roggeveen, Alexa B; Kingstone, Alan; Enns, James T

    2004-01-01

    Does visual search involve a serial inspection of individual items (Feature Integration Theory) or are items grouped and segregated prior to their consideration as a possible target (Attentional Engagement Theory)? For search items defined by motion and shape there is strong support for prior grouping (Kingstone and Bischof, 1999). The present study tested for grouping based on inter-item shape symmetry. Results showed that target-distractor symmetry strongly influenced search whereas distractor-distractor symmetry influenced search more weakly. This indicates that static shapes are evaluated for similarity to one another prior to their explicit identification as 'target' or 'distractor'. Possible reasons for the unequal contributions of target-distractor and distractor-distractor relations are discussed.

  15. The Role of Color in Search Templates for Real-world Target Objects.

    PubMed

    Nako, Rebecca; Smith, Tim J; Eimer, Martin

    2016-11-01

    During visual search, target representations (attentional templates) control the allocation of attention to template-matching objects. The activation of new attentional templates can be prompted by verbal or pictorial target specifications. We measured the N2pc component of the ERP as a temporal marker of attentional target selection to determine the role of color signals in search templates for real-world search target objects that are set up in response to word or picture cues. On each trial run, a word cue (e.g., "apple") was followed by three search displays that contained the cued target object among three distractors. The selection of the first target was based on the word cue only, whereas selection of the two subsequent targets could be controlled by templates set up after the first visual presentation of the target (picture cue). In different trial runs, search displays either contained objects in their natural colors or monochromatic objects. These two display types were presented in different blocks (Experiment 1) or in random order within each block (Experiment 2). RTs were faster, and target N2pc components emerged earlier for the second and third display of each trial run relative to the first display, demonstrating that pictures are more effective than word cues in guiding search. N2pc components were triggered more rapidly for targets in the second and third display in trial runs with colored displays. This demonstrates that when visual target attributes are fully specified by picture cues, the additional presence of color signals in target templates facilitates the speed with which attention is allocated to template-matching objects. No such selection benefits for colored targets were found when search templates were set up in response to word cues. Experiment 2 showed that color templates activated by word cues can even impair the attentional selection of noncolored targets. 
Results provide new insights into the status of color during the guidance of visual search for real-world target objects. Color is a powerful guiding feature when the precise visual properties of these objects are known but seems to be less important when search targets are specified by word cues.

  16. Extracting alpha band modulation during visual spatial attention without flickering stimuli using common spatial pattern.

    PubMed

    Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka

    2008-01-01

This paper focuses on alpha band modulation during visual spatial attention in the absence of visual stimuli. Visual spatial attention has been expected to provide a new channel for non-invasive, independent brain-computer interfaces (BCI), but little work has been done on this interfacing method. The flickering stimuli used in previous work reduce independence and are impractical to use. We therefore investigated whether visual spatial attention could be detected without such stimuli. Furthermore, common spatial patterns (CSP) were applied for the first time to brain states during visual spatial attention. The performance evaluation was based on three brain states: attention to the left, right, and center directions. The 30-channel scalp electroencephalographic (EEG) signals over occipital cortex were recorded for five subjects. Without CSP, the analyses yielded an average classification accuracy of 66.44% (range 55.42 to 72.27%) in discriminating the left and right attention classes. With CSP, the average classification accuracy was 75.39% (range 63.75 to 86.13%). These results suggest that CSP is useful in the context of visual spatial attention, and that alpha band modulation during visual spatial attention without flickering stimuli could provide a new channel for independent BCI, alongside motor imagery.
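The CSP computation itself follows a standard recipe: whiten the composite covariance of the two classes, then diagonalize one class covariance in the whitened space; the eigenvectors at the two extremes are the spatial filters. The sketch below is a generic numpy implementation of that textbook technique, not the authors' exact pipeline:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common Spatial Patterns for two-class, multichannel EEG epochs.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns 2*n_pairs spatial filters (rows) whose projections have
    maximally different variance between the two classes.
    """
    def mean_cov(trials):
        # per-trial covariance, trace-normalized, averaged over trials
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # whiten the composite covariance ca + cb
    d, u = np.linalg.eigh(ca + cb)
    p = np.diag(d ** -0.5) @ u.T
    # diagonalize class A in whitened space (eigenvalues ascend)
    lam, v = np.linalg.eigh(p @ ca @ p.T)
    w = v.T @ p
    # extreme eigenvectors discriminate best in either direction
    return w[np.r_[0:n_pairs, -n_pairs:0]]

def log_var_features(trials, filters):
    """Classifier inputs: log-variance of each filtered channel."""
    return np.log(np.stack([(filters @ x).var(axis=1) for x in trials]))
```

Log-variances of the filtered signals then feed an ordinary classifier, which is the usual way CSP boosts accuracy over raw band-power features.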

  17. Unique sudden onsets capture attention even when observers are in feature-search mode.

    PubMed

    Spalek, Thomas M; Yanko, Matthew R; Poiese, Paola; Lagroix, Hayley E P

    2012-01-01

    Two sources of attentional capture have been proposed: stimulus-driven (exogenous) and goal-oriented (endogenous). A resolution between these modes of capture has not been straightforward. Even such a clearly exogenous event as the sudden onset of a stimulus can be said to capture attention endogenously if observers operate in singleton-detection mode rather than feature-search mode. In four experiments we show that a unique sudden onset captures attention even when observers are in feature-search mode. The displays were rapid serial visual presentation (RSVP) streams of differently coloured letters with the target letter defined by a specific colour. Distractors were four #s, one of the target colour, surrounding one of the non-target letters. Capture was substantially reduced when the onset of the distractor array was not unique because it was preceded by other sets of four grey # arrays in the RSVP stream. This provides unambiguous evidence that attention can be captured both exogenously and endogenously within a single task.

  18. Independent, Synchronous Access to Color and Motion Features

    ERIC Educational Resources Information Center

    Holcombe, Alex O.; Cavanagh, Patrick

    2008-01-01

    We investigated the role of attention in pairing superimposed visual features. When moving dots alternate in color and in motion direction, reports of the perceived color and motion reveal an asynchrony: the most accurate reports occur when the motion change precedes the associated color change by approximately 100ms [Moutoussis, K., & Zeki,…

  19. When do letter features migrate? A boundary condition for feature-integration theory.

    PubMed

    Butler, B E; Mewhort, D J; Browse, R A

    1991-01-01

    Feature-integration theory postulates that a lapse of attention will allow letter features to change position and to recombine as illusory conjunctions (Treisman & Paterson, 1984). To study such errors, we used a set of uppercase letters known to yield illusory conjunctions in each of three tasks. The first, a bar-probe task, showed whole-character mislocations but not errors based on feature migration and recombination. The second, a two-alternative forced-choice detection task, allowed subjects to focus on the presence or absence of subletter features and showed illusory conjunctions based on feature migration and recombination. The third was also a two-alternative forced-choice detection task, but we manipulated the subjects' knowledge of the shape of the stimuli: In the case-certain condition, the stimuli were always in uppercase, but in the case-uncertain condition, the stimuli could appear in either upper- or lowercase. Subjects in the case-certain condition produced illusory conjunctions based on feature recombination, whereas subjects in the case-uncertain condition did not. The results suggest that when subjects can view the stimuli as feature groups, letter features regroup as illusory conjunctions; when subjects encode the stimuli as letters, whole items may be mislocated, but subletter features are not. Thus, illusory conjunctions reflect the subject's processing strategy, rather than the architecture of the visual system.

  20. The influence of attention levels on psychophysiological responses.

    PubMed

    Chang, Yu-Chieh; Huang, Shwu-Lih

    2012-10-01

    This study aimed to examine which brain oscillatory activities and peripheral physiological measures were influenced by attention levels. A new experimental procedure was designed. Participants were asked to count the number of target events while viewing eight moving white circles. An event occurred when two of the circles changed from white to red or blue. In the low-attention task, similar to a feature search, the target events were defined by color only. In the high-attention task, similar to a conjunction search, the target events were defined by both color and size. In the control task, participants were asked to passively watch the series of events while remembering a number. Based on Feature Integration Theory, our high-attention task would demand more attentional investment than the low-attention task. Given the identical visual stimuli and requirement of keeping a number in working memory for all three tasks, the changes in brain oscillatory activities can be attributed to attention level rather than to perceptual content or memory processes. Peripheral measures such as heart rate, heart rate variability (HRV), respiration rate, eye blinks, and skin conductance level were also evaluated. In comparing the high-attention task with the low-attention task, theta synchronization at the Fz, Cz, and Pz electrodes as a group, alpha2 desynchronization at the Fz, Cz, Pz, and Oz electrodes as a group, and a decrease in the low-frequency component and ratio measure of HRV were evident. These measures are considered to be promising indices for discriminating between attention levels. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Oscillations during observations: Dynamic oscillatory networks serving visuospatial attention.

    PubMed

    Wiesman, Alex I; Heinrichs-Graham, Elizabeth; Proskovec, Amy L; McDermott, Timothy J; Wilson, Tony W

    2017-10-01

The dynamic allocation of neural resources to discrete features within a visual scene enables us to react quickly and accurately to salient environmental circumstances. A network of bilateral cortical regions is known to subserve such visuospatial attention functions; however, the oscillatory and functional connectivity dynamics of information coding within this network are not fully understood. Particularly, the coding of information within prototypical attention-network hubs and the subsecond functional connections formed between these hubs have not been adequately characterized. Herein, we use the precise temporal resolution of magnetoencephalography (MEG) to define spectrally specific functional nodes and connections that underlie the deployment of attention in visual space. Twenty-three healthy young adults completed a visuospatial discrimination task designed to elicit multispectral activity in visual cortex during MEG, and the resulting data were preprocessed and reconstructed in the time-frequency domain. Oscillatory responses were projected to the cortical surface using a beamformer, and time series were extracted from peak voxels to examine their temporal evolution. Dynamic functional connectivity was then computed between nodes within each frequency band of interest. We find that visual attention network nodes are defined functionally by oscillatory frequency, that the allocation of attention to the visual space dynamically modulates functional connectivity between these regions on a millisecond timescale, and that these modulations significantly correlate with performance on a spatial discrimination task. We conclude that functional hubs underlying visuospatial attention are segregated not only anatomically but also by oscillatory frequency, and importantly that these oscillatory signatures promote dynamic communication between these hubs. Hum Brain Mapp 38:5128-5140, 2017. © 2017 Wiley Periodicals, Inc.

  2. Do different perceptual task sets modulate electrophysiological correlates of masked visuomotor priming? Attention to shape and color put to the test.

    PubMed

    Zovko, Monika; Kiefer, Markus

    2013-02-01

    According to classical theories, automatic processes operate independently of attention. Recent evidence, however, shows that masked visuomotor priming, an example of an automatic process, depends on attention to visual form versus semantics. In a continuation of this approach, we probed feature-specific attention within the perceptual domain and tested in two event-related potential (ERP) studies whether masked visuomotor priming in a shape decision task specifically depends on attentional sensitization of visual pathways for shape in contrast to color. Prior to the masked priming procedure, a shape or a color decision task served to induce corresponding task sets. ERP analyses revealed visuomotor priming effects over the occipitoparietal scalp only after the shape, but not after the color induction task. Thus, top-down control coordinates automatic processing streams in congruency with higher-level goals even at a fine-grained level. Copyright © 2012 Society for Psychophysiological Research.

  3. Does apparent size capture attention in visual search? Evidence from the Muller-Lyer illusion.

    PubMed

    Proulx, Michael J; Green, Monique

    2011-11-23

    Is perceived size a crucial factor for the bottom-up guidance of attention? Here, a visual search experiment was used to examine whether an irrelevantly longer object can capture attention when participants were to detect a vertical target item. The longer object was created by an apparent size manipulation, the Müller-Lyer illusion; however, all objects contained the same number of pixels. The vertical target was detected more efficiently when it was also perceived as the longer item that was defined by apparent size. Further analysis revealed that the longer Müller-Lyer object received a greater degree of attentional priority than published results for other features such as retinal size, luminance contrast, and the abrupt onset of a new object. The present experiment has demonstrated for the first time that apparent size can capture attention and, thus, provide bottom-up guidance on the basis of perceived salience.

  4. Enhancing reading performance through action video games: the role of visual attention span.

    PubMed

    Antzaka, A; Lallier, M; Meyer, S; Diard, J; Carreiras, M; Valdois, S

    2017-11-06

    Recent studies have reported that Action Video Game (AVG) training improves not only certain attentional components, but also reading fluency in children with dyslexia. We aimed to investigate the shared attentional components of AVG playing and reading by studying whether the Visual Attention (VA) span, a component of visual attention that has previously been linked to both reading development and dyslexia, is improved in frequent players of AVGs. Thirty-six French fluent adult readers, matched on chronological age and text reading proficiency, formed two groups: frequent AVG players and non-players. Participants performed behavioural tasks measuring the VA span, and a challenging reading task (reading of briefly presented pseudo-words). AVG players performed better on both tasks, and performance on these tasks was correlated. These results further support the transfer of the attentional benefits of playing AVGs to reading, and indicate that the VA span could be a core component mediating this transfer. The correlation between VA span and pseudo-word reading also supports the involvement of the VA span even in adult reading. Future studies could combine VA span training with defining features of AVGs, in order to build a new generation of remediation software.

  5. A unified selection signal for attention and reward in primary visual cortex.

    PubMed

    Stănişor, Liviu; van der Togt, Chris; Pennartz, Cyriel M A; Roelfsema, Pieter R

    2013-05-28

    Stimuli associated with high rewards evoke stronger neuronal activity than stimuli associated with lower rewards in many brain regions. It is not well understood how these reward effects influence activity in sensory cortices that represent low-level stimulus features. Here, we investigated the effects of reward information in the primary visual cortex (area V1) of monkeys. We found that the reward value of a stimulus relative to the value of other stimuli is a good predictor of V1 activity. Relative value biases the competition between stimuli, just as has been shown for selective attention. The neuronal latency of this reward value effect in V1 was similar to the latency of attentional influences. Moreover, V1 neurons with a strong value effect also exhibited a strong attention effect, which implies that relative value and top-down attention engage overlapping, if not identical, neuronal selection mechanisms. Our findings demonstrate that the effects of reward value reach down to the earliest sensory processing levels of the cerebral cortex and imply that theories about the effects of reward coding and top-down attention on visual representations should be unified.

  6. Priming and the guidance by visual and categorical templates in visual search.

    PubMed

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether, the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, when priming is controlled for, categorically and visually based templates enhance search guidance to a similar degree.

  7. Object recognition with hierarchical discriminant saliency networks.

    PubMed

    Han, Sunhyoung; Vasconcelos, Nuno

    2014-01-01

    The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. As a model of neural computation, the HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a convolutional neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. 
    The performance of the network in saliency and object recognition tasks is compared to that of models from the biological and computer vision literatures. This demonstrates benefits for all the functional enhancements of the HDSN, the class tuning inherent to discriminant saliency, and saliency layers based on templates of increasing target selectivity and invariance. Altogether, these experiments suggest that there are non-trivial benefits in integrating attention and recognition.
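
    The parametric rectification mentioned above can be sketched as a ReLU with a tunable threshold, followed by max pooling as in the layer description. This is one plausible parameterization under stated assumptions, not the HDSN's exact form.

    ```python
    import numpy as np

    # One plausible form of the parametric ReLU described above: a rectifier
    # with a learnable threshold theta, so responses below theta are
    # suppressed as noise. The HDSN's exact parameterization may differ.
    def parametric_relu(x, theta):
        return np.maximum(x - theta, 0.0)

    def hdsn_like_layer(responses, theta):
        """Rectification followed by 2x2 max pooling, echoing the
        filtering/rectification/pooling sequence described in the text."""
        r = parametric_relu(responses, theta)
        h, w = r.shape
        r = r[: h - h % 2, : w - w % 2]  # crop to even dimensions
        return r.reshape(r.shape[0] // 2, 2, r.shape[1] // 2, 2).max(axis=(1, 3))
    ```

    A complementary unit applied to the negated input, with its own threshold, could then signal feature absence as well as presence, in the spirit of the presence/absence detection the abstract mentions.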

  8. Serial grouping of 2D-image regions with object-based attention in humans.

    PubMed

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-06-13

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.

  9. ViA: a perceptual visualization assistant

    NASA Astrophysics Data System (ADS)

    Healey, Chris G.; St. Amant, Robert; Elhaddad, Mahmoud S.

    2000-05-01

    This paper describes an automated visualization assistant called ViA. ViA is designed to help users construct perceptually optimal visualizations to represent, explore, and analyze large, complex, multidimensional datasets. We have approached this problem by studying what is known about the control of human visual attention. By harnessing the low-level human visual system, we can support our dual goals of rapid and accurate visualization. Perceptual guidelines that we have built using psychophysical experiments form the basis for ViA. ViA uses modified mixed-initiative planning algorithms from artificial intelligence to search for perceptually optimal data-attribute-to-visual-feature mappings. Our perceptual guidelines are integrated into evaluation engines that provide evaluation weights for a given data-feature mapping, and hints on how that mapping might be improved. ViA begins by asking users a set of simple questions about their dataset and the analysis tasks they want to perform. Answers to these questions are used in combination with the evaluation engines to identify and intelligently pursue promising data-feature mappings. The result is an automatically generated set of mappings that are perceptually salient, but that also respect the context of the dataset and users' preferences about how they want to visualize their data.
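
    The weight-and-search loop described above can be sketched as follows. Every name, salience weight, and guideline in this miniature is an invented illustration, not ViA's actual evaluation engines or search algorithm.

    ```python
    from itertools import permutations

    # Hypothetical visual features and an assumed perceptual guideline:
    # important data attributes should map to features that support rapid,
    # accurate discrimination. All values here are illustrative.
    FEATURES = ["hue", "luminance", "size"]
    FEATURE_SALIENCE = {"hue": 0.9, "luminance": 0.7, "size": 0.5}

    def evaluate(mapping, importance):
        """Evaluation weight for one data-feature mapping: reward assigning
        important attributes to perceptually salient features."""
        return sum(importance[a] * FEATURE_SALIENCE[f] for a, f in mapping.items())

    def best_mapping(attributes, importance):
        """Exhaustively score candidate mappings (a stand-in for ViA's
        mixed-initiative search) and return the highest-weighted one."""
        candidates = (dict(zip(attributes, p))
                      for p in permutations(FEATURES, len(attributes)))
        return max(candidates, key=lambda m: evaluate(m, importance))

    mapping = best_mapping(["temperature", "pressure"],
                           {"temperature": 1.0, "pressure": 0.4})
    ```

    In this toy setup, the most important attribute wins the most salient feature, with the evaluation engine arbitrating the trade-offs among the remaining assignments.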

  10. Enhanced attentional gain as a mechanism for generalized perceptual learning in human visual cortex.

    PubMed

    Byers, Anna; Serences, John T

    2014-09-01

    Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars. Copyright © 2014 the American Physiological Society.
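
    The kind of multivariate reconstruction described above can be sketched with a forward/inverse channel-encoding model on simulated data: fit channel-to-voxel weights on training trials, then invert them to recover an orientation-selective response profile. This is a generic sketch under stated assumptions, not the study's actual pipeline.

    ```python
    import numpy as np

    N_CHAN = 9  # nine orientation channels, matching the nine possible orientations
    centers = np.arange(0, 180, 180 // N_CHAN).astype(float)  # channel centers, deg

    def channel_responses(orientations):
        """Idealized orientation tuning (period 180 deg): half-rectified
        cosine raised to a power, one column per channel."""
        d = np.deg2rad(2.0 * (orientations[:, None] - centers[None, :]))
        return np.maximum(np.cos(d), 0.0) ** 5

    # Simulated voxel data: random channel-to-voxel weights plus noise.
    rng = np.random.default_rng(1)
    n_voxels, n_trials = 50, 180
    true_w = rng.random((N_CHAN, n_voxels))
    oris = rng.integers(0, 180, n_trials).astype(float)
    C = channel_responses(oris)                      # trials x channels
    B = C @ true_w + 0.05 * rng.standard_normal((n_trials, n_voxels))

    # Train: least-squares estimate of channel weights from activation patterns.
    W_hat = np.linalg.lstsq(C, B, rcond=None)[0]     # channels x voxels
    # Test: invert the estimated weights on a held-out 90-degree pattern.
    test_pattern = channel_responses(np.array([90.0])) @ true_w
    profile = np.linalg.lstsq(W_hat.T, test_pattern.T, rcond=None)[0].ravel()
    ```

    The recovered profile peaks at the channels nearest the presented orientation; a training-related amplitude increase in such profiles is the kind of effect the abstract reports.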

  11. Behavior Selection of Mobile Robot Based on Integration of Multimodal Information

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kaneko, Masahide

    Recently, biologically inspired robots have been developed to acquire the capacity for directing visual attention to salient stimuli generated by the audiovisual environment. To realize this behavior, a general approach is to calculate saliency maps that represent how strongly the external information attracts the robot's visual attention, where the audiovisual information and the robot's motion status should be involved. In this paper, we present a visual attention model in which three modalities are considered: audio information, visual information, and the robot's motion status; previous research has not considered all three. Firstly, we introduce a 2-D density map, on which the value denotes how much the robot pays attention to each spatial location. We then model the attention density using a Bayesian network in which the robot's motion statuses are involved. Secondly, the information from both the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons. The robot can direct its attention to the locations where the integrate-and-fire neurons are fired. Finally, the visual attention model is applied to make the robot select visual information from the environment and react to the content selected. Experimental results show that it is possible for robots to acquire the visual information related to their behaviors by using the attention model that considers motion status. The robot can select its behaviors to adapt to the dynamic environment as well as switch to another task according to the recognition results of visual attention.
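
    The integration step described above can be sketched with a leaky integrate-and-fire unit at each location of the 2-D density map. The grid size, leak, threshold, and saliency values are illustrative assumptions, not the paper's parameters.

    ```python
    import numpy as np

    GRID = (8, 8)    # assumed size of the 2-D attention-density map
    THRESHOLD = 1.0  # firing threshold (illustrative)
    LEAK = 0.9       # membrane decay per time step (illustrative)

    def step(potential, visual_sal, audio_sal, attention_density):
        """Accumulate audio and visual saliency, gated by the attention-density
        map, into leaky integrate-and-fire units; return fired locations."""
        drive = (visual_sal + audio_sal) * attention_density
        potential = LEAK * potential + drive
        fired = potential >= THRESHOLD
        potential = np.where(fired, 0.0, potential)  # reset after firing
        return potential, fired

    # The robot would then direct its attention to the fired locations.
    potential = np.zeros(GRID)
    attention = np.full(GRID, 0.5)
    for _ in range(12):
        potential, fired = step(potential, np.full(GRID, 0.15),
                                np.full(GRID, 0.15), attention)
    ```

    With a constant drive, the membrane potential climbs toward drive/(1 - LEAK), so units at attended, salient locations cross the threshold and fire after a few steps while weakly driven locations stay silent.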

  12. Functional Connectivity Between Superior Parietal Lobule and Primary Visual Cortex "at Rest" Predicts Visual Search Efficiency.

    PubMed

    Bueichekú, Elisenda; Ventura-Campos, Noelia; Palomar-García, María-Ángeles; Miró-Padilla, Anna; Parcet, María-Antonia; Ávila, César

    2015-10-01

    Spatiotemporal activity that emerges spontaneously "at rest" has been proposed to reflect individual a priori biases in cognitive processing. This research focused on testing neurocognitive models of visual attention by studying the functional connectivity (FC) of the superior parietal lobule (SPL), given its central role in establishing priority maps during visual search tasks. Twenty-three human participants completed a functional magnetic resonance imaging session that featured a resting-state scan, followed by a visual search task based on the alphanumeric category effect. As expected, the behavioral results showed longer reaction times and more errors for the within-category (i.e., searching a target letter among letters) than the between-category search (i.e., searching a target letter among numbers). The within-category condition was related to greater activation of the superior and inferior parietal lobules, occipital cortex, inferior frontal cortex, dorsal anterior cingulate cortex, and the superior colliculus than the between-category search. The resting-state FC analysis of the SPL revealed a broad network that included connections with the inferotemporal cortex, dorsolateral prefrontal cortex, and dorsal frontal areas like the supplementary motor area and frontal eye field. Notably, the regression analysis revealed that participants who were more efficient in the visual search showed stronger FC between the SPL and areas of primary visual cortex (V1) related to the search task. We shed some light on how the SPL establishes a priority map of the environment during visual attention tasks and how FC is a valuable tool for assessing individual differences while performing cognitive tasks.

  13. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although it has been known that the right posterior parietal cortex (PPC) in the brain has a role in certain visual search tasks, there is little knowledge about the temporal aspect of this area. Three visual search tasks with different levels of difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of the PPC involved in visual search, we applied various stimulus onset asynchronies (SOAs) and measured the reaction time of the visual search. The magnetic stimulation was applied to the right PPC or the left PPC with a figure-eight coil. The results show that the reaction times of the hard feature task are longer than those of the easy feature task. When SOA=150 ms, compared with the no-TMS condition, there was a significant increase in target-present reaction time when TMS pulses were applied. We consider that the right PPC is involved in the visual search at about 150 ms after visual stimulus presentation. Magnetic stimulation to the right PPC disturbed the processing of the visual search, whereas magnetic stimulation to the left PPC had no effect on it.

  14. Toward semantic-based retrieval of visual information: a model-based approach

    NASA Astrophysics Data System (ADS)

    Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman

    2002-07-01

    This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. The VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues, but also contextually relevant visual features are proportionally incorporated in the VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground-truth samples. Such co-occurrence analysis of visual cues requires transformation of a real-valued visual feature vector (e.g., color histogram, Gabor texture, etc.) into a discrete event (e.g., terms in text). Good-features-to-track, the rule of thirds, iterative k-means clustering, and TSVQ are involved in the transformation of feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since the sparseness of sample data otherwise causes unstable frequency estimates for visual cues. The proposed method naturally allows integration of heterogeneous visual, temporal, or spatial cues in a single classification or matching framework, and can be easily integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.
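
    The feature-vector-to-"visual term" transformation described above can be sketched as vector quantization followed by a term-frequency count. Plain k-means stands in here for the paper's iterative k-means/TSVQ pipeline, and the toy data are illustrative.

    ```python
    import numpy as np

    def kmeans(vectors, k, iters=20, seed=0):
        """Plain k-means: quantize real-valued feature vectors into k
        cluster centers, each of which acts as one discrete visual term."""
        rng = np.random.default_rng(seed)
        centers = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
        for _ in range(iters):
            labels = ((vectors[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = vectors[labels == j].mean(axis=0)
        return centers, labels

    def visual_term_histogram(features, centers):
        """A VCD-like descriptor: the frequency of each visual term, where a
        term is the nearest center for each region's feature vector."""
        labels = ((features[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        return np.bincount(labels, minlength=len(centers))

    # Two well-separated toy "feature" clusters quantize into two terms.
    data = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
    hist = visual_term_histogram(data, np.array([[0.0, 0.0], [5.0, 5.0]]))
    ```

    Once features are discrete terms, the co-occurrence and correlation analyses the abstract describes reduce to counting, as with terms in text.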

  15. Recognition and attention guidance during contextual cueing in real-world scenes: evidence from eye movements.

    PubMed

    Brockmole, James R; Henderson, John M

    2006-07-01

    When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

  16. Oculomotor capture by colour singletons depends on intertrial priming.

    PubMed

    Becker, Stefanie I

    2010-10-12

    In visual search, an irrelevant colour singleton captures attention when the colour of the distractor changes across trials (e.g., from red to green), but not when the colour remains constant (Becker, 2007). The present study shows that intertrial changes of the distractor colour also modulate oculomotor capture: an irrelevant colour singleton distractor was only selected more frequently than the inconspicuous nontargets (1) when its features had switched (compared to the previous trial), or (2) when the distractor had been presented at the same position as the target on the previous trial. These results throw doubt on the notion that colour distractors capture attention and the eyes because of their high feature contrast, which is available at an earlier point in time than information about specific feature values. Instead, attention and eye movements are apparently controlled by a system that operates on feature-specific information, and gauges the informativity of nominally irrelevant features. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. Age Mediation of Frontoparietal Activation during Visual Feature Search

    PubMed Central

    Madden, David J.; Parks, Emily L.; Davis, Simon W.; Diaz, Michele T.; Potter, Guy G.; Chou, Ying-hui; Chen, Nan-kuei; Cabeza, Roberto

    2014-01-01

    Activation of frontal and parietal brain regions is associated with attentional control during visual search. We used fMRI to characterize age-related differences in frontoparietal activation in a highly efficient feature search task, detection of a shape singleton. On half of the trials, a salient distractor (a color singleton) was present in the display. The hypothesis was that frontoparietal activation mediated the relation between age and attentional capture by the salient distractor. Participants were healthy, community-dwelling individuals, 21 younger adults (19 – 29 years of age) and 21 older adults (60 – 87 years of age). Top-down attention, in the form of target predictability, was associated with an improvement in search performance that was comparable for younger and older adults. The increase in search reaction time (RT) associated with the salient distractor (attentional capture), standardized to correct for generalized age-related slowing, was greater for older adults than for younger adults. On trials with a color singleton distractor, search RT increased as a function of increasing activation in frontal regions, for both age groups combined, suggesting increased task difficulty. Mediational analyses disconfirmed the hypothesized model, in which frontal activation mediated the age-related increase in attentional capture, but supported an alternative model in which age was a mediator of the relation between frontal activation and capture. PMID:25102420

  18. Working memory for visual features and conjunctions in schizophrenia.

    PubMed

    Gold, James M; Wilk, Christopher M; McMahon, Robert P; Buchanan, Robert W; Luck, Steven J

    2003-02-01

    The visual working memory (WM) storage capacity of patients with schizophrenia was investigated using a change detection paradigm. Participants were presented with 2, 3, 4, or 6 colored bars with testing of both single feature (color, orientation) and feature conjunction conditions. Patients performed significantly worse than controls at all set sizes but demonstrated normal feature binding. Unlike controls, patient WM capacity declined at set size 6 relative to set size 4. Impairments with subcapacity arrays suggest a deficit in task-set maintenance; greater impairment for supercapacity set sizes suggests a deficit in the ability to selectively encode information for WM storage. Thus, the WM impairment in schizophrenia appears to be a consequence of attentional deficits rather than a reduction in storage capacity.

  19. Examining drivers' eye glance patterns during distracted driving: Insights from scanning randomness and glance transition matrix.

    PubMed

    Wang, Yuan; Bao, Shan; Du, Wenjun; Ye, Zhirui; Sayer, James R

    2017-12-01

    Visual attention to the driving environment is of great importance for road safety. Eye glance behavior has been used as an indicator of distracted driving. This study examined and quantified drivers' glance patterns and features during distracted driving. Data from an existing naturalistic driving study were used. Entropy rate was calculated and used to assess the randomness associated with drivers' scanning patterns. A glance-transition proportion matrix was defined to quantify visual search patterns transitioning among four main eye glance locations while driving (i.e., forward on-road, phone, mirrors and others). All measurements were calculated within a 5s time window under both cell phone and non-cell phone use conditions. Results of the glance data analyses showed different patterns between distracted and non-distracted driving, featured by a higher entropy rate value and highly biased attention transferring between forward and phone locations during distracted driving. Drivers in general had a higher number of glance transitions, and their on-road glance duration was significantly shorter during distracted driving when compared to non-distracted driving. Results suggest that drivers have a higher scanning randomness/disorder level and shift their main attention from surrounding areas towards the phone area when engaging in visual-manual tasks. Drivers' visual search patterns during visual-manual distraction, with high scanning randomness and a high proportion of eye glance transitions towards the location of the phone, provide insight into driver distraction detection. This will help to inform the design of in-vehicle human-machine interfaces/systems. Copyright © 2017. Published by Elsevier Ltd.
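
    The transition-matrix and entropy-rate measures described above can be sketched as follows. The glance sequences are invented illustrations, not the study's naturalistic data; the four location labels follow the abstract.

    ```python
    import numpy as np

    # Glance locations named in the abstract.
    LOCATIONS = ["road", "phone", "mirrors", "other"]

    def transition_matrix(glances):
        """Row-normalized proportions of transitions between glance locations."""
        idx = {loc: i for i, loc in enumerate(LOCATIONS)}
        counts = np.zeros((len(LOCATIONS), len(LOCATIONS)))
        for a, b in zip(glances, glances[1:]):
            counts[idx[a], idx[b]] += 1.0
        sums = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, sums, out=np.zeros_like(counts), where=sums > 0)

    def entropy_rate(glances):
        """H = -sum_i pi_i sum_j P_ij log2 P_ij, with the stationary
        distribution pi estimated from visit frequencies."""
        P = transition_matrix(glances)
        visits = np.array([glances.count(loc) for loc in LOCATIONS], float)
        pi = visits / visits.sum()
        logs = np.zeros_like(P)
        logs[P > 0] = np.log2(P[P > 0])
        return float(-np.sum(pi[:, None] * P * logs))

    # Invented 5 s windows: erratic road/phone switching vs. mostly on-road.
    distracted = ["road", "phone", "road", "mirrors", "phone",
                  "road", "phone", "other", "road"]
    focused = ["road", "road", "road", "mirrors", "road", "road", "road", "road"]
    print(entropy_rate(distracted), entropy_rate(focused))
    ```

    A higher entropy rate in the distracted window reflects the greater scanning randomness the study reports, and the large road-to-phone entries in its transition matrix capture the biased attention transfer.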

  20. Age-Related Occipito-Temporal Hypoactivation during Visual Search: Relationships between mN2pc Sources and Performance

    ERIC Educational Resources Information Center

    Lorenzo-Lopez, L.; Gutierrez, R.; Moratti, S.; Maestu, F.; Cadaveira, F.; Amenedo, E.

    2011-01-01

    Recently, an event-related potential (ERP) study (Lorenzo-Lopez et al., 2008) provided evidence that normal aging significantly delays and attenuates the electrophysiological correlate of the allocation of visuospatial attention (N2pc component) during a feature-detection visual search task. To further explore the effects of normal aging on the…
