Sample records for visual attention model

  1. A probabilistic model of overt visual attention for cognitive robots.

    PubMed

    Begum, Momotaz; Karray, Fakhri; Mann, George K I; Gosine, Raymond G

    2010-10-01

    Visual attention is one of the major requirements for a robot to serve as a cognitive companion for humans. Robotic visual attention is mostly concerned with overt attention, which accompanies the head and eye movements of a robot. In this case, each movement of the camera head triggers a number of events, namely transformation of the camera and image coordinate systems, change of content of the visual field, and partial appearance of stimuli. All of these events reduce the probability of meaningfully identifying the next focus of attention. These events are specific to overt attention with head movement and, therefore, their effects are not addressed in the classical models of covert visual attention. This paper proposes a Bayesian model as a robot-centric solution to the overt visual attention problem. The proposed model, while taking inspiration from the primate visual attention mechanism, guides a robot to direct its camera toward behaviorally relevant and/or visually demanding stimuli. A particle filter implementation of this model addresses the challenges involved in overt attention with head movement. Experimental results demonstrate the performance of the proposed model.
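
    The particle-filter implementation mentioned above can be illustrated with a minimal one-dimensional sketch (the parameter values and the simple Gaussian likelihood are illustrative assumptions, not taken from the paper): a predict step compensates for the coordinate-frame shift caused by the camera-head movement, and an update step reweights particles against the new observation.

```python
import math
import random

def particle_filter_step(particles, weights, camera_shift, observation,
                         process_noise=0.3, obs_sigma=1.0):
    """One predict/update/resample cycle of a 1-D particle filter tracking
    a stimulus position across a camera-head movement."""
    # Predict: compensate for the coordinate-frame shift caused by the head
    # movement, then diffuse with process noise.
    particles = [p - camera_shift + random.gauss(0.0, process_noise)
                 for p in particles]
    # Update: reweight each particle by the Gaussian likelihood of the new
    # observation in the post-movement camera frame.
    weights = [w * math.exp(-0.5 * ((p - observation) / obs_sigma) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample to concentrate particles on the probable focus of attention.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)
```

    After one cycle, the particle mean approximates the next focus of attention in the new camera frame.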

  2. Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model

    PubMed Central

    Wagatsuma, Nobuhiko; Potjans, Tobias C.; Diesmann, Markus; Sakai, Ko; Fukai, Tomoki

    2013-01-01

    Directing attention to the spatial location or the distinguishing feature of a visual object modulates neuronal responses in the visual cortex and the stimulus discriminability of subjects. However, the spatial and feature-based modes of attention influence visual processing differently, by changing the tuning properties of neurons. Intriguingly, neurons' tuning curves are modulated similarly across different visual areas under both these modes of attention. Here, we explored the mechanism underlying the effects of these two modes of visual attention on the orientation selectivity of visual cortical neurons. To do this, we developed a layered microcircuit model. This model describes multiple orientation-specific microcircuits sharing their receptive fields and consisting of layers 2/3, 4, 5, and 6. These microcircuits represent a functional grouping of cortical neurons and mutually interact via lateral inhibition and excitatory connections between groups with similar selectivity. The individual microcircuits receive bottom-up visual stimuli and top-down attention in different layers. A crucial assumption of the model is that feature-based attention selectively activates the orientation-specific microcircuits for the relevant feature, whereas spatial attention activates all microcircuits homogeneously, irrespective of their orientation selectivity. Consequently, our model simultaneously accounts for the multiplicative scaling of neuronal responses in spatial attention and the additive modulations of orientation tuning curves in feature-based attention, which have been observed widely in various visual cortical areas. Simulations of the model predict contrasting differences between excitatory and inhibitory neurons in the two modes of attentional modulation. Furthermore, the model replicates the modulation of the psychophysical discriminability of visual stimuli in the presence of external noise. Our layered model with a biologically suggested laminar structure describes the basic circuit mechanism underlying the attention-mode-specific modulations of neuronal responses and visual perception. PMID:24324628
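
    The two attentional signatures the model accounts for, multiplicative gain for spatial attention and an additive offset for feature-based attention, can be sketched on a Gaussian orientation tuning curve (illustrative firing-rate parameters, not the paper's):

```python
import math

def tuning(theta, pref=0.0, base=2.0, amp=20.0, width=30.0):
    # Gaussian orientation tuning curve (firing rate in spikes/s).
    return base + amp * math.exp(-0.5 * ((theta - pref) / width) ** 2)

def spatial_attention(theta, gain=1.3, **kw):
    # Spatial attention: multiplicative scaling of the whole response.
    return gain * tuning(theta, **kw)

def feature_attention(theta, shift=4.0, **kw):
    # Feature-based attention: additive offset that lifts the entire curve.
    return tuning(theta, **kw) + shift
```

    Multiplicative scaling preserves the ratio between responses at all orientations, whereas the additive offset raises the whole curve by a constant, which is the distinction observed across visual areas.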

  3. Behavior Selection of Mobile Robot Based on Integration of Multimodal Information

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kaneko, Masahide

    Recently, biologically inspired robots have been developed to acquire the capacity for directing visual attention to salient stimuli generated in the audiovisual environment. To realize this behavior, a general method is to calculate saliency maps representing how much the external information attracts the robot's visual attention, where the audiovisual information and the robot's motion status should be involved. In this paper, we present a visual attention model in which three modalities, namely audio information, visual information, and the robot's motor status, are considered, whereas previous studies have not considered all three. Firstly, we introduce a 2-D density map whose value at each spatial location denotes how much attention the robot pays to that location. We then model the attention density using a Bayesian network in which the robot's motion status is involved. Secondly, the information from the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons. The robot directs its attention to the locations where the integrate-and-fire neurons fire. Finally, the visual attention model is applied to make the robot select visual information from the environment and react to the selected content. Experimental results show that it is possible for robots to acquire visual information related to their behaviors by using the attention model that considers motion status. The robot can select behaviors to adapt to the dynamic environment as well as switch to another task according to the recognition results of visual attention.
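
    The integration step can be sketched as a leaky integrate-and-fire unit per location, with the attention-density value gating the multimodal input (a simplified sketch with assumed leak and threshold values; the paper's formulation may differ):

```python
def integrate_fire(audio_saliency, visual_saliency, attention_density,
                   threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire unit for one spatial location: audio and
    visual saliency inputs are summed, gated by the attention-density value
    for that location, and accumulated; the unit fires (captures the robot's
    attention) when the accumulated potential reaches threshold."""
    v = 0.0
    for a, s in zip(audio_saliency, visual_saliency):
        v = leak * v + attention_density * (a + s)
        if v >= threshold:
            return True  # attention is directed to this location
    return False
```

    A location with the same sensory input fires only if the Bayesian attention density there is high enough, which is how motion status shapes what the robot attends to.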

  4. Visual Attention Model Based on Statistical Properties of Neuron Responses

    PubMed Central

    Duan, Haibin; Wang, Xiaohua

    2015-01-01

    Visual attention is a mechanism of the visual system that can select relevant objects from a specific scene. Interactions among neurons in multiple cortical areas are considered to be involved in attentional allocation. However, the characteristics of the encoded features and neuron responses in those attention-related cortices remain unclear. The investigations carried out in this study therefore aim to demonstrate that unusual regions arousing more attention generally cause particular neuron responses. We suppose that visual saliency is obtained on the basis of neuron responses to contexts in natural scenes. A bottom-up visual attention model based on the self-information of neuron responses is proposed to test and verify this hypothesis. Four different color spaces are adopted, and a novel entropy-based combination scheme is designed to make full use of color information. Valuable regions are highlighted while redundant backgrounds are suppressed in the saliency maps obtained by the proposed model. Comparative results reveal that the proposed model outperforms several state-of-the-art models. This study provides insights into neuron-response-based saliency detection and may help elucidate the neural mechanism of early visual cortices in bottom-up visual attention. PMID:25747859
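
    The core idea of self-information-based saliency can be sketched in a few lines: quantize the responses, estimate their probabilities, and score each response by -log2 p, so rare (unusual) responses get high saliency (a toy 1-D sketch; the paper operates on multi-channel neuron responses in four color spaces):

```python
import math
from collections import Counter

def self_information_saliency(values, bins=8):
    """Assign each value a saliency equal to its self-information, -log2 p(bin).

    Rare feature responses (unusual regions) receive high saliency; common
    background responses receive low saliency."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    labels = [min(int((v - lo) / width), bins - 1) for v in values]
    counts = Counter(labels)
    n = len(values)
    return [-math.log2(counts[b] / n) for b in labels]
```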

  5. Towards the quantitative evaluation of visual attention models.

    PubMed

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt to organize and classify models, but they are not sufficient for quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.
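
    One common operationalization of the fixation-prediction task is an AUC score: the probability that a model's saliency map ranks a fixated location above a non-fixated control location. A minimal sketch (not the paper's specific benchmark):

```python
def fixation_auc(saliency, fixated, control):
    """AUC for fixation prediction: the probability that a fixated location
    receives a higher saliency score than a control (non-fixated) location,
    with ties counted as half. `saliency` maps location -> score."""
    pos = [saliency[loc] for loc in fixated]
    neg = [saliency[loc] for loc in control]
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in pos for b in neg)
    return wins / (len(pos) * len(neg))
```

    A score of 1.0 means perfect ranking; 0.5 is chance, which is why a shared, explicit metric like this makes model classes quantitatively comparable.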

  6. A Closed-Loop Model of Operator Visual Attention, Situation Awareness, and Performance Across Automation Mode Transitions.

    PubMed

    Johnson, Aaron W; Duda, Kevin R; Sheridan, Thomas B; Oman, Charles M

    2017-03-01

    This article describes a closed-loop, integrated human-vehicle model designed to help understand the underlying cognitive processes that influenced changes in subject visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment. Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator's estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness. We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator's estimates of system states. The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention ≤3.6% for all simulator instruments. The model's predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data. Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates. Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator's visual attention during control mode transitions can produce reallocations in situation awareness of certain states.
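
    The SEEV allocation rule modified by state uncertainty can be sketched as follows (the additive combination and the instrument values here are illustrative assumptions; the article's actual model couples SEEV to the optimal control estimator):

```python
def seev_attention_share(instruments):
    """SEEV-style sketch: each instrument's attractiveness is
    salience - effort + expectancy * value, augmented (following the article's
    modification) with the uncertainty in the operator's estimate of that
    instrument's state; shares are normalized to percentages of attention."""
    raw = [max(i['salience'] - i['effort'] + i['expectancy'] * i['value']
               + i['uncertainty'], 0.0) for i in instruments]
    total = sum(raw) or 1.0
    return [100.0 * r / total for r in raw]
```

    Under this rule, a mode transition that inflates the uncertainty of one instrument's state pulls a larger share of visual attention toward it, which is the closed-loop coupling the article describes.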

  7. Global motion compensated visual attention-based video watermarking

    NASA Astrophysics Data System (ADS)

    Oakes, Matthew; Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions, resulting in poor visual quality in host media. This paper proposes a unique video watermarking algorithm that offers a fine balance between imperceptibility and robustness using a motion-compensated, wavelet-based visual attention model (VAM). The proposed VAM includes spatial as well as temporal cues for visual saliency. The spatial modeling uses the spatial wavelet coefficients, while the temporal modeling accounts for both local and global motion, to arrive at the spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm in which a two-level watermark-weighting parameter map is generated from the VAM saliency maps and data are embedded into the host frames according to the visual attentiveness of each region. By avoiding higher-strength watermarking in the visually attentive regions, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms state-of-the-art video visual attention methods in both saliency detection and computational complexity. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use a VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and robustness similar to that of high-strength watermarking.
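
    The two-level weighting idea can be sketched directly: threshold the saliency map into a low embedding strength for attentive regions and a high strength elsewhere, then embed additively (the threshold, strengths, and additive embedding rule are illustrative assumptions; the paper embeds in the wavelet domain):

```python
def embedding_strength_map(saliency_map, threshold=0.5, low=0.2, high=1.0):
    """Two-level watermark weighting: visually attentive regions (saliency
    above `threshold`) get the low embedding strength to preserve perceived
    quality; inattentive regions get the high strength for robustness."""
    return [[low if s > threshold else high for s in row] for row in saliency_map]

def embed(coeffs, watermark_bits, strengths):
    # Additive embedding of +/-1 watermark bits into host coefficients,
    # scaled by the per-location embedding strength.
    return [[c + a * (1 if b else -1)
             for c, b, a in zip(crow, brow, arow)]
            for crow, brow, arow in zip(coeffs, watermark_bits, strengths)]
```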

  8. The Attentional Field Revealed by Single-Voxel Modeling of fMRI Time Courses

    PubMed Central

    DeYoe, Edgar A.

    2015-01-01

    The spatial topography of visual attention is a distinguishing and critical feature of many theoretical models of visuospatial attention. Previous fMRI-based measurements of the topography of attention have typically been too crude to adequately test the predictions of different competing models. This study demonstrates a new technique to make detailed measurements of the topography of visuospatial attention from single-voxel, fMRI time courses. Briefly, this technique involves first estimating a voxel's population receptive field (pRF) and then “drifting” attention through the pRF such that the modulation of the voxel's fMRI time course reflects the spatial topography of attention. The topography of the attentional field (AF) is then estimated using a time-course modeling procedure. Notably, we are able to make these measurements in many visual areas including smaller, higher order areas, thus enabling a more comprehensive comparison of attentional mechanisms throughout the full hierarchy of human visual cortex. Using this technique, we show that the AF scales with eccentricity and varies across visual areas. We also show that voxels in multiple visual areas exhibit suppressive attentional effects that are well modeled by an AF having an enhancing Gaussian center with a suppressive surround. These findings provide extensive, quantitative neurophysiological data for use in modeling the psychological effects of visuospatial attention. PMID:25810532
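
    The attentional-field shape reported here, an enhancing Gaussian center with a suppressive surround, is a difference of Gaussians, which can be sketched as follows (amplitudes and widths are illustrative, not fitted values from the study):

```python
import math

def attentional_field(x, center=0.0, sigma_c=1.0, sigma_s=3.0,
                      amp_c=1.0, amp_s=0.4):
    """Difference-of-Gaussians attentional field: a narrow enhancing center
    minus a broader suppressive surround, as a function of position x."""
    g = lambda sigma: math.exp(-0.5 * ((x - center) / sigma) ** 2)
    return amp_c * g(sigma_c) - amp_s * g(sigma_s)
```

    Near the attended location the field is positive (enhancement); at intermediate distances the broad surround dominates and the field turns negative (suppression), matching the suppressive effects observed in the voxel time courses.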

  9. The Attentional Drift Diffusion Model of Simple Perceptual Decision-Making.

    PubMed

    Tavares, Gabriela; Perona, Pietro; Rangel, Antonio

    2017-01-01

    Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM) provides a good computational description of how attention affects the underlying decision processes. We find evidence for sizable attentional choice biases and that the aDDM provides a reasonable quantitative description of the relationship between fluctuations in visual attention, choices and reaction times. We also find that exogenous manipulations of attention induce choice biases consistent with the predictions of the model.
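
    The aDDM's core mechanism can be sketched as a random-walk simulation in which the unattended item's value is discounted in the drift (parameter values and the fixed fixation-alternation schedule are illustrative assumptions; the model is typically fit to empirical fixation data):

```python
import random

def addm_trial(v_left, v_right, d=0.002, theta=0.3, sigma=0.02,
               bound=1.0, seed=None):
    """Simulate one attentional drift-diffusion trial: while fixating an item,
    its value enters the drift at full weight and the unattended item's value
    is discounted by theta; evidence accumulates to +/- bound."""
    rng = random.Random(seed)
    rdv, t, look_left = 0.0, 0, rng.random() < 0.5
    while abs(rdv) < bound:
        if look_left:
            drift = d * (v_left - theta * v_right)
        else:
            drift = d * (theta * v_left - v_right)
        rdv += drift + rng.gauss(0.0, sigma)
        t += 1
        if t % 400 == 0:  # alternate fixation every 400 time steps
            look_left = not look_left
    return ('left' if rdv > 0 else 'right'), t
```

    Because theta < 1, evidence accumulates faster toward whichever item is currently fixated, producing the attentional choice biases the study measures.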

  10. Neural network modelling of the influence of channelopathies on reflex visual attention.

    PubMed

    Gravier, Alexandre; Quek, Chai; Duch, Włodzisław; Wahab, Abdul; Gravier-Rymaszewska, Joanna

    2016-02-01

    This paper introduces a model of Emergent Visual Attention in the presence of calcium channelopathy (EVAC). By modelling channelopathy, EVAC constitutes an effort towards identifying possible causes of autism. The network structure embodies the dual-pathways model of cortical processing of visual input, with reflex attention as an emergent property of neural interactions. EVAC extends existing work by introducing attention shift in a larger-scale network and applying a phenomenological model of channelopathy. In the presence of a distractor, the channelopathic network's rate of failure to shift attention is lower than the control network's, but overall, the control network exhibits a lower classification error rate. The simulation results also show differences in task-relative reaction times between the control and channelopathic networks. The attention shift timings inferred from the model are consistent with studies of attention shift in autistic children.

  11. Visual Attention and Applications in Multimedia Technologies

    PubMed Central

    Le Callet, Patrick; Niebur, Ernst

    2013-01-01

    Making technological advances in the field of human-machine interactions requires that the capabilities and limitations of the human perceptual system are taken into account. The focus of this report is an important mechanism of perception, visual selective attention, which is becoming more and more important for multimedia applications. We introduce the concept of visual attention and describe its underlying mechanisms. In particular, we introduce the concepts of overt and covert visual attention, and of bottom-up and top-down processing. Challenges related to modeling visual attention and its validation using ad hoc ground truth are also discussed. Examples of the usage of visual attention models in image and video processing are presented. We emphasize multimedia delivery, retargeting, quality assessment of images and video, medical imaging, and applications involving stereoscopic 3D images. PMID:24489403

  12. Social Image Captioning: Exploring Visual Attention and User Attention.

    PubMed

    Wang, Leiquan; Chu, Xiaoliang; Zhang, Weishan; Wei, Yiwei; Sun, Weichen; Wu, Chunlei

    2018-02-22

    Image captioning with natural language has been an emerging trend. However, the social image, associated with a set of user-contributed tags, has rarely been investigated for a similar task. The user-contributed tags, which could reflect user attention, have been neglected in conventional image captioning. Most existing image captioning models cannot be applied directly to social image captioning. In this work, a dual attention model is proposed for social image captioning by combining visual attention and user attention simultaneously. Visual attention is used to compress a large amount of salient visual information, while user attention is applied to adjust the description of the social images with user-contributed tags. Experiments conducted on the Microsoft (MS) COCO dataset demonstrate the superiority of the proposed dual attention method.

  13. Social Image Captioning: Exploring Visual Attention and User Attention

    PubMed Central

    Chu, Xiaoliang; Zhang, Weishan; Wei, Yiwei; Sun, Weichen; Wu, Chunlei

    2018-01-01

    Image captioning with natural language has been an emerging trend. However, the social image, associated with a set of user-contributed tags, has rarely been investigated for a similar task. The user-contributed tags, which could reflect user attention, have been neglected in conventional image captioning. Most existing image captioning models cannot be applied directly to social image captioning. In this work, a dual attention model is proposed for social image captioning by combining visual attention and user attention simultaneously. Visual attention is used to compress a large amount of salient visual information, while user attention is applied to adjust the description of the social images with user-contributed tags. Experiments conducted on the Microsoft (MS) COCO dataset demonstrate the superiority of the proposed dual attention method. PMID:29470409

  14. Attraction of position preference by spatial attention throughout human visual cortex.

    PubMed

    Klein, Barrie P; Harvey, Ben M; Dumoulin, Serge O

    2014-10-01

    Voluntary spatial attention concentrates neural resources at the attended location. Here, we examined the effects of spatial attention on spatial position selectivity in humans. We measured population receptive fields (pRFs) using high-field functional MRI (fMRI) (7T) while subjects performed an attention-demanding task at different locations. We show that spatial attention attracts pRF preferred positions across the entire visual field, not just at the attended location. This global change in pRF preferred positions systematically increases up the visual hierarchy. We model these pRF preferred position changes as an interaction between two components: an attention field and a pRF without the influence of attention. This computational model suggests that increasing effects of attention up the hierarchy result primarily from differences in pRF size and that the attention field is similar across the visual hierarchy. A similar attention field suggests that spatial attention transforms different neural response selectivities throughout the visual hierarchy in a similar manner. Copyright © 2014 Elsevier Inc. All rights reserved.
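
    The attraction of pRF centers can be sketched with the standard Gaussian-product form of the attention-field interaction: multiplying a Gaussian pRF by a Gaussian attention field yields a Gaussian whose center is pulled toward the attended location, more strongly when the pRF is large (as in higher visual areas). A minimal sketch of that arithmetic:

```python
def attracted_prf(prf_center, prf_sigma, af_center, af_sigma):
    """Product of a Gaussian pRF and a Gaussian attention field: the result is
    a Gaussian whose center shifts toward the attention field, weighted by the
    ratio of the pRF variance to the total variance."""
    w = prf_sigma ** 2 / (prf_sigma ** 2 + af_sigma ** 2)
    new_center = prf_center + w * (af_center - prf_center)
    new_sigma = (prf_sigma ** 2 * af_sigma ** 2
                 / (prf_sigma ** 2 + af_sigma ** 2)) ** 0.5
    return new_center, new_sigma
```

    With a fixed attention-field width, larger pRFs (higher areas) are attracted further, reproducing the systematic increase in position shifts up the visual hierarchy.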

  15. The Attention Cascade Model and Attentional Blink

    ERIC Educational Resources Information Center

    Shih, Shui-I

    2008-01-01

    An attention cascade model is proposed to account for attentional blinks in rapid serial visual presentation (RSVP) of stimuli. Data were collected using single characters in a single RSVP stream at 10 Hz [Shih, S., & Reeves, A. (2007). "Attentional capture in rapid serial visual presentation." "Spatial Vision", 20(4), 301-315], and single words,…

  16. Attention Gating in Short-Term Visual Memory.

    ERIC Educational Resources Information Center

    Reeves, Adam; Sperling, George

    1986-01-01

    An experiment is conducted showing that an attention shift to a stream of numerals presented in rapid serial visual presentation mode produces not a total loss, but a systematic distortion of order. An attention gating model (AGM) is developed from a more general attention model. (Author/LMO)

  17. Visual attention based bag-of-words model for image classification

    NASA Astrophysics Data System (ADS)

    Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che

    2014-04-01

    Bag-of-words is a classical method for image classification. The core problems are how to count the frequencies of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model utilizes a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the counting of visual-word frequencies. In addition, the VABOW model combines shape, color, and texture cues and uses L1-regularized logistic regression to select the most relevant and most efficient features. We compare our approach with a traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
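
    The saliency-weighted counting step can be sketched as a weighted histogram: each patch's visual-word count is scaled by the saliency of the patch it came from (a simplified sketch; the paper's weighting operates on the full saliency map):

```python
def weighted_bow(word_ids, saliency, vocab_size):
    """Bag-of-words histogram in which each visual word's count is weighted by
    the saliency of the patch it came from, so salient regions dominate the
    image representation."""
    hist = [0.0] * vocab_size
    for w, s in zip(word_ids, saliency):
        hist[w] += s
    total = sum(hist) or 1.0
    return [h / total for h in hist]
```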

  18. An amodal shared resource model of language-mediated visual attention

    PubMed Central

    Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk

    2013-01-01

    Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. PMID:23966967

  19. Dynamic visual attention: motion direction versus motion magnitude

    NASA Astrophysics Data System (ADS)

    Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.

    2008-02-01

    Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion to dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity, and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. Model suitability is evaluated in various situations (synthetic and real sequences, acquired from fixed and moving camera perspectives), showing the advantages and drawbacks of each method as well as its preferred domain of application.
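
    The magnitude-versus-vector distinction can be made concrete with a contrast-style saliency sketch over optical-flow vectors (a toy center-vs-mean contrast; the paper's models use full feature maps): a region moving opposite to the background has the same speed magnitude as its surround but a very different speed vector.

```python
import math

def magnitude_contrast(flow, i):
    # Saliency of location i as the absolute difference between its speed
    # magnitude and the mean speed magnitude (direction is discarded).
    speeds = [math.hypot(vx, vy) for vx, vy in flow]
    mean = sum(speeds) / len(speeds)
    return abs(speeds[i] - mean)

def vector_contrast(flow, i):
    # Saliency of location i as the distance between its flow vector and the
    # mean flow vector (direction is preserved).
    mx = sum(v[0] for v in flow) / len(flow)
    my = sum(v[1] for v in flow) / len(flow)
    return math.hypot(flow[i][0] - mx, flow[i][1] - my)
```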

  20. Infant Visual Attention and Object Recognition

    PubMed Central

    Reynolds, Greg D.

    2015-01-01

    This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. PMID:25596333

  1. The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search

    ERIC Educational Resources Information Center

    Becker, Stefanie I.

    2010-01-01

    Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…

  2. From shunting inhibition to dynamic normalization: Attentional selection and decision-making in brief visual displays.

    PubMed

    Smith, Philip L; Sewell, David K; Lilburn, Simon D

    2015-11-01

    Normalization models of visual sensitivity assume that the response of a visual mechanism is scaled divisively by the sum of the activity in the excitatory and inhibitory mechanisms in its neighborhood. Normalization models of attention assume that the weighting of excitatory and inhibitory mechanisms is modulated by attention. Such models have provided explanations of the effects of attention in both behavioral and single-cell recording studies. We show how normalization models can be obtained as the asymptotic solutions of shunting differential equations, in which stimulus inputs and the activity in the mechanism control growth rates multiplicatively rather than additively. The value of the shunting equation approach is that it characterizes the entire time course of the response, not just its asymptotic strength. We describe two models of attention based on shunting dynamics, the integrated system model of Smith and Ratcliff (2009) and the competitive interaction theory of Smith and Sewell (2013). These models assume that attention, stimulus salience, and the observer's strategy for the task jointly determine the selection of stimuli into visual short-term memory (VSTM) and the way in which stimulus representations are weighted. The quality of the VSTM representation determines the speed and accuracy of the decision. The models provide a unified account of a variety of attentional phenomena found in psychophysical tasks using single-element and multi-element displays. Our results show the generality and utility of the normalization approach to modeling attention. Copyright © 2014 Elsevier B.V. All rights reserved.
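
    The relationship described above can be verified numerically: integrating the shunting equation dy/dt = -A*y + (B - y)*E - y*I drives the response to the normalization ratio B*E / (A + E + I) as its asymptote (illustrative parameter values):

```python
def shunting_response(E, I, A=1.0, B=1.0, dt=0.001, steps=20000):
    """Euler-integrate the shunting equation dy/dt = -A*y + (B - y)*E - y*I.
    The excitatory input E drives y toward the ceiling B, while the decay A
    and inhibitory input I divide it; the asymptote is B*E / (A + E + I),
    i.e. the normalization model's response."""
    y = 0.0
    for _ in range(steps):
        y += dt * (-A * y + (B - y) * E - y * I)
    return y
```

    This is the point of the shunting approach: the same equation yields the normalization model at asymptote while also characterizing the entire time course of the response.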

  3. Theory of Visual Attention (TVA) applied to mice in the 5-choice serial reaction time task.

    PubMed

    Fitzpatrick, C M; Caballero-Puntiverio, M; Gether, U; Habekost, T; Bundesen, C; Vangkilde, S; Woldbye, D P D; Andreasen, J T; Petersen, A

    2017-03-01

    The 5-choice serial reaction time task (5-CSRTT) is widely used to measure rodent attentional functions. In humans, many attention studies in healthy and clinical populations have used testing based on Bundesen's Theory of Visual Attention (TVA) to estimate visual processing speeds and other parameters of attentional capacity. We aimed to bridge these research fields by modifying the 5-CSRTT's design and by mathematically modelling data to derive attentional parameters analogous to human TVA-based measures. C57BL/6 mice were tested in two 1-h sessions on consecutive days with a version of the 5-CSRTT where stimulus duration (SD) probe length was varied based on information from previous TVA studies. Thereafter, a scopolamine hydrobromide (HBr; 0.125 or 0.25 mg/kg) pharmacological challenge was undertaken, using a Latin square design. Mean score values were modelled using a new three-parameter version of TVA to obtain estimates of visual processing speeds, visual thresholds and motor response baselines in each mouse. The parameter estimates for each animal were reliable across sessions, showing that the data were stable enough to support analysis on an individual level. Scopolamine HBr dose-dependently reduced 5-CSRTT attentional performance while also increasing reward collection latency at the highest dose. Upon TVA modelling, scopolamine HBr significantly reduced visual processing speed at both doses, while having less pronounced effects on visual thresholds and motor response baselines. This study shows for the first time how 5-CSRTT performance in mice can be mathematically modelled to yield estimates of attentional capacity that are directly comparable to estimates from human studies.
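
    A three-parameter TVA-style psychometric function of stimulus duration can be sketched as an exponential approach above a baseline (the exact parameterization used in the study may differ; this is the standard TVA exponential form, with assumed parameter values):

```python
import math

def tva_accuracy(sd, v, t0, p0):
    """TVA-style accuracy as a function of stimulus duration `sd`:
    v  -- visual processing speed (1/s),
    t0 -- visual threshold, the minimum effective exposure (s),
    p0 -- motor/guessing baseline below threshold."""
    if sd <= t0:
        return p0
    return p0 + (1.0 - p0) * (1.0 - math.exp(-v * (sd - t0)))
```

    Fitting this curve to per-animal mean scores across stimulus durations yields the processing-speed, threshold, and baseline estimates that the study compares across scopolamine doses.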

  4. A Componential Analysis of Visual Attention in Children With ADHD.

    PubMed

    McAvinue, Laura P; Vangkilde, Signe; Johnson, Katherine A; Habekost, Thomas; Kyllingsbæk, Søren; Bundesen, Claus; Robertson, Ian H

    2015-10-01

    Inattentive behaviour is a defining characteristic of ADHD. Researchers have wondered about the nature of the attentional deficit underlying these symptoms. The primary purpose of the current study was to examine this attentional deficit using a novel paradigm based upon the Theory of Visual Attention (TVA). The TVA paradigm enabled a componential analysis of visual attention through the use of a mathematical model to estimate parameters relating to attentional selectivity and capacity. Children's ability to sustain attention was also assessed using the Sustained Attention to Response Task. The sample included a comparison between 25 children with ADHD and 25 control children aged 9-13. Children with ADHD had significantly impaired sustained attention and visual processing speed but intact attentional selectivity, perceptual threshold and visual short-term memory capacity. The results of this study lend support to the notion of differential impairment of attentional functions in children with ADHD. © 2012 SAGE Publications.

  5. Functional Imaging of Audio-Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences?

    PubMed

    Rinne, Teemu; Muers, Ross S; Salo, Emma; Slater, Heather; Petkov, Christopher I

    2017-06-01

    The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio-visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio-visual selective attention modulates the primate brain, identify sources for "lost" attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. © The Author 2017. Published by Oxford University Press.

  6. Functional Imaging of Audio–Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences?

    PubMed Central

    Muers, Ross S.; Salo, Emma; Slater, Heather; Petkov, Christopher I.

    2017-01-01

    Abstract The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio–visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio–visual selective attention modulates the primate brain, identify sources for “lost” attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. PMID:28419201

  7. Predicting bias in perceived position using attention field models.

    PubMed

    Klein, Barrie P; Paffen, Chris L E; Pas, Susan F Te; Dumoulin, Serge O

    2016-05-01

    Attention is the mechanism through which we select relevant information from our visual environment. We have recently demonstrated that attention attracts receptive fields across the visual hierarchy (Klein, Harvey, & Dumoulin, 2014). We captured this receptive field attraction using an attention field model. Here, we apply this model to human perception: We predict that receptive field attraction results in a bias in perceived position, which depends on the size of the underlying receptive fields. We instructed participants to compare the relative position of Gabor stimuli, while we manipulated the focus of attention using exogenous cueing. We varied the eccentric position and spatial frequency of the Gabor stimuli to vary underlying receptive field size. The positional biases as a function of eccentricity matched the predictions by an attention field model, whereas the bias as a function of spatial frequency did not. As spatial frequency and eccentricity are encoded differently across the visual hierarchy, we speculate that they might interact differently with the attention field that is spatially defined.
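
    The receptive-field attraction at the heart of the attention field model can be sketched as the product of two Gaussians: a receptive field and an attention field. The product is again Gaussian, and its centre shifts toward the attended location by an amount that grows with receptive-field size, which is the size dependence the study exploits. All numbers below are illustrative assumptions, not the paper's fitted fields.

```python
import numpy as np

def attracted_rf(mu_rf, sigma_rf, mu_att, sigma_att):
    """Multiply a Gaussian receptive field by a Gaussian attention field.
    The product is Gaussian; its centre is pulled toward the attended
    location with a weight that grows with receptive-field size."""
    w = sigma_rf**2 / (sigma_rf**2 + sigma_att**2)     # attraction weight
    mu_new = mu_rf + w * (mu_att - mu_rf)
    sigma_new = np.sqrt(sigma_rf**2 * sigma_att**2 /
                        (sigma_rf**2 + sigma_att**2))
    return mu_new, sigma_new

# Attention at fixation (0 deg); two RFs centred at 5 deg eccentricity
center_small, _ = attracted_rf(mu_rf=5.0, sigma_rf=1.0, mu_att=0.0, sigma_att=2.0)
center_large, _ = attracted_rf(mu_rf=5.0, sigma_rf=3.0, mu_att=0.0, sigma_att=2.0)
print(center_small, center_large)   # the larger RF is attracted more
```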

  8. Infant visual attention and object recognition.

    PubMed

    Reynolds, Greg D

    2015-05-15

    This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. A visual model for object detection based on active contours and level-set method.

    PubMed

    Satoh, Shunji

    2006-09-01

    A visual model for object detection is proposed. In order to make the detection ability comparable with existing technical methods for object detection, an evolution equation of neurons in the model is derived from the computational principle of active contours. The hierarchical structure of the model emerges naturally from the evolution equation. One drawback involved with initial values of active contours is alleviated by introducing and formulating convexity, which is a visual property. Numerical experiments show that the proposed model detects objects with complex topologies and that it is tolerant of noise. A visual attention model is introduced into the proposed model. Other simulations show that the visual properties of the model are consistent with the results of psychological experiments that disclose the relation between figure-ground reversal and visual attention. We also demonstrate that the model tends to perceive smaller regions as figures, which is a characteristic observed in human visual perception.
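
    The active-contour machinery the model is derived from can be illustrated with a minimal level-set evolution step: the embedding function phi evolves as dphi/dt = F * |grad(phi)|, and the zero level set is the contour. This is a generic textbook sketch under assumed parameters, not the neural evolution equation proposed in the record.

```python
import numpy as np

def level_set_step(phi, force, dt=0.1):
    """One explicit step of the level-set evolution
    d(phi)/dt = F * |grad(phi)|, using central differences."""
    gy, gx = np.gradient(phi)
    return phi + dt * force * np.sqrt(gx**2 + gy**2)

# Signed-distance function of a circle of radius 10; the zero level set
# is the active contour. A negative force expands the interior region.
n = 64
y, x = np.mgrid[:n, :n]
phi = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 10.0
for _ in range(20):
    phi = level_set_step(phi, force=-1.0)
area_inside = int((phi < 0).sum())
print(area_inside)   # interior grows from roughly 314 to roughly 452 pixels
```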

  10. Spatial Attention Reduces Burstiness in Macaque Visual Cortical Area MST.

    PubMed

    Xue, Cheng; Kaping, Daniel; Ray, Sonia Baloni; Krishna, B Suresh; Treue, Stefan

    2017-01-01

    Visual attention modulates the firing rate of neurons in many primate cortical areas. In V4, a cortical area in the ventral visual pathway, spatial attention has also been shown to reduce the tendency of neurons to fire closely separated spikes (burstiness). A recent model proposes that a single mechanism accounts for both the firing rate enhancement and the burstiness reduction in V4, but this has not been empirically tested. It is also unclear if the burstiness reduction by spatial attention is found in other visual areas and for other attentional types. We therefore recorded from single neurons in the medial superior temporal area (MST), a key motion-processing area along the dorsal visual pathway, of two rhesus monkeys while they performed a task engaging both spatial and feature-based attention. We show that in MST, spatial attention is associated with a clear reduction in burstiness that is independent of the concurrent enhancement of firing rate. In contrast, feature-based attention enhances firing rate but is not associated with a significant reduction in burstiness. These results establish burstiness reduction as a widespread effect of spatial attention. They also suggest that in contrast to the recently proposed model, the effects of spatial attention on burstiness and firing rate emerge from different mechanisms. © The Author 2016. Published by Oxford University Press.
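
    A simple burstiness index of the kind discussed here is the fraction of inter-spike intervals shorter than a few milliseconds. The sketch below uses a generic 4-ms criterion and synthetic spike trains; the paper's exact metric may differ.

```python
import numpy as np

def burst_fraction(spike_times, thresh=0.004):
    """Fraction of inter-spike intervals shorter than `thresh` seconds:
    a simple index of the tendency to fire closely separated spikes."""
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return float(np.mean(isis < thresh)) if isis.size else 0.0

rng = np.random.default_rng(0)
# ~50 Hz Poisson-like train vs. the same train with 2-ms doublets added
regular = np.cumsum(rng.exponential(0.02, size=200))
bursty = np.concatenate([regular, regular + 0.002])
print(burst_fraction(regular), burst_fraction(bursty))
```

    Comparing this index between attended and unattended conditions, independently of mean rate, is the kind of analysis the record describes.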

  11. Spatial Attention Reduces Burstiness in Macaque Visual Cortical Area MST

    PubMed Central

    Xue, Cheng; Kaping, Daniel; Ray, Sonia Baloni; Krishna, B. Suresh; Treue, Stefan

    2017-01-01

    Abstract Visual attention modulates the firing rate of neurons in many primate cortical areas. In V4, a cortical area in the ventral visual pathway, spatial attention has also been shown to reduce the tendency of neurons to fire closely separated spikes (burstiness). A recent model proposes that a single mechanism accounts for both the firing rate enhancement and the burstiness reduction in V4, but this has not been empirically tested. It is also unclear if the burstiness reduction by spatial attention is found in other visual areas and for other attentional types. We therefore recorded from single neurons in the medial superior temporal area (MST), a key motion-processing area along the dorsal visual pathway, of two rhesus monkeys while they performed a task engaging both spatial and feature-based attention. We show that in MST, spatial attention is associated with a clear reduction in burstiness that is independent of the concurrent enhancement of firing rate. In contrast, feature-based attention enhances firing rate but is not associated with a significant reduction in burstiness. These results establish burstiness reduction as a widespread effect of spatial attention. They also suggest that in contrast to the recently proposed model, the effects of spatial attention on burstiness and firing rate emerge from different mechanisms. PMID:28365773

  12. Guidance of visual attention by semantic information in real-world scenes

    PubMed Central

    Wu, Chia-Chien; Wick, Farahnaz Ahmed; Pomplun, Marc

    2014-01-01

    Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some factors that drive the allocation of visual attention and determine visual selection. This article provides a review of experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding on how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces. PMID:24567724

  13. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    PubMed

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
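
    The final fusion step can be sketched as inverse-uncertainty weighting of the two saliency maps: each map contributes in proportion to how certain it is at each pixel. This is a generic sketch; in the proposed model the uncertainties themselves come from Gestalt laws (proximity, continuity, common fate), which are not modelled here.

```python
import numpy as np

def fuse_saliency(s_spatial, s_temporal, u_spatial, u_temporal, eps=1e-6):
    """Fuse spatial and temporal saliency maps with uncertainty
    weighting: each map is weighted by the inverse of its per-pixel
    uncertainty, so the more reliable cue dominates."""
    w_s = 1.0 / (u_spatial + eps)
    w_t = 1.0 / (u_temporal + eps)
    return (w_s * s_spatial + w_t * s_temporal) / (w_s + w_t)

s_sp = np.array([[0.9, 0.1], [0.2, 0.8]])     # toy spatial saliency
s_tm = np.array([[0.1, 0.9], [0.8, 0.2]])     # toy temporal saliency
u_sp = np.full((2, 2), 0.1)                   # low spatial uncertainty
u_tm = np.full((2, 2), 0.9)                   # high temporal uncertainty
fused = fuse_saliency(s_sp, s_tm, u_sp, u_tm)
print(fused)   # close to the (more reliable) spatial map
```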

  14. Simulating the Role of Visual Selective Attention during the Development of Perceptual Completion

    ERIC Educational Resources Information Center

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.

    2012-01-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of…

  15. Visual Scan Adaptation During Repeated Visual Search

    DTIC Science & Technology

    2010-01-01

    Excerpted references (the record text consists of citation fragments): Junge, J. A. (2004). Searching for stimulus-driven shifts of attention. Psychonomic Bulletin & Review, 11, 876–881. Furst, C. J. (1971… […] search strategies cannot override attentional capture. Psychonomic Bulletin & Review, 11, 65–70. Wolfe, J. M. (1994). Guided search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202–238. Wolfe, J. M. (1998a). Visual search. In H. Pashler (Ed.), Attention (pp. 13–73). East

  16. TMS over the right precuneus reduces the bilateral field advantage in visual short term memory capacity.

    PubMed

    Kraft, Antje; Dyrholm, Mads; Kehrer, Stefanie; Kaufmann, Christian; Bruening, Jovita; Kathmann, Norbert; Bundesen, Claus; Irlbacher, Kerstin; Brandt, Stephan A

    2015-01-01

    Several studies have demonstrated a bilateral field advantage (BFA) in early visual attentional processing, that is, enhanced visual processing when stimuli are spread across both visual hemifields. The results are reminiscent of a hemispheric resource model of parallel visual attentional processing, suggesting more attentional resources on an early level of visual processing for bilateral displays [e.g. Sereno AB, Kosslyn SM. Discrimination within and between hemifields: a new constraint on theories of attention. Neuropsychologia 1991;29(7):659-75.]. Several studies have shown that the BFA extends beyond early stages of visual attentional processing, demonstrating that visual short term memory (VSTM) capacity is higher when stimuli are distributed bilaterally rather than unilaterally. Here we examine whether hemisphere-specific resources are also evident on later stages of visual attentional processing. Based on the Theory of Visual Attention (TVA) [Bundesen C. A theory of visual attention. Psychol Rev 1990;97(4):523-47.] we used a whole report paradigm that allows investigating visual attention capacity variability in unilateral and bilateral displays during navigated repetitive transcranial magnetic stimulation (rTMS) of the precuneus region. A robust BFA in VSTM storage capacity was apparent after rTMS over the left precuneus and in the control condition without rTMS. In contrast, the BFA diminished with rTMS over the right precuneus. This finding indicates that the right precuneus plays a causal role in VSTM capacity, particularly in bilateral visual displays. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Color impact in visual attention deployment considering emotional images

    NASA Astrophysics Data System (ADS)

    Chamaret, C.

    2012-03-01

    Color is a predominant factor in the human visual attention system. Although it is not sufficient on its own for a global or complete understanding of a scene, it may affect the deployment of visual attention. We propose to study the impact of color, as well as the emotional content of pictures, on the deployment of visual attention. An eye-tracking campaign was conducted in which twenty people watched half of the pictures in the database in full color and the other half in greyscale. The eye fixations on color and greyscale images were highly correlated, raising the question of whether such cues should be integrated into the design of visual attention models. Indeed, the predictions of two state-of-the-art computational models show similar results for the two color categories. Similarly, analysis of saccade amplitude and fixation duration versus viewing time did not reveal any significant differences between the two categories. In addition, the spatial coordinates of eye fixations provide an interesting indicator for investigating differences in visual attention deployment over time and fixation number. The second factor, emotion category, shows evidence of inter-category differences between color and greyscale eye fixations for passive and positive emotions. The particular character of this category induces a specific viewing behavior, driven largely by high spatial frequencies, in which the color components influence the deployment of visual attention.

  18. A normalization model suggests that attention changes the weighting of inputs between visual areas

    PubMed Central

    Cohen, Marlene R.

    2017-01-01

    Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1–MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations. PMID:28461501

  19. A normalization model suggests that attention changes the weighting of inputs between visual areas.

    PubMed

    Ruff, Douglas A; Cohen, Marlene R

    2017-05-16

    Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1-MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations.
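
    The weighted-input hypothesis can be written as a standard divisive-normalization equation in which attention scales the feed-forward weights rather than the responses. The sketch below uses hypothetical rates, weights, and an assumed attentional gain of 1.3; it is an illustration of the mechanism class, not the authors' fitted model.

```python
import numpy as np

def mt_response(v1_rates, weights, sigma=1.0):
    """Divisively normalized MT response driven by weighted V1 inputs:
    r_MT = sum_i(w_i * r_i) / (sigma + sum_i(r_i)).
    Attention is modelled as a change in the feed-forward weights w_i
    rather than a change in either area's responses."""
    v1_rates = np.asarray(v1_rates, dtype=float)
    return float(np.dot(weights, v1_rates) / (sigma + v1_rates.sum()))

v1 = np.array([10.0, 5.0, 2.0])          # hypothetical V1 firing rates
w_unattended = np.array([0.5, 0.5, 0.5])
w_attended = 1.3 * w_unattended          # attention scales connection weights
print(mt_response(v1, w_unattended), mt_response(v1, w_attended))
```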

  20. Modeling global scene factors in attention

    NASA Astrophysics Data System (ADS)

    Torralba, Antonio

    2003-07-01

    Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. © 2003 Optical Society of America
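
    The contextual-cueing idea can be sketched in two steps: compute a coarse global "gist" vector from low-level feature statistics, then let a gist-conditioned location prior modulate a bottom-up saliency map. Everything below is a toy illustration with random data and an assumed prior, not the paper's learned statistics.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((64, 64))                    # stand-in luminance image

# "Gist": coarse global statistics of low-level features on a 4x4 grid
gist = image.reshape(4, 16, 4, 16).mean(axis=(1, 3)).ravel()   # 16-dim

# Hypothetical prior learned from the gist: this scene type predicts
# targets in the lower half of the image (e.g. street-level objects)
location_prior = np.ones((64, 64))
location_prior[32:, :] = 4.0
location_prior /= location_prior.sum()

local_saliency = rng.random((64, 64))           # bottom-up saliency map
contextual = local_saliency * location_prior    # context modulates saliency
peak = np.unravel_index(contextual.argmax(), contextual.shape)
print(gist.shape, peak)   # the most salient point falls in the lower half
```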

  1. Divisive normalization and neuronal oscillations in a single hierarchical framework of selective visual attention.

    PubMed

    Montijn, Jorrit Steven; Klink, P Christiaan; van Wezel, Richard J A

    2012-01-01

    Divisive normalization models of covert attention commonly use spike rate modulations as indicators of the effect of top-down attention. In addition, an increasing number of studies have shown that top-down attention increases the synchronization of neuronal oscillations as well, particularly in gamma-band frequencies (25-100 Hz). Although modulations of spike rate and synchronous oscillations are not mutually exclusive as mechanisms of attention, there has thus far been little effort to integrate these concepts into a single framework of attention. Here, we aim to provide such a unified framework by expanding the normalization model of attention with a multi-level hierarchical structure and a time dimension, allowing the simulation of a recently reported backward progression of attentional effects along the visual cortical hierarchy. A simple cascade of normalization models simulating different cortical areas is shown to cause signal degradation and a loss of stimulus discriminability over time. To negate this degradation and ensure stable neuronal stimulus representations, we incorporate a kind of oscillatory phase entrainment into our model that has previously been proposed as the "communication-through-coherence" (CTC) hypothesis. Our analysis shows that divisive normalization and oscillation models can complement each other in a unified account of the neural mechanisms of selective visual attention. The resulting hierarchical normalization and oscillation (HNO) model reproduces several additional spatial and temporal aspects of attentional modulation and predicts a latency effect on neuronal responses as a result of cued attention.
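
    The signal degradation in a cascade of normalization stages can be illustrated with a toy two-stimulus population passed repeatedly through the same divisive stage: the absolute response difference between the two stimuli shrinks at every level. This minimal sketch (assumed gain and semi-saturation constant) deliberately omits the oscillatory phase-entrainment mechanism that the HNO model adds to counteract the degradation.

```python
import numpy as np

def normalize(r, sigma=1.0):
    """One divisive-normalization stage: each response is divided by
    the pooled population activity plus a semi-saturation constant."""
    r = np.asarray(r, dtype=float)
    return r / (sigma + r.sum())

# A two-stimulus population response passed through a cascade of
# simulated "areas"; the response difference shrinks at every stage.
r = np.array([8.0, 2.0])
differences = [r[0] - r[1]]
for _ in range(4):
    r = 10.0 * normalize(r)      # gain keeps responses in a usable range
    differences.append(r[0] - r[1])
print(differences)               # monotonically decreasing
```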

  2. Divisive Normalization and Neuronal Oscillations in a Single Hierarchical Framework of Selective Visual Attention

    PubMed Central

    Montijn, Jorrit Steven; Klink, P. Christiaan; van Wezel, Richard J. A.

    2012-01-01

    Divisive normalization models of covert attention commonly use spike rate modulations as indicators of the effect of top-down attention. In addition, an increasing number of studies have shown that top-down attention increases the synchronization of neuronal oscillations as well, particularly in gamma-band frequencies (25–100 Hz). Although modulations of spike rate and synchronous oscillations are not mutually exclusive as mechanisms of attention, there has thus far been little effort to integrate these concepts into a single framework of attention. Here, we aim to provide such a unified framework by expanding the normalization model of attention with a multi-level hierarchical structure and a time dimension, allowing the simulation of a recently reported backward progression of attentional effects along the visual cortical hierarchy. A simple cascade of normalization models simulating different cortical areas is shown to cause signal degradation and a loss of stimulus discriminability over time. To negate this degradation and ensure stable neuronal stimulus representations, we incorporate a kind of oscillatory phase entrainment into our model that has previously been proposed as the “communication-through-coherence” (CTC) hypothesis. Our analysis shows that divisive normalization and oscillation models can complement each other in a unified account of the neural mechanisms of selective visual attention. The resulting hierarchical normalization and oscillation (HNO) model reproduces several additional spatial and temporal aspects of attentional modulation and predicts a latency effect on neuronal responses as a result of cued attention. PMID:22586372

  3. Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases.

    PubMed

    Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N

    2016-01-01

    A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed.
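
    The core intuition of load effects under competitive interactions can be sketched with a few rate units under mutual inhibition plus a top-down bias toward the target: adding distractors (raising the load) increases the total inhibition each distractor receives, so distractor activity drops. This is a generic biased-competition toy with assumed parameters, not the authors' implemented model.

```python
import numpy as np

def compete(inputs, bias, w_inh=0.5, dt=0.1, steps=100):
    """Rate units under mutual (competitive) inhibition plus a
    top-down bias toward the target unit; Euler-integrated to
    steady state."""
    r = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        others = r.sum() - r                  # summed competitor activity
        drive = inputs + bias - w_inh * others
        r += dt * (-r + np.maximum(drive, 0.0))
    return r

# Target plus 1 distractor (low load) vs. target plus 5 (high load)
low = compete(np.array([1.0, 1.0]), np.array([0.3, 0.0]))
high = compete(np.array([1.0] * 6), np.array([0.3] + [0.0] * 5))
print(low[1], high[1])   # distractor activity drops under high load
```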

  4. Attention and normalization circuits in macaque V1

    PubMed Central

    Sanayei, M; Herrero, J L; Distler, C; Thiele, A

    2015-01-01

    Attention affects neuronal processing and improves behavioural performance. In extrastriate visual cortex these effects have been explained by normalization models, which assume that attention influences the circuit that mediates surround suppression. While normalization models have been able to explain attentional effects, their validity has rarely been tested against alternative models. Here we investigate how attention and surround/mask stimuli affect neuronal firing rates and orientation tuning in macaque V1. Surround/mask stimuli provide an estimate of the extent to which V1 neurons are affected by normalization, which was compared against the effects of spatial top-down attention. For some attention/surround effect comparisons, the strength of attentional modulation was correlated with the strength of surround modulation, suggesting that attention and surround/mask stimulation (i.e. normalization) might use a common mechanism. To explore this in detail, we fitted multiplicative and additive models of attention to our data. In one class of models, attention contributed to normalization mechanisms, whereas in a different class of models it did not. Model selection based on Akaike's and Bayesian information criteria demonstrated that in most cells the effects of attention were best described by models where attention did not contribute to normalization mechanisms. This demonstrates that attentional influences on neuronal responses in primary visual cortex often bypass normalization mechanisms. PMID:25757941
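
    The model-selection step can be illustrated with the standard least-squares forms of the two criteria: AIC = n·ln(RSS/n) + 2k and BIC = n·ln(RSS/n) + k·ln(n), where a model with one extra parameter must buy a sufficiently lower residual sum of squares to win. The residuals and parameter counts below are hypothetical, chosen only to show a case where the simpler (attention-bypasses-normalization) model is preferred.

```python
import numpy as np

def aic(k, rss, n):
    """Akaike information criterion for a Gaussian least-squares fit:
    AIC = n*ln(RSS/n) + 2k, with k free parameters and n observations."""
    return n * np.log(rss / n) + 2 * k

def bic(k, rss, n):
    """Bayesian information criterion: BIC = n*ln(RSS/n) + k*ln(n)."""
    return n * np.log(rss / n) + k * np.log(n)

# Hypothetical fits to one cell: a model in which attention feeds into
# the normalization pool (4 parameters) vs. one in which it does not
# (3 parameters), with nearly identical residuals.
n = 50
for name, k, rss in [("attention-in-normalization", 4, 10.0),
                     ("attention-bypasses-normalization", 3, 10.4)]:
    print(name, round(aic(k, rss, n), 2), round(bic(k, rss, n), 2))
```

    Note that BIC's heavier parameter penalty (ln(50) ≈ 3.9 per parameter vs. 2 for AIC) makes it favour the simpler model more strongly.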

  5. Single Canonical Model of Reflexive Memory and Spatial Attention

    PubMed Central

    Patel, Saumil S.; Red, Stuart; Lin, Eric; Sereno, Anne B.

    2015-01-01

    Many neurons in the dorsal and ventral visual stream have the property that after a brief visual stimulus presentation in their receptive field, the spiking activity in these neurons persists above their baseline levels for several seconds. This maintained activity is not always correlated with the monkey’s task and its origin is unknown. We have previously proposed a simple neural network model, based on shape selective neurons in monkey lateral intraparietal cortex, which predicts the valence and time course of reflexive (bottom-up) spatial attention. In the same simple model, we demonstrate here that passive maintained activity or short-term memory of specific visual events can result without need for an external or top-down modulatory signal. Mutual inhibition and neuronal adaptation play distinct roles in reflexive attention and memory. This modest 4-cell model provides the first simple and unified physiologically plausible mechanism of reflexive spatial attention and passive short-term memory processes. PMID:26493949
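
    The distinct roles of mutual inhibition and neuronal adaptation can be illustrated with a two-unit rate model (a reduced stand-in for the 4-cell circuit, with assumed time constants and weights): after a brief stimulus, the slowly decaying adaptation variable weakens the response to a repeated stimulus, with no top-down signal involved.

```python
import numpy as np

def unit0_trace(pulses, steps=600, dt=0.005, tau=0.05, tau_a=0.5,
                w_inh=0.6, g_a=1.0):
    """Two mutually inhibiting rate units with slow adaptation.
    `pulses` lists (start, stop) step ranges of input to unit 0."""
    r = np.zeros(2)
    a = np.zeros(2)                       # adaptation variables
    trace = []
    for t in range(steps):
        stim = 1.0 if any(lo <= t < hi for lo, hi in pulses) else 0.0
        inp = np.array([stim, 0.0])
        drive = inp - w_inh * r[::-1] - g_a * a   # inhibition + adaptation
        r += dt / tau * (-r + np.maximum(drive, 0.0))
        a += dt / tau_a * (-a + r)
        trace.append(r[0])
    return np.array(trace)

# Same stimulus presented twice, 1.2 s apart
trace = unit0_trace(pulses=[(0, 60), (300, 360)])
first_peak = trace[:150].max()
second_peak = trace[300:450].max()
print(first_peak, second_peak)   # adaptation weakens the repeated response
```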

  6. Cognitive programs: software for attention's executive

    PubMed Central

    Tsotsos, John K.; Kruijne, Wouter

    2014-01-01

    What are the computational tasks that an executive controller for visual attention must solve? This question is posed in the context of the Selective Tuning model of attention. The range of required computations goes beyond top-down bias signals or region-of-interest determinations, and must deal with overt and covert fixations, process timing and synchronization, information routing, memory, matching control to task, spatial localization, priming, and coordination of bottom-up with top-down information. During task execution, results must be monitored to ensure the expected results. This description includes the kinds of elements that are common in the control of any kind of complex machine or system. We seek a mechanistic integration of the above, in other words, algorithms that accomplish control. Such algorithms operate on representations, transforming a representation of one kind into another, which then forms the input to yet another algorithm. Cognitive Programs (CPs) are hypothesized to capture exactly such representational transformations via stepwise sequences of operations. CPs, an updated and modernized offspring of Ullman's Visual Routines, impose an algorithmic structure on the set of attentional functions and play a role in the overall shaping of attentional modulation of the visual system so that it provides its best performance. This requires that we consider the visual system as a dynamic, yet general-purpose processor tuned to the task and input of the moment. This differs dramatically from the almost universal cognitive and computational views, which regard vision as a passively observing module to which simple questions about percepts can be posed, regardless of task. Differing from Visual Routines, CPs explicitly involve the critical elements of Visual Task Executive (vTE), Visual Attention Executive (vAE), and Visual Working Memory (vWM). Cognitive Programs provide the software that directs the actions of the Selective Tuning model of visual attention. PMID:25505430

  7. Simultaneous selection by object-based attention in visual and frontal cortex

    PubMed Central

    Pooresmaeili, Arezoo; Poort, Jasper; Roelfsema, Pieter R.

    2014-01-01

    Models of visual attention hold that top-down signals from frontal cortex influence information processing in visual cortex. It is unknown whether situations exist in which visual cortex actively participates in attentional selection. To investigate this question, we simultaneously recorded neuronal activity in the frontal eye fields (FEF) and primary visual cortex (V1) during a curve-tracing task in which attention shifts are object-based. We found that accurate performance was associated with similar latencies of attentional selection in both areas and that the latency in both areas increased if the task was made more difficult. The amplitude of the attentional signals in V1 saturated early during a trial, whereas these selection signals kept increasing for a longer time in FEF, until the moment of an eye movement, as if FEF integrated attentional signals present in early visual cortex. In erroneous trials, we observed an interareal latency difference because FEF selected the wrong curve before V1 and imposed its erroneous decision onto visual cortex. The neuronal activity in visual and frontal cortices was correlated across trials, and this trial-to-trial coupling was strongest for the attended curve. These results imply that selective attention relies on reciprocal interactions within a large network of areas that includes V1 and FEF. PMID:24711379

  8. The Effects of Visual Attention Span and Phonological Decoding in Reading Comprehension in Dyslexia: A Path Analysis.

    PubMed

    Chen, Chen; Schneps, Matthew H; Masyn, Katherine E; Thomson, Jennifer M

    2016-11-01

    Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a path analysis to examine the direct and indirect paths between visual attention span and reading comprehension while controlling for other factors such as phonological awareness, letter identification, short-term memory, IQ and age. Integrating phonemic decoding efficiency skills in the analytic model, this study aimed to disentangle how visual attention span and phonological skills work together in reading comprehension for readers with dyslexia. We found visual attention span to have a significant direct effect on more difficult reading comprehension but not on an easier level. It also had a significant direct effect on pseudo-word identification but not on word identification. In addition, we found that visual attention span indirectly explains reading comprehension through pseudo-word reading and word reading skills. This study supports the hypothesis that at least part of the dyslexic profile can be explained by visual attention abilities. Copyright © 2016 John Wiley & Sons, Ltd.

  9. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition

    PubMed Central

    Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua

    2015-01-01

    Humans can easily understand other people’s actions through their visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper, aiming at automatic action recognition. The model focuses on the dynamic properties of neurons and neural networks in the primary visual cortex (V1) and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cells tuned to different speeds and orientations, for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field caused by lateral connections of spiking neuron networks in V1, we propose a surround suppressive operator to further process the spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider a characteristic of the neural code: a mean motion map based on analysis of spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
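A three-dimensional (x, y, t) Gabor kernel of the kind used to model speed- and orientation-tuned V1 simple cells can be sketched numerically; the size, spatial frequency, and envelope parameters below are illustrative, not the paper's fitted values:

```python
import numpy as np

def spatiotemporal_gabor(size=9, frames=5, theta=0.0, speed=1.0,
                         sf=0.25, sigma=2.0):
    """3D (t, y, x) Gabor whose grating drifts at `speed` along
    the axis given by orientation `theta` (radians)."""
    xs = np.arange(size) - size // 2
    x, y = np.meshgrid(xs, xs)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate to preferred axis
    kernel = np.empty((frames, size, size))
    for t in range(frames):
        # phase advances over frames -> selectivity for drift speed
        phase = 2 * np.pi * sf * (xr - speed * (t - frames // 2))
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        kernel[t] = envelope * np.cos(phase)
    return kernel

g = spatiotemporal_gabor()
print(g.shape)  # (5, 9, 9)
```

Convolving a video volume with a bank of such kernels (varying `theta` and `speed`) gives the speed- and orientation-tuned responses the abstract describes.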

  10. An evaluation of attention models for use in SLAM

    NASA Astrophysics Data System (ADS)

    Dodge, Samuel; Karam, Lina

    2013-12-01

    In this paper we study the application of visual saliency models to the simultaneous localization and mapping (SLAM) problem. We consider visual SLAM, where the location of the camera and a map of the environment can be generated using images from a single moving camera. In visual SLAM, the interest point detector is of key importance. This detector must be invariant to certain image transformations so that features can be matched across different frames. Recent work has used a model of human visual attention to detect interest points; however, it is unclear which attention model is best suited for this purpose. To this end, we compare the performance of interest points from four saliency models (Itti, GBVS, RARE, and AWS) with the performance of four traditional interest point detectors (Harris, Shi-Tomasi, SIFT, and FAST). We evaluate these detectors under several different types of image transformation and find that the Itti saliency model, in general, achieves the best performance in terms of keypoint repeatability.
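Keypoint repeatability, the evaluation criterion used here, can be sketched as the fraction of reference detections that reappear within a pixel tolerance after a known image transformation; the tolerance value and the test homography below are illustrative:

```python
import numpy as np

def repeatability(kps_ref, kps_warped, H, tol=3.0):
    """Fraction of reference keypoints that have a detection within
    `tol` pixels after mapping them through homography H."""
    pts = np.hstack([kps_ref, np.ones((len(kps_ref), 1))]) @ H.T
    pts = pts[:, :2] / pts[:, 2:]                  # dehomogenise
    matched = sum(
        np.linalg.norm(kps_warped - p, axis=1).min() <= tol for p in pts
    )
    return matched / len(kps_ref)

ref = np.array([[10.0, 10.0], [50.0, 20.0], [30.0, 40.0]])
H = np.array([[1.0, 0.0, 5.0],                    # pure 5-pixel x-shift
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
warped = ref + np.array([5.0, 0.0])               # perfectly re-detected
print(repeatability(ref, warped, H))  # 1.0
```

A detector is robust to a transformation when this score stays high as the transformation (rotation, scale, blur, etc.) grows.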

  11. Simulating the role of visual selective attention during the development of perceptual completion

    PubMed Central

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.

    2014-01-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds’ performance on a second measure, the perceptual unity task. Two parameters in the model – corresponding to areas in the occipital and parietal cortices – were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. PMID:23106728

  12. Simulating the role of visual selective attention during the development of perceptual completion.

    PubMed

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P

    2012-11-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds' performance on a second measure, the perceptual unity task. Two parameters in the model - corresponding to areas in the occipital and parietal cortices - were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. © 2012 Blackwell Publishing Ltd.

  13. Visual attention spreads broadly but selects information locally.

    PubMed

    Shioiri, Satoshi; Honjyo, Hajime; Kashiwase, Yoshiyuki; Matsumiya, Kazumichi; Kuriki, Ichiro

    2016-10-19

    Visual attention spreads over a range around the focus, as the spotlight metaphor describes. The spatial spread of attentional enhancement and local selection/inhibition are crucial factors determining the profile of spatial attention. Enhancement and ignorance/suppression are opposite effects of attention and appear to be mutually exclusive. Yet no unified view of these factors has been provided, despite its necessity for understanding the functions of spatial attention. This report provides electroencephalographic and behavioral evidence for attentional spread at an early stage and selection/inhibition at a later stage of visual processing. The steady-state visual evoked potential showed broad spatial tuning, whereas the P3 component of the event-related potential showed local selection or inhibition of the adjacent areas. Based on these results, we propose a two-stage model of spatial attention with broad spread at an early stage and local selection at a later stage.

  14. Attentional Control in Visual Signal Detection: Effects of Abrupt-Onset and No-Onset Stimuli

    ERIC Educational Resources Information Center

    Sewell, David K.; Smith, Philip L.

    2012-01-01

    The attention literature distinguishes two general mechanisms by which attention can benefit performance: gain (or resource) models and orienting (or switching) models. In gain models, processing efficiency is a function of a spatial distribution of capacity or resources; in orienting models, an attentional spotlight must be aligned with the…

  15. Research on metallic material defect detection based on bionic sensing of human visual properties

    NASA Astrophysics Data System (ADS)

    Zhang, Pei Jiang; Cheng, Tao

    2018-05-01

    Because the human visual system can quickly lock onto areas of interest in a complex natural environment and focus on them, this paper proposes a bionic-sensing visual inspection model, built on the human visual attention mechanism, for detecting defects of metallic materials in the mechanical field. First, alongside biologically salient low-level features, experience-based defect markings are used as intermediate features of the simulated visual perception. An SVM is then trained on the high-level features of visual defects of the metal material. Combining the features at each level according to their weights yields a detection model of metal-material defects that simulates human visual characteristics.

  16. Distinctive Correspondence Between Separable Visual Attention Functions and Intrinsic Brain Networks.

    PubMed

    Ruiz-Rizzo, Adriana L; Neitzel, Julia; Müller, Hermann J; Sorg, Christian; Finke, Kathrin

    2018-01-01

    Separable visual attention functions are assumed to rely on distinct but interacting neural mechanisms. Bundesen's "theory of visual attention" (TVA) allows the mathematical estimation of independent parameters that characterize individuals' visual attentional capacity (i.e., visual processing speed and visual short-term memory storage capacity) and selectivity functions (i.e., top-down control and spatial laterality). However, it is unclear whether these parameters distinctively map onto different brain networks obtained from intrinsic functional connectivity, which organizes slowly fluctuating ongoing brain activity. In our study, 31 demographically homogeneous healthy young participants performed whole- and partial-report tasks and underwent resting-state functional magnetic resonance imaging (rs-fMRI). Report accuracy was modeled using TVA to estimate, individually, the four TVA parameters. Networks encompassing cortical areas relevant for visual attention were derived from independent component analysis of rs-fMRI data: visual, executive control, right and left frontoparietal, and ventral and dorsal attention networks. Two TVA parameters were mapped onto particular functional networks. First, participants with higher (vs. lower) visual processing speed showed lower functional connectivity within the ventral attention network. Second, participants with more (vs. less) efficient top-down control showed higher functional connectivity within the dorsal attention network and lower functional connectivity within the visual network. Additionally, higher performance was associated with higher functional connectivity between networks: specifically, between the ventral attention and right frontoparietal networks for visual processing speed, and between the visual and executive control networks for top-down control. The higher inter-network functional connectivity was related to lower intra-network connectivity. 
These results demonstrate that separable visual attention parameters that are assumed to constitute relatively stable traits correspond distinctly to the functional connectivity both within and between particular functional networks. This implies that individual differences in basic attention functions are represented by differences in the coherence of slowly fluctuating brain activity.

  17. Parafoveal magnification: visual acuity does not modulate the perceptual span in reading.

    PubMed

    Miellet, Sébastien; O'Donnell, Patrick J; Sereno, Sara C

    2009-06-01

    Models of eye guidance in reading rely on the concept of the perceptual span: the amount of information perceived during a single eye fixation, which is considered to be a consequence of visual and attentional constraints. To directly investigate the attentional mechanisms underlying the perceptual span, we implemented a new reading paradigm, parafoveal magnification (PM), that compensates for how visual acuity drops off as a function of retinal eccentricity. On each fixation and in real time, parafoveal text is magnified to equalize its perceptual impact with that of concurrent foveal text. Experiment 1 demonstrated that PM does not increase the amount of text that is processed, supporting an attention-based account of eye movements in reading. Experiment 2 explored a contentious issue that differentiates competing models of eye movement control and showed that, even when parafoveal information is enlarged, visual attention in reading is allocated in a serial fashion from word to word.

  18. How attention influences perceptual decision making: Single-trial EEG correlates of drift-diffusion model parameters

    PubMed Central

    Nunez, Michael D.; Vandekerckhove, Joachim; Srinivasan, Ramesh

    2016-01-01

    Perceptual decision making can be accounted for by drift-diffusion models, a class of decision-making models that assume a stochastic accumulation of evidence on each trial. Fitting response time and accuracy to a drift-diffusion model produces evidence accumulation rate and non-decision time parameter estimates that reflect cognitive processes. Our goal is to elucidate the effect of attention on visual decision making. In this study, we show that measures of attention obtained from simultaneous EEG recordings can explain per-trial evidence accumulation rates and perceptual preprocessing times during a visual decision making task. Models assuming linear relationships between diffusion model parameters and EEG measures as external inputs were fit in a single step in a hierarchical Bayesian framework. The EEG measures were features of the evoked potential (EP) to the onset of a masking noise and the onset of a task-relevant signal stimulus. Single-trial evoked EEG responses, P200s to the onsets of visual noise and N200s to the onsets of visual signal, explain single-trial evidence accumulation and preprocessing times. Within-trial evidence accumulation variance was not found to be influenced by attention to the signal or noise. Single-trial measures of attention lead to better out-of-sample predictions of accuracy and correct reaction time distributions for individual subjects. PMID:28435173
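The paper's central idea, a linear link from a single-trial EEG measure to the drift rate, can be sketched with a toy Euler simulation. The coefficients and trial counts are illustrative, and the authors fit such models hierarchically in a Bayesian framework rather than simulating forward as done here:

```python
import random

def simulate_ddm_trial(drift, bound=1.0, ndt=0.3, dt=0.001, noise=1.0):
    """Euler simulation of one diffusion trial between bounds at +/-bound.
    Returns (choice, response time); ndt is the non-decision time."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0, 1)
        t += dt
    return (1 if x > 0 else 0), ndt + t

random.seed(1)
b0, b1 = 1.0, 2.0                      # baseline drift, EEG coefficient
for eeg in (-0.3, 0.0, 0.3):           # hypothetical single-trial N200 measure
    drift = b0 + b1 * eeg              # linear link assumed in the paper
    trials = [simulate_ddm_trial(drift) for _ in range(200)]
    acc = sum(choice for choice, _ in trials) / len(trials)
    print(f"eeg={eeg:+.1f}  drift={drift:.1f}  accuracy={acc:.2f}")
```

Higher attention-related EEG amplitudes raise the drift rate, and accuracy rises and response times shorten accordingly, which is the qualitative pattern the fitted model captures.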

  19. How attention influences perceptual decision making: Single-trial EEG correlates of drift-diffusion model parameters.

    PubMed

    Nunez, Michael D; Vandekerckhove, Joachim; Srinivasan, Ramesh

    2017-02-01

    Perceptual decision making can be accounted for by drift-diffusion models, a class of decision-making models that assume a stochastic accumulation of evidence on each trial. Fitting response time and accuracy to a drift-diffusion model produces evidence accumulation rate and non-decision time parameter estimates that reflect cognitive processes. Our goal is to elucidate the effect of attention on visual decision making. In this study, we show that measures of attention obtained from simultaneous EEG recordings can explain per-trial evidence accumulation rates and perceptual preprocessing times during a visual decision making task. Models assuming linear relationships between diffusion model parameters and EEG measures as external inputs were fit in a single step in a hierarchical Bayesian framework. The EEG measures were features of the evoked potential (EP) to the onset of a masking noise and the onset of a task-relevant signal stimulus. Single-trial evoked EEG responses, P200s to the onsets of visual noise and N200s to the onsets of visual signal, explain single-trial evidence accumulation and preprocessing times. Within-trial evidence accumulation variance was not found to be influenced by attention to the signal or noise. Single-trial measures of attention lead to better out-of-sample predictions of accuracy and correct reaction time distributions for individual subjects.

  20. Body region dissatisfaction predicts attention to body regions on other women.

    PubMed

    Lykins, Amy D; Ferris, Tamara; Graham, Cynthia A

    2014-09-01

    The proliferation of "idealized" (i.e., very thin and attractive) women in the media has contributed to increasing rates of body dissatisfaction among women. However, it remains relatively unknown how women attend to these images: does dissatisfaction predict greater or lesser attention to these body regions on others? Fifty healthy women (mean age=21.8 years) viewed images of idealized and plus-size models; an eye-tracker recorded visual attention. Participants also completed measures of satisfaction for specific body regions, which were then used as predictors of visual attention to these regions on models. Consistent with an avoidance-type process, lower levels of satisfaction with the two regions of greatest reported concern (mid, lower torso) predicted less attention to these regions; greater satisfaction predicted more attention to these regions. While this visual attention bias may aid in preserving self-esteem when viewing idealized others, it may preclude the opportunity for comparisons that could improve self-esteem. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Attention and normalization circuits in macaque V1.

    PubMed

    Sanayei, M; Herrero, J L; Distler, C; Thiele, A

    2015-04-01

    Attention affects neuronal processing and improves behavioural performance. In extrastriate visual cortex these effects have been explained by normalization models, which assume that attention influences the circuit that mediates surround suppression. While normalization models have been able to explain attentional effects, their validity has rarely been tested against alternative models. Here we investigate how attention and surround/mask stimuli affect neuronal firing rates and orientation tuning in macaque V1. Surround/mask stimuli provide an estimate of the extent to which V1 neurons are affected by normalization, which was compared against the effects of spatial top-down attention. For some attention/surround effect comparisons, the strength of attentional modulation was correlated with the strength of surround modulation, suggesting that attention and surround/mask stimulation (i.e. normalization) might use a common mechanism. To explore this in detail, we fitted multiplicative and additive models of attention to our data. In one class of models, attention contributed to normalization mechanisms, whereas in a different class of models it did not. Model selection based on Akaike's and Bayesian information criteria demonstrated that in most cells the effects of attention were best described by models where attention did not contribute to normalization mechanisms. This demonstrates that attentional influences on neuronal responses in primary visual cortex often bypass normalization mechanisms. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
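The kind of model comparison described here, multiplicative versus additive attentional modulation selected by an information criterion, can be sketched on synthetic tuning-curve data (all values below are illustrative, not the paper's recordings):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(-90, 90, 13)
base = 10 + 30 * np.exp(-theta**2 / (2 * 25**2))     # unattended tuning curve
attended = 1.3 * base + rng.normal(0, 1, base.size)  # ground truth: x1.3 gain

def aic(pred, obs, k):
    """AIC for a Gaussian-error fit with k free parameters."""
    rss = np.sum((obs - pred) ** 2)
    return obs.size * np.log(rss / obs.size) + 2 * k

# Least-squares fits of a single gain g (multiplicative) vs offset a (additive).
g = np.sum(base * attended) / np.sum(base ** 2)
a = np.mean(attended - base)
aic_mult = aic(g * base, attended, k=1)
aic_add = aic(base + a, attended, k=1)
print("multiplicative preferred:", aic_mult < aic_add)  # True
```

Because the synthetic data were generated multiplicatively, the information criterion correctly prefers the gain model; the paper applies the same logic, with richer model classes, to decide per cell whether attention acts through the normalization circuit.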

  2. Visual attention and flexible normalization pools

    PubMed Central

    Schwartz, Odelia; Coen-Cagli, Ruben

    2013-01-01

    Attention to a spatial location or feature in a visual scene can modulate the responses of cortical neurons and affect perceptual biases in illusions. We add attention to a cortical model of spatial context based on a well-founded account of natural scene statistics. The cortical model amounts to a generalized form of divisive normalization, in which the surround is in the normalization pool of the center target only if they are considered statistically dependent. Here we propose that attention influences this computation by accentuating the neural unit activations at the attended location, and that the amount of attentional influence of the surround on the center thus depends on whether center and surround are deemed in the same normalization pool. The resulting model extends a recent divisive normalization model of attention (Reynolds & Heeger, 2009). We simulate cortical surround orientation experiments with attention and show that the flexible model is suitable for capturing additional data and makes nontrivial testable predictions. PMID:23345413
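The divisive normalization of attention that this model extends (Reynolds & Heeger, 2009) can be sketched in a minimal one-dimensional form; the attention gains and semisaturation constant below are illustrative:

```python
import numpy as np

def normalized_response(excitation, attn_field, sigma=1.0):
    """R = (A*E) / (sum over the normalization pool of A*E + sigma):
    attention scales the excitatory drive before divisive normalization."""
    drive = attn_field * excitation
    return drive / (drive.sum() + sigma)

E = np.array([1.0, 4.0, 2.0])            # stimulus drive at three locations
uniform = np.ones(3)                     # no attentional bias
attend_1 = np.array([1.0, 2.0, 1.0])     # extra gain at location 1

r_base = normalized_response(E, uniform)
r_attn = normalized_response(E, attend_1)
print(r_attn[1] > r_base[1], r_attn[0] < r_base[0])  # True True
```

Because the attended unit's boosted drive also enters the shared denominator, attention enhances the attended location while suppressing unattended ones; the flexible extension in this paper additionally gates which units join the pool based on inferred statistical dependence.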

  3. Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases

    PubMed Central

    Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N.

    2016-01-01

    A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, thus providing a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed. PMID:26858668

  4. Distinctive Correspondence Between Separable Visual Attention Functions and Intrinsic Brain Networks

    PubMed Central

    Ruiz-Rizzo, Adriana L.; Neitzel, Julia; Müller, Hermann J.; Sorg, Christian; Finke, Kathrin

    2018-01-01

    Separable visual attention functions are assumed to rely on distinct but interacting neural mechanisms. Bundesen's “theory of visual attention” (TVA) allows the mathematical estimation of independent parameters that characterize individuals' visual attentional capacity (i.e., visual processing speed and visual short-term memory storage capacity) and selectivity functions (i.e., top-down control and spatial laterality). However, it is unclear whether these parameters distinctively map onto different brain networks obtained from intrinsic functional connectivity, which organizes slowly fluctuating ongoing brain activity. In our study, 31 demographically homogeneous healthy young participants performed whole- and partial-report tasks and underwent resting-state functional magnetic resonance imaging (rs-fMRI). Report accuracy was modeled using TVA to estimate, individually, the four TVA parameters. Networks encompassing cortical areas relevant for visual attention were derived from independent component analysis of rs-fMRI data: visual, executive control, right and left frontoparietal, and ventral and dorsal attention networks. Two TVA parameters were mapped onto particular functional networks. First, participants with higher (vs. lower) visual processing speed showed lower functional connectivity within the ventral attention network. Second, participants with more (vs. less) efficient top-down control showed higher functional connectivity within the dorsal attention network and lower functional connectivity within the visual network. Additionally, higher performance was associated with higher functional connectivity between networks: specifically, between the ventral attention and right frontoparietal networks for visual processing speed, and between the visual and executive control networks for top-down control. The higher inter-network functional connectivity was related to lower intra-network connectivity. 
These results demonstrate that separable visual attention parameters that are assumed to constitute relatively stable traits correspond distinctly to the functional connectivity both within and between particular functional networks. This implies that individual differences in basic attention functions are represented by differences in the coherence of slowly fluctuating brain activity. PMID:29662444

  5. Measuring and modeling salience with the theory of visual attention.

    PubMed

    Krüger, Alexander; Tünnermann, Jan; Scharlau, Ingrid

    2017-08-01

    For almost three decades, the theory of visual attention (TVA) has been successful in mathematically describing and explaining a wide variety of phenomena in visual selection and recognition with high quantitative precision. Interestingly, the influence of feature contrast on attention has been included in TVA only recently, although it has been extensively studied outside the TVA framework. The present approach further develops this extension of TVA's scope by measuring and modeling salience. An empirical measure of salience is achieved by linking different (orientation and luminance) contrasts to a TVA parameter. In the modeling part, the function relating feature contrasts to salience is described mathematically and tested against alternatives by Bayesian model comparison. This model comparison reveals that the power function is an appropriate model of salience growth in the dimensions of orientation and luminance contrast. Furthermore, if contrasts from the two dimensions are combined, their saliences add linearly.
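The selected model, power-law growth of salience with feature contrast plus additive combination across dimensions, can be sketched directly; the exponents and scale factors below are placeholders, not the fitted values from the paper:

```python
def salience(contrast, a, b):
    """Power-law growth of salience with feature contrast: a * contrast**b."""
    return a * contrast ** b

def combined_salience(orient_contrast, lum_contrast,
                      a_o=1.0, b_o=0.5, a_l=0.8, b_l=0.7):
    # Hypothetical per-dimension parameters; contributions from the two
    # dimensions combine additively, as the model comparison favoured.
    return salience(orient_contrast, a_o, b_o) + salience(lum_contrast, a_l, b_l)

print(round(combined_salience(0.25, 0.0), 3))  # orientation alone: 0.5
print(round(combined_salience(0.25, 1.0), 3))  # plus luminance term: 1.3
```

An exponent below 1 gives the compressive growth typical of such fits: salience rises steeply at low contrast and saturates at high contrast.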

  6. Superior colliculus neurons encode a visual saliency map during free viewing of natural dynamic video

    NASA Astrophysics Data System (ADS)

    White, Brian J.; Berg, David J.; Kan, Janis Y.; Marino, Robert A.; Itti, Laurent; Munoz, Douglas P.

    2017-01-01

    Models of visual attention postulate the existence of a saliency map whose function is to guide attention and gaze to the most conspicuous regions in a visual scene. Although cortical representations of saliency have been reported, there is mounting evidence for a subcortical saliency mechanism, which pre-dates the evolution of neocortex. Here, we conduct a strong test of the saliency hypothesis by comparing the output of a well-established computational saliency model with the activation of neurons in the primate superior colliculus (SC), a midbrain structure associated with attention and gaze, while monkeys watched video of natural scenes. We find that the activity of SC superficial visual-layer neurons (SCs), specifically, is well-predicted by the model. This saliency representation is unlikely to be inherited from fronto-parietal cortices, which do not project to SCs, but may be computed in SCs and relayed to other areas via tectothalamic pathways.

  7. Attentional bias to food-related visual cues: is there a role in obesity?

    PubMed

    Doolan, K J; Breslin, G; Hanna, D; Gallagher, A M

    2015-02-01

    The incentive sensitisation model of obesity suggests that modification of the dopaminergic associated reward systems in the brain may result in increased awareness of food-related visual cues present in the current food environment. Having a heightened awareness of these visual food cues may impact on food choices and eating behaviours with those being most aware of or demonstrating greater attention to food-related stimuli potentially being at greater risk of overeating and subsequent weight gain. To date, research related to attentional responses to visual food cues has been both limited and conflicting. Such inconsistent findings may in part be explained by the use of different methodological approaches to measure attentional bias and the impact of other factors such as hunger levels, energy density of visual food cues and individual eating style traits that may influence visual attention to food-related cues outside of weight status alone. This review examines the various methodologies employed to measure attentional bias with a particular focus on the role that attentional processing of food-related visual cues may have in obesity. Based on the findings of this review, it appears that it may be too early to clarify the role visual attention to food-related cues may have in obesity. Results however highlight the importance of considering the most appropriate methodology to use when measuring attentional bias and the characteristics of the study populations targeted while interpreting results to date and in designing future studies.

  8. Visual attention in egocentric field-of-view using RGB-D data

    NASA Astrophysics Data System (ADS)

    Olesova, Veronika; Benesova, Wanda; Polatsek, Patrik

    2017-03-01

    Most of the existing solutions predicting visual attention focus solely on referenced 2D images and disregard any depth information. This has always been a weak point, since depth is an inseparable part of biological vision. This paper presents a novel method of saliency map generation based on the results of our experiments with egocentric visual attention and an investigation of its correlation with perceived depth. We propose a model that predicts attention using a superpixel representation, with the assumption that contrast objects are usually salient and have a sparser spatial distribution of superpixels than their background. To incorporate depth information into this model, we propose three different depth techniques. The evaluation is done on our new RGB-D dataset created with SMI eye-tracker glasses and a KinectV2 device.

  9. Feature-selective attention enhances color signals in early visual areas of the human brain.

    PubMed

    Müller, M M; Andersen, S; Trujillo, N J; Valdés-Sosa, P; Malinowski, P; Hillyard, S A

    2006-09-19

    We used an electrophysiological measure of selective stimulus processing (the steady-state visual evoked potential, SSVEP) to investigate feature-specific attention to color cues. Subjects viewed a display consisting of spatially intermingled red and blue dots that continually shifted their positions at random. The red and blue dots flickered at different frequencies and thereby elicited distinguishable SSVEP signals in the visual cortex. Paying attention selectively to either the red or blue dot population produced an enhanced amplitude of its frequency-tagged SSVEP, which was localized by source modeling to early levels of the visual cortex. A control experiment showed that this selection was based on color rather than flicker frequency cues. This signal amplification of attended color items provides an empirical basis for the rapid identification of feature conjunctions during visual search, as proposed by "guided search" models.
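Frequency tagging of this kind can be sketched with a Fourier analysis: each dot population flickers at its own frequency, and attention-driven amplification of one population appears as a larger spectral amplitude at its tag. All signal parameters below are synthetic:

```python
import numpy as np

fs, dur = 500, 2.0                     # sample rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
f_red, f_blue = 7.5, 12.0              # flicker frequencies of the two dot sets
rng = np.random.default_rng(0)

# Attending "red" amplifies its tagged response (gain 2 vs 1), plus EEG noise.
eeg = 2.0 * np.sin(2 * np.pi * f_red * t) + 1.0 * np.sin(2 * np.pi * f_blue * t)
eeg += rng.normal(0, 0.5, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size   # amplitude per frequency bin
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp_red = spectrum[np.argmin(np.abs(freqs - f_red))]
amp_blue = spectrum[np.argmin(np.abs(freqs - f_blue))]
print(amp_red > amp_blue)  # True: the attended tag dominates
```

A 2 s window makes both tags land on exact FFT bins (0.5 Hz resolution), so each population's SSVEP amplitude can be read off without spectral leakage.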

  10. Associative learning in baboons (Papio papio) and humans (Homo sapiens): species differences in learned attention to visual features.

    PubMed

    Fagot, J; Kruschke, J K; Dépy, D; Vauclair, J

    1998-10-01

    We examined attention shifting in baboons and humans during the learning of visual categories. Within a conditional matching-to-sample task, participants of the two species sequentially learned two two-feature categories which shared a common feature. Results showed that humans encoded both features of the initially learned category, but predominantly only the distinctive feature of the subsequently learned category. Although baboons initially encoded both features of the first category, they ultimately retained only the distinctive features of each category. Empirical data from the two species were analyzed with the 1996 ADIT connectionist model of Kruschke. ADIT fits the baboon data when the attentional shift rate is zero, and the human data when the attentional shift rate is not zero. These empirical and modeling results suggest species differences in learned attention to visual features.

  11. Stimulus Dependence of Correlated Variability across Cortical Areas

    PubMed Central

    Cohen, Marlene R.

    2016-01-01

    The way that correlated trial-to-trial variability between pairs of neurons in the same brain area (termed spike count or noise correlation, rSC) depends on stimulus or task conditions can constrain models of cortical circuits and of the computations performed by networks of neurons (Cohen and Kohn, 2011). In visual cortex, rSC tends not to depend on stimulus properties (Kohn and Smith, 2005; Huang and Lisberger, 2009) but does depend on cognitive factors like visual attention (Cohen and Maunsell, 2009; Mitchell et al., 2009). However, many neurons across multiple visual areas respond to any given visual stimulus or contribute to any given perceptual decision, and the way that information from multiple areas is combined to guide perception is unknown. To gain insight into these issues, we recorded simultaneously from neurons in two areas of visual cortex (primary visual cortex, V1, and the middle temporal area, MT) while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. Correlations across, but not within, areas depend on stimulus direction and the presence of a second stimulus, and attention has opposite effects on correlations within and across areas. This observed pattern of cross-area correlations is predicted by a normalization model in which MT units sum V1 inputs that are passed through a divisive nonlinearity. Together, our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. SIGNIFICANCE STATEMENT Correlations in the responses of pairs of neurons within the same cortical area have been a subject of growing interest in systems neuroscience. However, correlated variability between different cortical areas is likely just as important. 
We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. The observed pattern of cross-area correlations was predicted by a simple normalization model. Our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. PMID:27413163
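
    The normalization model mentioned here (MT units summing V1 inputs passed through a divisive nonlinearity) can be sketched in a few lines; the rates, weights, and semisaturation constant below are illustrative, not fitted values:

```python
import numpy as np

def mt_response(v1_rates, weights, sigma=10.0):
    """MT unit as a weighted sum of V1 inputs passed through a divisive
    nonlinearity (normalization): drive / (sigma + total V1 activity).
    sigma is an illustrative semisaturation constant."""
    drive = np.dot(weights, v1_rates)
    return drive / (sigma + v1_rates.sum())

# Adding a second (non-preferred) stimulus enlarges the normalization pool,
# so the response to both stimuli is sub-additive, as normalization predicts.
pref = np.array([40.0, 5.0])   # V1 rates to a preferred-direction stimulus
null = np.array([5.0, 40.0])   # V1 rates to a second, non-preferred stimulus
w = np.array([1.0, 0.1])       # illustrative V1 -> MT weights
r_pref = mt_response(pref, w)
r_both = mt_response(pref + null, w)
```

    Here the superimposed second stimulus actually lowers the MT response below its value for the preferred stimulus alone, the classic signature of divisive normalization.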

  12. Multiperson visual focus of attention from head pose and meeting contextual cues.

    PubMed

    Ba, Sileye O; Odobez, Jean-Marc

    2011-01-01

    This paper introduces a novel contextual model for the recognition of people's visual focus of attention (VFOA) in meetings from audio-visual perceptual cues. More specifically, instead of independently recognizing the VFOA of each meeting participant from his own head pose, we propose to jointly recognize the participants' visual attention in order to introduce context-dependent interaction models that relate to group activity and the social dynamics of communication. Meeting contextual information is represented by the location of people, conversational events identifying floor holding patterns, and a presentation activity variable. By modeling the interactions between the different contexts and their combined and sometimes contradictory impact on the gazing behavior, our model allows us to handle VFOA recognition in difficult task-based meetings involving artifacts, presentations, and moving people. We validated our model through rigorous evaluation on a publicly available and challenging data set of 12 real meetings (5 hours of data). The results demonstrated that the integration of the presentation and conversation dynamical context using our model can lead to significant performance improvements.

  13. Why people see things that are not there: a novel Perception and Attention Deficit model for recurrent complex visual hallucinations.

    PubMed

    Collerton, Daniel; Perry, Elaine; McKeith, Ian

    2005-12-01

    As many as two million people in the United Kingdom repeatedly see people, animals, and objects that have no objective reality. Hallucinations on the border of sleep, dementing illnesses, delirium, eye disease, and schizophrenia account for 90% of these. The remainder have rarer disorders. We review existing models of recurrent complex visual hallucinations (RCVH) in the awake person, including cortical irritation, cortical hyperexcitability and cortical release, top-down activation, misperception, dream intrusion, and interactive models. We provide evidence that these can neither fully account for the phenomenology of RCVH, nor for variations in the frequency of RCVH in different disorders. We propose a novel Perception and Attention Deficit (PAD) model for RCVH. A combination of impaired attentional binding and poor sensory activation of a correct proto-object, in conjunction with a relatively intact scene representation, bias perception to allow the intrusion of a hallucinatory proto-object into a scene perception. Incorporation of this image into a context-specific hallucinatory scene representation accounts for repetitive hallucinations. We suggest that these impairments are underpinned by disturbances in a lateral frontal cortex-ventral visual stream system. We show how the frequency of RCVH in different diseases is related to the coexistence of attentional and visual perceptual impairments; how attentional and perceptual processes can account for their phenomenology; and that diseases and other states with high rates of RCVH have cholinergic dysfunction in both frontal cortex and the ventral visual stream. Several tests of the model are indicated, together with a number of treatment options that it generates.

  14. Signal detection evidence for limited capacity in visual search

    PubMed Central

    Fencsik, David E.; Flusberg, Stephen J.; Horowitz, Todd S.; Wolfe, Jeremy M.

    2014-01-01

    The nature of capacity limits (if any) in visual search has been a topic of controversy for decades. In 30 years of work, researchers have attempted to distinguish between two broad classes of visual search models. Attention-limited models have proposed two stages of perceptual processing: an unlimited-capacity preattentive stage, and a limited-capacity selective attention stage. Conversely, noise-limited models have proposed a single, unlimited-capacity perceptual processing stage, with decision processes influenced only by stochastic noise. Here, we use signal detection methods to test a strong prediction of attention-limited models. In standard attention-limited models, performance of some searches (feature searches) should only be limited by a preattentive stage. Other search tasks (e.g., spatial configuration search for a “2” among “5”s) should be additionally limited by an attentional bottleneck. We equated average accuracies for a feature and a spatial configuration search over set sizes of 1–8 for briefly presented stimuli. The strong prediction of attention-limited models is that, given overall equivalence in performance, accuracy should be better on the spatial configuration search than on the feature search for set size 1, and worse for set size 8. We confirm this crossover interaction and show that it is problematic for at least one class of one-stage decision models. PMID:21901574
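
    The noise-limited alternative that the authors test against can be illustrated with a standard max-rule simulation: one noisy sample per item, unlimited capacity, and accuracy that falls with set size from decision noise alone. Parameters are illustrative:

```python
import numpy as np

def max_rule_accuracy(d_prime, set_size, criterion, n_trials=100_000, seed=1):
    """Monte-Carlo sketch of an unlimited-capacity, noise-limited searcher:
    every item yields one noisy strength sample; respond 'present' iff the
    maximum sample exceeds the criterion. With more items, more distractor
    samples can exceed the criterion, so accuracy drops without any
    attentional bottleneck. Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n_trials, set_size))   # target-absent trials
    present = noise.copy()
    present[:, 0] += d_prime                            # one item is the target
    hits = (present.max(axis=1) > criterion).mean()
    correct_rejections = (noise.max(axis=1) <= criterion).mean()
    return (hits + correct_rejections) / 2              # 50% target prevalence

a1 = max_rule_accuracy(2.0, 1, 1.0)
a8 = max_rule_accuracy(2.0, 8, 1.0)
```

    The crossover test in the abstract asks whether such single-stage models can also capture the set-size-1 advantage for configuration search; this sketch only shows the baseline set-size cost they all share.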

  15. Can Attention be Divided Between Perceptual Groups?

    NASA Technical Reports Server (NTRS)

    McCann, Robert S.; Foyle, David C.; Johnston, James C.; Hart, Sandra G. (Technical Monitor)

    1994-01-01

    Previous work using Head-Up Displays (HUDs) suggests that the visual system parses the HUD and the outside world into distinct perceptual groups, with attention deployed sequentially to first one group and then the other. New experiments show that both groups can be processed in parallel in a divided attention search task, even though subjects have just processed a stimulus in one perceptual group or the other. Implications for models of visual attention will be discussed.

  16. A Feedback Model of Attention Explains the Diverse Effects of Attention on Neural Firing Rates and Receptive Field Structure.

    PubMed

    Miconi, Thomas; VanRullen, Rufin

    2016-02-01

    Visual attention has many effects on neural responses, producing complex changes in firing rates, as well as modifying the structure and size of receptive fields, both in topological and feature space. Several existing models of attention suggest that these effects arise from selective modulation of neural inputs. However, anatomical and physiological observations suggest that attentional modulation targets higher levels of the visual system (such as V4 or MT) rather than input areas (such as V1). Here we propose a simple mechanism that explains how a top-down attentional modulation, falling on higher visual areas, can produce the observed effects of attention on neural responses. Our model requires only the existence of modulatory feedback connections between areas, and short-range lateral inhibition within each area. Feedback connections redistribute the top-down modulation to lower areas, which in turn alters the inputs of other higher-area cells, including those that did not receive the initial modulation. This produces firing rate modulations and receptive field shifts. Simultaneously, short-range lateral inhibition between neighboring cells produces competitive effects that are automatically scaled to receptive field size in any given area. Our model reproduces the observed attentional effects on response rates (response gain, input gain, biased competition automatically scaled to receptive field size) and receptive field structure (shifts and resizing of receptive fields both spatially and in complex feature space), without modifying model parameters. Our model also makes the novel prediction that attentional effects on response curves should shift from response gain to contrast gain as the spatial focus of attention drifts away from the studied cell.

  17. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    PubMed

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.

  18. Identifying the Computational Requirements of an Integrated Top-Down-Bottom-Up Model for Overt Visual Attention within an Active Vision System

    PubMed Central

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as ‘active vision’, to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of ‘where’ and ‘what’ information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate ‘active’ visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a ‘priority map’. PMID:23437044
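
    Requirements 4-6 in the list above (convergence onto a centralized priority map, a saccade threshold, and task relevance expressed as an excitation/inhibition balance) can be sketched as follows; this is a hypothetical reading of the requirements, not the authors' robot implementation:

```python
import numpy as np

def next_fixation(salience, relevance, ior, theta=0.5):
    """Sketch of a priority-map stage: bottom-up salience and top-down task
    relevance converge multiplicatively, recently visited locations are
    suppressed by inhibition of return (ior in [0, 1]), and a saccade is
    triggered only if the peak priority crosses the threshold theta.
    All parameter names and the combination rule are illustrative."""
    priority = salience * relevance * (1.0 - ior)
    peak = np.unravel_index(np.argmax(priority), priority.shape)
    return peak if priority[peak] >= theta else None
```

    Returning None models the no-saccade case: when every candidate location is either irrelevant, weak, or recently inhibited, the threshold function withholds the saccade action.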

  19. A review of the findings and theories on surface size effects on visual attention

    PubMed Central

    Peschel, Anne O.; Orquin, Jacob L.

    2013-01-01

    That surface size has an impact on attention has been well-known in advertising research for almost a century; however, theoretical accounts of this effect have been sparse. To address this issue, we review studies on surface size effects on eye movements in this paper. While most studies find that large objects are more likely to be fixated, receive more fixations, and are fixated faster than small objects, a comprehensive explanation of this effect is still lacking. To bridge the theoretical gap, we relate the findings from this review to three theories of surface size effects suggested in the literature: a linear model based on the assumption of random fixations (Lohse, 1997), a theory of surface size as visual saliency (Pieters et al., 2007), and a theory based on competition for attention (CA; Janiszewski, 1998). We furthermore suggest a fourth model, demand for attention, which we derive from the theory of CA by revising the underlying model assumptions. In order to test the models against each other, we reanalyze data from an eye tracking study investigating surface size and saliency effects on attention. The reanalysis revealed little support for the first three theories while the demand for attention model showed a much better alignment with the data. We conclude that surface size effects may best be explained as an increase in object signal strength which depends on object size, number of objects in the visual scene, and object distance to the center of the scene. Our findings suggest that advertisers should take into account how objects in the visual scene interact in order to optimize attention to, for instance, brands and logos. PMID:24367343

  20. A review of the findings and theories on surface size effects on visual attention.

    PubMed

    Peschel, Anne O; Orquin, Jacob L

    2013-12-09

    That surface size has an impact on attention has been well-known in advertising research for almost a century; however, theoretical accounts of this effect have been sparse. To address this issue, we review studies on surface size effects on eye movements in this paper. While most studies find that large objects are more likely to be fixated, receive more fixations, and are fixated faster than small objects, a comprehensive explanation of this effect is still lacking. To bridge the theoretical gap, we relate the findings from this review to three theories of surface size effects suggested in the literature: a linear model based on the assumption of random fixations (Lohse, 1997), a theory of surface size as visual saliency (Pieters et al., 2007), and a theory based on competition for attention (CA; Janiszewski, 1998). We furthermore suggest a fourth model, demand for attention, which we derive from the theory of CA by revising the underlying model assumptions. In order to test the models against each other, we reanalyze data from an eye tracking study investigating surface size and saliency effects on attention. The reanalysis revealed little support for the first three theories while the demand for attention model showed a much better alignment with the data. We conclude that surface size effects may best be explained as an increase in object signal strength which depends on object size, number of objects in the visual scene, and object distance to the center of the scene. Our findings suggest that advertisers should take into account how objects in the visual scene interact in order to optimize attention to, for instance, brands and logos.
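
    The concluding claim (signal strength grows with object size and falls with the number of competing objects and with distance to the scene center) can be given a hypothetical functional form. The formula and constant below are our assumptions for illustration, not the authors' fitted model:

```python
def signal_strength(size, dist_to_center, n_objects, k=1.0):
    """Hypothetical formalization of the 'demand for attention' idea:
    an object's signal strength increases with its surface size and
    decreases with its distance to the scene center and with the number
    of competing objects in the scene. Functional form and k are assumed."""
    return k * size / ((1.0 + dist_to_center) * n_objects)
```

    Any monotone form with these three dependencies would serve equally well as an illustration; the review itself argues only for the qualitative directions of the effects.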

  1. Happy but still focused: failures to find evidence for a mood-induced widening of visual attention.

    PubMed

    Bruyneel, Lynn; van Steenbergen, Henk; Hommel, Bernhard; Band, Guido P H; De Raedt, Rudi; Koster, Ernst H W

    2013-05-01

    In models of affect and cognition, it is held that positive affect broadens the scope of attention. Consistent with this claim, previous research has indeed suggested that positive affect is associated with impaired selective attention as evidenced by increased interference of spatially distant distractors. However, several recent findings cast doubt on the reliability of this observation. In the present study, we examined whether selective attention in a visual flanker task is influenced by positive mood induction. Across three experiments, positive affect consistently failed to exert any impact on selective attention. The implications of this null-finding for theoretical models of affect and cognition are discussed.

  2. Attentional Episodes in Visual Perception

    ERIC Educational Resources Information Center

    Wyble, Brad; Potter, Mary C.; Bowman, Howard; Nieuwenstein, Mark

    2011-01-01

    Is one's temporal perception of the world truly as seamless as it appears? This article presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (episodic simultaneous type/serial token model; Wyble, Bowman, & Nieuwenstein, 2009). Breaks between these episodes are punctuated by periods…

  3. Neural practice effect during cross-modal selective attention: Supra-modal and modality-specific effects.

    PubMed

    Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi

    2018-05-16

    Practice and experience gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In a natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, whether the practice effect during cross-modal selective attention is supra-modal or modality-specific, and whether it shows the same modality preferences as the visual dominance effect in the multisensory environment. To answer these two questions, we adopted a cross-modal selective attention paradigm in conjunction with a hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention adapted behavior with practice more flexibly than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., supra-modal mechanisms. On the other hand, functional decoupling between the auditory and visual systems was observed as practice progressed, which varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and ventral visual streams during auditory attention, but only from the ventral visual stream during visual attention. 
The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Research on Heads Up and Helmet Mounted Symbology

    DTIC Science & Technology

    2000-03-30

    used to research and develop HUD symbology. 2.2.1 Attention as a filter: Broadbent's (1958) filter model is one of the earliest attentional metaphors... that attention plays a role in favouring or enhancing processing at locations in the visual field. Although the filter model has been modified and...

  5. Eccentricity effects in vision and attention.

    PubMed

    Staugaard, Camilla Funch; Petersen, Anders; Vangkilde, Signe

    2016-11-01

    Stimulus eccentricity affects visual processing in multiple ways. Performance on a visual task is often better when target stimuli are presented near or at the fovea compared to the retinal periphery. For instance, reaction times and error rates are often reported to increase with increasing eccentricity. Such findings have been interpreted as purely visual, reflecting neurophysiological differences in central and peripheral vision, as well as attentional, reflecting a central bias in the allocation of attentional resources. Other findings indicate that in some cases, information from the periphery is preferentially processed. Specifically, it has been suggested that visual processing speed increases with increasing stimulus eccentricity, and that this positive correlation is reduced, but not eliminated, when the amount of cortex activated by a stimulus is kept constant by magnifying peripheral stimuli (Carrasco et al., 2003). In this study, we investigated effects of eccentricity on visual attentional capacity with and without magnification, using computational modeling based on Bundesen's (1990) theory of visual attention. Our results suggest a general decrease in attentional capacity with increasing stimulus eccentricity, irrespective of magnification. We discuss these results in relation to the physiology of the visual system, the use of different paradigms for investigating visual perception across the visual field, and the use of different stimulus materials (e.g. Gabor patches vs. letters). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
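
    Bundesen's (1990) theory of visual attention, on which the modeling here is based, assigns each object x a processing rate for perceptual category i: v(x, i) = eta(x, i) * beta_i * w_x / sum_z w_z, where the attentional weights are w_x = sum_j eta(x, j) * pi_j. A minimal sketch with illustrative values:

```python
import numpy as np

def tva_rates(eta, beta, pi):
    """Rate equation of Bundesen's (1990) Theory of Visual Attention:
    v(x, i) = eta(x, i) * beta_i * w_x / sum_z w_z,
    with attentional weights w_x = sum_j eta(x, j) * pi_j.
    eta: (objects x categories) sensory evidence; beta: response biases;
    pi: pertinence (attentional priority) values."""
    w = eta @ pi                                  # attentional weight per object
    return eta * beta[None, :] * (w / w.sum())[:, None]

# Illustrative values: a high-evidence foveal letter vs. a weaker peripheral one.
eta = np.array([[5.0, 1.0],
                [2.0, 0.5]])
v = tva_rates(eta, beta=np.array([0.8, 0.2]), pi=np.array([1.0, 0.0]))
```

    Because the weights w_x divide a fixed total, an eccentricity-dependent drop in sensory evidence eta translates directly into a smaller share of processing capacity, which is the kind of capacity decrease the study reports.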

  6. Attention reduces spatial uncertainty in human ventral temporal cortex.

    PubMed

    Kay, Kendrick N; Weiner, Kevin S; Grill-Spector, Kalanit

    2015-03-02

    Ventral temporal cortex (VTC) is the latest stage of the ventral "what" visual pathway, which is thought to code the identity of a stimulus regardless of its position or size [1, 2]. Surprisingly, recent studies show that position information can be decoded from VTC [3-5]. However, the computational mechanisms by which spatial information is encoded in VTC are unknown. Furthermore, how attention influences spatial representations in human VTC is also unknown because the effect of attention on spatial representations has only been examined in the dorsal "where" visual pathway [6-10]. Here, we fill these significant gaps in knowledge using an approach that combines functional magnetic resonance imaging and sophisticated computational methods. We first develop a population receptive field (pRF) model [11, 12] of spatial responses in human VTC. Consisting of spatial summation followed by a compressive nonlinearity, this model accurately predicts responses of individual voxels to stimuli at any position and size, explains how spatial information is encoded, and reveals a functional hierarchy in VTC. We then manipulate attention and use our model to decipher the effects of attention. We find that attention to the stimulus systematically and selectively modulates responses in VTC, but not early visual areas. Locally, attention increases eccentricity, size, and gain of individual pRFs, thereby increasing position tolerance. However, globally, these effects reduce uncertainty regarding stimulus location and actually increase position sensitivity of distributed responses across VTC. These results demonstrate that attention actively shapes and enhances spatial representations in the ventral visual pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.
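
    The pRF model described here (spatial summation followed by a compressive nonlinearity) can be sketched directly; the grid size, pRF parameters, and exponent below are illustrative, not fitted values:

```python
import numpy as np

def css_prf_response(stim, x0, y0, sigma, g=1.0, n=0.5):
    """Compressive spatial summation pRF model: a 2-D Gaussian (center x0, y0,
    width sigma) weights the binary stimulus aperture, and the weighted sum is
    passed through a static power-law nonlinearity with exponent n < 1.
    Parameters here are illustrative."""
    h, w = stim.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gauss = np.exp(-((xs - x0)**2 + (ys - y0)**2) / (2 * sigma**2))
    return g * (stim * gauss).sum() ** n

# Compression: quadrupling the stimulated area less than doubles the response.
small = np.zeros((64, 64)); small[28:36, 28:36] = 1.0
big = np.zeros((64, 64)); big[24:40, 24:40] = 1.0
r_small = css_prf_response(small, 32, 32, 10)
r_big = css_prf_response(big, 32, 32, 10)
```

    This sub-linear summation is what makes voxel responses tolerant to stimulus position and size while still carrying decodable spatial information.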

  7. Attention reduces spatial uncertainty in human ventral temporal cortex

    PubMed Central

    Kay, Kendrick N.; Weiner, Kevin S.; Grill-Spector, Kalanit

    2014-01-01

    SUMMARY Ventral temporal cortex (VTC) is the latest stage of the ventral ‘what’ visual pathway, which is thought to code the identity of a stimulus regardless of its position or size [1, 2]. Surprisingly, recent studies show that position information can be decoded from VTC [3–5]. However, the computational mechanisms by which spatial information is encoded in VTC are unknown. Furthermore, how attention influences spatial representations in human VTC is also unknown because the effect of attention on spatial representations has only been examined in the dorsal ‘where’ visual pathway [6–10]. Here we fill these significant gaps in knowledge using an approach that combines functional magnetic resonance imaging and sophisticated computational methods. We first develop a population receptive field (pRF) model [11, 12] of spatial responses in human VTC. Consisting of spatial summation followed by a compressive nonlinearity, this model accurately predicts responses of individual voxels to stimuli at any position and size, explains how spatial information is encoded, and reveals a functional hierarchy in VTC. We then manipulate attention and use our model to decipher the effects of attention. We find that attention to the stimulus systematically and selectively modulates responses in VTC, but not early visual areas. Locally, attention increases eccentricity, size, and gain of individual pRFs, thereby increasing position tolerance. However, globally, these effects reduce uncertainty regarding stimulus location and actually increase position sensitivity of distributed responses across VTC. These results demonstrate that attention actively shapes and enhances spatial representations in the ventral visual pathway. PMID:25702580

  8. Simulating Visual Attention Allocation of Pilots in an Advanced Cockpit Environment

    NASA Technical Reports Server (NTRS)

    Frische, F.; Osterloh, J.-P.; Luedtke, A.

    2011-01-01

    This paper describes the results of experiments conducted with human line pilots and a cognitive pilot model during interaction with a new 4D Flight Management System (FMS). The aim of these experiments was to gather human pilot behavior data in order to calibrate the behavior of the model. Human behavior is mainly triggered by visual perception. Thus, the main objective was to set up a profile of human pilots' visual attention allocation in a cockpit environment containing the new FMS. We first performed statistical analyses of eye-tracker data and then compared our results to common results of similar analyses in standard cockpit environments. The comparison showed a significant influence of the new system on the visual performance of human pilots. Furthermore, analyses of the pilot model's visual performance were performed. A comparison to human pilots' visual performance revealed important potential for improvement.

  9. A Biophysical Neural Model To Describe Spatial Visual Attention

    NASA Astrophysics Data System (ADS)

    Hugues, Etienne; José, Jorge V.

    2008-02-01

    Visual scenes contain enormous amounts of spatial and temporal information that are transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of neural activity in the visual area known as V4 when the animal pays attention to a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations without attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory inputs, and that these levels should be high, as they are in vivo. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.

  10. A Biophysical Neural Model To Describe Spatial Visual Attention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hugues, Etienne; Jose, Jorge V.

    2008-02-14

    Visual scenes carry enormous amounts of spatial and temporal information that are transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of neural activity in the visual area known as V4 when the animal pays attention to a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations in the absence of attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory input, and that these levels should be high, as they are in vivo. Next, we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.

  11. Attentional Selection in Object Recognition

    DTIC Science & Technology

    1993-02-01

    ...order. It also affects the choice of strategies in both the filtering and arbiter stages. The set... such processing. In Treisman's model this was hidden in the concept of the selection filter. Later computational models of attention tried to... This thesis presents a novel approach to the selection problem by proposing a computational model of visual attentional selection as a paradigm for...

  12. Neurocognitive Predictors of Mathematical Processing in School-Aged Children with Spina Bifida and Their Typically Developing Peers: Attention, Working Memory, and Fine Motor Skills

    PubMed Central

    Raghubar, Kimberly P.; Barnes, Marcia A.; Dennis, Maureen; Cirino, Paul T.; Taylor, Heather; Landry, Susan

    2015-01-01

    Objective: Math and attention are related in neurobiological and behavioral models of mathematical cognition. This study employed model-driven assessments of attention and math in children with spina bifida myelomeningocele (SBM), who have known math difficulties and specific attentional deficits, to more directly examine putative relations between attention and mathematical processing. The relation of other domain-general abilities and math was also investigated. Method: Participants were 9.5-year-old children with SBM (N = 44) and typically developing children (N = 50). Participants were administered experimental exact and approximate arithmetic tasks, and standardized measures of math fluency and calculation. Cognitive measures included the Attention Network Test (ANT) and standardized measures of fine motor skills, verbal working memory (WM), and visual-spatial WM. Results: Children with SBM performed similarly to peers on exact arithmetic but more poorly on approximate and standardized arithmetic measures. On the ANT, children with SBM differed from controls on orienting attention but not alerting and executive attention. Multiple mediation models showed that: fine motor skills and verbal WM mediated the relation of group to approximate arithmetic; fine motor skills and visual-spatial WM mediated the relation of group to math fluency; and verbal and visual-spatial WM mediated the relation of group to math calculation. Attention was not a significant mediator of the effects of group for any aspect of math in this study. Conclusions: Results are discussed with reference to models of attention, WM, and mathematical cognition. PMID:26011113

  13. Neurocognitive predictors of mathematical processing in school-aged children with spina bifida and their typically developing peers: Attention, working memory, and fine motor skills.

    PubMed

    Raghubar, Kimberly P; Barnes, Marcia A; Dennis, Maureen; Cirino, Paul T; Taylor, Heather; Landry, Susan

    2015-11-01

    Math and attention are related in neurobiological and behavioral models of mathematical cognition. This study employed model-driven assessments of attention and math in children with spina bifida myelomeningocele (SBM), who have known math difficulties and specific attentional deficits, to more directly examine putative relations between attention and mathematical processing. The relation of other domain-general abilities and math was also investigated. Participants were 9.5-year-old children with SBM (n = 44) and typically developing children (n = 50). Participants were administered experimental exact and approximate arithmetic tasks, and standardized measures of math fluency and calculation. Cognitive measures included the Attention Network Test (ANT), and standardized measures of fine motor skills, verbal working memory (WM), and visual-spatial WM. Children with SBM performed similarly to peers on exact arithmetic, but more poorly on approximate and standardized arithmetic measures. On the ANT, children with SBM differed from controls on orienting attention, but not on alerting and executive attention. Multiple mediation models showed that fine motor skills and verbal WM mediated the relation of group to approximate arithmetic; fine motor skills and visual-spatial WM mediated the relation of group to math fluency; and verbal and visual-spatial WM mediated the relation of group to math calculation. Attention was not a significant mediator of the effects of group for any aspect of math in this study. Results are discussed with reference to models of attention, WM, and mathematical cognition. (c) 2015 APA, all rights reserved.

  14. Plain packaging increases visual attention to health warnings on cigarette packs in non-smokers and weekly smokers but not daily smokers.

    PubMed

    Munafò, Marcus R; Roberts, Nicole; Bauld, Linda; Leonards, Ute

    2011-08-01

    Aims: To assess the impact of plain packaging on visual attention towards health warning information on cigarette packs. Design: Mixed-model experimental design, comprising smoking status as a between-subjects factor and package type (branded versus plain) as a within-subjects factor. Setting: University laboratory. Participants: Convenience sample of young adults, comprising non-smokers (n = 15), weekly smokers (n = 14) and daily smokers (n = 14). Measurements: Number of saccades (eye movements) towards health warnings on cigarette packs, to directly index visual attention. Findings: Analysis of variance indicated more eye movements (i.e., greater visual attention) towards health warnings compared to brand information on plain packs versus branded packs. This effect was observed among non-smokers and weekly smokers, but not daily smokers. Conclusions: Among non-smokers and non-daily cigarette smokers, plain packaging appears to increase visual attention towards health warning information and away from brand information. © 2011 The Authors, Addiction © 2011 Society for the Study of Addiction.
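    The saccade-counting measure used in this study can be sketched as a simple area-of-interest (AOI) hit count. The pack layout, coordinates, and gaze points below are hypothetical, purely to illustrate how gaze samples are assigned to the warning versus branding regions:

```python
def count_aoi_hits(fixations, aois):
    """Count gaze points landing inside each rectangular area of interest.
    `fixations` is a list of (x, y) gaze coordinates; `aois` maps a label
    to a rectangle given as (x_min, y_min, x_max, y_max)."""
    counts = {label: 0 for label in aois}
    for x, y in fixations:
        for label, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[label] += 1
    return counts

# Hypothetical pack layout: health warning on the lower half, branding above.
aois = {"warning": (0, 0, 200, 100), "branding": (0, 100, 200, 300)}
fixations = [(50, 40), (120, 80), (60, 220), (90, 30), (150, 260)]
print(count_aoi_hits(fixations, aois))  # {'warning': 3, 'branding': 2}
```

    Comparing such counts across pack types (branded versus plain) is the essence of the dependent measure described above.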

  15. Is Posner's "beam" the same as Treisman's "glue"?: On the relation between visual orienting and feature integration theory.

    PubMed

    Briand, K A; Klein, R M

    1987-05-01

    In the present study we investigated whether the visually allocated "beam" studied by Posner and others is the same visual attentional resource that performs the role of feature integration in Treisman's model. Subjects were cued to attend to a certain spatial location by a visual cue, and performance at expected and unexpected stimulus locations was compared. Subjects searched for a target letter (R) with distractor letters that either could give rise to illusory conjunctions (PQ) or could not (PB). Results from three separate experiments showed that orienting attention in response to central cues (endogenous orienting) showed similar effects for both conjunction and feature search. However, when attention was oriented with peripheral visual cues (exogenous orienting), conjunction search showed larger effects of attention than did feature search. It is suggested that the attentional systems that are oriented in response to central and peripheral cues may not be the same and that only the latter performs a role in feature integration. Possibilities for future research are discussed.

  16. An insect-inspired model for visual binding II: functional analysis and visual attention.

    PubMed

    Northcutt, Brandon D; Higgins, Charles M

    2017-04-01

    We have developed a neural network model capable of performing visual binding, inspired by neuronal circuitry in the optic glomeruli of flies: a brain area just downstream of the optic lobes, where early visual processing is performed. This visual binding model is able to detect objects in dynamic image sequences and bind together their respective characteristic visual features (such as color, motion, and orientation) by taking advantage of their common temporal fluctuations. Visual binding is represented in the form of an inhibitory weight matrix which learns over time which features originate from a given visual object. In the present work, we show that information represented implicitly in this weight matrix can be used to explicitly count the number of objects present in the visual image, to enumerate their specific visual characteristics, and even to create an enhanced image in which one particular object is emphasized over others, thus implementing a simple form of visual attention. Further, we present a detailed analysis which reveals the function and theoretical limitations of the visual binding network, and in this context we describe a novel network learning rule which is optimized for visual binding.
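    The core idea — binding features by their common temporal fluctuations — can be sketched without the authors' network at all. The toy below (all signal parameters are invented for illustration) groups feature channels whose time courses are highly correlated and counts the groups as objects; the inhibitory weight matrix in the actual model plays an analogous role:

```python
import math, random

def pearson(x, y):
    """Pearson correlation of two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def count_objects(signals, threshold=0.8):
    """Greedily group feature channels by temporal correlation; the number
    of groups estimates the number of objects in view."""
    groups = []
    for s in signals:
        for g in groups:
            if pearson(s, g[0]) > threshold:
                g.append(s)
                break
        else:
            groups.append([s])
    return len(groups)

rng = random.Random(0)
drive_a = [rng.gauss(0, 1) for _ in range(300)]  # object A's fluctuation
drive_b = [rng.gauss(0, 1) for _ in range(300)]  # object B's fluctuation
# Four feature channels (e.g., color and motion of each object) plus noise.
signals = [[d + rng.gauss(0, 0.1) for d in drive]
           for drive in (drive_a, drive_a, drive_b, drive_b)]
print(count_objects(signals))  # prints 2
```

    Channels driven by the same object correlate near 1 despite the noise, while channels of different objects decorrelate, so the grouping recovers the two objects.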

  17. Resource-sharing between internal maintenance and external selection modulates attentional capture by working memory content.

    PubMed

    Kiyonaga, Anastasia; Egner, Tobias

    2014-01-01

    It is unclear why and under what circumstances working memory (WM) and attention interact. Here, we apply the logic of the time-based resource-sharing (TBRS) model of WM (e.g., Barrouillet et al., 2004) to explore the mixed findings of a separate, but related, literature that studies the guidance of visual attention by WM contents. Specifically, we hypothesize that the linkage between WM representations and visual attention is governed by a time-shared cognitive resource that alternately refreshes internal (WM) and selects external (visual attention) information. If this were the case, WM content should guide visual attention (involuntarily), but only when there is time for it to be refreshed in an internal focus of attention. To provide an initial test for this hypothesis, we examined whether the amount of unoccupied time during a WM delay could impact the magnitude of attentional capture by WM contents. Participants were presented with a series of visual search trials while they maintained a WM cue for a delayed-recognition test. WM cues could coincide with the search target, a distracter, or neither. We varied both the number of searches to be performed and the amount of available time to perform them. Slowing of visual search by a WM-matching distracter, and facilitation by a matching target, were curtailed when the delay was filled with fast-paced (refreshing-preventing) search trials, as was subsequent memory probe accuracy. WM content may, therefore, only capture visual attention when it can be refreshed, suggesting that internal (WM) and external attention demands reciprocally impact one another because they share a limited resource. The TBRS rationale can thus be applied in a novel context to explain why WM contents capture attention, and under what conditions that effect should be observed.
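    The TBRS quantity being manipulated here is essentially the fraction of the retention interval left free for refreshing. A minimal sketch (the delay and task durations below are hypothetical, not the study's actual timings) makes the load manipulation concrete:

```python
def free_time_ratio(delay_ms, n_tasks, task_ms):
    """Fraction of a WM retention interval left free for attentional
    refreshing. In TBRS terms, lower values mean higher cognitive load and
    predict poorer recall (and, per the hypothesis above, weaker
    attentional capture by WM content)."""
    free = delay_ms - n_tasks * task_ms
    return max(free, 0) / delay_ms

# Hypothetical pacing conditions: same 6 s delay, slow versus fast search pace.
slow_pace = free_time_ratio(6000, n_tasks=4, task_ms=800)
fast_pace = free_time_ratio(6000, n_tasks=8, task_ms=700)
print(slow_pace, fast_pace)  # more free time under slow pacing
```

    Under this logic the fast-paced condition leaves almost no refreshing opportunity, which is why both capture effects and probe accuracy were curtailed there.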

  18. Resource-sharing between internal maintenance and external selection modulates attentional capture by working memory content

    PubMed Central

    Kiyonaga, Anastasia; Egner, Tobias

    2014-01-01

    It is unclear why and under what circumstances working memory (WM) and attention interact. Here, we apply the logic of the time-based resource-sharing (TBRS) model of WM (e.g., Barrouillet et al., 2004) to explore the mixed findings of a separate, but related, literature that studies the guidance of visual attention by WM contents. Specifically, we hypothesize that the linkage between WM representations and visual attention is governed by a time-shared cognitive resource that alternately refreshes internal (WM) and selects external (visual attention) information. If this were the case, WM content should guide visual attention (involuntarily), but only when there is time for it to be refreshed in an internal focus of attention. To provide an initial test for this hypothesis, we examined whether the amount of unoccupied time during a WM delay could impact the magnitude of attentional capture by WM contents. Participants were presented with a series of visual search trials while they maintained a WM cue for a delayed-recognition test. WM cues could coincide with the search target, a distracter, or neither. We varied both the number of searches to be performed, and the amount of available time to perform them. Slowing of visual search by a WM matching distracter—and facilitation by a matching target—were curtailed when the delay was filled with fast-paced (refreshing-preventing) search trials, as was subsequent memory probe accuracy. WM content may, therefore, only capture visual attention when it can be refreshed, suggesting that internal (WM) and external attention demands reciprocally impact one another because they share a limited resource. The TBRS rationale can thus be applied in a novel context to explain why WM contents capture attention, and under what conditions that effect should be observed. PMID:25221499

  19. The interplay of attention and consciousness in visual search, attentional blink and working memory consolidation

    PubMed Central

    Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees

    2014-01-01

    Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation. PMID:24639586

  20. The interplay of attention and consciousness in visual search, attentional blink and working memory consolidation.

    PubMed

    Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees

    2014-05-05

    Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation.

  1. Dynamic interactions between visual working memory and saccade target selection

    PubMed Central

    Schneegans, Sebastian; Spencer, John P.; Schöner, Gregor; Hwang, Seongmin; Hollingworth, Andrew

    2014-01-01

    Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task-irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. PMID:25228628

  2. Attentional modulation of neuronal variability in circuit models of cortex

    PubMed Central

    Kanashiro, Tatjana; Ocker, Gabriel Koch; Cohen, Marlene R; Doiron, Brent

    2017-01-01

    The circuit mechanisms behind shared neural variability (noise correlation) and its dependence on neural state are poorly understood. Visual attention is well suited to constrain cortical models of response variability because attention increases both firing rates and their stimulus sensitivity while decreasing noise correlations. We provide a novel analysis of population recordings in rhesus primate visual area V4 showing that a single biophysical mechanism may underlie these diverse neural correlates of attention. We explore model cortical networks where top-down mediated increases in excitability, distributed across excitatory and inhibitory targets, capture the key neuronal correlates of attention. Our models predict that top-down signals primarily affect inhibitory neurons, whereas excitatory neurons are more sensitive to stimulus-specific bottom-up inputs. Accounting for trial variability in models of state-dependent modulation of neuronal activity is a critical step in building a mechanistic theory of neuronal cognition. DOI: http://dx.doi.org/10.7554/eLife.23978.001 PMID:28590902
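    The noise correlation analyzed in this work is the trial-to-trial correlation of spike counts to repeats of the same stimulus. The sketch below is not the authors' circuit model: it simply simulates two neurons sharing a common fluctuating input, and caricatures attention as a reduction of that shared fluctuation, to show how the measured correlation falls:

```python
import math, random

def noise_correlation(a, b):
    """Pearson correlation of two neurons' trial-to-trial spike counts
    across repeats of the same stimulus (the 'noise correlation')."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def paired_counts(n_trials, shared_sd, private_sd=2.0, base=20.0, seed=0):
    """Spike counts of two neurons receiving a common fluctuating input;
    the shared fluctuation is what creates the noise correlation."""
    rng = random.Random(seed)
    a, b = [], []
    for _ in range(n_trials):
        shared = rng.gauss(0.0, shared_sd)
        a.append(base + shared + rng.gauss(0.0, private_sd))
        b.append(base + shared + rng.gauss(0.0, private_sd))
    return a, b

# Attention is modeled here only as a reduction of the shared fluctuation.
r_unattended = noise_correlation(*paired_counts(5000, shared_sd=2.0))
r_attended = noise_correlation(*paired_counts(5000, shared_sd=0.5, seed=1))
print(r_unattended, r_attended)  # attention lowers the noise correlation
```

    Analytically, the expected correlation is shared variance over total variance, so shrinking the shared component from 4 to 0.25 drops it from about 0.5 to about 0.06.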

  3. Evaluation of perception performance in neck dissection planning using eye tracking and attention landscapes

    NASA Astrophysics Data System (ADS)

    Burgert, Oliver; Örn, Veronika; Velichkovsky, Boris M.; Gessat, Michael; Joos, Markus; Strauß, Gero; Tietjen, Christian; Preim, Bernhard; Hertel, Ilka

    2007-03-01

    Neck dissection is a surgical intervention in which cervical lymph node metastases are removed. Accurate surgical planning is of high importance, because a wrong judgment of the situation causes severe harm to the patient. Diagnostic perception of radiological images by a surgeon is an acquired skill that can be enhanced by training and experience. To improve the accuracy with which newcomers and less experienced professionals detect pathological lymph nodes, it is essential to understand how surgical experts solve the relevant visual recognition tasks. Using eye tracking, and especially the newly developed attention-landscape visualizations, we could determine whether visualization options, for example 3D models instead of CT data, help to increase the accuracy and speed of neck dissection planning. Thirteen ORL surgeons with different levels of expertise participated in this study. They inspected different visualizations of 3D models and original CT datasets of patients. Among other techniques, we used scanpath analysis and attention landscapes to interpret the inspection strategies. It was possible to distinguish different patterns of visual exploratory activity: the experienced surgeons concentrated their attention on a limited number of areas of interest and made fewer saccadic eye movements, indicating better orientation.
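    An "attention landscape" of the kind used in this study is typically computed as a sum of 2-D Gaussians centred on the gaze points. The sketch below uses invented fixation coordinates and an arbitrary Gaussian width; the peak of the resulting heat map marks where attention was concentrated:

```python
import math

def attention_landscape(fixations, width, height, sigma=2.0):
    """Attention landscape: a sum of 2-D Gaussians centred on the gaze
    points, yielding a heat map of where attention was concentrated."""
    grid = [[0.0] * width for _ in range(height)]
    for fx, fy in fixations:
        for r in range(height):
            for c in range(width):
                d2 = (c - fx) ** 2 + (r - fy) ** 2
                grid[r][c] += math.exp(-d2 / (2.0 * sigma ** 2))
    return grid

# Hypothetical gaze samples: repeated fixations near (5, 5), one stray look.
fixations = [(5, 5), (5, 5), (5, 6), (20, 20)]
grid = attention_landscape(fixations, width=30, height=30)
peak = max(((r, c) for r in range(30) for c in range(30)),
           key=lambda rc: grid[rc[0]][rc[1]])
print(peak)  # the landscape peaks at the most-fixated location, (5, 5)
```

    Experts' landscapes in the study would show few, sharp peaks of this kind, while novices' would be flatter and more dispersed.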

  4. Cognitive load reducing in destination decision system

    NASA Astrophysics Data System (ADS)

    Wu, Chunhua; Wang, Cong; Jiang, Qien; Wang, Jian; Chen, Hong

    2007-12-01

    A person's limited cognitive resources bound the quantity of information he or she can process; when that bound is exceeded, the whole cognitive process suffers, and with it the final decision. We investigate two ways to reduce cognitive load: cutting down the number of alternatives, and directing the user to allocate limited attention resources based on selective visual attention theory. Decision making is such a complex process that people usually have difficulty expressing their requirements completely. This paper puts forward an effective method for eliciting a user's hidden requirements; with more requirements captured, the destination decision system can filter out more inappropriate alternatives. Different pieces of information have different utility, and if the information with high utility attracts attention easily, the decision can be made more easily. After analyzing current selective visual attention theory, we also put forward a new presentation style based on the user's visual attention. This model arranges the presentation of information according to the movement of the line of sight, so that the user can devote limited attention resources to the important information. Capturing hidden requirements and presenting information on the basis of selective visual attention are effective ways to reduce cognitive load.

  5. An object-based visual attention model for robotic applications.

    PubMed

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

    By extending the integrated competition hypothesis, this paper presents an object-based visual attention model that selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between the top-down and bottom-up pathways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically while an object is attended. A dual-coding object representation consisting of local and global codings is proposed: intensity, color, and orientation features build the local coding, and a contour feature constitutes the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. Mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, proto-object-based saliency is evaluated. The most salient proto-object is selected for attention and is finally passed to the perceptual completion processing module to yield a complete object region. The model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions validate the model.
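    The final selection step — combining weighted feature maps into a saliency map and averaging it per proto-object — can be sketched in a few lines. This is a drastically simplified illustration, not the paper's implementation: the feature maps, weights, and proto-object pixel lists are invented, and top-down biasing is reduced to choosing the feature weights.

```python
def most_salient_proto_object(feature_maps, weights, proto_objects):
    """Combine weighted feature maps into a location-based saliency map,
    average the saliency inside each proto-object, and return the id of
    the most salient one (the object selected for attention)."""
    rows = len(next(iter(feature_maps.values())))
    cols = len(next(iter(feature_maps.values()))[0])
    saliency = [[sum(weights[f] * feature_maps[f][r][c] for f in feature_maps)
                 for c in range(cols)] for r in range(rows)]
    def mean_sal(pixels):
        return sum(saliency[r][c] for r, c in pixels) / len(pixels)
    return max(proto_objects, key=lambda k: mean_sal(proto_objects[k]))

# Two hypothetical proto-objects on a 3x3 field: "left" is colorful,
# "right" is bright.
feature_maps = {
    "color":     [[1, 0, 0], [1, 0, 0], [0, 0, 0]],
    "intensity": [[0, 0, 1], [0, 0, 1], [0, 0, 0]],
}
proto_objects = {"left": [(0, 0), (1, 0)], "right": [(0, 2), (1, 2)]}

# Top-down bias toward color selects "left"; bias toward intensity flips it.
print(most_salient_proto_object(feature_maps, {"color": 1.0, "intensity": 0.2},
                                proto_objects))  # left
print(most_salient_proto_object(feature_maps, {"color": 0.2, "intensity": 1.0},
                                proto_objects))  # right
```

    Swapping the weights switches which proto-object wins, which is the essence of task-relevant top-down biasing mediating the bottom-up competition.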

  6. The contributions of visual and central attention to visual working memory.

    PubMed

    Souza, Alessandra S; Oberauer, Klaus

    2017-10-01

    We investigated the role of two kinds of attention-visual and central attention-for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention-visual or central-was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.

  7. A Statistical Physics Perspective to Understand Social Visual Attention in Autism Spectrum Disorder.

    PubMed

    Liberati, Alessio; Fadda, Roberta; Doneddu, Giuseppe; Congiu, Sara; Javarone, Marco A; Striano, Tricia; Chessa, Alessandro

    2017-08-01

    This study investigated social visual attention in children with Autism Spectrum Disorder (ASD) and with typical development (TD) in the light of Brockmann and Geisel's model of visual attention. The probability distribution of gaze movements and the clustering of gaze points, registered with eye-tracking technology, were studied during free visual exploration of a gaze stimulus. A data-driven analysis of the distribution of eye movements was chosen to overcome possible methodological problems related to the experimenters' subjective expectations about the informative contents of the image, and a computational model was used to simulate group differences. Analysis of the eye-tracking data indicated that the scanpaths of children with TD and ASD were characterized by eye movements geometrically equivalent to Lévy flights. Children with ASD showed a higher frequency of long saccadic amplitudes compared with controls, and a clustering analysis revealed a greater dispersion of their eye movements. Modeling of the results indicated higher values, for children with ASD, of the model parameter modulating the dispersion of eye movements. Together, the experimental results and the model point to a greater dispersion of gaze points in ASD.
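    A Lévy-flight scanpath of the kind invoked here is a random walk whose step lengths follow a power law. The sketch below is only an illustration of the dispersion argument, not the Brockmann-Geisel model itself; the exponents and step counts are arbitrary. A heavier-tailed amplitude distribution (more long saccades, as reported for the ASD group) yields more dispersed gaze points:

```python
import math, random

def levy_amplitude(mu, rng, l_min=1.0):
    """Draw a saccade amplitude from a power law p(l) ~ l**(-mu), l >= l_min,
    via inverse-transform sampling (valid for mu > 1)."""
    return l_min * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))

def scanpath(n_saccades, mu, seed=0):
    """2-D Levy-flight scanpath: power-law amplitudes, uniform directions."""
    rng = random.Random(seed)
    x = y = 0.0
    points = [(x, y)]
    for _ in range(n_saccades):
        l = levy_amplitude(mu, rng)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x, y = x + l * math.cos(theta), y + l * math.sin(theta)
        points.append((x, y))
    return points

def dispersion(points):
    """Mean distance of the gaze points from their centroid."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return sum(math.hypot(px - cx, py - cy) for px, py in points) / len(points)

heavy = scanpath(2000, mu=2.0)   # heavier tail: frequent long saccades
light = scanpath(2000, mu=3.5)   # lighter tail: mostly short saccades
print(dispersion(heavy) > dispersion(light))
```

    The exponent mu plays the role of the model parameter modulating dispersion: lowering it increases the frequency of long saccades and spreads the gaze points.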

  8. Feature-Selective Attentional Modulations in Human Frontoparietal Cortex.

    PubMed

    Ester, Edward F; Sutterer, David W; Serences, John T; Awh, Edward

    2016-08-03

    Control over visual selection has long been framed in terms of a dichotomy between "source" and "site," where top-down feedback signals originating in frontoparietal cortical areas modulate or bias sensory processing in posterior visual areas. This distinction is motivated in part by observations that frontoparietal cortical areas encode task-level variables (e.g., what stimulus is currently relevant or what motor outputs are appropriate), while posterior sensory areas encode continuous or analog feature representations. Here, we present evidence that challenges this distinction. We used fMRI, a roving searchlight analysis, and an inverted encoding model to examine representations of an elementary feature property (orientation) across the entire human cortical sheet while participants attended either the orientation or luminance of a peripheral grating. Orientation-selective representations were present in a multitude of visual, parietal, and prefrontal cortical areas, including portions of the medial occipital cortex, the lateral parietal cortex, and the superior precentral sulcus (thought to contain the human homolog of the macaque frontal eye fields). Additionally, representations in many-but not all-of these regions were stronger when participants were instructed to attend orientation relative to luminance. Collectively, these findings challenge models that posit a strict segregation between sources and sites of attentional control on the basis of representational properties by demonstrating that simple feature values are encoded by cortical regions throughout the visual processing hierarchy, and that representations in many of these areas are modulated by attention. Influential models of visual attention posit a distinction between top-down control and bottom-up sensory processing networks. 
These models are motivated in part by demonstrations showing that frontoparietal cortical areas associated with top-down control represent abstract or categorical stimulus information, while visual areas encode parametric feature information. Here, we show that multivariate activity in human visual, parietal, and frontal cortical areas encodes representations of a simple feature property (orientation). Moreover, representations in several (though not all) of these areas were modulated by feature-based attention in a similar fashion. These results provide an important challenge to models that posit dissociable top-down control and sensory processing networks on the basis of representational properties. Copyright © 2016 the authors 0270-6474/16/368188-12$15.00/0.
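    The inverted encoding model used in this study estimates voxel weights over a basis of orientation channels and then inverts those weights on held-out data to reconstruct the channel response profile. The following is a self-contained toy version on simulated data (the channel count, basis shape, voxel number, and noise level are all illustrative choices, not the study's parameters):

```python
import numpy as np

def channel_responses(orientations_deg, n_chan=6):
    """Idealized orientation channels: rectified cosines (180-deg period)
    raised to a power, evaluated against each channel's preferred angle."""
    centers = np.arange(0, 180, 180 / n_chan)
    d = np.deg2rad(orientations_deg[:, None] - centers[None, :])
    return np.maximum(np.cos(2 * d), 0.0) ** 5        # trials x channels

rng = np.random.default_rng(0)
W = rng.normal(size=(40, 6))                          # true voxel weights (unknown)
train_ori = rng.uniform(0, 180, 200)
C = channel_responses(train_ori)                      # trials x channels
train_b = C @ W.T + rng.normal(0, 0.1, (200, 40))     # simulated voxel data

# Step 1: estimate the voxel weights from training data by least squares.
W_hat = np.linalg.lstsq(C, train_b, rcond=None)[0].T  # voxels x channels

# Step 2: invert the model on a new trial to recover channel responses.
test_b = channel_responses(np.array([60.0])) @ W.T    # a 60-degree grating
chan_hat = np.linalg.lstsq(W_hat, test_b.T, rcond=None)[0].ravel()
print(np.argmax(chan_hat))  # peaks at the channel tuned to 60 degrees
```

    In the study, the strength of such reconstructed orientation profiles, computed within each searchlight, is what was compared between the attend-orientation and attend-luminance conditions.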

  9. Attentional episodes in visual perception

    PubMed Central

    Wyble, Brad; Potter, Mary C; Bowman, Howard; Nieuwenstein, Mark

    2011-01-01

    Is one's temporal perception of the world truly as seamless as it appears? This paper presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (the episodic Simultaneous Type/Serial Token model, or eSTST; Wyble et al., 2009a). Breaks between these episodes are punctuated by periods of suppressed attention, better known as the attentional blink (Raymond, Shapiro, & Arnell, 1992). We test predictions from this model and demonstrate that subjects are able to report more letters from a sequence of four targets presented in a dense temporal cluster than from a sequence of four targets interleaved with non-targets. However, this superior report accuracy comes at the cost of impaired temporal order perception. Further experiments explore the dynamics of multiple episodes and the boundary conditions that trigger episodic breaks. Finally, we contrast the importance of attentional control, limited resources, and memory capacity constructs in the model. PMID:21604913

  10. Components of Attention Modulated by Temporal Expectation

    ERIC Educational Resources Information Center

    Sørensen, Thomas Alrik; Vangkilde, Signe; Bundesen, Claus

    2015-01-01

    By varying the probabilities that a stimulus would appear at particular times after the presentation of a cue and modeling the data by the theory of visual attention (Bundesen, 1990), Vangkilde, Coull, and Bundesen (2012) provided evidence that the speed of encoding a singly presented stimulus letter into visual short-term memory (VSTM) is…

  11. Remote sensing image ship target detection method based on visual attention model

    NASA Astrophysics Data System (ADS)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    Traditional methods for detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can allocate computing resources selectively according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. In light of this, a method for ship target detection in remote sensing images based on a visual attention model was proposed in this paper. The experimental results show that the proposed method reduces computational complexity while improving detection accuracy, thereby improving the detection efficiency of ship targets in remote sensing images.
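
    The bottom-up selective attention mechanism referred to here is typically implemented as a saliency map built from center-surround contrast, in the spirit of Itti-style models. A toy sketch follows, using a box blur as a stand-in for Gaussian pyramids; the two scales and the single intensity feature are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def box_blur(img, k):
    """Simple separable box blur (a stand-in for Gaussian pyramid levels)."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

def saliency(intensity):
    """Center-surround contrast: |fine scale - coarse scale|, normalized to [0, 1]."""
    center = box_blur(intensity, 3)
    surround = box_blur(intensity, 15)
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-9)

# toy image: dim sea background with one bright "ship" blob
img = np.full((64, 64), 0.2)
img[30:34, 40:48] = 1.0
sal_map = saliency(img)
peak = np.unravel_index(np.argmax(sal_map), sal_map.shape)
# the most salient location falls on the bright target
```

    Restricting the expensive detector to the few high-saliency peaks, rather than every sliding-window position, is what yields the efficiency gain the abstract describes.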

  12. A competitive interaction theory of attentional selection and decision making in brief, multielement displays.

    PubMed

    Smith, Philip L; Sewell, David K

    2013-07-01

    We generalize the integrated system model of Smith and Ratcliff (2009) to obtain a new theory of attentional selection in brief, multielement visual displays. The theory proposes that attentional selection occurs via competitive interactions among detectors that signal the presence of task-relevant features at particular display locations. The outcome of the competition, together with attention, determines which stimuli are selected into visual short-term memory (VSTM). Decisions about the contents of VSTM are made by a diffusion-process decision stage. The selection process is modeled by coupled systems of shunting equations, which perform gated where-on-what pathway VSTM selection. The theory provides a computational account of key findings from attention tasks with near-threshold stimuli. These are (a) the success of the MAX model of visual search and spatial cuing, (b) the distractor homogeneity effect, (c) the double-target detection deficit, (d) redundancy costs in the post-stimulus probe task, (e) the joint item and information capacity limits of VSTM, and (f) the object-based nature of attentional selection. We argue that these phenomena are all manifestations of an underlying competitive VSTM selection process, which arise as a natural consequence of our theory. PsycINFO Database Record (c) 2013 APA, all rights reserved.
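
    The shunting equations in this theory are competitive dynamical equations of the Grossberg type, in which each detector's activity is bounded above by excitation from its own input and suppressed divisively by its competitors. A minimal numerical sketch is below; the parameter values and the simple all-to-all inhibition are illustrative assumptions, not the authors' exact system.

```python
import numpy as np

def shunting_competition(inputs, steps=2000, dt=0.01, A=1.0, B=1.0):
    """Euler-integrate a shunting competition: each detector is driven toward
    the ceiling B by its own input and suppressed by the activity of the others."""
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        lateral = x.sum() - x                          # inhibition from competitors
        dx = -A * x + (B - x) * inputs - x * lateral   # shunting dynamics
        x = np.clip(x + dt * dx, 0.0, B)
    return x

# three display locations; the second carries the strongest task-relevant feature
acts = shunting_competition(np.array([0.4, 1.0, 0.5]))
# the location with the strongest input dominates the competition for VSTM
```

    The bounded, divisive form of the equation is what gives the selection process its capacity-limited, winner-dominates character.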

  13. Residual attention guidance in blindsight monkeys watching complex natural scenes.

    PubMed

    Yoshida, Masatoshi; Itti, Laurent; Berg, David J; Ikeda, Takuro; Kato, Rikako; Takaura, Kana; White, Brian J; Munoz, Douglas P; Isa, Tadashi

    2012-08-07

    Patients with damage to primary visual cortex (V1) demonstrate residual performance on laboratory visual tasks despite denial of conscious seeing (blindsight) [1]. After a period of recovery, which suggests a role for plasticity [2], visual sensitivity higher than chance is observed in humans and monkeys for simple luminance-defined stimuli, grating stimuli, moving gratings, and other stimuli [3-7]. Some residual cognitive processes including bottom-up attention and spatial memory have also been demonstrated [8-10]. To date, little is known about blindsight with natural stimuli and spontaneous visual behavior. In particular, is orienting attention toward salient stimuli during free viewing still possible? We used a computational saliency map model to analyze spontaneous eye movements of monkeys with blindsight from unilateral ablation of V1. Despite general deficits in gaze allocation, monkeys were significantly attracted to salient stimuli. The contribution of orientation features to salience was nearly abolished, whereas contributions of motion, intensity, and color features were preserved. Control experiments employing laboratory stimuli confirmed the free-viewing finding that lesioned monkeys retained color sensitivity. Our results show that attention guidance over complex natural scenes is preserved in the absence of V1, thereby directly challenging theories and models that crucially depend on V1 to compute the low-level visual features that guide attention. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Detection of visual signals by rats: A computational model

    EPA Science Inventory

    We applied a neural network model of classical conditioning proposed by Schmajuk, Lam, and Gray (1996) to visual signal detection and discrimination tasks designed to assess sustained attention in rats (Bushnell, 1999). The model describes the animals’ expectation of receiving fo...

  15. A Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual LBP Feature and Visual Attention Model

    NASA Astrophysics Data System (ADS)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, the problem is very difficult against complex backgrounds such as waves, clouds, and small islands. To address these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically inspired visual features. The model first selects salient candidate regions across large-scale images using a mechanism that combines a visual attention model with local binary patterns (CVLBP). Unlike traditional approaches, the proposed algorithm is fast and focuses on suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using the visual attention model, and detail signatures using LBP features, in keeping with the sparseness of ship distribution across images. These features are then used to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are identified, some false alarms remain, such as waves and small ribbon clouds, so simple shape and texture analyses are adopted to distinguish ships from non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination, and ship size.
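
    The local binary pattern (LBP) features used here encode micro-texture: each pixel gets a bit-code describing whether its eight neighbours are brighter or darker than it, and the histogram of codes serves as the feature vector fed to the SVM. A minimal sketch of the basic 8-neighbour variant follows; the example textures are illustrative, not data from the paper.

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour local binary pattern, computed for interior pixels."""
    c = img[1:-1, 1:-1]
    # neighbours in clockwise order starting at the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= c).astype(int) << bit      # set one bit per neighbour
    return code

def lbp_histogram(img, bins=256):
    """Normalized LBP code histogram, usable as a texture feature for an SVM."""
    hist = np.bincount(lbp_8(img).ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

# smooth sea-like gradient vs. a high-contrast textured chip
rng = np.random.default_rng(1)
sea = np.tile(np.linspace(0.2, 0.3, 32), (32, 1))    # low-texture background
ship = rng.normal(0.5, 0.2, size=(32, 32))           # busy, ship-like texture
h_sea, h_ship = lbp_histogram(sea), lbp_histogram(ship)
```

    Smooth sea chips collapse onto a few codes while structured targets spread across many, which is what makes the histogram a usable input to the chip classifier.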

  16. Mental rotation impairs attention shifting and short-term memory encoding: neurophysiological evidence against the response-selection bottleneck model of dual-task performance.

    PubMed

    Pannebakker, Merel M; Jolicœur, Pierre; van Dam, Wessel O; Band, Guido P H; Ridderinkhof, K Richard; Hommel, Bernhard

    2011-09-01

    Dual tasks and their associated delays have often been used to examine the boundaries of processing in the brain. We used the dual-task procedure and recorded event-related potentials (ERPs) to investigate how mental rotation of a first stimulus (S1) influences the shifting of visual-spatial attention to a second stimulus (S2). Visual-spatial attention was monitored using the N2pc component of the ERP. In addition, we examined the sustained posterior contralateral negativity (SPCN), believed to index the retention of information in visual short-term memory. We found modulations of both the N2pc and the SPCN, suggesting that engaging mechanisms of mental rotation impairs the deployment of visual-spatial attention and delays the passage of a representation of S2 into visual short-term memory. Both results point to interactions between mental rotation and visual-spatial attention in capacity-limited processing mechanisms, indicating that response selection is not the sole source of dual-task delays and that all three processes likely share a common resource such as executive control. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Why are there eccentricity effects in visual search? Visual and attentional hypotheses.

    PubMed

    Wolfe, J M; O'Neill, P; Bennett, S C

    1998-01-01

    In standard visual search experiments, observers search for a target item among distracting items. The locations of target items are generally random within the display and ignored as a factor in data analysis. Previous work has shown that targets presented near fixation are, in fact, found more efficiently than are targets presented at more peripheral locations. This paper proposes that the primary cause of this "eccentricity effect" (Carrasco, Evert, Chang, & Katz, 1995) is an attentional bias that allocates attention preferentially to central items. The first four experiments dealt with the possibility that visual, and not attentional, factors underlie the eccentricity effect. They showed that the eccentricity effect cannot be accounted for by the peripheral reduction in visual sensitivity, peripheral crowding, or cortical magnification. Experiment 5 tested the attention allocation model and also showed that RT × set size effects can be independent of eccentricity effects. Experiment 6 showed that the effective set size in a search task depends, in part, on the eccentricity of the target because observers search from fixation outward.

  18. Visual attention is required for multiple object tracking.

    PubMed

    Tran, Annie; Hoffman, James E

    2016-12-01

    In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors, suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there, but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Visual orienting and attention deficits in 5- and 10-month-old preterm infants.

    PubMed

    Ross-Sheehy, Shannon; Perone, Sammy; Macek, Kelsi L; Eschman, Bret

    2017-02-01

    Cognitive outcomes for children born prematurely are well characterized, including increased risk for deficits in memory, attention, processing speed, and executive function. However, little is known about deficits that appear within the first 12 months and how these early deficits contribute to later outcomes. To probe for functional deficits in visual attention, preterm and full-term infants were tested at 5 and 10 months with the Infant Orienting With Attention task (IOWA; Ross-Sheehy, Schneegans, & Spencer, 2015). At 5 months, preterm infants showed significant deficits in orienting speed and task-related error. At 10 months, however, preterm infants showed only selective deficits in spatial attention, particularly in reflexive orienting responses and in responses that required some inhibition. These emergent deficits in spatial attention suggest that preterm differences may be related to altered postnatal developmental trajectories. Moreover, we found no evidence of a dose-response relation between increased gestational risk and spatial attention. These results highlight the critical role of postnatal visual experience and suggest that visual orienting may be a sensitive measure of attentional delay. The results reported here inform both current theoretical models of early perceptual/cognitive development and future intervention efforts. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Different Signal Enhancement Pathways of Attention and Consciousness Underlie Perception in Humans.

    PubMed

    van Boxtel, Jeroen J A

    2017-06-14

    It is not yet known whether attention and consciousness operate through similar or largely different mechanisms. Visual processing mechanisms are routinely characterized by measuring contrast response functions (CRFs). In this report, behavioral CRFs were obtained in humans (both males and females) by measuring afterimage durations over the entire range of inducer stimulus contrasts to reveal visual mechanisms behind attention and consciousness. Deviations relative to the standard CRF, i.e., gain functions, describe the strength of signal enhancement; these were assessed both for changes due to attentional task and for changes due to conscious perception. It was found that attention displayed a response-gain function, whereas consciousness displayed a contrast-gain function. Model comparisons showed that, using only contrast-gain modulations, both contrast-gain and response-gain effects can be explained by a two-level normalization model in which consciousness affects only the first level and attention affects only the second level. These results demonstrate that attention and consciousness can effectively show different gain functions because they operate through different signal enhancement mechanisms. SIGNIFICANCE STATEMENT The relationship between attention and consciousness is still debated. Mapping contrast response functions (CRFs) has allowed (neuro)scientists to gain important insights into the mechanistic underpinnings of visual processing. Here, the influences of attention and consciousness on these functions were measured, and they displayed a strong dissociation. First, attention lowered CRFs, whereas consciousness raised them. Second, attention manifests itself as a response-gain function, whereas consciousness manifests itself as a contrast-gain function.
Extensive model comparisons show that these results are best explained by a two-level normalization model in which consciousness affects only the first level, whereas attention affects only the second level. These findings reveal dissociations both in the computational mechanisms behind attention and consciousness and in the perceptual consequences they induce. Copyright © 2017 the authors 0270-6474/17/375912-11$15.00/0.
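
    The contrast-gain versus response-gain distinction drawn in this record is conventionally expressed with the Naka-Rushton contrast response function: response gain scales the asymptote, while contrast gain shifts the semi-saturation contrast. A minimal sketch follows; the parameter values are illustrative, not the fitted values from this study.

```python
import numpy as np

def crf(c, r_max=1.0, c50=0.3, n=2.0):
    """Naka-Rushton contrast response function."""
    return r_max * c**n / (c**n + c50**n)

contrast = np.linspace(0.01, 1.0, 50)
baseline = crf(contrast)
response_gain = crf(contrast, r_max=1.3)  # scales the whole curve multiplicatively
contrast_gain = crf(contrast, c50=0.2)    # shifts the curve leftward: a sensitivity change
```

    The two modulations are distinguishable in data because response gain produces its largest effects at high contrast, whereas contrast gain produces its largest effects at intermediate contrasts near the semi-saturation point.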

  1. Selective visual processing across competition episodes: a theory of task-driven visual attention and working memory

    PubMed Central

    Schneider, Werner X.

    2013-01-01

    The goal of this review is to introduce a theory of task-driven visual attention and working memory (TRAM). Based on a specific biased competition model, the ‘theory of visual attention’ (TVA) and its neural interpretation (NTVA), TRAM introduces the following assumptions. First, selective visual processing over time is structured in competition episodes. Within an episode, that is, during its first two phases, a limited number of proto-objects are competitively encoded—modulated by the current task—in activation-based visual working memory (VWM). In processing phase 3, relevant VWM objects are transferred via a short-term consolidation into passive VWM. Second, each time attentional priorities change (e.g. after an eye movement), a new competition episode is initiated. Third, if a phase 3 VWM process (e.g. short-term consolidation) is not yet finished when a new episode is called, a protective maintenance process allows its completion. After a VWM object change, its protective maintenance process is followed by an encapsulation of the VWM object, causing attentional resource costs in subsequent competition episodes. Viewed from this perspective, a new explanation of key findings of the attentional blink will be offered. Finally, a new suggestion will be made as to how VWM items might interact with visual search processes. PMID:24018722

  2. Nonuniform Changes in the Distribution of Visual Attention from Visual Complexity and Action: A Driving Simulation Study.

    PubMed

    Park, George D; Reed, Catherine L

    2015-02-01

    Researchers acknowledge the interplay between action and attention, but typically consider action as a response to successful attentional selection or the correlation of performance on separate action and attention tasks. We investigated how concurrent action with spatial monitoring affects the distribution of attention across the visual field. We embedded a functional field of view (FFOV) paradigm with concurrent central object recognition and peripheral target localization tasks in a simulated driving environment. Peripheral targets varied across 20-60 deg eccentricity at 11 radial spokes. Three conditions assessed the effects of visual complexity and concurrent action on the size and shape of the FFOV: (1) with no background, (2) with driving background, and (3) with driving background and vehicle steering. The addition of visual complexity slowed task performance and reduced the FFOV size but did not change the baseline shape. In contrast, the addition of steering produced not only shrinkage of the FFOV, but also changes in the FFOV shape. Nonuniform performance decrements occurred in proximal regions used for the central task and for steering, independent of interference from context elements. Multifocal attention models should consider the role of action and account for nonhomogeneities in the distribution of attention. © 2015 SAGE Publications.

  3. The influence of anaesthetists' experience on workload, performance and visual attention during simulated critical incidents.

    PubMed

    Schulz, Christian M; Schneider, Erich; Kohlbecher, Stefan; Hapfelmeier, Alexander; Heuser, Fabian; Wagner, Klaus J; Kochs, Eberhard F; Schneider, Gerhard

    2014-10-01

    Development of accurate Situation Awareness (SA) depends on experience and may be impaired during excessive workload. In order to gain adequate SA for decision making and performance, anaesthetists need to distribute visual attention effectively. We therefore hypothesized that more experienced anaesthetists would perform better and show smaller increases in physiological workload during critical incidents. Additionally, we investigated the relation between physiological workload indicators and the distribution of visual attention. In fifteen anaesthetists, the increase in pupil size and heart rate was assessed in the course of a simulated critical incident. Simulator log files were used for performance assessment. An eye-tracking device (EyeSeeCam) provided data about the anaesthetists' distribution of visual attention. Performance was assessed as time until definitive treatment. t tests and multivariate generalized linear models (MANOVA) were used for retrospective statistical analysis. Mean pupil diameter increase was 8.1% (SD ± 4.3) in the less experienced and 15.8% (±10.4) in the more experienced subjects (p = 0.191). Mean heart rate increase was 10.2% (±6.7) and 10.5% (±8.3, p = 0.956), respectively. Performance did not depend on experience. Pupil diameter and heart rate increases were associated with a shift of visual attention from monitoring towards manual tasks (not significant). For the first time, the following four variables were assessed simultaneously: physiological workload indicators, performance, experience, and the distribution of visual attention between "monitoring" and "manual" tasks. However, we were unable to detect significant interactions between these variables. This experimental model could prove valuable in investigating how SA is gained and maintained in the operating theatre.

  4. Selective attention to imagined facial ugliness is specific to body dysmorphic disorder.

    PubMed

    Grocholewski, Anja; Kliem, Sören; Heinrichs, Nina

    2012-03-01

    Cognitive-behavioral models postulate that biases in selective attention are key factors contributing to susceptibility to and maintenance of body dysmorphic disorder (BDD). Visual attention toward the imagined defect in appearance, in particular, may be a crucial element. The present study therefore examined whether individuals with BDD show increased visual attention to flaws in their own and in unfamiliar faces. Twenty individuals with BDD, 20 individuals with social phobia, and 20 mentally healthy individuals participated in an eye-tracking experiment. Participants were instructed to gaze at photographs of themselves (15 pictures) and of several unfamiliar faces. Only patients with BDD showed heightened selective visual attention to the imagined defect in their own face, as well as to corresponding regions in other, unfamiliar faces. The results support the assumption that there is a specific attentional bias in BDD. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.

  6. Visual attention: The past 25 years

    PubMed Central

    Carrasco, Marisa

    2012-01-01

    This review focuses on covert attention and how it alters early vision. I explain why attention is considered a selective process, the constructs of covert attention, spatial endogenous and exogenous attention, and feature-based attention. I explain how in the last 25 years research on attention has characterized the effects of covert attention on spatial filters and how attention influences the selection of stimuli of interest. This review includes the effects of spatial attention on discriminability and appearance in tasks mediated by contrast sensitivity and spatial resolution; the effects of feature-based attention on basic visual processes, and a comparison of the effects of spatial and feature-based attention. The emphasis of this review is on psychophysical studies, but relevant electrophysiological and neuroimaging studies and models regarding how and where neuronal responses are modulated are also discussed. PMID:21549742

  7. Visual attention: the past 25 years.

    PubMed

    Carrasco, Marisa

    2011-07-01

    This review focuses on covert attention and how it alters early vision. I explain why attention is considered a selective process, the constructs of covert attention, spatial endogenous and exogenous attention, and feature-based attention. I explain how in the last 25 years research on attention has characterized the effects of covert attention on spatial filters and how attention influences the selection of stimuli of interest. This review includes the effects of spatial attention on discriminability and appearance in tasks mediated by contrast sensitivity and spatial resolution; the effects of feature-based attention on basic visual processes, and a comparison of the effects of spatial and feature-based attention. The emphasis of this review is on psychophysical studies, but relevant electrophysiological and neuroimaging studies and models regarding how and where neuronal responses are modulated are also discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Cognitive aging on latent constructs for visual processing capacity: a novel structural equation modeling framework with causal assumptions based on a theory of visual attention.

    PubMed

    Nielsen, Simon; Wilms, L Inge

    2014-01-01

    We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions about how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed as compared with visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method for detecting cognitive aging effects even within a narrow age range, and a useful approach for structuring the relationships between measured variables and the cognitive functional foundation they supposedly represent.

  9. The Effect of Retrieval Cues on Visual Preferences and Memory in Infancy: Evidence for a Four-Phase Attention Function.

    ERIC Educational Resources Information Center

    Bahrick, Lorraine E.; Hernandez-Reif, Maria; Pickens, Jeffrey N.

    1997-01-01

    Tested hypothesis from Bahrick and Pickens' infant attention model that retrieval cues increase memory accessibility and shift visual preferences toward greater novelty to resemble recent memories. Found that after retention intervals associated with remote or intermediate memory, previous familiarity preferences shifted to null or novelty…

  10. Slow Perceptual Processing at the Core of Developmental Dyslexia: A Parameter-Based Assessment of Visual Attention

    ERIC Educational Resources Information Center

    Stenneken, Prisca; Egetemeir, Johanna; Schulte-Korne, Gerd; Muller, Hermann J.; Schneider, Werner X.; Finke, Kathrin

    2011-01-01

    The cognitive causes as well as the neurological and genetic basis of developmental dyslexia, a complex disorder of written language acquisition, are intensely discussed with regard to multiple-deficit models. Accumulating evidence has revealed dyslexics' impairments in a variety of tasks requiring visual attention. The heterogeneity of these…

  11. Serial and Parallel Attentive Visual Searches: Evidence from Cumulative Distribution Functions of Response Times

    ERIC Educational Resources Information Center

    Sung, Kyongje

    2008-01-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the…

  12. Visual question answering using hierarchical dynamic memory networks

    NASA Astrophysics Data System (ADS)

    Shang, Jiayu; Li, Shiren; Duan, Zhikui; Huang, Junwei

    2018-04-01

    Visual Question Answering (VQA) is one of the most popular research fields in machine learning; it aims to teach computers to answer natural language questions about images. In this paper, we propose a new method called hierarchical dynamic memory networks (HDMN), which takes both question attention and visual attention into consideration, inspired by the Co-Attention method, one of the best-performing algorithms to date. Additionally, we replace the original units with bi-directional LSTMs, which retain more information from the question and image, so that the model can capture information from both past and future words. We then rebuild the hierarchical architecture for not only question attention but also visual attention. Furthermore, we accelerate training via Batch Normalization, which helps the network converge more quickly. The experimental results show that our model improves on the state of the art on the large COCO-QA dataset compared with other methods.
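
    The visual attention component in such VQA models is typically soft attention: each image region is scored against a question representation, the scores are softmax-normalized, and the region features are averaged under those weights. A minimal sketch with toy vectors follows; the dot-product scoring and the tiny feature space are illustrative assumptions, not the HDMN architecture.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())                  # subtract max for numerical stability
    return e / e.sum()

def attend(question_vec, region_feats):
    """Soft visual attention: score regions against the question, then take
    the attention-weighted average of the region features."""
    scores = region_feats @ question_vec     # one score per image region
    weights = softmax(scores)
    return weights, weights @ region_feats   # attended feature vector

# three toy image regions described by two features
regions = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.7, 0.7]])
q = np.array([0.0, 2.0])                     # a "question" about the second feature
w, attended = attend(q, regions)
# most weight falls on the region that matches the question
```

    In hierarchical co-attention designs, the same operation is applied in both directions: the question attends over image regions, and the image representation attends over question words.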

  13. Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices

    PubMed Central

    Sprague, Thomas C.; Serences, John T.

    2014-01-01

    Computational theories propose that attention modulates the topographical landscape of spatial ‘priority’ maps in regions of visual cortex so that the location of an important object is associated with higher activation levels. While single-unit recording studies have demonstrated attention-related increases in the gain of neural responses and changes in the size of spatial receptive fields, the net effect of these modulations on the topography of region-level priority maps has not been investigated. Here, we used fMRI and a multivariate encoding model to reconstruct spatial representations of attended and ignored stimuli using activation patterns across entire visual areas. These reconstructed spatial representations reveal the influence of attention on the amplitude and size of stimulus representations within putative priority maps across the visual hierarchy. Our results suggest that attention increases the amplitude of stimulus representations in these spatial maps, particularly in higher visual areas, but does not substantively change their size. PMID:24212672

  14. Timing divided attention.

    PubMed

    Hogendoorn, Hinze; Carlson, Thomas A; VanRullen, Rufin; Verstraten, Frans A J

    2010-11-01

    Visual attention can be divided over multiple objects or locations. However, there is no single theoretical framework within which the effects of dividing attention can be interpreted. In order to develop such a model, here we manipulated the stage of visual processing at which attention was divided, while simultaneously probing the costs of dividing attention on two dimensions. We show that dividing attention incurs dissociable time and precision costs, which depend on whether attention is divided during monitoring or during access. Dividing attention during monitoring resulted in progressively delayed access to attended locations as additional locations were monitored, as well as a one-off precision cost. When dividing attention during access, time costs were systematically lower at one of the accessed locations than at the other, indicating that divided attention during access, in fact, involves rapid sequential allocation of undivided attention. We propose a model in which divided attention is understood as the simultaneous parallel preparation and subsequent sequential execution of multiple shifts of undivided attention. This interpretation has the potential to bring together diverse findings from both the divided-attention and saccade preparation literature and provides a framework within which to integrate the broad spectrum of divided-attention methodologies.

  15. Aligning Where to See and What to Tell: Image Captioning with Region-Based Attention and Scene-Specific Contexts.

    PubMed

    Fu, Kun; Jin, Junqi; Cui, Runpeng; Sha, Fei; Zhang, Changshui

    2017-12-01

    Recent progress on automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image captioning system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience where the attention shifts among the visual regions; such transitions impose a thread of ordering in visual perception. This alignment characterizes the flow of latent meaning, which encodes what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system and contrast it with published results on several popular datasets, using both automatic evaluation metrics and human evaluation. We show that either region-based attention or scene-specific contexts improves over systems lacking those components. Furthermore, combining these two modeling ingredients attains the state-of-the-art performance.

  16. Objective assessment of the contribution of dental esthetics and facial attractiveness in men via eye tracking.

    PubMed

    Baker, Robin S; Fields, Henry W; Beck, F Michael; Firestone, Allen R; Rosenstiel, Stephen F

    2018-04-01

    Recently, greater emphasis has been placed on smile esthetics in dentistry. Eye tracking has been used to objectively evaluate attention to the dentition (mouth) in female models with different levels of dental esthetics quantified by the aesthetic component of the Index of Orthodontic Treatment Need (IOTN). This has not been accomplished in men. Our objective was to determine the visual attention to the mouth in men with different levels of dental esthetics (IOTN levels) and background facial attractiveness, for both male and female raters, using eye tracking. Facial images of men rated as unattractive, average, and attractive were digitally manipulated and paired with validated oral images, IOTN levels 1 (no treatment need), 7 (borderline treatment need), and 10 (definite treatment need). Sixty-four raters meeting the inclusion criteria were included in the data analysis. Each rater was calibrated in the eye tracker and randomly viewed the composite images for 3 seconds, twice for reliability. Reliability was good or excellent (intraclass correlation coefficients, 0.6-0.9). Significant interactions were observed with factorial repeated-measures analysis of variance and the Tukey-Kramer method for density and duration of fixations in the interactions of model facial attractiveness by area of the face (P <0.0001, P <0.0001, respectively), dental esthetics (IOTN) by area of the face (P <0.0001, P <0.0001, respectively), and rater sex by area of the face (P = 0.0166, P = 0.0290, respectively). For area by facial attractiveness, the hierarchy of visual attention in unattractive and attractive models was eye, mouth, and nose, but for men of average attractiveness, it was mouth, eye, and nose. For dental esthetics by area, at IOTN 7, the mouth had significantly more visual attention than it did at IOTN 1 and significantly more than the nose. At IOTN 10, the mouth received significantly more attention than at IOTN 7 and surpassed the nose and eye. 
These findings were irrespective of facial attractiveness levels. For rater sex by area in visual density, women showed significantly more attention to the eyes than did men, and only men showed significantly more attention to the mouth over the nose. Visual attention to the mouth was the greatest in men of average facial attractiveness, irrespective of dental esthetics. In borderline dental esthetics (IOTN 7), the eye and mouth were statistically indistinguishable, but in the most unesthetic dental attractiveness level (IOTN 10), the mouth exceeded the eye. The most unesthetic malocclusion significantly attracted visual attention in men. Male and female raters showed differences in their visual attention to male faces. Laypersons gave significant visual attention to poor dental esthetics in men, irrespective of background attractiveness; this was counter to what was seen in women. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  17. Attentional biases towards familiar and unfamiliar foods in children. The role of food neophobia.

    PubMed

    Maratos, Frances A; Staples, Paul

    2015-08-01

    Familiarity of food stimuli is one factor that has been proposed to explain food preferences and food neophobia in children, with some research suggesting that food neophobia (and familiarity) initially operates predominantly in the visual domain. Given that visual attentional biases are a key factor implicated in the majority of fear-related phobias/anxieties, the purpose of this research was to investigate attentional biases to familiar and unfamiliar fruit and vegetables in 8- to 11-year-old children with differing levels of food neophobia. To this end, 70 primary-school-aged children completed a visual-probe task measuring attentional biases towards familiar and unfamiliar fruit/vegetables, as well as self-report measures of food neophobia, general neophobia, and willingness to try the foods. Results revealed that, as an undifferentiated population, all children appeared to demonstrate an attentional bias towards the unfamiliar fruit and vegetable stimuli. However, when food neophobia was considered, this bias was significantly exaggerated in children self-reporting high food neophobia and negligible in children self-reporting low food neophobia. In addition, willingness to try the food stimuli was inversely correlated with attentional bias towards the unfamiliar fruits/vegetables. Our results demonstrate that visual aspects of food stimuli (e.g. familiarity) play an important role in childhood food neophobia. This study provides the first empirical test of recent theories/models of food neophobia (e.g. Brown & Harris, 2012). Findings are discussed in light of these models and related anxiety models, along with implications for the treatment of childhood food neophobia. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory

    PubMed Central

    Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.

    2013-01-01

    Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773

  19. Traffic Sign Detection Based on Biologically Visual Mechanism

    NASA Astrophysics Data System (ADS)

    Hu, X.; Zhu, X.; Li, D.

    2012-07-01

    TSR (traffic sign recognition) is an important problem in ITS (intelligent transportation systems), receiving growing attention for driver-assistance systems, unmanned vehicles, etc. TSR consists of two steps, detection and recognition, and this paper describes a new traffic sign detection method. Because traffic signs are designed to comply with the visual attention mechanisms of humans, it is reasonable to use a visual attention model for detection. In our method, the whole scene is first analyzed by a visual attention model to find candidate areas where traffic signs might be located. These candidate areas are then analyzed according to the shape characteristics of traffic signs to detect them. In traffic sign detection experiments, the results show that the proposed method is more effective and robust than other existing saliency detection methods.

  20. Parallel perceptual enhancement and hierarchic relevance evaluation in an audio-visual conjunction task.

    PubMed

    Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E

    2008-10-21

    Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. 
In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.

  1. You see what you have learned. Evidence for an interrelation of associative learning and visual selective attention.

    PubMed

    Feldmann-Wüstefeld, Tobias; Uengoer, Metin; Schubö, Anna

    2015-11-01

    Besides visual salience and observers' current intention, prior learning experience may influence deployment of visual attention. Associative learning models postulate that observers pay more attention to stimuli previously experienced as reliable predictors of specific outcomes. To investigate the impact of learning experience on deployment of attention, we combined an associative learning task with a visual search task and measured event-related potentials of the EEG as neural markers of attention deployment. In the learning task, participants categorized stimuli varying in color/shape with only one dimension being predictive of category membership. In the search task, participants searched for a shape target while disregarding irrelevant color distractors. Behavioral results showed that color distractors impaired performance to a greater degree when color rather than shape was predictive in the learning task. Neurophysiological results showed that the amplified distraction was due to differential attention deployment (N2pc). Experiment 2 showed that when color was predictive for learning, color distractors captured more attention in the search task (ND component) and more suppression of the color distractor was required (PD component). The present results thus demonstrate that priority in visual attention is biased toward predictive stimuli, which allows learning experience to shape selection. We also show that learning experience can overrule strong top-down control (blocked tasks, Experiment 3) and that learning experience has a longer-term effect on attention deployment (tasks on two successive days, Experiment 4). © 2015 Society for Psychophysiological Research.

  2. Is that disgust I see? Political ideology and biased visual attention.

    PubMed

    Oosterhoff, Benjamin; Shook, Natalie J; Ford, Cameron

    2018-01-15

    Considerable evidence suggests that political liberals and conservatives vary in the way they process and respond to valenced (i.e., negative versus positive) information, with conservatives generally displaying greater negativity biases than liberals. Less is known about whether liberals and conservatives differentially prioritize certain forms of negative information over others. Across two studies using eye-tracking methodology, we examined differences in visual attention to negative scenes and facial expressions based on self-reported political ideology. In Study 1, scenes rated high in fear, disgust, sadness, and neutrality were presented simultaneously. Greater endorsement of socially conservative political attitudes was associated with less attentional engagement (i.e., lower dwell time) with disgust scenes and more attentional engagement with neutral scenes. Socially conservative political attitudes were not significantly associated with visual attention to fear or sad scenes. In Study 2, images depicting facial expressions of fear, disgust, sadness, and neutrality were presented simultaneously. Greater endorsement of socially conservative political attitudes was associated with greater attentional engagement with facial expressions depicting disgust and less attentional engagement with neutral faces. Visual attention to fearful or sad faces was not related to social conservatism. Endorsement of economically conservative political attitudes was not consistently associated with biases in visual attention across both studies. These findings support disease-avoidance models and suggest that social conservatism may be rooted within a greater sensitivity to disgust-related information. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Converging levels of analysis in the cognitive neuroscience of visual attention.

    PubMed Central

    Duncan, J

    1998-01-01

    Experiments using behavioural, lesion, functional imaging and single neuron methods are considered in the context of a neuropsychological model of visual attention. According to this model, inputs compete for representation in multiple visually responsive brain systems, sensory and motor, cortical and subcortical. Competition is biased by advance priming of neurons responsive to current behavioural targets. Across systems competition is integrated such that the same, selected object tends to become dominant throughout. The behavioural studies reviewed concern divided attention within and between modalities. They implicate within-modality competition as one main restriction on concurrent stimulus identification. In contrast to the conventional association of lateral attentional focus with parietal lobe function, the lesion studies show attentional bias to be a widespread consequence of unilateral cortical damage. Although the clinical syndrome of unilateral neglect may indeed be associated with parietal lesions, this probably reflects an assortment of further deficits accompanying a simple attentional imbalance. The functional imaging studies show joint involvement of lateral prefrontal and occipital cortex in lateral attentional focus and competition. The single unit studies suggest how competition in several regions of extrastriate cortex is biased by advance priming of neurons responsive to current behavioural targets. Together, the concepts of competition, priming and integration allow a unified theoretical approach to findings from behavioural to single neuron levels. PMID:9770224

  4. Two different mechanisms support selective attention at different phases of training.

    PubMed

    Itthipuripat, Sirawaj; Cha, Kexin; Byers, Anna; Serences, John T

    2017-06-01

    Selective attention supports the prioritized processing of relevant sensory information to facilitate goal-directed behavior. Studies in human subjects demonstrate that attentional gain of cortical responses can sufficiently account for attention-related improvements in behavior. On the other hand, studies using highly trained nonhuman primates suggest that reductions in neural noise can better explain attentional facilitation of behavior. Given the importance of selective information processing in nearly all domains of cognition, we sought to reconcile these competing accounts by testing the hypothesis that extensive behavioral training alters the neural mechanisms that support selective attention. We tested this hypothesis using electroencephalography (EEG) to measure stimulus-evoked visual responses from human subjects while they performed a selective spatial attention task over the course of ~1 month. Early in training, spatial attention led to an increase in the gain of stimulus-evoked visual responses. Gain was apparent within ~100 ms of stimulus onset, and a quantitative model based on signal detection theory (SDT) successfully linked the magnitude of this gain modulation to attention-related improvements in behavior. However, after extensive training, this early attentional gain was eliminated even though there were still substantial attention-related improvements in behavior. Accordingly, the SDT-based model required noise reduction to account for the link between the stimulus-evoked visual responses and attentional modulations of behavior. These findings suggest that training can lead to fundamental changes in the way attention alters the early cortical responses that support selective information processing. Moreover, these data facilitate the translation of results across different species and across experimental procedures that employ different behavioral training regimes.
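    The two accounts contrasted here correspond to the two terms of the signal detection measure d' = Δμ/σ: attention can raise discriminability either by scaling the signal (gain) or by shrinking the noise. A toy calculation with illustrative numbers only:

```python
# d' = (mu_signal - mu_noise) / sigma: stimulus discriminability in SDT.
def d_prime(mu_signal, mu_noise, sigma):
    return (mu_signal - mu_noise) / sigma

baseline = d_prime(1.0, 0.0, 0.5)         # d' = 2.0 without attention

# Early in training: attention multiplies the evoked response (gain).
gain = d_prime(1.5 * 1.0, 0.0, 0.5)       # signal scaled by 1.5

# Late in training: the same behavioral benefit via noise reduction,
# with no change in response amplitude -- behavior alone cannot
# distinguish the two mechanisms.
noise_red = d_prime(1.0, 0.0, 0.5 / 1.5)  # noise shrunk by 1.5
```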

  5. Two different mechanisms support selective attention at different phases of training

    PubMed Central

    Cha, Kexin; Byers, Anna; Serences, John T.

    2017-01-01

    Selective attention supports the prioritized processing of relevant sensory information to facilitate goal-directed behavior. Studies in human subjects demonstrate that attentional gain of cortical responses can sufficiently account for attention-related improvements in behavior. On the other hand, studies using highly trained nonhuman primates suggest that reductions in neural noise can better explain attentional facilitation of behavior. Given the importance of selective information processing in nearly all domains of cognition, we sought to reconcile these competing accounts by testing the hypothesis that extensive behavioral training alters the neural mechanisms that support selective attention. We tested this hypothesis using electroencephalography (EEG) to measure stimulus-evoked visual responses from human subjects while they performed a selective spatial attention task over the course of ~1 month. Early in training, spatial attention led to an increase in the gain of stimulus-evoked visual responses. Gain was apparent within ~100 ms of stimulus onset, and a quantitative model based on signal detection theory (SDT) successfully linked the magnitude of this gain modulation to attention-related improvements in behavior. However, after extensive training, this early attentional gain was eliminated even though there were still substantial attention-related improvements in behavior. Accordingly, the SDT-based model required noise reduction to account for the link between the stimulus-evoked visual responses and attentional modulations of behavior. These findings suggest that training can lead to fundamental changes in the way attention alters the early cortical responses that support selective information processing. Moreover, these data facilitate the translation of results across different species and across experimental procedures that employ different behavioral training regimes. PMID:28654635

  6. The prevalence of visual hallucinations in non-affective psychosis, and the role of perception and attention.

    PubMed

    van Ommen, M M; van Beilen, M; Cornelissen, F W; Smid, H G O M; Knegtering, H; Aleman, A; van Laar, T

    2016-06-01

    Little is known about visual hallucinations (VH) in psychosis. We investigated the prevalence and the role of bottom-up and top-down processing in VH. The prevailing view is that VH are probably related to altered top-down processing, rather than to distorted bottom-up processing. Conversely, VH in Parkinson's disease are associated with impaired visual perception and attention, as proposed by the Perception and Attention Deficit (PAD) model. Auditory hallucinations (AH) in psychosis, however, are thought to be related to increased attention. Our retrospective database study included 1119 patients with non-affective psychosis and 586 controls. The Community Assessment of Psychic Experiences established the VH rate. Scores on visual perception tests [Degraded Facial Affect Recognition (DFAR), Benton Facial Recognition Task] and attention tests [Response Set-shifting Task, Continuous Performance Test-HQ (CPT-HQ)] were compared between 75 VH patients, 706 non-VH patients and 485 non-VH controls. The lifetime VH rate was 37%. The patient groups performed similarly on cognitive tasks; both groups showed worse perception (DFAR) than controls. Non-VH patients showed worse attention (CPT-HQ) than controls, whereas VH patients did not perform differently. We did not find significant VH-related impairments in bottom-up processing or direct top-down alterations. However, the results suggest a relatively spared attentional performance in VH patients, whereas face perception and processing speed were equally impaired in both patient groups relative to controls. This would match better with the increased attention hypothesis than with the PAD model. Our finding that VH frequently co-occur with AH may support an increased attention-induced 'hallucination proneness'.

  7. Hemisphere-Dependent Attentional Modulation of Human Parietal Visual Field Representations

    PubMed Central

    Silver, Michael A.

    2015-01-01

    Posterior parietal cortex contains several areas defined by topographically organized maps of the contralateral visual field. However, recent studies suggest that ipsilateral stimuli can elicit larger responses in the right than left hemisphere within these areas, depending on task demands. Here we determined the effects of spatial attention on the set of visual field locations (the population receptive field [pRF]) that evoked a response for each voxel in human topographic parietal cortex. A two-dimensional Gaussian was used to model the pRF in each voxel, and we measured the effects of attention on not only the center (preferred visual field location) but also the size (visual field extent) of the pRF. In both hemispheres, larger pRFs were associated with attending to the mapping stimulus compared with attending to a central fixation point. In the left hemisphere, attending to the stimulus also resulted in more peripheral preferred locations of contralateral representations, compared with attending fixation. These effects of attention on both pRF size and preferred location preserved contralateral representations in the left hemisphere. In contrast, attentional modulation of pRF size but not preferred location significantly increased representation of the ipsilateral (right) visual hemifield in right parietal cortex. Thus, attention effects in topographic parietal cortex exhibit hemispheric asymmetries similar to those seen in hemispatial neglect. Our findings suggest potential mechanisms underlying the behavioral deficits associated with this disorder. PMID:25589746
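    The voxel-wise model is a two-dimensional Gaussian over visual-field position, so the reported attention effects reduce to changes in its center and its size. A minimal NumPy sketch (the grid, parameters, and stimulus are arbitrary, not the study's mapping protocol):

```python
import numpy as np

def prf(x0, y0, sigma, extent=10.0, n=101):
    """2D Gaussian population receptive field on a visual-field grid.

    x0, y0: preferred location (deg); sigma: pRF size (deg).
    """
    xs = np.linspace(-extent, extent, n)
    X, Y = np.meshgrid(xs, xs)
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return g / g.sum()  # normalize so responses compare across sizes

def response(prf_map, stim_mask):
    # Predicted voxel response = overlap of the stimulus with the pRF.
    return (prf_map * stim_mask).sum()

# Enlarging sigma (as when attending the mapping stimulus) extends the
# pRF's reach, so a peripheral stimulus drives the voxel more strongly
# even though the preferred location is unchanged.
grid = np.zeros((101, 101))
grid[:, 80:] = 1.0                       # right-peripheral stimulus (x >= 6 deg)
weak = response(prf(0, 0, 1.5), grid)    # small pRF, fixation attended
strong = response(prf(0, 0, 4.0), grid)  # enlarged pRF, stimulus attended
```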

  8. Attention Determines Contextual Enhancement versus Suppression in Human Primary Visual Cortex.

    PubMed

    Flevaris, Anastasia V; Murray, Scott O

    2015-09-02

    Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus the exact same contextual stimulus arrangement can have completely opposite effects on neural responses: in some cases leading to orientation-tuned suppression and in other cases leading to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depends on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in the primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and underlying neural mechanisms are debated. Here we show that the response in V1 to a stimulus in the same context can either be suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well established properties of visual cortical responses: response normalization and feature-based enhancement. Copyright © 2015 the authors 0270-6474/15/3512273-08$15.00/0.
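    The sign flip the model explains can be reproduced with a schematic divisive-normalization equation in which feature-based attention multiplies the drive to the attended orientation channel. All numbers below are arbitrary illustrations, not the paper's fitted parameters:

```python
def v1_response(drive, surround_drive, attn_gain=1.0, sigma=1.0):
    """Divisive normalization with feature-based attention.

    Attention scales the excitatory drive before it is divided by the
    suppressive pool: R = A*E / (A*E + S_surround + sigma).
    """
    e = attn_gain * drive
    return e / (e + surround_drive + sigma)

center_drive = 1.0

# Iso-oriented flankers feed the suppressive pool more than orthogonal
# ones, so without attention the iso arrangement is suppressive...
iso_unattended = v1_response(center_drive, surround_drive=2.0)
ortho_unattended = v1_response(center_drive, surround_drive=0.5)

# ...but feature-based attention to the shared (iso) orientation boosts
# the excitatory drive, and the identical stimulus arrangement now
# yields enhancement instead of suppression.
iso_attended = v1_response(center_drive, surround_drive=2.0, attn_gain=6.0)
```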

  9. The Development of Visual Search in Infancy: Attention to Faces versus Salience

    ERIC Educational Resources Information Center

    Kwon, Mee-Kyoung; Setoodehnia, Mielle; Baek, Jongsoo; Luck, Steven J.; Oakes, Lisa M.

    2016-01-01

    Four experiments examined how faces compete with physically salient stimuli for the control of attention in 4-, 6-, and 8-month-old infants (N = 117 total). Three computational models were used to quantify physical salience. We presented infants with visual search arrays containing a face and familiar object(s), such as shoes and flowers. Six- and…

  10. A Model of the Superior Colliculus Predicts Fixation Locations during Scene Viewing and Visual Search.

    PubMed

    Adeli, Hossein; Vitu, Françoise; Zelinsky, Gregory J

    2017-02-08

    Modern computational models of attention predict fixations using saliency maps and target maps, which prioritize locations for fixation based on feature contrast and target goals, respectively. But whereas many such models are biologically plausible, none have looked to the oculomotor system for design constraints or parameter specification. Conversely, although most models of saccade programming are tightly coupled to underlying neurophysiology, none have been tested using real-world stimuli and tasks. We combined the strengths of these two approaches in MASC, a model of attention in the superior colliculus (SC) that captures known neurophysiological constraints on saccade programming. We show that MASC predicted the fixation locations of humans freely viewing naturalistic scenes and performing exemplar and categorical search tasks, a breadth achieved by no other existing model. Moreover, it did this as well or better than its more specialized state-of-the-art competitors. MASC's predictive success stems from its inclusion of high-level but core principles of SC organization: an over-representation of foveal information, size-invariant population codes, cascaded population averaging over distorted visual and motor maps, and competition between motor point images for saccade programming, all of which cause further modulation of priority (attention) after projection of saliency and target maps to the SC. Only by incorporating these organizing brain principles into our models can we fully understand the transformation of complex visual information into the saccade programs underlying movements of overt attention. With MASC, a theoretical footing now exists to generate and test computationally explicit predictions of behavioral and neural responses in visually complex real-world contexts. 
SIGNIFICANCE STATEMENT The superior colliculus (SC) performs a visual-to-motor transformation vital to overt attention, but existing SC models cannot predict saccades to visually complex real-world stimuli. We introduce a brain-inspired SC model that outperforms state-of-the-art image-based competitors in predicting the sequences of fixations made by humans performing a range of everyday tasks (scene viewing and exemplar and categorical search), making clear the value of looking to the brain for model design. This work is significant in that it will drive new research by making computationally explicit predictions of SC neural population activity in response to naturalistic stimuli and tasks. It will also serve as a blueprint for the construction of other brain-inspired models, helping to usher in the next generation of truly intelligent autonomous systems. Copyright © 2017 the authors 0270-6474/17/371453-15$15.00/0.
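    The population-averaging principle that MASC inherits from the SC can be illustrated with a toy one-dimensional motor map: the executed saccade is the activity-weighted average over the population, not the single peak. This is a schematic of saccade averaging under invented parameters, not MASC itself:

```python
import numpy as np

# Toy 1D "motor map": each unit votes for a saccade amplitude, and the
# executed saccade is the activity-weighted (population) average.
amplitudes = np.linspace(0.0, 10.0, 101)  # preferred amplitudes (deg)

def population_saccade(activity):
    return (activity * amplitudes).sum() / activity.sum()

# Gaussian point image centered on a 6 deg target lands on target.
point_image = np.exp(-(amplitudes - 6.0) ** 2 / (2 * 1.0 ** 2))
endpoint = population_saccade(point_image)

# A competing point image (e.g. a salient distractor at 2 deg) pulls
# the average toward itself -- the "global effect" that arises when
# motor point images compete on a shared map.
competing = point_image + 0.5 * np.exp(-(amplitudes - 2.0) ** 2 / 2)
pulled = population_saccade(competing)
```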

  11. Common capacity-limited neural mechanisms of selective attention and spatial working memory encoding

    PubMed Central

    Fusser, Fabian; Linden, David E J; Rahm, Benjamin; Hampel, Harald; Haenschel, Corinna; Mayer, Jutta S

    2011-01-01

    One characteristic feature of visual working memory (WM) is its limited capacity, and selective attention has been implicated as a limiting factor. A possible reason why attention constrains the number of items that can be encoded into WM is that the two processes share limited neural resources. Functional magnetic resonance imaging (fMRI) studies have indeed demonstrated commonalities between the neural substrates of WM and attention. Here we investigated whether such overlapping activations reflect interacting neural mechanisms that could result in capacity limitations. To independently manipulate the demands on attention and WM encoding within a single task, we combined visual search and delayed discrimination of spatial locations. Participants were presented with a search array and performed easy or difficult visual search in order to encode one, three or five positions of target items into WM. Our fMRI data revealed colocalised activation for attention-demanding visual search and WM encoding in distributed posterior and frontal regions. However, further analysis yielded two patterns of results. Activity in prefrontal regions increased additively with increased demands on WM and attention, indicating regional overlap without functional interaction. Conversely, the WM load-dependent activation in visual, parietal and premotor regions was severely reduced during high attentional demand. We interpret this interaction as indicating the sites of shared capacity-limited neural resources. Our findings point to differential contributions of prefrontal and posterior regions to the common neural mechanisms that support spatial WM encoding and attention, providing new imaging evidence for attention-based models of WM encoding. PMID:21781193

  12. Attention stabilizes the shared gain of V4 populations

    PubMed Central

    Rabinowitz, Neil C; Goris, Robbe L; Cohen, Marlene; Simoncelli, Eero P

    2015-01-01

    Responses of sensory neurons represent stimulus information, but are also influenced by internal state. For example, when monkeys direct their attention to a visual stimulus, the response gain of specific subsets of neurons in visual cortex changes. Here, we develop a functional model of population activity to investigate the structure of this effect. We fit the model to the spiking activity of bilateral neural populations in area V4, recorded while the animal performed a stimulus discrimination task under spatial attention. The model reveals four separate time-varying shared modulatory signals, the dominant two of which each target task-relevant neurons in one hemisphere. In attention-directed conditions, the associated shared modulatory signal decreases in variance. This finding provides an interpretable and parsimonious explanation for previous observations that attention reduces variability and noise correlations of sensory neurons. Finally, the recovered modulatory signals reflect previous reward, and are predictive of subsequent choice behavior. DOI: http://dx.doi.org/10.7554/eLife.08998.001 PMID:26523390

  13. Situated sentence processing: the coordinated interplay account and a neurobehavioral model.

    PubMed

    Crocker, Matthew W; Knoeferle, Pia; Mayberry, Marshall R

    2010-03-01

    Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene-sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519-543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA's three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789-795). 
Finally, we argue that the mechanisms underlying interpretation, visual attention, and scene apprehension are not only in close temporal synchronization, but have co-adapted to optimize real-time visual grounding of situated spoken language, thus facilitating the association of linguistic, visual and motor representations that emerge during the course of our embodied linguistic experience in the world. Copyright 2009 Elsevier Inc. All rights reserved.

  14. Effects of Spatial and Non-Spatial Multi-Modal Cues on Orienting of Visual-Spatial Attention in an Augmented Environment

    DTIC Science & Technology

    2007-11-01

    information into awareness. Broadbent's (1958) "Filter" model of attention (see Figure 1) maps the flow of information from the senses through a number of...benefits of an attentional cueing paradigm can be explained within these models. For example, the selective filter is augmented by the information...capacity filter', while Wickens' model represents this with a limited amount of 'attentional resources' available to perception, decision making

  15. A visual salience map in the primate frontal eye field.

    PubMed

    Thompson, Kirk G; Bichot, Narcisse P

    2005-01-01

    Models of attention and saccade target selection propose that within the brain there is a topographic map of visual salience that combines bottom-up and top-down influences to identify locations for further processing. The results of a series of experiments with monkeys performing visual search tasks have identified a population of frontal eye field (FEF) visually responsive neurons that exhibit all of the characteristics of a visual salience map. The activity of these FEF neurons is not sensitive to specific features of visual stimuli; instead, it evolves over time to select the target of the search array. This selective activation reflects both the bottom-up intrinsic conspicuousness of the stimuli and the top-down knowledge and goals of the viewer. The peak response within FEF specifies the target for the overt gaze shift. However, the selective activity in FEF is not in itself a motor command because the magnitude of activation reflects the relative behavioral significance of the different stimuli in the visual scene and occurs even when no saccade is made. Identifying a visual salience map in FEF validates the theoretical concept of a salience map in many models of attention. In addition, it strengthens the emerging view that FEF is not only involved in producing overt gaze shifts, but is also important for directing covert spatial attention.

  16. Visual arts training is linked to flexible attention to local and global levels of visual stimuli.

    PubMed

    Chamberlain, Rebecca; Wagemans, Johan

    2015-10-01

    Observational drawing skill has been shown to be associated with the ability to focus on local visual details. It is unclear whether superior performance in local processing is indicative of the ability to attend to, and flexibly switch between, local and global levels of visual stimuli. It is also unknown whether these attentional enhancements remain specific to observational drawing skill or are a product of a wide range of artistic activities. The current study aimed to address these questions by testing if flexible visual processing predicts artistic group membership and observational drawing skill in a sample of first-year bachelor's degree art students (n=23) and non-art students (n=23). A pattern of local and global visual processing enhancements was found in relation to artistic group membership and drawing skill, with local processing ability found to be specifically related to individual differences in drawing skill. Enhanced global processing and more fluent switching between local and global levels of hierarchical stimuli predicted both drawing skill and artistic group membership, suggesting that these are beneficial attentional mechanisms for art-making in a range of domains. These findings support a top-down attentional model of artistic expertise and shed light on the domain specific and domain-general attentional enhancements induced by proficiency in the visual arts. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Modeling human comprehension of data visualizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzen, Laura E.; Haass, Michael Joseph; Divis, Kristin Marie

    This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.

  18. Visual search, visual streams, and visual architectures.

    PubMed

    Green, M

    1991-10-01

    Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

  19. Gaze distribution analysis and saliency prediction across age groups.

    PubMed

    Krishna, Onkar; Helo, Andrea; Rämä, Pia; Aizawa, Kiyoharu

    2018-01-01

    Knowledge of the human visual system helps to develop better computational models of visual attention. State-of-the-art models have been developed to mimic the visual attention system of young adults that, however, largely ignore the variations that occur with age. In this paper, we investigated how visual scene processing changes with age and we propose an age-adapted framework that helps to develop a computational model that can predict saliency across different age groups. Our analysis uncovers how the explorativeness of an observer varies with age, how well saliency maps of an age group agree with fixation points of observers from the same or different age groups, and how age influences the center bias tendency. We analyzed the eye movement behavior of 82 observers belonging to four age groups while they explored visual scenes. Explorativeness was quantified in terms of the entropy of a saliency map, and the area under the curve (AUC) metric was used to quantify the agreement analysis and the center bias tendency. Analysis results were used to develop age-adapted saliency models. Our results suggest that the proposed age-adapted saliency model outperforms existing saliency models in predicting the regions of interest across age groups.
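
    The entropy quantification mentioned above amounts to treating a normalized saliency map as a probability distribution over pixels and computing its Shannon entropy; a flat map (spread-out, explorative gaze) has maximal entropy, a single hotspot has near-zero entropy. A minimal sketch, assuming only that the map is non-negative:

```python
import numpy as np

def saliency_entropy(smap, eps=1e-12):
    """Shannon entropy (in bits) of a saliency map normalized to a
    probability distribution over pixels; higher entropy corresponds to
    more spread-out (explorative) predicted gaze."""
    p = smap.astype(float).ravel()
    p = p / (p.sum() + eps)   # normalize to sum ~1; eps guards all-zero maps
    p = p[p > 0]              # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

uniform = np.ones((8, 8))                        # maximally spread saliency
peaked = np.zeros((8, 8)); peaked[0, 0] = 1.0    # single hotspot
print(saliency_entropy(uniform))  # ~6.0 bits (64 equally likely pixels)
print(saliency_entropy(peaked))   # ~0.0 bits
```

    The AUC metric used for the agreement and center-bias analyses would be computed separately, scoring the map as a binary classifier of fixated vs. non-fixated pixels.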

  20. Attention, working memory, and phenomenal experience of WM content: memory levels determined by different types of top-down modulation.

    PubMed

    Jacob, Jane; Jacobs, Christianne; Silvanto, Juha

    2015-01-01

    What is the role of top-down attentional modulation in consciously accessing working memory (WM) content? In influential WM models, information can exist in different states, determined by allocation of attention; placing the original memory representation in the center of focused attention gives rise to conscious access. Here we discuss various lines of evidence indicating that such attentional modulation is not sufficient for memory content to be phenomenally experienced. We propose that, in addition to attentional modulation of the memory representation, another type of top-down modulation is required: suppression of all incoming visual information, via inhibition of early visual cortex. In this view, there are three distinct memory levels, as a function of the top-down control associated with them: (1) Nonattended, nonconscious associated with no attentional modulation; (2) attended, phenomenally nonconscious memory, associated with attentional enhancement of the actual memory trace; (3) attended, phenomenally conscious memory content, associated with enhancement of the memory trace and top-down suppression of all incoming visual input.

  1. Serial vs. parallel models of attention in visual search: accounting for benchmark RT-distributions.

    PubMed

    Moran, Rani; Zehetleitner, Michael; Liesefeld, Heinrich René; Müller, Hermann J; Usher, Marius

    2016-10-01

    Visual search is central to the investigation of selective visual attention. Classical theories propose that items are identified by serially deploying focal attention to their locations. While this accounts for set-size effects over a continuum of task difficulties, it has been suggested that parallel models can account for such effects equally well. We compared the serial Competitive Guided Search model with a parallel model in their ability to account for RT distributions and error rates from a large visual search data-set featuring three classical search tasks: 1) a spatial configuration search (2 vs. 5); 2) a feature-conjunction search; and 3) a unique feature search (Wolfe, Palmer & Horowitz Vision Research, 50(14), 1304-1311, 2010). In the parallel model, each item is represented by a diffusion to two boundaries (target-present/absent); the search corresponds to a parallel race between these diffusors. The parallel model was highly flexible in that it allowed both for a parametric range of capacity-limitation and for set-size adjustments of identification boundaries. Furthermore, a quit unit allowed for a continuum of search-quitting policies when the target is not found, with "single-item inspection" and exhaustive searches comprising its extremes. The serial model was found to be superior to the parallel model, even before penalizing the parallel model for its increased complexity. We discuss the implications of the results and the need for future studies to resolve the debate.
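
    The parallel model described above, each display item diffusing toward a target-present or target-absent boundary, with "present" declared at the first positive crossing and "absent" once every item has finished, can be sketched as follows. All parameter values here are illustrative assumptions, not the fitted parameters of the paper's model, and the quit unit and capacity limitations are omitted:

```python
import numpy as np

def parallel_race(n_items, target_present, drift=0.15, noise=1.0,
                  bound=10.0, dt=1.0, max_steps=5000, rng=None):
    """Toy parallel race of diffusors: each item accumulates noisy evidence
    toward +bound ('this item is the target') or -bound ('distractor').
    Returns a (response, RT-in-steps) pair."""
    rng = np.random.default_rng(rng)
    drifts = np.full(n_items, -drift)   # distractors drift toward -bound
    if target_present:
        drifts[0] = drift               # item 0 is the target
    x = np.zeros(n_items)
    done = np.zeros(n_items, dtype=bool)
    for t in range(1, max_steps + 1):
        active = ~done
        x[active] += drifts[active] * dt \
            + noise * np.sqrt(dt) * rng.standard_normal(active.sum())
        if (x >= bound).any():          # any item crossing +bound ends the race
            return "present", t
        done |= x <= -bound             # items rejected as distractors
        if done.all():                  # exhaustive search ends in "absent"
            return "absent", t
    return "timeout", max_steps

resp, rt = parallel_race(n_items=8, target_present=True, rng=0)
print(resp, rt)
```

    Running this across set sizes and collecting RTs yields the cumulative RT distributions on which such serial-vs.-parallel comparisons are made; errors arise when noise drives the target to the wrong boundary.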

  2. The attentive brain: insights from developmental cognitive neuroscience.

    PubMed

    Amso, Dima; Scerif, Gaia

    2015-10-01

    Visual attention functions as a filter to select environmental information for learning and memory, making it the first step in the eventual cascade of thought and action systems. Here, we review studies of typical and atypical visual attention development and explain how they offer insights into the mechanisms of adult visual attention. We detail interactions between visual processing and visual attention, as well as the contribution of visual attention to memory. Finally, we discuss genetic mechanisms underlying attention disorders and how attention may be modified by training.

  3. Role of Oculoproprioception in Coding the Locus of Attention.

    PubMed

    Odoj, Bartholomaeus; Balslev, Daniela

    2016-03-01

    The most common neural representations for spatial attention encode locations retinotopically, relative to center of gaze. To keep track of visual objects across saccades or to orient toward sounds, retinotopic representations must be combined with information about the rotation of one's own eyes in the orbits. Although gaze input is critical for a correct allocation of attention, the source of this input has so far remained unidentified. Two main signals are available: corollary discharge (copy of oculomotor command) and oculoproprioception (feedback from extraocular muscles). Here we asked whether the oculoproprioceptive signal relayed from the somatosensory cortex contributes to coding the locus of attention. We used continuous theta burst stimulation (cTBS) over a human oculoproprioceptive area in the postcentral gyrus (S1EYE). S1EYE-cTBS reduces proprioceptive processing, causing ∼1° underestimation of gaze angle. Participants discriminated visual targets whose location was cued in a nonvisual modality. Throughout the visual space, S1EYE-cTBS shifted the locus of attention away from the cue by ∼1°, in the same direction and by the same magnitude as the oculoproprioceptive bias. This systematic shift cannot be attributed to visual mislocalization. Accuracy of open-loop pointing to the same visual targets, a function thought to rely mainly on the corollary discharge, was unchanged. We argue that oculoproprioception is selective for attention maps. By identifying a potential substrate for the coupling between eye and attention, this study contributes to the theoretical models for spatial attention.

  4. Perceptual learning effect on decision and confidence thresholds.

    PubMed

    Solovey, Guillermo; Shalom, Diego; Pérez-Schuster, Verónica; Sigman, Mariano

    2016-10-01

    Practice can enhance perceptual sensitivity, a well-known phenomenon called perceptual learning. However, the effect of practice on subjective perception has received little attention. We approach this problem from a visual psychophysics and computational modeling perspective. In a sequence of visual search experiments, subjects significantly increased their ability to detect a "trained target". Before and after training, subjects performed two psychophysical protocols that parametrically vary the visibility of the "trained target": an attentional blink and a visual masking task. We found that confidence increased after learning only in the attentional blink task. Despite large differences in some observables and task settings, we identify common mechanisms for decision-making and confidence. Specifically, our behavioral results and computational model suggest that perceptual ability is independent of processing time, indicating that changes in early cortical representations are effective, and learning changes decision criteria to convey choice and confidence. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Dissociable Modulation of Overt Visual Attention in Valence and Arousal Revealed by Topology of Scan Path

    PubMed Central

    Ni, Jianguang; Jiang, Huihui; Jin, Yixiang; Chen, Nanhui; Wang, Jianhong; Wang, Zhengbo; Luo, Yuejia; Ma, Yuanye; Hu, Xintian

    2011-01-01

    Emotional stimuli have evolutionary significance for the survival of organisms; therefore, they are attention-grabbing and are processed preferentially. The neural underpinnings of two principal emotional dimensions in affective space, valence (degree of pleasantness) and arousal (intensity of evoked emotion), have been shown to be dissociable in the olfactory, gustatory and memory systems. However, the separable roles of valence and arousal in scene perception are poorly understood. In this study, we asked how these two emotional dimensions modulate overt visual attention. Twenty-two healthy volunteers freely viewed images from the International Affective Picture System (IAPS) that were graded for affective levels of valence and arousal (high, medium, and low). Subjects' heads were immobilized and eye movements were recorded by camera to track overt shifts of visual attention. Algebraic graph-based approaches were introduced to model scan paths as weighted undirected path graphs, generating global topology metrics that characterize the algebraic connectivity of scan paths. Our data suggest that human subjects show different scanning patterns to stimuli with different affective ratings. Valence-salient stimuli (with neutral arousal) elicited faster and larger shifts of attention, while arousal-salient stimuli (with neutral valence) elicited local scanning, dense attention allocation and deep processing. Furthermore, our model revealed that the modulatory effect of valence was linearly related to the valence level, whereas the relation between the modulatory effect and the level of arousal was nonlinear. Hence, visual attention seems to be modulated by mechanisms that are separate for valence and arousal. PMID:21494331
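
    The graph construction described above can be sketched directly: consecutive fixations become edges of a weighted undirected path graph, and the algebraic connectivity is the second-smallest eigenvalue of the graph Laplacian (the Fiedler value). The inverse-saccade-length edge weighting below is an illustrative choice, not necessarily the weighting used in the study:

```python
import numpy as np

def scanpath_connectivity(fixations):
    """Build a weighted undirected path graph over a fixation sequence
    (edge weight = inverse saccade length, a hypothetical weighting) and
    return its algebraic connectivity: the second-smallest eigenvalue of
    the graph Laplacian L = D - W."""
    pts = np.asarray(fixations, dtype=float)
    n = len(pts)
    W = np.zeros((n, n))
    for i in range(n - 1):  # consecutive fixations share an edge
        d = np.linalg.norm(pts[i + 1] - pts[i])
        W[i, i + 1] = W[i + 1, i] = 1.0 / (d + 1e-9)
    L = np.diag(W.sum(axis=1)) - W
    return float(np.sort(np.linalg.eigvalsh(L))[1])

# Dense local scanning vs. long sweeping saccades over four fixations:
local = [(0, 0), (1, 0), (1, 1), (0, 1)]
sweep = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(scanpath_connectivity(local) > scanpath_connectivity(sweep))  # True
```

    Under this weighting, tight local scanning yields higher algebraic connectivity than widely swept paths, giving a single global metric that distinguishes the two scanning patterns.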

  6. Attentional gating models of object substitution masking.

    PubMed

    Põder, Endel

    2013-11-01

    Di Lollo, Enns, and Rensink (2000) proposed the computational model of object substitution (CMOS) to explain their experimental results with sparse visual maskers. This model supposedly is based on reentrant hypotheses testing in the visual system, and the modeled experiments are believed to demonstrate these reentrant processes in human vision. In this study, I analyze the main assumptions of this model. I argue that CMOS is a version of the attentional gating model and that its relationship with reentrant processing is rather illusory. The fit of this model to the data indicates that reentrant hypotheses testing is not necessary for the explanation of object substitution masking (OSM). Further, the original CMOS cannot predict some important aspects of the experimental data. I test 2 new models incorporating an unselective processing (divided attention) stage; these models are more consistent with data from OSM experiments. My modeling shows that the apparent complexity of OSM can be reduced to a few simple and well-known mechanisms of perception and memory. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  7. The Role of Visual Processing Speed in Reading Speed Development

    PubMed Central

    Lobier, Muriel; Dubois, Matthieu; Valdois, Sylviane

    2013-01-01

    A steady increase in reading speed is the hallmark of normal reading acquisition. However, little is known of the influence of visual attention capacity on children's reading speed. The number of distinct visual elements that can be simultaneously processed at a glance (dubbed the visual attention span), predicts single-word reading speed in both normal reading and dyslexic children. However, the exact processes that account for the relationship between the visual attention span and reading speed remain to be specified. We used the Theory of Visual Attention to estimate visual processing speed and visual short-term memory capacity from a multiple letter report task in eight and nine year old children. The visual attention span and text reading speed were also assessed. Results showed that visual processing speed and visual short term memory capacity predicted the visual attention span. Furthermore, visual processing speed predicted reading speed, but visual short term memory capacity did not. Finally, the visual attention span mediated the effect of visual processing speed on reading speed. These results suggest that visual attention capacity could constrain reading speed in elementary school children. PMID:23593117

  9. Changing the Spatial Scope of Attention Alters Patterns of Neural Gain in Human Cortex

    PubMed Central

    Garcia, Javier O.; Rungratsameetaweemana, Nuttida; Sprague, Thomas C.

    2014-01-01

    Over the last several decades, spatial attention has been shown to influence the activity of neurons in visual cortex in various ways. These conflicting observations have inspired competing models to account for the influence of attention on perception and behavior. Here, we used electroencephalography (EEG) to assess steady-state visual evoked potentials (SSVEP) in human subjects and showed that highly focused spatial attention primarily enhanced neural responses to high-contrast stimuli (response gain), whereas distributed attention primarily enhanced responses to medium-contrast stimuli (contrast gain). Together, these data suggest that different patterns of neural modulation do not reflect fundamentally different neural mechanisms, but instead reflect changes in the spatial extent of attention. PMID:24381272

  10. Cognitive and Neural Bases of Skilled Performance

    DTIC Science & Technology

    1989-05-12

    Pergamon, 1958. Broadbent, D.F., A mechanical model for human attention and immediate memory. Psychol. Rev., 64: 205-215, 1957. Cherry, C. On the...material on the efficiency of selective listening. Amer. J. Psychol., 77: 533-546, 1964. Treisman, A. Strategies and models of selective attention ...cortex reveal strong effects of attention, these results suggest that the visual attentional "filter" may be located at a later stage. This is consistent

  11. Serial and parallel attentive visual searches: evidence from cumulative distribution functions of response times.

    PubMed

    Sung, Kyongje

    2008-12-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.

  12. Video attention deviation estimation using inter-frame visual saliency map analysis

    NASA Astrophysics Data System (ADS)

    Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng

    2012-01-01

    A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., following a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determination of which video content is busy is an important practical problem; a busy video is difficult for an encoder to deploy region of interest (ROI)-based bit allocation, and hard for a content provider to insert additional overlays like advertisements, making the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of video, formally called video attention deviation (VAD): analyze the spatial visual saliency maps of the video frames across time. We first derive transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady state probability of the saccade state in the model, our estimate of VAD. We demonstrate that the computed steady state probability for saccade using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence using consecutive motion-compensated saliency maps.
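
    The two computations named above, the steady-state probability of a Markov gaze model and the KL divergence between saliency maps, can both be sketched briefly. The two-state (track/saccade) chain and its transition probabilities below are hypothetical illustrations; the paper derives its transition probabilities from the saliency maps themselves:

```python
import numpy as np

def steady_state(P):
    """Stationary distribution pi of a row-stochastic transition matrix P,
    i.e., the solution of pi @ P = pi with pi summing to 1, found by
    least squares on the augmented linear system."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two saliency maps normalized to distributions;
    eps guards against zero bins."""
    p = p.astype(float).ravel(); p = p / p.sum()
    q = q.astype(float).ravel(); q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical gaze chain over states (track, saccade): a busier video
# would have a larger track->saccade probability, hence a larger steady
# state probability of the saccade state, i.e., a larger VAD estimate.
P = np.array([[0.9, 0.1],    # track   -> (track, saccade)
              [0.6, 0.4]])   # saccade -> (track, saccade)
pi = steady_state(P)
print(pi.round(3))  # -> [0.857 0.143]; VAD estimate is pi[1]

same = kl_divergence(np.ones((4, 4)), np.ones((4, 4)))
print(same)  # identical maps -> 0.0
```

    For the segmentation step, a large KL divergence between consecutive motion-compensated saliency maps would mark a boundary between clips of different busyness.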

  13. Both memory and attention systems contribute to visual search for targets cued by implicitly learned context

    PubMed Central

    Giesbrecht, Barry; Sy, Jocelyn L.; Guerin, Scott A.

    2012-01-01

    Environmental context learned without awareness can facilitate visual processing of goal-relevant information. According to one view, the benefit of implicitly learned context relies on the neural systems involved in spatial attention and hippocampus-mediated memory. While this view has received empirical support, it contradicts traditional models of hippocampal function. The purpose of the present work was to clarify the influence of spatial context on visual search performance and on brain structures involved in memory and attention. Event-related functional magnetic resonance imaging revealed that activity in the hippocampus as well as in visual and parietal cortex was modulated by learned visual context even though participants’ subjective reports and performance on a post-experiment recognition task indicated no explicit knowledge of the learned context. Moreover, the magnitude of the initial selective hippocampus response predicted the magnitude of the behavioral benefit due to context observed at the end of the experiment. The results suggest that implicit contextual learning is mediated by attention and memory and that these systems interact to support search of our environment. PMID:23099047

  14. Attentional sensitivity and asymmetries of vertical saccade generation in monkey

    NASA Technical Reports Server (NTRS)

    Zhou, Wu; King, W. M.; Shelhamer, M. J. (Principal Investigator)

    2002-01-01

    The first goal of this study was to systematically document asymmetries in vertical saccade generation. We found that visually guided upward saccades have not only shorter latencies, but also higher peak velocities, shorter durations and smaller errors. The second goal was to identify possible mechanisms underlying the asymmetry in vertical saccade latencies. Based on a recent model of saccade generation, three stages of saccade generation were investigated using specific behavioral paradigms: attention shift to a visual target (CUED paradigm), initiation of saccade generation (GAP paradigm) and release of the motor command to execute the saccade (DELAY paradigm). Our results suggest that initiation of a saccade (or "ocular disengagement") and its motor release contribute little to the asymmetry in vertical saccade latency. However, analysis of saccades made in the CUED paradigm indicated that it took less time to shift attention to a target in the upper visual field than to a target in the lower visual field. These data suggest that higher attentional sensitivity to targets in the upper visual field may contribute to shorter latencies of upward saccades.

  15. Positive mood broadens visual attention to positive stimuli.

    PubMed

    Wadlinger, Heather A; Isaacowitz, Derek M

    2006-03-01

    In an attempt to investigate the impact of positive emotions on visual attention within the context of Fredrickson's (1998) broaden-and-build model, eye tracking was used in two studies to measure visual attentional preferences of college students (n=58, n=26) to emotional pictures. Half of each sample experienced induced positive mood immediately before viewing slides of three similarly-valenced images, in varying central-peripheral arrays. Attentional breadth was determined by measuring the percentage viewing time to peripheral images as well as by the number of visual saccades participants made per slide. Consistent with Fredrickson's theory, the first study showed that individuals induced into positive mood fixated more on peripheral stimuli than did control participants; however, this only held true for highly-valenced positive stimuli. Participants under induced positive mood also made more frequent saccades for slides of neutral and positive valence. A second study showed that these effects were not simply due to differences in emotional arousal between stimuli. Selective attentional broadening to positive stimuli may act both to facilitate later building of resources as well as to maintain current positive affective states.

  16. A Parallel and Distributed Processing Model of Joint Attention, Social-Cognition and Autism

    PubMed Central

    Mundy, Peter; Sullivan, Lisa; Mastergeorge, Ann M.

    2009-01-01

    The impaired development of joint attention is a cardinal feature of autism. Therefore, understanding the nature of joint attention is central to research on this disorder. Joint attention may be best defined in terms of an information processing system that begins to develop by 4–6 months of age. This system integrates the parallel processing of internal information about one’s own visual attention with external information about the visual attention of other people. This type of joint encoding of information about self and other attention requires the activation of a distributed anterior and posterior cortical attention network. Genetic regulation, in conjunction with self-organizing behavioral activity, guides the development of functional connectivity in this network. With practice in infancy, the joint processing of self-other attention becomes automatically engaged as an executive function. It can be argued that this executive joint-attention is fundamental to human learning, as well as the development of symbolic thought, social-cognition and social-competence throughout the life span. One advantage of this parallel and distributed processing model of joint attention (PDPM) is that it directly connects theory on social pathology to a range of phenomena in autism associated with neural connectivity, constructivist and connectionist models of cognitive development, early intervention, activity-dependent gene expression, and atypical ocular motor control. PMID:19358304

  17. Attention enhances multi-voxel representation of novel objects in frontal, parietal and visual cortices.

    PubMed

    Woolgar, Alexandra; Williams, Mark A; Rich, Anina N

    2015-04-01

    Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended--and not distractor--objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. A deep (learning) dive into visual search behaviour of breast radiologists

    NASA Astrophysics Data System (ADS)

    Mall, Suneeta; Brennan, Patrick C.; Mello-Thoms, Claudia

    2018-03-01

    Visual search, the process of detecting and identifying objects using eye movements (saccades) and foveal vision, has been studied for identification of root causes of errors in the interpretation of mammography. The aim of this study is to model visual search behaviour of radiologists and their interpretation of mammograms using deep machine learning approaches. Our model is based on a deep convolutional neural network, a biologically-inspired multilayer architecture that simulates the visual cortex, and is reinforced with transfer learning techniques. Eye tracking data obtained from 8 radiologists (of varying experience levels in reading mammograms) reviewing 120 two-view digital mammography cases (59 cancers) have been used to train the model, which was pre-trained with the ImageNet dataset for transfer learning. Areas of the mammogram that received direct (foveally fixated), indirect (peripherally fixated) or no (never fixated) visual attention were extracted from radiologists' visual search maps (obtained by a head-mounted eye tracking device). These areas, along with the radiologists' assessment (including confidence of the assessment) of suspected malignancy were used to model: 1) Radiologists' decision; 2) Radiologists' confidence in such decision; and 3) The attentional level (i.e. foveal, peripheral or none) obtained by an area of the mammogram. Our results indicate high accuracy and low misclassification in modelling such behaviours.

  19. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task

    PubMed Central

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-01-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
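
    The priority-map computation this abstract describes can be sketched as a target-weighted drive divided by pooled local activity. This is a schematic illustration under assumed shapes and constants, not the authors' implementation.

```python
import numpy as np

# feature_maps: responses of shape-selective units at each retinotopic
# location; w: top-down, target-specific feedback gains. Both are random
# placeholders for illustration.
rng = np.random.default_rng(0)
feature_maps = rng.random((8, 16, 16))   # 8 feature channels, 16x16 grid
w = rng.random(8)                        # target-specific gains

def priority_map(feature_maps, w, sigma=1.0):
    # Target-modulated drive at each location
    drive = np.tensordot(w, feature_maps, axes=(0, 0))
    # Divisive normalization by pooled local activity keeps salient but
    # task-irrelevant features from monopolizing the map
    norm = sigma + feature_maps.sum(axis=0)
    return drive / norm

pm = priority_map(feature_maps, w)
focus = np.unravel_index(np.argmax(pm), pm.shape)  # next locus of attention
```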

  20. Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach.

    PubMed

    Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik

    2015-01-01

    Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10-150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digits (distractors). Graphemes were either congruently or incongruently colored with the synesthetes' reported grapheme-color association. A mathematical model, based on Bundesen's (1990) Theory of Visual Attention (TVA), was fitted to each observer's data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group's model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes' expertise regarding their specific grapheme-color associations.

  1. The scope and control of attention as separate aspects of working memory.

    PubMed

    Shipstead, Zach; Redick, Thomas S; Hicks, Kenny L; Engle, Randall W

    2012-01-01

    The present study examines two varieties of working memory (WM) capacity task: visual arrays (i.e., a measure of the amount of information that can be maintained in working memory) and complex span (i.e., a task that taps WM-related attentional control). Using previously collected data sets we employ confirmatory factor analysis to demonstrate that visual arrays and complex span tasks load on separate, but correlated, factors. A subsequent series of structural equation models and regression analyses demonstrate that these factors contribute both common and unique variance to the prediction of general fluid intelligence (Gf). However, while visual arrays does contribute uniquely to higher cognition, its overall correlation to Gf is largely mediated by variance associated with the complex span factor. Thus we argue that visual arrays performance is not strictly driven by a limited-capacity storage system (e.g., the focus of attention; Cowan, 2001), but may also rely on control processes such as selective attention and controlled memory search.

  2. Serial grouping of 2D-image regions with object-based attention in humans.

    PubMed

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-06-13

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.

  3. Visual Attention during Spatial Language Comprehension

    PubMed Central

    Burigo, Michele; Knoeferle, Pia

    2015-01-01

    Spatial terms such as “above”, “in front of”, and “on the left of” are all essential for describing the location of one object relative to another object in everyday communication. Apprehending such spatial relations involves relating linguistic to object representations by means of attention. This requires at least one attentional shift, and models such as the Attentional Vector Sum (AVS) predict the direction of that attention shift, from the sausage to the box for spatial utterances such as “The box is above the sausage”. To the extent that this prediction generalizes to overt gaze shifts, a listener’s visual attention should shift from the sausage to the box. However, listeners tend to rapidly look at referents in their order of mention and even anticipate them based on linguistic cues, a behavior that predicts a converse attentional shift from the box to the sausage. Four eye-tracking experiments assessed the role of overt attention in spatial language comprehension by examining to what extent visual attention is guided by words in the utterance and to what extent it also shifts “against the grain” of the unfolding sentence. The outcome suggests that comprehenders’ visual attention is predominantly guided by their interpretation of the spatial description. Visual shifts against the grain occurred only when comprehenders had some extra time, and their absence did not affect comprehension accuracy. However, the timing of this reverse gaze shift on a trial correlated with that trial’s verification time. Thus, while the timing of these gaze shifts is subtly related to the verification time, their presence is not necessary for successful verification of spatial relations. PMID:25607540

  4. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. Attentional models of multitask pilot performance using advanced display technology.

    PubMed

    Wickens, Christopher D; Goh, Juliana; Helleberg, John; Horrey, William J; Talleur, Donald A

    2003-01-01

    In the first part of the reported research, 12 instrument-rated pilots flew a high-fidelity simulation, in which air traffic control presentation of auditory (voice) information regarding traffic and flight parameters was compared with advanced display technology presentation of equivalent information regarding traffic (cockpit display of traffic information) and flight parameters (data link display). Redundant combinations were also examined while pilots flew the aircraft simulation, monitored for outside traffic, and read back communications messages. The data suggested a modest cost for visual presentation over auditory presentation, a cost mediated by head-down visual scanning, and no benefit for redundant presentation. The effects in Part 1 were modeled by multiple-resource and preemption models of divided attention. In the second part of the research, visual scanning in all conditions was fit by an expected value model of selective attention derived from a previous experiment. This model accounted for 94% of the variance in the scanning data and 90% of the variance in a second validation experiment. Actual or potential applications of this research include guidance on choosing the appropriate modality for presenting in-cockpit information and understanding task strategies induced by introducing new aviation technology.
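
    The expected-value account of visual scanning mentioned in this abstract can be sketched as follows: the predicted proportion of dwell time on each display area grows with the product of event expectancy (bandwidth) and task value. The areas and numbers below are illustrative assumptions, not the fitted coefficients from the study.

```python
# Hypothetical display areas with assumed expectancy (event rate) and
# value (task importance) parameters.
areas = {
    "outside_world": {"expectancy": 0.5, "value": 0.9},
    "instruments":   {"expectancy": 0.3, "value": 0.7},
    "data_link":     {"expectancy": 0.2, "value": 0.4},
}

# Expected value of attending each area = expectancy * value;
# predicted dwell proportions are the normalized expected values.
scores = {a: p["expectancy"] * p["value"] for a, p in areas.items()}
total = sum(scores.values())
predicted_dwell = {a: s / total for a, s in scores.items()}  # sums to 1
```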

  6. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    PubMed

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprising a finite number of noisy stimulus samples. The model predicts the invariance of ∑d′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
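
    The invariance prediction of the sample-size model can be stated in a few lines: if N noisy samples are split across m display items, sensitivity for item i scales as d′ᵢ = c·√nᵢ, so ∑d′ᵢ² = c²·N regardless of set size. The constants below are illustrative, not fitted values from the paper.

```python
import math

c, N = 1.3, 100  # assumed scaling constant and total sample budget

def sensitivities(m, w=None):
    """Per-item d' for m items; w = optional attention weights (sum to 1)."""
    w = w or [1.0 / m] * m          # equal split in the plain model
    return [c * math.sqrt(N * wi) for wi in w]

# Plain model: sum of squared d' is the same for 2 and 6 items (c^2 * N).
plain_2 = sum(d ** 2 for d in sensitivities(2))
plain_6 = sum(d ** 2 for d in sensitivities(6))
# Attention-weighted version: one item captures a disproportionate share,
# which redistributes (but does not increase) the total resource.
weighted = sensitivities(4, w=[0.4, 0.2, 0.2, 0.2])
```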

  7. Measuring the effect of attention on simple visual search.

    PubMed

    Palmer, J; Ames, C T; Lindsey, D T

    1993-02-01

    Set-size effects in visual search may be due to 1 or more of 3 factors: sensory processes such as lateral masking between stimuli, attentional processes limiting the perception of individual stimuli, or attentional processes affecting the decision rules for combining information from multiple stimuli. These possibilities were evaluated in tasks such as searching for a longer line among shorter lines. To evaluate sensory contributions, display set-size effects were compared with cuing conditions that held sensory phenomena constant. Similar effects for the display and cue manipulations suggested that sensory processes contributed little under the conditions of this experiment. To evaluate the contribution of decision processes, the set-size effects were modeled with signal detection theory. In these models, a decision effect alone was sufficient to predict the set-size effects without any attentional limitation due to perception.
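
    The decision-level account in this abstract can be illustrated with a standard unlimited-capacity signal-detection simulation: each of m items yields a noisy observation, and the observer reports "target present" if the maximum exceeds a criterion. Accuracy then falls with set size purely through the decision rule, with no perceptual limit. The d′ and criterion values below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def pct_correct(m, d_prime=2.0, n_trials=20000):
    """Monte Carlo accuracy for a max-rule observer with m display items."""
    correct = 0
    for _ in range(n_trials):
        target_present = rng.random() < 0.5
        obs = rng.normal(0.0, 1.0, m)   # unit-variance noise at each item
        if target_present:
            obs[0] += d_prime           # target adds signal to one item
        resp = obs.max() > d_prime / 2.0  # max rule with a fixed criterion
        correct += resp == target_present
    return correct / n_trials

# Accuracy declines as m grows, even though each item is perceived equally well.
acc_small, acc_large = pct_correct(2), pct_correct(8)
```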

  8. Preservation of crossmodal selective attention in healthy aging

    PubMed Central

    Hugenschmidt, Christina E.; Peiffer, Ann M.; McCoy, Thomas P.; Hayasaka, Satoru; Laurienti, Paul J.

    2010-01-01

    The goal of the present study was to determine if older adults benefited from attention to a specific sensory modality in a voluntary attention task and evidenced changes in voluntary or involuntary attention when compared to younger adults. Suppressing and enhancing effects of voluntary attention were assessed using two cued forced-choice tasks, one that asked participants to localize and one that asked them to categorize visual and auditory targets. Involuntary attention was assessed using the same tasks, but with no attentional cues. The effects of attention were evaluated using traditional comparisons of means and Cox proportional hazards models. All analyses showed that older adults benefited behaviorally from selective attention in both visual and auditory conditions, including robust suppressive effects of attention. Of note, the performance of the older adults was commensurate with that of younger adults in almost all analyses, suggesting that older adults can successfully engage crossmodal attention processes. Thus, age-related increases in distractibility across sensory modalities are likely due to mechanisms other than deficits in attentional processing. PMID:19404621

  9. Beyond time and space: The effect of a lateralized sustained attention task and brain stimulation on spatial and selective attention.

    PubMed

    Shalev, Nir; De Wandel, Linde; Dockree, Paul; Demeyere, Nele; Chechlacz, Magdalena

    2017-10-03

    The Theory of Visual Attention (TVA) provides a mathematical formalisation of the "biased competition" account of visual attention. Applying this model to individual performance in a free recall task allows the estimation of 5 independent attentional parameters: visual short-term memory (VSTM) capacity, speed of information processing, perceptual threshold of visual detection; attentional weights representing spatial distribution of attention (spatial bias), and the top-down selectivity index. While the TVA focuses on selection in space, complementary accounts of attention describe how attention is maintained over time, and how temporal processes interact with selection. A growing body of evidence indicates that different facets of attention interact and share common neural substrates. The aim of the current study was to modulate a spatial attentional bias via transfer effects, based on a mechanistic understanding of the interplay between spatial, selective and temporal aspects of attention. Specifically, we examined here: (i) whether a single administration of a lateralized sustained attention task could prime spatial orienting and lead to transferable changes in attentional weights (assigned to the left vs right hemi-field) and/or other attentional parameters assessed within the framework of TVA (Experiment 1); (ii) whether the effects of such spatial-priming on TVA parameters could be further enhanced by bi-parietal high frequency transcranial random noise stimulation (tRNS) (Experiment 2). Our results demonstrate that spatial attentional bias, as assessed within the TVA framework, was primed by sustaining attention towards the right hemi-field, but this spatial-priming effect did not occur when sustaining attention towards the left. Furthermore, we show that bi-parietal high-frequency tRNS combined with the rightward spatial-priming resulted in an increased attentional selectivity. 
To conclude, we present a novel, theory-driven method for attentional modulation providing important insights into how the spatial and temporal processes in attention interact with attentional selection. Copyright © 2017 Elsevier Ltd. All rights reserved.
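
    The TVA rate equation that underlies the parameters listed above can be written compactly: the processing rate of the categorization "item x belongs to category i" is v(x,i) = η(x,i)·βᵢ·w(x)/∑w(z), where w encodes the spatial distribution of attention and β the top-down selectivity. A minimal numeric sketch, with made-up values (not estimates from this study):

```python
# Sensory evidence eta (arbitrary rate units) for two items, a rightward
# spatial bias in the attentional weights w, and a selectivity beta.
etas = {"left_item": 40.0, "right_item": 40.0}
w = {"left_item": 0.4, "right_item": 0.6}   # spatial bias toward the right
beta = 0.8                                   # top-down selectivity index

total_w = sum(w.values())
rates = {x: etas[x] * beta * w[x] / total_w for x in etas}
# rates: left 12.8, right 19.2 -> items race into limited-capacity VSTM
```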

  10. Metacontrast masking and attention do not interact.

    PubMed

    Agaoglu, Sevda; Breitmeyer, Bruno; Ogmen, Haluk

    2016-07-01

    Visual masking and attention have been known to control the transfer of information from sensory memory to visual short-term memory. A natural question is whether these processes operate independently or interact. Recent evidence suggests that studies that reported interactions between masking and attention suffered from ceiling and/or floor effects. The objective of the present study was to investigate whether metacontrast masking and attention interact by using an experimental design in which saturation effects are avoided. We asked observers to report the orientation of a target bar randomly selected from a display containing either two or six bars. The mask was a ring that surrounded the target bar. Attentional load was controlled by set-size and masking strength by the stimulus onset asynchrony between the target bar and the mask ring. We investigated interactions between masking and attention by analyzing two different aspects of performance: (i) the mean absolute response errors and (ii) the distribution of signed response errors. Our results show that attention affects observers' performance without interacting with masking. Statistical modeling of response errors suggests that attention and metacontrast masking exert their effects by independently modulating the probability of "guessing" behavior. Implications of our findings for models of attention are discussed.

  11. The Effects of Spatial Endogenous Pre-cueing across Eccentricities

    PubMed Central

    Feng, Jing; Spence, Ian

    2017-01-01

    Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, pre-cueing effects on attention, particularly endogenous pre-cueing effects, have been relatively little explored outside an eccentricity of 20°. Given that the visual field has functional subdivisions and that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, it remains unclear how endogenous pre-cues that carry spatial information about targets influence the allocation of attention across a large visual field (especially in the more peripheral areas). We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants’ ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing effects when trials of the same size cue were blocked compared to when trials were mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred at the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on the endogenous pre-cueing effects across a large visual area were summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. We discuss our findings in light of cognitive penetration of perception, and highlight the importance of examining attentional processes across a large area of the visual field. PMID:28638353

  13. A model of attention-guided visual perception and recognition.

    PubMed

    Rybak, I A; Gusakova, V I; Golovan, A V; Podladchikova, L N; Shevtsova, N A

    1998-08-01

    A model of visual perception and recognition is described. The model contains: (i) a low-level subsystem which performs both a fovea-like transformation and detection of primary features (edges), and (ii) a high-level subsystem which includes separated 'what' (sensory memory) and 'where' (motor memory) structures. Image recognition occurs during the execution of a 'behavioral recognition program' formed during the primary viewing of the image. The recognition program contains both programmed attention window movements (stored in the motor memory) and predicted image fragments (stored in the sensory memory) for each consecutive fixation. The model shows the ability to recognize complex images (e.g. faces) invariantly with respect to shift, rotation and scale.

  14. Keeping your eyes on the prize: anger and visual attention to threats and rewards.

    PubMed

    Ford, Brett Q; Tamir, Maya; Brunyé, Tad T; Shirer, William R; Mahoney, Caroline R; Taylor, Holly A

    2010-08-01

    People's emotional states influence what they focus their attention on in their environment. For example, fear focuses people's attention on threats, whereas excitement may focus their attention on rewards. This study examined the effect of anger on overt visual attention to threats and rewards. Anger is an unpleasant emotion associated with approach motivation. If the effect of emotion on visual attention depends on valence, we would expect anger to focus people's attention on threats. If, however, the effect of emotion on visual attention depends on motivation, we would expect anger to focus people's attention on rewards. Using an eye tracker, we examined the effects of anger, fear, excitement, and a neutral emotional state on participants' overt visual attention to threatening, rewarding, and control images. We found that anger increased visual attention to rewarding information, but not to threatening information. These findings demonstrate that anger increases attention to potential rewards and suggest that the effects of emotions on visual attention are motivationally driven.

  15. Neuronal basis of covert spatial attention in the frontal eye field.

    PubMed

    Thompson, Kirk G; Biscoe, Keri L; Sato, Takashi R

    2005-10-12

    The influential "premotor theory of attention" proposes that developing oculomotor commands mediate covert visual spatial attention. A likely source of this attentional bias is the frontal eye field (FEF), an area of the frontal cortex involved in converting visual information into saccade commands. We investigated the link between FEF activity and covert spatial attention by recording from FEF visual and saccade-related neurons in monkeys performing covert visual search tasks without eye movements. Here we show that the source of attention signals in the FEF is enhanced activity of visually responsive neurons. At the time attention is allocated to the visual search target, nonvisually responsive saccade-related movement neurons are inhibited. Therefore, in the FEF, spatial attention signals are independent of explicit saccade command signals. We propose that spatially selective activity in FEF visually responsive neurons corresponds to the mental spotlight of attention via modulation of ongoing visual processing.

  16. Infant Joint Attention, Neural Networks and Social Cognition

    PubMed Central

    Mundy, Peter; Jarrold, William

    2010-01-01

    Neural network models of attention can provide a unifying approach to the study of human cognitive and emotional development (Posner & Rothbart, 2007). In this paper we argue that a neural networks approach to the infant development of joint attention can inform our understanding of the nature of human social learning, symbolic thought processes and social cognition. At its most basic, joint attention involves the capacity to coordinate one’s own visual attention with that of another person. We propose that joint attention development involves increments in the capacity to engage in simultaneous or parallel processing of information about one’s own attention and the attention of other people. Infant practice with joint attention is both a consequence and an organizer of the development of a distributed and integrated brain network involving frontal and parietal cortical systems. This executive distributed network first serves to regulate the capacity of infants to respond to and direct the overt behavior of other people in order to share experience with others through the social coordination of visual attention. In this paper we describe this parallel and distributed neural network model of joint attention development and discuss two hypotheses that stem from this model. One is that activation of this distributed network during coordinated attention enhances the depth of information processing and encoding beginning in the first year of life. We also propose that with development joint attention becomes internalized as the capacity to socially coordinate mental attention to internal representations. As this occurs the executive joint attention network makes vital contributions to the development of human symbolic thinking and social cognition. PMID:20884172

  17. A Brief Period of Postnatal Visual Deprivation Alters the Balance between Auditory and Visual Attention.

    PubMed

    de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier

    2016-11-21

    Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli.
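Miller's race-model test compares the empirical CDF of redundant-target (audiovisual) reaction times against the bound F_A(t) + F_V(t); exceeding the bound indicates genuine multisensory integration rather than mere statistical facilitation. A sketch with simulated reaction times (the distributions below are arbitrary, not the study's data):

```python
import numpy as np

def ecdf(samples, t):
    """Empirical CDF of reaction times evaluated at times t."""
    samples = np.sort(samples)
    return np.searchsorted(samples, t, side="right") / len(samples)

def race_model_violation(rt_a, rt_v, rt_av, t):
    """Miller's inequality: F_AV(t) <= F_A(t) + F_V(t).
    Positive values mean the redundant-target CDF exceeds the race bound."""
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return ecdf(rt_av, t) - bound

# Simulated data: redundant-target RTs much faster than either unimodal condition
rng = np.random.default_rng(1)
rt_a  = rng.normal(320, 40, 500)   # auditory-only RTs (ms)
rt_v  = rng.normal(350, 40, 500)   # visual-only RTs (ms)
rt_av = rng.normal(250, 30, 500)   # audiovisual RTs: strong facilitation
t = np.linspace(150, 450, 61)
violation = race_model_violation(rt_a, rt_v, rt_av, t)
print(f"max violation: {violation.max():.3f}")
```

A positive maximum violation, as in this simulation, is the signature both groups in the study showed.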

  18. Distinct neural markers of TVA-based visual processing speed and short-term storage capacity parameters.

    PubMed

    Wiegand, Iris; Töllner, Thomas; Habekost, Thomas; Dyrholm, Mads; Müller, Hermann J; Finke, Kathrin

    2014-08-01

    An individual's visual attentional capacity is characterized by 2 central processing resources, visual perceptual processing speed and visual short-term memory (vSTM) storage capacity. Based on Bundesen's theory of visual attention (TVA), independent estimates of these parameters can be obtained from mathematical modeling of performance in a whole report task. The framework's neural interpretation (NTVA) further suggests distinct brain mechanisms underlying these 2 functions. Using an interindividual difference approach, the present study was designed to establish the respective ERP correlates of both parameters. Participants with higher compared to participants with lower processing speed were found to show significantly reduced visual N1 responses, indicative of higher efficiency in early visual processing. By contrast, for participants with higher relative to lower vSTM storage capacity, contralateral delay activity over visual areas was enhanced while overall nonlateralized delay activity was reduced, indicating that holding (the maximum number of) items in vSTM relies on topographically specific sustained activation within the visual system. Taken together, our findings show that the 2 main aspects of visual attentional capacity are reflected in separable neurophysiological markers, validating a central assumption of NTVA.
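The two TVA parameters estimated from whole report are the processing speed C (items/s) and the vSTM capacity K (slots). A simplified Monte Carlo sketch of how they jointly shape whole-report performance: items race with exponential finishing times (rate C/N under equal attentional weights, as in TVA), and at most K winners are retained. The parameter values and the perceptual threshold t0 below are arbitrary.

```python
import numpy as np

def expected_report(C=30.0, K=4, n_items=6, exposure=0.1, t0=0.01,
                    n_sim=20000, seed=0):
    """Monte Carlo estimate of the mean number of items reported in whole
    report. Each item races with exponential rate v = C / n_items; items
    finishing before the effective exposure enter vSTM, capped at K slots."""
    rng = np.random.default_rng(seed)
    tau = max(exposure - t0, 0.0)            # effective exposure after threshold t0
    v = C / n_items                          # processing rate per item (equal weights)
    finish = rng.exponential(1.0 / v, size=(n_sim, n_items))
    encoded = (finish <= tau).sum(axis=1)
    return np.minimum(encoded, K).mean()

for ms in (50, 100, 200):
    print(f"{ms:3d} ms exposure -> {expected_report(exposure=ms / 1000):.2f} items")
```

Fitting C and K per participant amounts to finding the parameter values that best reproduce the observed score-by-exposure curve; the study then correlates those individual estimates with the ERP components.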

  19. Behavioral decoding of working memory items inside and outside the focus of attention.

    PubMed

    Mallett, Remington; Lewis-Peacock, Jarrod A

    2018-03-31

    How we attend to our thoughts affects how we attend to our environment. Holding information in working memory can automatically bias visual attention toward matching information. By observing attentional biases on reaction times to visual search during a memory delay, it is possible to reconstruct the source of that bias using machine learning techniques and thereby behaviorally decode the content of working memory. Can this be done when more than one item is held in working memory? There is some evidence that multiple items can simultaneously bias attention, but the effects have been inconsistent. One explanation may be that items are stored in different states depending on the current task demands. Recent models propose functionally distinct states of representation for items inside versus outside the focus of attention. Here, we use behavioral decoding to evaluate whether multiple memory items, including temporarily irrelevant items outside the focus of attention, exert biases on visual attention. Only the single item in the focus of attention was decodable. The other item showed a brief attentional bias that dissipated until it returned to the focus of attention. These results support the idea of dynamic, flexible states of working memory across time and priority.

  20. Effects of visual attention on chromatic and achromatic detection sensitivities.

    PubMed

    Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko

    2014-05-01

    Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on the sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual-task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while concurrently carrying out an attention task in the central visual field. Experiment 1 confirmed that, with the central attention task, peripheral spectral sensitivities were reduced more for short and long wavelengths than for middle wavelengths, so that visual attention changed the shape of the spectral sensitivity function. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. Experiment 2 showed that, in the dual-task condition, detection thresholds increased to a greater degree in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction. In experiment 3 we showed that the peripheral threshold elevations depended on the combination of the color directions of the central and peripheral stimuli. Since the chromatic and achromatic responses are processed separately in the early visual pathways, the present results provide additional evidence that visual attention affects responses in the early visual pathways.

  1. Hyperspectral image visualization based on a human visual model

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Peng, Honghong; Fairchild, Mark D.; Montag, Ethan D.

    2008-02-01

    Hyperspectral image data can provide very fine spectral resolution with more than 200 bands, yet presents challenges for visualization techniques for displaying such rich information on a tristimulus monitor. This study developed a visualization technique by taking advantage of both the consistent natural appearance of a true color image and the feature separation of a PCA image based on a biologically inspired visual attention model. The key part is to extract the informative regions in the scene. The model takes into account human contrast sensitivity functions and generates a topographic saliency map for both images. This is accomplished using a set of linear "center-surround" operations simulating visual receptive fields as the difference between fine and coarse scales. A difference map between the saliency map of the true color image and that of the PCA image is derived and used as a mask on the true color image to select a small number of interesting locations where the PCA image has more salient features than available in the visible bands. The resulting representations preserve hue for vegetation, water, road etc., while the selected attentional locations may be analyzed by more advanced algorithms.
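The masking step described above can be sketched as follows. The box-filter center-surround below is a crude stand-in for the multi-scale pyramid and contrast-sensitivity weighting of the full model; the grayscale inputs, scales, and threshold are all illustrative assumptions.

```python
import numpy as np

def box_blur(img, k):
    """Simple box filter (a stand-in for one scale of a pyramid)."""
    pad = np.pad(img, k, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dr in range(-k, k + 1):
        for dc in range(-k, k + 1):
            out += pad[k + dr:k + dr + img.shape[0], k + dc:k + dc + img.shape[1]]
    return out / (2 * k + 1) ** 2

def saliency(img, fine=1, coarse=4):
    """Center-surround response: difference between fine and coarse scales."""
    return np.abs(box_blur(img, fine) - box_blur(img, coarse))

def attention_mask(true_color_gray, pca_gray, thresh=0.05):
    """Locations where the PCA image is more salient than the true-color image."""
    diff = saliency(pca_gray) - saliency(true_color_gray)
    return diff > thresh

# Toy scene: the PCA band reveals a small 'target' invisible in the true-color image
rng = np.random.default_rng(2)
true_color = np.full((32, 32), 0.5) + 0.01 * rng.standard_normal((32, 32))
pca = true_color.copy()
pca[14:18, 14:18] += 0.5          # feature only separable in the PCA projection
mask = attention_mask(true_color, pca)
print(mask.sum(), mask[15, 15])
```

The mask picks out exactly the regions where PCA adds salient structure beyond the visible bands, which are then overlaid on the true-color rendering.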

  2. Attention Priority Map of Face Images in Human Early Visual Cortex.

    PubMed

    Mo, Ce; He, Dongjun; Fang, Fang

    2018-01-03

    Attention priority maps are topographic representations that are used for attention selection and guidance of task-related behavior during visual processing. Previous studies have identified attention priority maps of simple artificial stimuli in multiple cortical and subcortical areas, but investigating neural correlates of priority maps of natural stimuli is complicated by the complexity of their spatial structure and the difficulty of behaviorally characterizing their priority map. To overcome these challenges, we reconstructed the topographic representations of upright/inverted face images from fMRI BOLD signals in human early visual areas primary visual cortex (V1) and the extrastriate cortex (V2 and V3) based on a voxelwise population receptive field model. We characterized the priority map behaviorally as the first saccadic eye movement pattern when subjects performed a face-matching task relative to the condition in which subjects performed a phase-scrambled face-matching task. We found that the differential first saccadic eye movement pattern between upright/inverted and scrambled faces could be predicted from the reconstructed topographic representations in V1-V3 in humans of either sex. The coupling between the reconstructed representation and the eye movement pattern increased from V1 to V2/3 for the upright faces, whereas no such effect was found for the inverted faces. Moreover, face inversion modulated the coupling in V2/3, but not in V1. Our findings provide new evidence for priority maps of natural stimuli in early visual areas and extend traditional attention priority map theories by revealing another critical factor that affects priority maps in extrastriate cortex in addition to physical salience and task goal relevance: image configuration. 
SIGNIFICANCE STATEMENT Prominent theories of attention posit that attention sampling of visual information is mediated by a series of interacting topographic representations of visual space known as attention priority maps. Until now, neural evidence of attention priority maps has been limited to studies involving simple artificial stimuli and much remains unknown about the neural correlates of priority maps of natural stimuli. Here, we show that attention priority maps of face stimuli can be found in primary visual cortex (V1) and the extrastriate cortex (V2 and V3). Moreover, representations in extrastriate visual areas are strongly modulated by image configuration. These findings significantly extend our understanding of attention priority maps by showing that they are modulated not only by physical salience and task-goal relevance, but also by the configuration of stimulus images.
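The voxelwise population receptive field (pRF) model underlying the reconstruction can be reduced to a standard 2-D Gaussian formulation: each voxel's predicted response to a stimulus frame is the overlap between the binary stimulus aperture and a Gaussian receptive field. The bar stimulus, grid, and parameters below are illustrative, and the sketch omits the hemodynamic convolution used with real BOLD data.

```python
import numpy as np

def prf_response(stim_frames, x0, y0, sigma, grid):
    """Gaussian pRF model: each frame's predicted response is the overlap
    between the stimulus aperture and a 2-D Gaussian at (x0, y0)."""
    xs, ys = grid
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return np.array([(frame * g).sum() for frame in stim_frames])

# Toy bar sweeping across the visual field, pRF centred left of fixation
coords = np.linspace(-10, 10, 41)
xs, ys = np.meshgrid(coords, coords)
frames = []
for x_bar in np.linspace(-9, 9, 10):
    frames.append((np.abs(xs - x_bar) < 1.0).astype(float))   # vertical bar
resp = prf_response(frames, x0=-5.0, y0=0.0, sigma=2.0, grid=(xs, ys))
print(resp.argmax())   # frame index where the bar covers the pRF
```

Inverting this forward model over many voxels (projecting each voxel's activity back into visual space through its fitted Gaussian) is what yields the topographic representations compared with the saccade patterns.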

  3. Context and competition in the capture of visual attention.

    PubMed

    Hickey, Clayton; Theeuwes, Jan

    2011-10-01

    Competition-based models of visual attention propose that perceptual ambiguity is resolved through inhibition, which is stronger when objects share a greater number of neural receptive fields (RFs). According to this theory, the misallocation of attention to a salient distractor (that is, the capture of attention) can be indexed in RF-scaled interference costs. We used this pattern to investigate distractor-related costs in visual search across several manipulations of temporal context. Distractor costs are generally larger under circumstances in which the distractor can be defined by features that have recently characterised the target, suggesting that capture occurs in these trials. However, our results show that search for a target in the presence of a salient distractor also produces RF-scaled costs when the features defining the target and distractor do not vary from trial to trial. Contextual differences in distractor costs appear to reflect something other than capture, perhaps a qualitative difference in the type of attentional mechanism deployed to the distractor.

  4. Deep Visual Attention Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixations in free-viewing scenes with an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have brought substantial improvements to human attention prediction, CNN-based attention models can still be improved by efficiently leveraging multi-scale features. Our visual attention network is designed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency responses. The model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. The final saliency prediction is achieved via the cooperation of these global and local predictions. The model is trained in a deep supervision manner, where supervision is fed directly into multiple levels of the network, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly reduces the redundancy of earlier approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
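The fusion step of a skip-layer architecture can be illustrated without the CNN itself: predictions from layers with different resolutions are upsampled to a common size and combined. Nearest-neighbour upsampling and plain averaging below are stand-ins for the learned fusion; the maps are synthetic.

```python
import numpy as np

def upsample(pred, size):
    """Nearest-neighbour upsampling of a coarse prediction to the output size."""
    reps = size // pred.shape[0]
    return np.kron(pred, np.ones((reps, reps)))

def fuse(predictions, size=16):
    """Fuse multi-level saliency predictions (deep/coarse + shallow/fine)
    into a single map, then normalize to [0, 1]."""
    fused = sum(upsample(p, size) for p in predictions) / len(predictions)
    lo, hi = fused.min(), fused.max()
    return (fused - lo) / (hi - lo + 1e-8)

# Coarse (global) prediction highlights the image centre; the fine (local)
# prediction adds detail -- stand-ins for deep and shallow layer outputs.
coarse = np.zeros((4, 4)); coarse[1:3, 1:3] = 1.0
fine = np.zeros((16, 16)); fine[6:10, 6:10] = 1.0
sal = fuse([coarse, fine])
print(sal.shape, sal[8, 8])
```

Deep supervision means a fixation-map loss is attached to each level's prediction as well as to the fused output, so every branch is trained to be a valid saliency predictor on its own.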

  5. An Extended Normalization Model of Attention Accounts for Feature-Based Attentional Enhancement of Both Response and Coherence Gain

    PubMed Central

    Krishna, B. Suresh; Treue, Stefan

    2016-01-01

    Paying attention to a sensory feature improves its perception and impairs that of others. Recent work has shown that a Normalization Model of Attention (NMoA) can account for a wide range of physiological findings and the influence of different attentional manipulations on visual performance. A key prediction of the NMoA is that attention to a visual feature like an orientation or a motion direction will increase the response of neurons preferring the attended feature (response gain) rather than increase the sensory input strength of the attended stimulus (input gain). This effect of feature-based attention on neuronal responses should translate to similar patterns of improvement in behavioral performance, with psychometric functions showing response gain rather than input gain when attention is directed to the task-relevant feature. In contrast, we report here that when human subjects are cued to attend to one of two motion directions in a transparent motion display, attentional effects manifest as a combination of input and response gain. Further, the impact on input gain is greater when attention is directed towards a narrow range of motion directions than when it is directed towards a broad range. These results are captured by an extended NMoA, which either includes a stimulus-independent attentional contribution to normalization or utilizes direction-tuned normalization. The proposed extensions are consistent with the feature-similarity gain model of attention and the attentional modulation in extrastriate area MT, where neuronal responses are enhanced and suppressed by attention to preferred and non-preferred motion directions respectively. PMID:27977679
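The response-gain versus input-gain distinction can be made concrete with a Naka-Rushton contrast-response function, a common choice in normalization models. The parameters and the simple multiplicative formulation below are illustrative assumptions, not the full NMoA: response gain scales the output of the function, while input gain scales the effective stimulus strength before it.

```python
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
    """Contrast-response function commonly used in normalization models."""
    return r_max * c ** n / (c ** n + c50 ** n)

def attended_response(c, mode, gain=2.0):
    """Two ways attention can act in a normalization framework:
    response gain scales the output; input gain scales the effective contrast."""
    if mode == "response":
        return gain * naka_rushton(c)
    if mode == "input":
        return naka_rushton(gain * c)
    raise ValueError(mode)

c = np.array([0.05, 0.2, 0.8])
print("baseline :", naka_rushton(c).round(3))
print("response :", attended_response(c, "response").round(3))
print("input    :", attended_response(c, "input").round(3))
```

The signatures differ most at high contrast: response gain keeps scaling the saturated response, whereas input gain saturates toward the unattended asymptote, which is why psychometric functions can dissociate the two, and why a mixture of both (as the extended NMoA produces) is identifiable in the data.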

  6. Top-down influences on visual attention during listening are modulated by observer sex.

    PubMed

    Shen, John; Itti, Laurent

    2012-07-15

    In conversation, women have a small advantage in decoding non-verbal communication compared to men. In light of these findings, we sought to determine whether sex differences also existed in visual attention during a related listening task, and if so, if the differences existed among attention to high-level aspects of the scene or to conspicuous visual features. Using eye-tracking and computational techniques, we present direct evidence that men and women orient attention differently during conversational listening. We tracked the eyes of 15 men and 19 women who watched and listened to 84 clips featuring 12 different speakers in various outdoor settings. At the fixation following each saccadic eye movement, we analyzed the type of object that was fixated. Men gazed more often at the mouth and women at the eyes of the speaker. Women more often exhibited "distracted" saccades directed away from the speaker and towards a background scene element. Examining the multi-scale center-surround variation in low-level visual features (static: color, intensity, orientation, and dynamic: motion energy), we found that men consistently selected regions which expressed more variation in dynamic features, which can be attributed to a male preference for motion and a female preference for areas that may contain nonverbal information about the speaker. In sum, significant differences were observed, which we speculate arise from different integration strategies of visual cues in selecting the final target of attention. Our findings have implications for studies of sex in nonverbal communication, as well as for more predictive models of visual attention.

  7. Do Multielement Visual Tracking and Visual Search Draw Continuously on the Same Visual Attention Resources?

    ERIC Educational Resources Information Center

    Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.

    2005-01-01

    Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…

  8. Auditory and Visual Attention Performance in Children With ADHD: The Attentional Deficiency of ADHD Is Modality Specific.

    PubMed

    Lin, Hung-Yu; Hsieh, Hsieh-Chun; Lee, Posen; Hong, Fu-Yuan; Chang, Wen-Dien; Liu, Kuo-Cheng

    2017-08-01

    This study explored auditory and visual attention in children with ADHD. In a randomized, two-period crossover design, 50 children with ADHD and 50 age- and sex-matched typically developing peers were assessed with the Test of Variables of Attention (TOVA). The deficiency of visual attention was more serious than that of auditory attention in children with ADHD. In the auditory modality, the deficit of attentional inconsistency alone was sufficient to explain most cases of ADHD; in the visual modality, however, most of the children with ADHD suffered from deficits of sustained attention, response inhibition, and attentional inconsistency. Our results also showed that the deficit of attentional inconsistency is the most important indicator for diagnosing and intervening in ADHD when both auditory and visual modalities are considered. The findings provide strong evidence that the deficits of auditory attention differ from those of visual attention in children with ADHD.

  9. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-08-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases.
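The "simple intersection" combination of the three property maps can be sketched as below. Reading the intersection as an elementwise product of normalized maps is an assumption (an elementwise minimum would be another reading), and the maps themselves are synthetic stand-ins for the SVR- and statistics-derived properties.

```python
import numpy as np

def normalize(m):
    """Rescale a property map to [0, 1]."""
    return (m - m.min()) / (m.max() - m.min() + 1e-8)

def combine(feature_prior, position_prior, feature_dist):
    """Intersection of the three property maps, modelled here as an
    elementwise product of the normalized maps."""
    prod = normalize(feature_prior) * normalize(position_prior) * normalize(feature_dist)
    return normalize(prod)

# Synthetic stand-ins: a random feature prior, a centre-biased position
# prior, and a random feature-distribution map
rng = np.random.default_rng(3)
fp = rng.random((8, 8))
pp = np.zeros((8, 8)); pp[2:6, 2:6] = 1.0   # centre bias
fd = rng.random((8, 8))
sal = combine(fp, pp, fd)
print(sal.shape, sal[0, 0])
```

The product form makes the combination conjunctive: a location scores highly only when all three properties agree, which is the intuition behind calling it an intersection.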

  10. A foreground object features-based stereoscopic image visual comfort assessment model

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.

    2014-11-01

    Since stereoscopic images provide observers with both a realistic and a potentially uncomfortable viewing experience, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws most of the attention when humans observe stereoscopic images, this paper proposes a new foreground-object-based visual comfort assessment (VCA) metric. First, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the one having the largest average disparity. Second, three visual features, namely the average disparity, average width, and spatial complexity of the foreground object, are computed from the perspective of visual attention. However, an object's width and complexity do not influence the perception of visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, we divide the images into four categories on the basis of disparity and width, and apply four different models to predict visual comfort more precisely. Experimental results show that the proposed VCA metric outperforms other existing metrics and can achieve high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
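The feature-extraction stage can be sketched as follows, given a disparity map and a precomputed segmentation. The width measure (fraction of image columns covered) and the complexity proxy (disparity variance within the object) are my assumptions for illustration; the paper's exact definitions may differ.

```python
import numpy as np

def foreground_by_disparity(disparity, labels):
    """Pick the segment with the largest mean disparity as the foreground."""
    ids = np.unique(labels)
    means = {i: disparity[labels == i].mean() for i in ids}
    return max(means, key=means.get)

def foreground_features(disparity, labels):
    """The three visual-comfort features: mean disparity, mean width,
    and a simple spatial-complexity proxy (disparity variance)."""
    fg = foreground_by_disparity(disparity, labels)
    mask = labels == fg
    mean_disp = disparity[mask].mean()
    width = mask.any(axis=0).sum() / mask.shape[1]   # fraction of columns covered
    complexity = disparity[mask].var()
    return mean_disp, width, complexity

# Toy disparity map: background (label 0) far, object (label 1) near
disp = np.full((10, 10), 2.0)
labels = np.zeros((10, 10), dtype=int)
disp[3:7, 4:8] = 8.0
labels[3:7, 4:8] = 1
print(foreground_features(disp, labels))
```

The resulting (disparity, width) pair then selects one of the four category-specific comfort models described in the abstract.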

  11. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    PubMed

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another.
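The SSEP measure itself is simple: because the irrelevant checkerboard flickers at a fixed "tag" frequency, its neural response appears as a spectral peak at that frequency, and its amplitude indexes how strongly the ignored stimulus is processed. A sketch with a simulated signal (the frequencies, amplitudes, and noise level are illustrative, not the study's recordings):

```python
import numpy as np

def ssep_amplitude(signal, fs, f_tag):
    """Amplitude of the steady-state response at the stimulus 'tag'
    frequency, read off from the Fourier amplitude spectrum."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / n
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - f_tag))]

# Simulate 10 s of 'EEG': a 12 Hz flicker response buried in noise,
# attenuated under high visual load as in the perceptual-load account
fs, f_tag = 250, 12.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)
noise = rng.standard_normal(t.size)
low_load  = 2.0 * np.sin(2 * np.pi * f_tag * t) + noise
high_load = 0.5 * np.sin(2 * np.pi * f_tag * t) + noise
print(ssep_amplitude(low_load, fs, f_tag), ssep_amplitude(high_load, fs, f_tag))
```

The study's crossover result corresponds to this amplitude falling with visual load but rising with auditory load.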

  12. All Set! Evidence of Simultaneous Attentional Control Settings for Multiple Target Colors

    ERIC Educational Resources Information Center

    Irons, Jessica L.; Folk, Charles L.; Remington, Roger W.

    2012-01-01

    Although models of visual search have often assumed that attention can only be set for a single feature or property at a time, recent studies have suggested that it may be possible to maintain more than one attentional control setting. The aim of the present study was to investigate whether spatial attention could be guided by multiple attentional…

  13. Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach

    PubMed Central

    Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik

    2015-01-01

    Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10–150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digits (distractors). Graphemes were colored either congruently or incongruently with respect to the synesthetes’ reported grapheme-color associations. A mathematical model, based on Bundesen’s (1990) Theory of Visual Attention (TVA), was fitted to each observer’s data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group’s model parameters were not significantly affected by congruence. The increase in processing speed when synesthetes process congruent letters suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that the enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes’ expertise regarding their specific grapheme-color associations. PMID:26252019

  14. Structural Variability within Frontoparietal Networks and Individual Differences in Attentional Functions: An Approach Using the Theory of Visual Attention.

    PubMed

    Chechlacz, Magdalena; Gillebert, Celine R; Vangkilde, Signe A; Petersen, Anders; Humphreys, Glyn W

    2015-07-29

    Visuospatial attention allows us to select and act upon a subset of behaviorally relevant visual stimuli while ignoring distraction. Bundesen's theory of visual attention (TVA) (Bundesen, 1990) offers a quantitative analysis of the different facets of attention within a unitary model and provides a powerful analytic framework for understanding individual differences in attentional functions. Visuospatial attention is contingent upon large networks, distributed across both hemispheres, consisting of several cortical areas interconnected by long-association frontoparietal pathways, including three branches of the superior longitudinal fasciculus (SLF I-III) and the inferior fronto-occipital fasciculus (IFOF). Here we examine whether structural variability within human frontoparietal networks mediates differences in attention abilities as assessed by the TVA. Structural measures were based on spherical deconvolution and tractography-derived indices of tract volume and hindrance-modulated orientational anisotropy (HMOA). Individual differences in visual short-term memory (VSTM) were linked to variability in the microstructure (HMOA) of SLF II, SLF III, and IFOF within the right hemisphere. Moreover, VSTM and speed of information processing were linked to hemispheric lateralization within the IFOF. Differences in spatial bias were mediated by both variability in microstructure and volume of the right SLF II. Our data indicate that the microstructural and macrostructural organization of white matter pathways differentially contributes to both the anatomical lateralization of frontoparietal attentional networks and to individual differences in attentional functions. We conclude that individual differences in VSTM capacity, processing speed, and spatial bias, as assessed by TVA, link to variability in structural organization within frontoparietal pathways.

  15. The Effects of Context and Attention on Spiking Activity in Human Early Visual Cortex.

    PubMed

    Self, Matthew W; Peters, Judith C; Possel, Jessy K; Reithler, Joel; Goebel, Rainer; Ris, Peterjan; Jeurissen, Danique; Reddy, Leila; Claus, Steven; Baayen, Johannes C; Roelfsema, Pieter R

    2016-03-01

    Here we report the first quantitative analysis of spiking activity in human early visual cortex. We recorded multi-unit activity from two electrodes in area V2/V3 of a human patient implanted with depth electrodes as part of her treatment for epilepsy. We observed well-localized multi-unit receptive fields with tunings for contrast, orientation, spatial frequency, and size, similar to those reported in the macaque. We also observed pronounced gamma oscillations in the local-field potential that could be used to estimate the underlying spiking response properties. Spiking responses were modulated by visual context and attention. We observed orientation-tuned surround suppression: responses were suppressed by image regions with a uniform orientation and enhanced by orientation contrast. Additionally, responses were enhanced on regions that perceptually segregated from the background, indicating that neurons in the human visual cortex are sensitive to figure-ground structure. Spiking responses were also modulated by object-based attention. When the patient mentally traced a curve through the neurons' receptive fields, the accompanying shift of attention enhanced neuronal activity. These results demonstrate that the tuning properties of cells in the human early visual cortex are similar to those in the macaque and that responses can be modulated by both contextual factors and behavioral relevance. Our results, therefore, imply that the macaque visual system is an excellent model for the human visual cortex.

  17. Does attention speed up processing? Decreases and increases of processing rates in visual prior entry.

    PubMed

    Tünnermann, Jan; Petersen, Anders; Scharlau, Ingrid

    2015-03-02

    Selective visual attention improves performance in many tasks. Among other effects, it leads to "prior entry": earlier perception of an attended stimulus compared to an unattended one. Whether this phenomenon rests purely on an increase in the processing rate of the attended stimulus, or whether a decrease in the processing rate of the unattended stimulus also contributes, has so far remained unanswered. Here we describe a novel approach to this question based on Bundesen's Theory of Visual Attention, which we use to overcome the limitations of earlier prior-entry assessments with temporal order judgments (TOJs), which only allow relative statements about the processing speed of attended and unattended stimuli. Prevalent models of prior entry in TOJs either indirectly predict a pure acceleration or cannot model the difference between acceleration and deceleration. In a paradigm that combines a letter-identification task with TOJs, we show that acceleration of the attended stimulus and deceleration of the unattended stimulus indeed conjointly cause prior entry. © 2015 ARVO.
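
    The rate-based account can be illustrated with a toy TVA-style race simulation (illustrative rates, not the paper's fitted parameters): each stimulus's encoding time is exponentially distributed with its processing rate, so speeding the attended stimulus and slowing the unattended one both shift the attended stimulus earlier in perceived order.

```python
import random

def simulate_prior_entry(v_attended, v_unattended, n_trials=20000, seed=1):
    """Toy TVA-style race: each stimulus's encoding time is exponential
    with its processing rate; 'prior entry' is the mean encoding-time
    advantage of the attended stimulus (illustrative parameters only)."""
    rng = random.Random(seed)
    advantage = 0.0
    for _ in range(n_trials):
        t_att = rng.expovariate(v_attended)    # encoding time, attended
        t_un = rng.expovariate(v_unattended)   # encoding time, unattended
        advantage += t_un - t_att
    return advantage / n_trials

baseline = simulate_prior_entry(30.0, 30.0)     # equal rates: no prior entry
accel_only = simulate_prior_entry(40.0, 30.0)   # attended accelerated
accel_decel = simulate_prior_entry(40.0, 20.0)  # plus unattended decelerated
```

    Analytically the advantage is 1/v_unattended - 1/v_attended, so deceleration of the unattended stimulus adds to the effect of pure acceleration, which is the distinction the study tests.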

  18. An integrative, experience-based theory of attentional control.

    PubMed

    Wilder, Matthew H; Mozer, Michael C; Wickens, Christopher D

    2011-02-09

    Although diverse, theories of visual attention generally share the notion that attention is controlled by some combination of three distinct strategies: (1) exogenous cuing from locally contrasting primitive visual features, such as abrupt onsets or color singletons (e.g., L. Itti, C. Koch, & E. Niebur, 1998), (2) endogenous gain modulation of exogenous activations, used to guide attention to task-relevant features (e.g., V. Navalpakkam & L. Itti, 2007; J. Wolfe, 1994, 2007), and (3) endogenous prediction of likely locations of interest, based on task and scene gist (e.g., A. Torralba, A. Oliva, M. Castelhano, & J. Henderson, 2006). However, little work has been done to synthesize these disparate theories. In this work, we propose a unifying conceptualization in which attention is controlled along two dimensions: the degree of task focus and the contextual scale of operation. Previously proposed strategies, and their combinations, can be viewed as instances of this one mechanism. Thus, this theory serves not as a replacement for existing models but as a means of bringing them into a coherent framework. We present an implementation of this theory and demonstrate its applicability to a wide range of attentional phenomena. The model accounts for key results in visual search with synthetic images and makes reasonable predictions for human eye movements in search tasks involving real-world images. In addition, the theory offers an unusual perspective on attention that places a fundamental emphasis on the role of experience and task-related knowledge.

  19. Combined contributions of feedforward and feedback inputs to bottom-up attention

    PubMed Central

    Khorsand, Peyman; Moore, Tirin; Soltani, Alireza

    2015-01-01

    To deal with the large amount of information carried by the visual inputs entering the brain at any given moment, the brain swiftly uses those same inputs to enhance processing in one part of the visual field at the expense of others. These processes, collectively called bottom-up attentional selection, are assumed to rely solely on feedforward processing of the external inputs, as the nomenclature implies. Nevertheless, evidence from recent experimental and modeling studies points to a role of feedback in bottom-up attention. Here, we review behavioral and neural evidence that feedback inputs are important for the formation of signals that could guide attentional selection based on exogenous inputs. Moreover, we review results from a modeling study elucidating the mechanisms underlying the emergence of these signals in successive layers of neural populations and showing how they depend on feedback from higher visual areas. We use these results to interpret and discuss more recent findings that can further unravel the feedforward and feedback neural mechanisms underlying bottom-up attention. We argue that while it is descriptively useful to separate the feedforward and feedback processes underlying bottom-up attention, these processes cannot be mechanistically separated into two successive stages, as they occur at almost the same time and affect neural activity within the same brain areas through similar neural mechanisms. Therefore, understanding the interaction and integration of feedforward and feedback inputs is crucial for a better understanding of bottom-up attention. PMID:25784883

  20. When size matters: attention affects performance by contrast or response gain.

    PubMed

    Herrmann, Katrin; Montaser-Kouhsari, Leila; Carrasco, Marisa; Heeger, David J

    2010-12-01

    Covert attention, the selective processing of visual information in the absence of eye movements, improves behavioral performance. We found that attention, both exogenous (involuntary) and endogenous (voluntary), can affect performance by contrast or response gain changes, depending on the stimulus size and the relative size of the attention field. These two variables were manipulated in a cueing task while stimulus contrast was varied. We observed a change in behavioral performance consonant with a change in contrast gain for small stimuli paired with spatial uncertainty and a change in response gain for large stimuli presented at one location (no uncertainty) and surrounded by irrelevant flanking distracters. A complementary neuroimaging experiment revealed that observers' attention fields were wider with than without spatial uncertainty. Our results support important predictions of the normalization model of attention and reconcile previous, seemingly contradictory findings on the effects of visual attention.
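
    The two regimes can be caricatured with a one-dimensional Naka-Rushton contrast response function (toy parameters, not the normalization model's full spatial machinery): contrast gain acts like a scaling of the effective input contrast, shifting the curve leftward, whereas response gain scales the output, raising the asymptote.

```python
def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
    """Contrast response function: R(c) = r_max * c^n / (c^n + c50^n)."""
    return r_max * c**n / (c**n + c50**n)

def attended_response(c, gain=2.0, mode="contrast"):
    """Caricature of the two attentional regimes:
    - 'contrast' gain (attention field large relative to the stimulus):
      attention effectively scales the input contrast, shifting c50;
    - 'response' gain (attention field small relative to the stimulus):
      attention scales the output, raising the asymptote."""
    if mode == "contrast":
        return naka_rushton(c * gain)
    return gain * naka_rushton(c)

contrasts = [0.05, 0.2, 1.0]
neutral = [naka_rushton(c) for c in contrasts]
contrast_gain = [attended_response(c, mode="contrast") for c in contrasts]
response_gain = [attended_response(c, mode="response") for c in contrasts]
```

    Under response gain every point is multiplied by the same factor, while under contrast gain low contrasts benefit most and the asymptote barely moves, which is the behavioral signature the cueing experiments distinguish.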

  1. Assessing public concern for landscape quality: a potential model to identify visual thresholds

    Treesearch

    Arthur W. Magill

    1990-01-01

    Considerable public criticism and sometimes legal obstructions have been directed toward landscape management in relation to the extraction of natural resources. Many managers do not understand public concerns for visually attractive resources. Managers need to know when landscape alterations, like clearcuts, attract public attention and become visually objectionable....

  2. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    PubMed Central

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
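
    The individual-differences logic can be sketched with a toy simulation (made-up numbers, not the study's data, and not an actual LMM fit): subject-level random intercepts (mean RT) and random slopes (spatial cue-validity effect) are drawn with a positive correlation, and that correlation survives in the noisy per-subject estimates, mirroring the finding that slow subjects show larger spatial effects.

```python
import random

def simulate(n_subj=200, n_trials=100, rho=0.8, seed=7):
    """Toy correlated random effects: each subject has a mean RT
    (random intercept) and a spatial effect (random slope) with
    latent correlation rho; observed values carry trial noise."""
    rng = random.Random(seed)
    intercepts, slopes = [], []
    for _ in range(n_subj):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        intercept = 500 + 50 * z1                                 # mean RT (ms)
        slope = 30 + 15 * (rho * z1 + (1 - rho**2) ** 0.5 * z2)   # spatial effect
        intercepts.append(intercept + rng.gauss(0, 40 / n_trials**0.5))
        slopes.append(slope + rng.gauss(0, 80 / n_trials**0.5))
    return intercepts, slopes

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

rts, effects = simulate()
r = corr(rts, effects)
```

    An LMM estimates this variance/covariance structure directly from trial-level data rather than from per-subject summaries; the sketch only shows why the correlation is recoverable at all.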

  3. The role of attention in figure-ground segregation in areas V1 and V4 of the visual cortex.

    PubMed

    Poort, Jasper; Raudies, Florian; Wannig, Aurel; Lamme, Victor A F; Neumann, Heiko; Roelfsema, Pieter R

    2012-07-12

    Our visual system segments images into objects and background. Figure-ground segregation relies on the detection of feature discontinuities that signal boundaries between the figures and the background and on a complementary region-filling process that groups together image regions with similar features. The neuronal mechanisms for these processes are not well understood and it is unknown how they depend on visual attention. We measured neuronal activity in V1 and V4 in a task where monkeys either made an eye movement to texture-defined figures or ignored them. V1 activity predicted the timing and the direction of the saccade if the figures were task relevant. We found that boundary detection is an early process that depends little on attention, whereas region filling occurs later and is facilitated by visual attention, which acts in an object-based manner. Our findings are explained by a model with local, bottom-up computations for boundary detection and feedback processing for region filling. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Serial grouping of 2D-image regions with object-based attention in humans

    PubMed Central

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-01-01

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188

  5. Project DyAdd: Visual Attention in Adult Dyslexia and ADHD

    ERIC Educational Resources Information Center

    Laasonen, Marja; Salomaa, Jonna; Cousineau, Denis; Leppamaki, Sami; Tani, Pekka; Hokkanen, Laura; Dye, Matthew

    2012-01-01

    In this study of the project DyAdd, three aspects of visual attention were investigated in adults (18-55 years) with dyslexia (n = 35) or attention deficit/hyperactivity disorder (ADHD, n = 22), and in healthy controls (n = 35). Temporal characteristics of visual attention were assessed with Attentional Blink (AB), capacity of visual attention…

  6. The role of visual attention in predicting driving impairment in older adults.

    PubMed

    Hoffman, Lesa; McDowd, Joan M; Atchley, Paul; Dubinsky, Richard

    2005-12-01

    This study evaluated the role of visual attention (as measured by the DriverScan change detection task and the Useful Field of View Test [UFOV]) in the prediction of driving impairment in 155 adults between the ages of 63 and 87. In contrast to previous research, participants were not oversampled for visual impairment or history of automobile accidents. Although a history of automobile accidents within the past 3 years could not be predicted by any variable, driving performance in a low-fidelity simulator was significantly predicted by performance on the change detection task and by the divided and selective attention subtests of the UFOV in structural equation models. The sensitivity and specificity of each measure in identifying at-risk drivers were also evaluated with receiver operating characteristic curves.
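
    The ROC evaluation can be sketched in a few lines (the scores and labels below are hypothetical, not the study's data): sweep a decision threshold over the screening scores, record sensitivity against 1 - specificity, and summarize with the area under the curve.

```python
def roc_points(scores, labels):
    """(1 - specificity, sensitivity) pairs as the decision threshold
    sweeps across the observed scores; labels: 1 = at-risk, 0 = not."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# Hypothetical screening scores: higher = worse predicted driving
scores = [0.9, 0.8, 0.7, 0.55, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,    0,   0,   1,   0]
area = auc(roc_points(scores, labels))  # 0.75 for this toy data
```

    An AUC of 0.5 means the measure is no better than chance at identifying at-risk drivers; 1.0 means perfect separation.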

  7. Predictors of reading fluency in Italian orthography: evidence from a cross-sectional study of primary school students.

    PubMed

    Tobia, Valentina; Marzocchi, Gian Marco

    2014-01-01

    This study investigates the role of linguistic and visuospatial attentional processes in predicting reading fluency in typical Italian readers attending primary school. Tasks were administered to 651 children with reading fluency z scores > -1.5 standard deviations to evaluate their phonological awareness, rapid automatized naming (RAN), verbal short-term memory, vocabulary, visual search skills, verbal-visual recall, and visual-spatial attention. Hybrid models combining confirmatory factor analysis and path analysis were used to evaluate the data obtained from younger (first and second grade) and older (third to fifth grade) children. The results showed that phonological awareness and RAN played a significant role among younger children, while vocabulary, verbal short-term memory, and visuospatial attention were also significant factors among older children.

  8. Interactions between Visual Attention and Episodic Retrieval: Dissociable Contributions of Parietal Regions during Gist-Based False Recognition

    PubMed Central

    Guerin, Scott A.; Robbins, Clifford A.; Gilmore, Adrian W.; Schacter, Daniel L.

    2012-01-01

    The interaction between episodic retrieval and visual attention is relatively unexplored. Given that systems mediating attention and episodic memory appear to be segregated, and perhaps even in competition, it is unclear how visual attention is recruited during episodic retrieval. We investigated the recruitment of visual attention during the suppression of gist-based false recognition, the tendency to falsely recognize items that are similar to previously encountered items. Recruitment of visual attention was associated with activity in the dorsal attention network. The inferior parietal lobule, often implicated in episodic retrieval, tracked veridical retrieval of perceptual detail and showed reduced activity during the engagement of visual attention, consistent with a competitive relationship with the dorsal attention network. These findings suggest that the contribution of the parietal cortex to interactions between visual attention and episodic retrieval entails distinct systems that contribute to different components of the task while also suppressing each other. PMID:22998879

  9. Gender-Specificity of Initial and Controlled Visual Attention to Sexual Stimuli in Androphilic Women and Gynephilic Men

    PubMed Central

    Dawson, Samantha J.; Chivers, Meredith L.

    2016-01-01

    Research across groups and methods consistently finds a gender difference in patterns of specificity of genital response; however, empirically supported mechanisms to explain this difference are lacking. The information-processing model of sexual arousal posits that automatic and controlled cognitive processes are requisite for the generation of sexual responses. Androphilic women’s gender-nonspecific response patterns may be the result of sexually-relevant cues that are common to both preferred and nonpreferred genders capturing attention and initiating an automatic sexual response, whereas men’s attentional system may be biased towards the detection and response to sexually-preferred cues only. In the present study, we used eye tracking to assess visual attention to sexually-preferred and nonpreferred cues in a sample of androphilic women and gynephilic men. Results support predictions from the information-processing model regarding gendered processing of sexual stimuli in men and women. Men’s initial attention patterns were gender-specific, whereas women’s were nonspecific. In contrast, both men and women exhibited gender-specific patterns of controlled attention, although this effect was stronger among men. Finally, measures of attention and self-reported attraction were positively related in both men and women. These findings are discussed in the context of the information-processing model and evolutionary mechanisms that may have evolved to promote gendered attentional systems. PMID:27088358

  10. Visual attention: low-level and high-level viewpoints

    NASA Astrophysics Data System (ADS)

    Stentiford, Fred W. M.

    2012-06-01

    This paper provides a brief outline of approaches to modeling human visual attention. Bottom-up and top-down mechanisms are described, together with some of the problems that they face. It has been suggested in brain science that memory functions by trading measurement precision for associative power; sensory inputs from the environment are never identical on separate occasions, but the associations with memory compensate for the differences. A graphical representation of image similarity is described that relies on the size of maximally associative structures (cliques) found between pairs of images. This is applied to the recognition of movie posters, the location and recognition of characters, and the recognition of faces. The similarity mechanism is shown to model popout effects when constraints are placed on the physical separation of pixels that correspond to nodes in the maximal cliques. The effect extends to modeling human visual behaviour on the Poggendorff illusion.
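
    The clique-based similarity idea can be sketched as follows (a toy reconstruction for illustration, not Stentiford's actual algorithm): features from the two images are paired when they match, pairs whose spatial offsets agree are linked, and similarity is scored by the largest clique of mutually compatible pairs.

```python
from itertools import combinations

def association_graph(feats_a, feats_b, tol=0):
    """Nodes pair a feature of image A with a same-valued feature of B;
    an edge joins two pairs whose spatial offsets agree (a crude stand-in
    for 'mutually associative' structure). Features are (position, value)."""
    nodes = [(i, j) for i, (pa, va) in enumerate(feats_a)
                    for j, (pb, vb) in enumerate(feats_b) if va == vb]
    def compatible(n, m):
        (i, j), (k, l) = n, m
        if i == k or j == l:
            return False
        da = feats_a[i][0] - feats_a[k][0]
        db = feats_b[j][0] - feats_b[l][0]
        return abs(da - db) <= tol
    edges = {(n, m) for n, m in combinations(nodes, 2) if compatible(n, m)}
    return nodes, edges

def max_clique_size(nodes, edges):
    """Brute-force search (fine at toy sizes): largest pairwise-linked set."""
    def linked(n, m):
        return (n, m) in edges or (m, n) in edges
    for size in range(len(nodes), 0, -1):
        for cand in combinations(nodes, size):
            if all(linked(n, m) for n, m in combinations(cand, 2)):
                return size
    return 0

# Two 1-D "images": feature positions and values; B is A shifted by 2
img_a = [(0, "edge"), (1, "blob"), (3, "edge"), (4, "corner")]
img_b = [(2, "edge"), (3, "blob"), (5, "edge"), (6, "corner")]
nodes, edges = association_graph(img_a, img_b)
```

    Here the four consistently shifted feature pairs form a clique of size 4, while spurious matches (the two "edge" cross-pairings) stay outside it, so the clique size tracks structural similarity rather than raw feature overlap.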

  11. Large-Scale Brain Systems in ADHD: Beyond the Prefrontal-Striatal Model

    PubMed Central

    Castellanos, F. Xavier; Proal, Erika

    2012-01-01

    Attention-deficit/hyperactivity disorder (ADHD) has long been thought to reflect dysfunction of prefrontal-striatal circuitry, with involvement of other circuits largely ignored. Recent advances in systems neuroscience-based approaches to brain dysfunction enable the development of models of ADHD pathophysiology that encompass a number of different large-scale “resting state” networks. Here we review progress in delineating large-scale neural systems and illustrate their relevance to ADHD. We relate frontoparietal, dorsal attentional, motor, visual, and default networks to the ADHD functional and structural literature. Insights emerging from mapping intrinsic brain connectivity networks provide a potentially mechanistic framework for understanding aspects of ADHD, such as neuropsychological and behavioral inconsistency, and the possible role of primary visual cortex in attentional dysfunction in the disorder. PMID:22169776

  12. 3-D vision and figure-ground separation by visual cortex.

    PubMed

    Grossberg, S

    1994-01-01

    A neural network theory of three-dimensional (3-D) vision, called FACADE theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a boundary contour system (BCS) and a feature contour system (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded object parts are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, Da Vinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analyzed. The BCS and FCS subsystems model aspects of how the two parvocellular cortical processing streams that join the lateral geniculate nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-DEpth, or FACADE, within area V4. 
Area V4 is suggested to support figure-ground separation and to interact with cortical mechanisms of spatial attention, attentive object learning, and visual search. Adaptive resonance theory (ART) mechanisms model aspects of how prestriate visual cortex interacts reciprocally with a visual object recognition system in inferotemporal (IT) cortex for purposes of attentive object learning and categorization. Object attention mechanisms of the What cortical processing stream through IT cortex are distinguished from spatial attention mechanisms of the Where cortical processing stream through parietal cortex. Parvocellular BCS and FCS signals interact with the model What stream. Parvocellular FCS and magnocellular motion BCS signals interact with the model Where stream. (ABSTRACT TRUNCATED AT 400 WORDS)

  13. Infant joint attention, neural networks and social cognition.

    PubMed

    Mundy, Peter; Jarrold, William

    2010-01-01

    Neural network models of attention can provide a unifying approach to the study of human cognitive and emotional development (Posner & Rothbart, 2007). In this paper we argue that a neural network approach to the infant development of joint attention can inform our understanding of the nature of human social learning, symbolic thought process and social cognition. At its most basic, joint attention involves the capacity to coordinate one's own visual attention with that of another person. We propose that joint attention development involves increments in the capacity to engage in simultaneous or parallel processing of information about one's own attention and the attention of other people. Infant practice with joint attention is both a consequence and an organizer of the development of a distributed and integrated brain network involving frontal and parietal cortical systems. This executive distributed network first serves to regulate the capacity of infants to respond to and direct the overt behavior of other people in order to share experience with others through the social coordination of visual attention. In this paper we describe this parallel and distributed neural network model of joint attention development and discuss two hypotheses that stem from this model. One is that activation of this distributed network during coordinated attention enhances the depth of information processing and encoding beginning in the first year of life. We also propose that with development, joint attention becomes internalized as the capacity to socially coordinate mental attention to internal representations. As this occurs the executive joint attention network makes vital contributions to the development of human symbolic thinking and social cognition. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Visual Attention Patterns of Women with Androphilic and Gynephilic Sexual Attractions.

    PubMed

    Dawson, Samantha J; Fretz, Katherine M; Chivers, Meredith L

    2017-01-01

    Women who report exclusive sexual attractions to men (i.e., androphilia) exhibit gender-nonspecific patterns of sexual response: similar magnitudes of genital response to both male and female targets. Interestingly, women reporting any degree of attraction to women (i.e., gynephilia) show significantly greater sexual responses to stimuli depicting female targets than to those depicting male targets. At present, the mechanism(s) underlying these patterns are unknown. According to the information processing model (IPM), attentional processing of sexual cues initiates sexual responding; thus, attention to sexual cues may be one mechanism to explain the observed within-gender differences in specificity findings among women. The purpose of the present study was to examine patterns of initial and controlled visual attention among women with varying sexual attractions. We used eye tracking to assess visual attention to sexually preferred and nonpreferred cues in a sample of 164 women who differed in their degree of androphilia and gynephilia. We found that both exclusively and predominantly androphilic women showed gender-nonspecific patterns of initial attention. In contrast, ambiphilic (i.e., concurrent androphilia and gynephilia) and predominantly/exclusively gynephilic women oriented more quickly toward female targets. Controlled attention patterns mirrored patterns of self-reported sexual attractions for three of these four groups of women, such that gender-specific patterns of visual attention were found for androphilic and gynephilic women. Ambiphilic women looked significantly longer at female targets compared to male targets. These findings support predictions from the IPM and suggest that both initial and controlled attention to sexual cues may be mechanisms contributing to within-gender variation in sexual responding.

  15. Threat captures attention but does not affect learning of contextual regularities.

    PubMed

    Yamaguchi, Motonori; Harwood, Sarah L

    2017-04-01

    Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.

  16. Feature-Based Memory-Driven Attentional Capture: Visual Working Memory Content Affects Visual Attention

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan

    2006-01-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…

  17. A human visual model-based approach of the visual attention and performance evaluation

    NASA Astrophysics Data System (ADS)

    Le Meur, Olivier; Barba, Dominique; Le Callet, Patrick; Thoreau, Dominique

    2005-03-01

    In this paper, a coherent computational model of visual selective attention for color pictures is described and its performance is evaluated in detail. The model, based on several important behaviors of the human visual system, is composed of four parts: visibility, perception, perceptual grouping, and saliency map construction. This paper focuses mainly on performance assessment, carrying out extended subjective and objective comparisons with real fixation points captured by an eye-tracking system from observers in a task-free viewing mode. Against this ground truth, qualitative and quantitative comparisons were made in terms of the linear correlation coefficient (CC) and the Kullback-Leibler divergence (KL). On a set of 10 natural color images, the linear correlation coefficient and the Kullback-Leibler divergence are about 0.71 and 0.46, respectively. The CC and KL measures for this model improve by about 4% and 7%, respectively, on the best model proposed by L. Itti. Moreover, by comparing the ability of our model to predict the eye movements of an average observer, we conclude that the model succeeds quite well in predicting the spatial locations of the most important areas of the image content.
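
    Both evaluation metrics are easy to state concretely (the two maps below are made-up toy values, not the paper's data): CC is the Pearson correlation between the flattened model saliency map and the fixation-density map, and KL compares the two maps after normalizing each to a probability distribution.

```python
import math

def pearson_cc(p, q):
    """Linear correlation coefficient between two flattened maps."""
    n = len(p)
    mp, mq = sum(p) / n, sum(q) / n
    cov = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    sp = math.sqrt(sum((a - mp) ** 2 for a in p))
    sq = math.sqrt(sum((b - mq) ** 2 for b in q))
    return cov / (sp * sq)

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) after normalizing both maps to sum to 1."""
    zp, zq = sum(p), sum(q)
    return sum((a / zp) * math.log((a / zp + eps) / (b / zq + eps))
               for a, b in zip(p, q) if a > 0)

model_saliency = [0.1, 0.4, 0.3, 0.2, 0.8, 0.6]
fixation_density = [0.05, 0.5, 0.25, 0.2, 0.9, 0.5]
cc = pearson_cc(model_saliency, fixation_density)
kl = kl_divergence(fixation_density, model_saliency)
```

    Higher CC (toward 1) and lower KL (toward 0) both indicate a saliency map that better matches where observers actually fixated.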

  18. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.

    PubMed

    Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun

    2016-01-01

Humans can easily classify different kinds of objects, whereas this remains quite difficult for computers. As a challenging open problem, object classification has received extensive interest and has broad prospects. Inspired by neuroscience, the concept of deep learning was proposed, and the convolutional neural network (CNN), as one deep learning method, can be used to solve classification problems. But most deep learning methods, including CNN, ignore the human visual information-processing mechanism at work when a person classifies objects. Therefore, inspired by the complete process by which humans classify different kinds of objects, this paper puts forward a new classification method that combines a visual attention model and a CNN. First, the visual attention model simulates the human visual selection mechanism. Second, a CNN simulates how humans select features, extracting the local features of the selected areas. Finally, the classification method depends not only on those local features but also adds human semantic features to classify objects. This classification method has apparent biological plausibility. Experimental results demonstrated that the method significantly improves classification efficiency.
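The first stage of such a pipeline, attention-based region selection, can be illustrated with a toy sketch. A hypothetical contrast-based saliency measure stands in for the paper's learned saliency model; the patch it selects would then be cropped and passed to a CNN for feature extraction:

```python
def saliency(image):
    """Toy bottom-up saliency: absolute contrast of each pixel vs. the global mean."""
    flat = [v for row in image for v in row]
    mean = sum(flat) / len(flat)
    return [[abs(v - mean) for v in row] for row in image]

def most_salient_patch(image, size):
    """Return the top-left corner of the size x size window with highest total saliency."""
    sal = saliency(image)
    h, w = len(sal), len(sal[0])
    best, best_pos = -1.0, (0, 0)
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            score = sum(sal[i + di][j + dj] for di in range(size) for dj in range(size))
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

# A uniform background with one high-contrast block
image = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
print(most_salient_patch(image, 2))  # (1, 1): the high-contrast block
```

In the paper's method the selected region (rather than the whole image) becomes the CNN input, mimicking how humans attend to an object before identifying it.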

  19. The Influence of Selective and Divided Attention on Audiovisual Integration in Children.

    PubMed

    Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong

    2016-01-24

This article aims to investigate whether audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) differs between selective attention and divided attention conditions. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that responses to bimodal audiovisual stimuli were faster than to unimodal auditory or visual stimuli under both the divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference in response speed was found between the unimodal visual and bimodal audiovisual stimuli. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual); the facilitation differed significantly between the two conditions, indicating that audiovisual integration was stronger under divided attention than under selective attention in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children, and might offer a new perspective for identifying children with conditions associated with sustained attention deficits, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.

  20. 37 CFR 41.47 - Oral hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... on and call the Board's attention to a recent court or Board opinion which could have an effect on the manner in which the appeal is decided. (k) Visual aids. Visual aids may be used at an oral hearing... copy of each visual aid (photograph in the case of an artifact, a model or an exhibit) for each judge...

  1. Using eye tracking to test for individual differences in attention to attractive faces

    PubMed Central

    Valuch, Christian; Pflüger, Lena S.; Wallner, Bernard; Laeng, Bruno; Ansorge, Ulrich

    2015-01-01

    We assessed individual differences in visual attention toward faces in relation to their attractiveness via saccadic reaction times. Motivated by the aim to understand individual differences in attention to faces, we tested three hypotheses: (a) Attractive faces hold or capture attention more effectively than less attractive faces; (b) men show a stronger bias toward attractive opposite-sex faces than women; and (c) blue-eyed men show a stronger bias toward blue-eyed than brown-eyed feminine faces. The latter test was included because prior research suggested a high effect size. Our data supported hypotheses (a) and (b) but not (c). By conducting separate tests for disengagement of attention and attention capture, we found that individual differences exist at distinct stages of attentional processing but these differences are of varying robustness and importance. In our conclusion, we also advocate the use of linear mixed effects models as the most appropriate statistical approach for studying inter-individual differences in visual attention with naturalistic stimuli. PMID:25698993

  2. Using eye tracking to test for individual differences in attention to attractive faces.

    PubMed

    Valuch, Christian; Pflüger, Lena S; Wallner, Bernard; Laeng, Bruno; Ansorge, Ulrich

    2015-01-01

    We assessed individual differences in visual attention toward faces in relation to their attractiveness via saccadic reaction times. Motivated by the aim to understand individual differences in attention to faces, we tested three hypotheses: (a) Attractive faces hold or capture attention more effectively than less attractive faces; (b) men show a stronger bias toward attractive opposite-sex faces than women; and (c) blue-eyed men show a stronger bias toward blue-eyed than brown-eyed feminine faces. The latter test was included because prior research suggested a high effect size. Our data supported hypotheses (a) and (b) but not (c). By conducting separate tests for disengagement of attention and attention capture, we found that individual differences exist at distinct stages of attentional processing but these differences are of varying robustness and importance. In our conclusion, we also advocate the use of linear mixed effects models as the most appropriate statistical approach for studying inter-individual differences in visual attention with naturalistic stimuli.

  3. The Role of Attention in Information Processing Implications for the Design of Displays

    DTIC Science & Technology

    1989-12-01

processing system. Psychological Review, 214-255. Neisser, U. (1967). Cognitive Psychology. New York, NY: Appleton-Century-Crofts. Neisser, U. (1969)...in the visual display is now an important part of a number of attention models. A related model suggested by Neisser (1967) is that successful...to filter attenuation theory have been proposed by Neisser (1967, 1969). According to Neisser's theory, selective attention is an active process of

  4. Auditory Working Memory Load Impairs Visual Ventral Stream Processing: Toward a Unified Model of Attentional Load

    ERIC Educational Resources Information Center

    Klemen, Jane; Buchel, Christian; Buhler, Mira; Menz, Mareike M.; Rose, Michael

    2010-01-01

    Attentional interference between tasks performed in parallel is known to have strong and often undesired effects. As yet, however, the mechanisms by which interference operates remain elusive. A better knowledge of these processes may facilitate our understanding of the effects of attention on human performance and the debilitating consequences…

  5. Proceedings of the Lake Wilderness Attention Conference. Interim Technical Report, August 1, 1980 through September 30, 1980.

    ERIC Educational Resources Information Center

    Lansman, Marcy, Ed.; Hunt, Earl, Ed.

    This technical report contains papers prepared by the 11 speakers at the 1980 Lake Wilderness (Seattle, Washington) Conference on Attention. The papers are divided into general models, physiological evidence, and visual attention categories. Topics of the papers include the following: (1) willed versus automatic control of behavior; (2) multiple…

  6. A neural model of figure-ground organization.

    PubMed

    Craft, Edward; Schütze, Hartmut; Niebur, Ernst; von der Heydt, Rüdiger

    2007-06-01

    Psychophysical studies suggest that figure-ground organization is a largely autonomous process that guides--and thus precedes--allocation of attention and object recognition. The discovery of border-ownership representation in single neurons of early visual cortex has confirmed this view. Recent theoretical studies have demonstrated that border-ownership assignment can be modeled as a process of self-organization by lateral interactions within V2 cortex. However, the mechanism proposed relies on propagation of signals through horizontal fibers, which would result in increasing delays of the border-ownership signal with increasing size of the visual stimulus, in contradiction with experimental findings. It also remains unclear how the resulting border-ownership representation would interact with attention mechanisms to guide further processing. Here we present a model of border-ownership coding based on dedicated neural circuits for contour grouping that produce border-ownership assignment and also provide handles for mechanisms of selective attention. The results are consistent with neurophysiological and psychophysical findings. The model makes predictions about the hypothetical grouping circuits and the role of feedback between cortical areas.

  7. Common neural substrates for visual working memory and attention.

    PubMed

    Mayer, Jutta S; Bittner, Robert A; Nikolić, Danko; Bledowski, Christoph; Goebel, Rainer; Linden, David E J

    2007-06-01

Humans are severely limited in their ability to memorize visual information over short periods of time. Selective attention has been implicated as a limiting factor. Here we used functional magnetic resonance imaging to test the hypothesis that this limitation is due to common neural resources shared by visual working memory (WM) and selective attention. We combined visual search and delayed discrimination of complex objects and independently modulated the demands on selective attention and WM encoding. Participants were presented with a search array and performed easy or difficult visual search in order to encode one or three complex objects into visual WM. Overlapping activation for attention-demanding visual search and WM encoding was observed in distributed posterior and frontal regions. In the right prefrontal cortex and bilateral insula, blood oxygen-level-dependent activation increased additively with WM load and attentional demand. Conversely, several visual, parietal and premotor areas showed overlapping activation for the two task components and were severely reduced in their WM load response under the condition with high attentional demand. Regions in the left prefrontal cortex were selectively responsive to WM load. Areas selectively responsive to high attentional demand were found within the right prefrontal and bilateral occipital cortex. These results indicate that encoding into visual WM and visual selective attention depend to a high degree on access to common neural resources. We propose that competition for resources shared by visual attention and WM encoding can limit processing capabilities in distributed posterior brain regions.

  8. Recovery of Visual Search following Moderate to Severe Traumatic Brain Injury

    PubMed Central

    Schmitter-Edgecombe, Maureen; Robertson, Kayela

    2015-01-01

Introduction Deficits in attentional abilities can significantly impact rehabilitation and recovery from traumatic brain injury (TBI). This study investigated the nature and recovery of pre-attentive (parallel) and attentive (serial) visual search abilities after TBI. Methods Participants were 40 individuals with moderate to severe TBI who were tested following emergence from post-traumatic amnesia and approximately 8 months post-injury, as well as 40 age- and education-matched controls. Pre-attentive (automatic) and attentive (controlled) visual search situations were created by manipulating the saliency of the target item amongst distractor items in visual displays. The relationship between pre-attentive and attentive visual search rates and follow-up community integration was also explored. Results The results revealed intact parallel (automatic) processing skills in the TBI group both post-acutely and at follow-up. In contrast, when attentional demands on visual search were increased by reducing the saliency of the target, the TBI group demonstrated poorer performance compared to the control group both post-acutely and 8 months post-injury. Neither pre-attentive nor attentive visual search slope values correlated with follow-up community integration. Conclusions These results suggest that utilizing intact pre-attentive visual search skills during rehabilitation may help to reduce high mental workload situations, thereby improving the rehabilitation process. For example, making commonly used objects more salient in the environment should increase reliance on more automatic visual search processes and reduce visual search time for individuals with TBI. PMID:25671675

  9. Auditory and Visual Capture during Focused Visual Attention

    ERIC Educational Resources Information Center

    Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan

    2009-01-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets…

  10. Value-driven attentional capture in the auditory domain.

    PubMed

    Anderson, Brian A

    2016-01-01

    It is now well established that the visual attention system is shaped by reward learning. When visual features are associated with a reward outcome, they acquire high priority and can automatically capture visual attention. To date, evidence for value-driven attentional capture has been limited entirely to the visual system. In the present study, I demonstrate that previously reward-associated sounds also capture attention, interfering more strongly with the performance of a visual task. This finding suggests that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value-driven attention can bias cross-modal stimulus competition.

  11. Attention to Color Sharpens Neural Population Tuning via Feedback Processing in the Human Visual Cortex Hierarchy.

    PubMed

    Bartsch, Mandy V; Loewe, Kristian; Merkel, Christian; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Tsotsos, John K; Hopf, Jens-Max

    2017-10-25

    Attention can facilitate the selection of elementary object features such as color, orientation, or motion. This is referred to as feature-based attention and it is commonly attributed to a modulation of the gain and tuning of feature-selective units in visual cortex. Although gain mechanisms are well characterized, little is known about the cortical processes underlying the sharpening of feature selectivity. Here, we show with high-resolution magnetoencephalography in human observers (men and women) that sharpened selectivity for a particular color arises from feedback processing in the human visual cortex hierarchy. To assess color selectivity, we analyze the response to a color probe that varies in color distance from an attended color target. We find that attention causes an initial gain enhancement in anterior ventral extrastriate cortex that is coarsely selective for the target color and transitions within ∼100 ms into a sharper tuned profile in more posterior ventral occipital cortex. We conclude that attention sharpens selectivity over time by attenuating the response at lower levels of the cortical hierarchy to color values neighboring the target in color space. These observations support computational models proposing that attention tunes feature selectivity in visual cortex through backward-propagating attenuation of units less tuned to the target. SIGNIFICANCE STATEMENT Whether searching for your car, a particular item of clothing, or just obeying traffic lights, in everyday life, we must select items based on color. But how does attention allow us to select a specific color? Here, we use high spatiotemporal resolution neuromagnetic recordings to examine how color selectivity emerges in the human brain. We find that color selectivity evolves as a coarse to fine process from higher to lower levels within the visual cortex hierarchy. 
Our observations support computational models proposing that feature selectivity increases over time by attenuating the responses of less-selective cells in lower-level brain areas. These data emphasize that color perception involves multiple areas across a hierarchy of regions, interacting with each other in a complex, recursive manner. Copyright © 2017 the authors 0270-6474/17/3710346-12$15.00/0.

  12. Enhanced HMAX model with feedforward feature learning for multiclass categorization.

    PubMed

    Li, Yinlin; Wu, Wei; Zhang, Bo; Li, Fengfu

    2015-01-01

In recent years, interdisciplinary research between neuroscience and computer vision has promoted development in both fields. Many biologically inspired visual models have been proposed; among them, the Hierarchical Max-pooling model (HMAX) is a feedforward model mimicking the structures and functions of the V1 to posterior inferotemporal (PIT) layers of the primate visual cortex, generating a series of position- and scale-invariant features. However, it can be improved with attention modulation and memory processing, two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we mimic the first 100-150 ms of visual cognition to enhance the HMAX model, focusing mainly on the unsupervised feedforward feature learning process. The main modifications are as follows: (1) to mimic the attention modulation mechanism of the V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which supports the initial feature extraction for memory processing; (2) to mimic the learning, clustering, and short-term to long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method learns clusters from multiscale middle-level patches, which are taken as long-term memory; (3) inspired by the multiple-feature encoding mode of the primate visual cortex, color, orientation, and spatial position information are encoded progressively in different layers of the HMAX model. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted; results on Caltech101 show that the enhanced model, with a smaller memory size, achieves higher accuracy than the original HMAX model and also outperforms other unsupervised feature learning methods in multiclass categorization tasks.
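The position invariance that HMAX-style C layers provide comes from local max pooling over feature-map responses. A minimal sketch on toy 4x4 maps (hypothetical values, not the enhanced model itself) shows that a response shifted by one pixel survives pooling unchanged:

```python
def max_pool(feature_map, pool):
    """HMAX-style C-layer operation: take the max over non-overlapping
    pool x pool neighborhoods, giving tolerance to small position shifts."""
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for i in range(0, h - pool + 1, pool):
        row = []
        for j in range(0, w - pool + 1, pool):
            row.append(max(feature_map[i + di][j + dj]
                           for di in range(pool) for dj in range(pool)))
        out.append(row)
    return out

# Two S-layer responses differing only by a one-pixel shift of each activation
a = [[0, 5, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 3]]
b = [[5, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 3, 0]]
print(max_pool(a, 2) == max_pool(b, 2))  # True: pooling absorbs the shift
```

Stacking such pooling stages over filters of increasing scale is what yields the position- and scale-invariant features the record describes.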

  13. Finding regions of interest in pathological images: an attentional model approach

    NASA Astrophysics Data System (ADS)

    Gómez, Francisco; Villalón, Julio; Gutierrez, Ricardo; Romero, Eduardo

    2009-02-01

This paper introduces an automated method for finding diagnostic regions of interest (RoIs) in histopathological images. The method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, a coarse search for tissue structures at "low zoom" that separates the image into relevant regions. The pathologist's cognitive performance depends on inherent image visual cues (bottom-up information) and on acquired clinical knowledge (top-down mechanisms); our model of the pathologist's visual attention integrates these two components. The selected bottom-up information includes local low-level features such as intensity, color, orientation and texture. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation of these structures is achieved by an oversegmentation algorithm, inspired by psychological grouping theories, whose parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating an index for each of the low-level characteristics inside a region; relevancy is estimated as a simple average of these indexes, and a binary decision rule then determines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually weighted evaluation criterion, finding a quality gain of 3 dB compared with a classical bottom-up model of attention.
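The integration step described in this record (one index per low-level feature, relevancy as their plain average, then a threshold rule) can be sketched in a few lines; the feature values and the threshold below are hypothetical stand-ins for the learned quantities:

```python
def region_relevance(feature_indexes):
    """Relevancy of a region: the simple average of its per-feature indexes
    (e.g. intensity, color, orientation, texture), as in the integration step."""
    return sum(feature_indexes) / len(feature_indexes)

def is_interesting(feature_indexes, threshold=0.5):
    """Binary decision rule: a region is interesting if its relevancy clears a threshold."""
    return region_relevance(feature_indexes) >= threshold

# Hypothetical per-feature indexes (intensity, color, orientation, texture)
tumor_like = [0.9, 0.7, 0.6, 0.8]
background = [0.1, 0.2, 0.1, 0.1]
print(is_interesting(tumor_like), is_interesting(background))  # True False
```

The paper's contribution lies in how the per-feature indexes themselves fold in both bottom-up cues and learned top-down structure; the final decision stage is this simple.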

  14. Electrophysiological evidence for altered visual, but not auditory, selective attention in adolescent cochlear implant users.

    PubMed

    Harris, Jill; Kamke, Marc R

    2014-11-01

    Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  15. The Deployment of Visual Attention

    DTIC Science & Technology

    2006-03-01

targets: Evidence for memory-based control of attention. Psychonomic Bulletin & Review, 11(1), 71-76. Torralba, A. (2003). Modeling global scene...S., Fencsik, D. E., Tran, L., & Wolfe, J. M. (in press). How do we track invisible objects? Psychonomic Bulletin & Review. *Horowitz, T. S. (in press

  16. Hebbian learning in a model with dynamic rate-coded neurons: an alternative to the generative model approach for learning receptive fields from natural scenes.

    PubMed

    Hamker, Fred H; Wiltschut, Jan

    2007-09-01

Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as closely as possible. We here explore an alternative model of feedback, derived from studies of attention and thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, bandpass filters similar to those found in V1. Due to presynaptic inhibition, the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
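The core idea, feedback acting as a multiplicative gain on the feedforward response combined with Hebbian weight updates, can be sketched in a few lines. A single repeated input pattern and simple weight normalization stand in here for the paper's full dynamic model with presynaptic inhibition and natural-scene input:

```python
import math
import random

def normalize(w):
    """Rescale a weight vector to unit length (keeps Hebbian growth bounded)."""
    n = math.sqrt(sum(x * x for x in w))
    return [x / n for x in w]

def hebbian_step(w, x, gain=1.0, lr=0.1):
    """One Hebbian update. The response is y = gain * (w . x), where `gain`
    models attentional feedback as a multiplicative boost of the feedforward
    signal; weights then move toward the input in proportion to y."""
    y = gain * sum(wi * xi for wi, xi in zip(w, x))
    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return normalize(w)

random.seed(0)
w = normalize([random.random() for _ in range(3)])
pattern = normalize([1.0, 2.0, 0.5])  # a hypothetical repeated stimulus
for _ in range(200):
    w = hebbian_step(w, pattern, gain=1.5)

# The weights converge toward the repeated input pattern
print(all(abs(wi - pi) < 1e-3 for wi, pi in zip(w, pattern)))
```

A larger `gain` (stronger feedback) accelerates convergence without changing the fixed point, which is one way to read the paper's claim that feedback tunes, rather than reconstructs, the feedforward pathway.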

  17. The compensatory dynamic of inter-hemispheric interactions in visuospatial attention revealed using rTMS and fMRI.

    PubMed

    Plow, Ela B; Cattaneo, Zaira; Carlson, Thomas A; Alvarez, George A; Pascual-Leone, Alvaro; Battelli, Lorella

    2014-01-01

A balance of mutual tonic inhibition between bi-hemispheric posterior parietal cortices is believed to play an important role in bilateral visual attention. However, experimental support for this notion has been drawn mainly from clinical models of unilateral damage. We have previously shown that low-frequency repetitive TMS (rTMS) over the intraparietal sulcus (IPS) generates a contralateral attentional deficit in bilateral visual tracking. Here, we used functional magnetic resonance imaging (fMRI) to study whether rTMS temporarily disrupts the inter-hemispheric balance between bilateral IPS in visual attention. Following application of 1 Hz rTMS over the left IPS, subjects performed a bilateral visual tracking task while their brain activity was recorded using fMRI. Behaviorally, tracking accuracy was reduced immediately following rTMS. Areas ventro-lateral to the left IPS, including the inferior parietal lobule (IPL), lateral IPS (LIPS), and middle occipital gyrus (MoG), showed decreased activity following rTMS, while dorsomedial areas, such as the superior parietal lobule (SPL), superior occipital gyrus (SoG), and lingual gyrus, as well as middle temporal areas (MT+), showed higher activity. The activity of the homologues of these regions in the unstimulated right hemisphere showed the reverse pattern. Interestingly, the evolution of network-wide activation related to attentional behavior following rTMS showed that activation of most occipital synergists adaptively compensated for the contralateral and ipsilateral decrement after rTMS, while activation of parietal synergists and the SoG remained in competition. This pattern of ipsilateral and contralateral activations empirically supports the hypothesized loss of inter-hemispheric balance that underlies the clinical manifestation of visual attentional extinction.

  18. Is attention based on spatial contextual memory preferentially guided by low spatial frequency signals?

    PubMed

    Patai, Eva Zita; Buckley, Alice; Nobre, Anna Christina

    2013-01-01

    A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, either high or low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model that LSFs activate contextual memories, which in turn bias attention and facilitate perception.

  19. Is Attention Based on Spatial Contextual Memory Preferentially Guided by Low Spatial Frequency Signals?

    PubMed Central

    Patai, Eva Zita; Buckley, Alice; Nobre, Anna Christina

    2013-01-01

    A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, either high or low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model that LSFs activate contextual memories, which in turn bias attention and facilitate perception. PMID:23776509

  20. Aging and Visual Attention

    PubMed Central

    Madden, David J.

    2007-01-01

    Older adults are often slower and less accurate than are younger adults in performing visual-search tasks, suggesting an age-related decline in attentional functioning. Age-related decline in attention, however, is not entirely pervasive. Visual search that is based on the observer’s expectations (i.e., top-down attention) is relatively preserved as a function of adult age. Neuroimaging research suggests that age-related decline occurs in the structure and function of brain regions mediating the visual sensory input, whereas activation of regions in the frontal and parietal lobes is often greater for older adults than for younger adults. This increased activation may represent an age-related increase in the role of top-down attention during visual tasks. To obtain a more complete account of age-related decline and preservation of visual attention, current research is beginning to explore the relation of neuroimaging measures of brain structure and function to behavioral measures of visual attention. PMID:18080001

  1. Looking you in the mouth: abnormal gaze in autism resulting from impaired top-down modulation of visual attention.

    PubMed

    Neumann, Dirk; Spezio, Michael L; Piven, Joseph; Adolphs, Ralph

    2006-12-01

    People with autism are impaired in their social behavior, including their eye contact with others, but the processes that underlie this impairment remain elusive. We combined high-resolution eye tracking with computational modeling in a group of 10 high-functioning individuals with autism to address this issue. The group fixated the location of the mouth in facial expressions more than did matched controls, even when the mouth was not shown and even in faces that were inverted, most noticeably at latencies of 200-400 ms. Comparisons with a computational model of visual saliency argue that the abnormal bias for fixating the mouth in autism is not driven by an exaggerated sensitivity to the bottom-up saliency of the features, but rather by an abnormal top-down strategy for allocating visual attention.

  2. Visual attention shifting in autism spectrum disorders.

    PubMed

    Richard, Annette E; Lajiness-O'Neill, Renee

    2015-01-01

    Abnormal visual attention has been frequently observed in autism spectrum disorders (ASD). Abnormal shifting of visual attention is related to abnormal development of social cognition and has been identified as a key neuropsychological finding in ASD. Better characterizing attention shifting in ASD and its relationship with social functioning may help to identify new targets for intervention and for improving social communication in these disorders. Thus, the current study investigated deficits in attention shifting in ASD as well as relationships between attention shifting and social communication in ASD and neurotypicals (NT). To investigate deficits in visual attention shifting in ASD, 20 ASD and 20 age- and gender-matched NT completed visual search (VS) and Navon tasks with attention-shifting demands as well as a set-shifting task. VS was a feature search task with targets defined in one of two dimensions; Navon required identification of a target letter presented at the global or local level. Psychomotor and processing speed were entered as covariates. Relationships between visual attention shifting, set shifting, and social functioning were also examined. ASD and NT showed comparable costs of shifting attention. However, psychomotor and processing speed were slower in ASD than in NT, and psychomotor and processing speed were positively correlated with attention-shifting costs on Navon and VS, respectively, for both groups. Attention shifting on VS and Navon were correlated among NT, while attention shifting on Navon was correlated with set shifting among ASD. Attention-shifting costs on Navon were positively correlated with restricted and repetitive behaviors among ASD. Relationships between attention shifting and psychomotor and processing speed, as well as relationships between measures of different aspects of visual attention shifting, suggest inefficient top-down influences over preattentive visual processing in ASD. Inefficient attention shifting may be related to restricted and repetitive behaviors in these disorders.

  3. A computational model of visual marking using an inter-connected network of spiking neurons: the spiking search over time & space model (sSoTS).

    PubMed

    Mavritsaki, Eirini; Heinke, Dietmar; Humphreys, Glyn W; Deco, Gustavo

    2006-01-01

    In the real world, visual information is selected over time as well as space, when we prioritise new stimuli for attention. Watson and Humphreys [Watson, D., Humphreys, G.W., 1997. Visual marking: prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review 104, 90-122] presented evidence that new information in search tasks is prioritised by (amongst other processes) active ignoring of old items - a process they termed visual marking. In this paper we present, for the first time, an explicit computational model of visual marking using biologically plausible activation functions. The "spiking search over time and space" model (sSoTS) incorporates different synaptic components (NMDA, AMPA, GABA) and a frequency adaptation mechanism based on a [Ca(2+)]-sensitive K(+) current. This frequency adaptation current can act as a mechanism that suppresses the previously attended items. We show that, when coupled with a process of active inhibition applied to old items, frequency adaptation leads to old items being de-prioritised (and new items prioritised) across time in search. Furthermore, the time course of these processes mimics the time course of the preview effect in human search. The results indicate that the sSoTS model can provide a biologically plausible account of human search over time as well as space.
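
    The frequency adaptation mechanism described in this abstract can be caricatured in a few lines. The following is a hedged sketch, not the authors' sSoTS code: a single firing-rate unit stands in for the full spiking network, and all names and parameter values are illustrative.

```python
# Hedged sketch, not the authors' sSoTS implementation: sustained firing
# accumulates a calcium-like variable `ca`, which gates an outward
# adaptation current (the [Ca2+]-sensitive K+ current), so a constantly
# driven "old" item is progressively suppressed.

def simulate_adaptation(drive, steps=200, dt=1.0,
                        tau_r=10.0, tau_ca=80.0, g_ahp=0.8):
    """Return the firing-rate trace of one adapting unit under constant drive."""
    r, ca = 0.0, 0.0
    trace = []
    for _ in range(steps):
        i_ahp = g_ahp * ca                         # Ca2+-gated K+ current
        dr = (-r + max(drive - i_ahp, 0.0)) / tau_r
        dca = (-ca + r) / tau_ca                   # Ca2+ builds up with firing
        r += dt * dr
        ca += dt * dca
        trace.append(r)
    return trace

trace = simulate_adaptation(drive=1.0)
# The early response exceeds the late, adapted response: the "old" item
# is de-prioritised over time, in the spirit of visual marking.
print(trace[30] > trace[-1])
```

    In the full model this adaptation is coupled with top-down active inhibition of old items; only the adaptation component is sketched here.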

  4. Active listening impairs visual perception and selectivity: an ERP study of auditory dual-task costs on visual attention.

    PubMed

    Gherri, Elena; Eimer, Martin

    2011-04-01

    The ability to drive safely is disrupted by cell phone conversations, and this has been attributed to a diversion of attention from the visual environment. We employed behavioral and ERP measures to study whether the attentive processing of spoken messages is, in itself, sufficient to produce visual-attentional deficits. Participants searched for visual targets defined by a unique feature (Experiment 1) or feature conjunction (Experiment 2), and simultaneously listened to narrated text passages that had to be recalled later (encoding condition), or heard backward-played speech sounds that could be ignored (control condition). Responses to targets were slower in the encoding condition, and ERPs revealed that the visual processing of search arrays and the attentional selection of target stimuli were less efficient in the encoding relative to the control condition. Results demonstrate that the attentional processing of visual information is impaired when concurrent spoken messages are encoded and maintained, in line with cross-modal links in selective attention, but inconsistent with the view that attentional resources are modality-specific. The distraction of visual attention by active listening could contribute to the adverse effects of cell phone use on driving performance.

  5. Object-based attention underlies the rehearsal of feature binding in visual working memory.

    PubMed

    Shen, Mowei; Huang, Xiang; Gao, Zaifeng

    2015-04-01

    Feature binding is a core concept in many research fields, including the study of working memory (WM). Over the past decade, it has been debated whether maintaining feature bindings in visual WM consumes more visual attention than maintaining the constituent single features. Previous studies have only explored the contribution of domain-general attention or space-based attention in the binding process; no study so far has explored the role of object-based attention in retaining binding in visual WM. We hypothesized that object-based attention underlies the mechanism of rehearsing feature binding in visual WM. Therefore, during the maintenance phase of a visual WM task, we inserted a secondary mental rotation (Experiments 1-3), transparent motion (Experiment 4), or an object-based feature report task (Experiment 5) to consume the object-based attention available for binding. In line with the prediction of the object-based attention hypothesis, Experiments 1-5 revealed a more significant impairment for binding than for the constituent single features. However, this selective binding impairment was not observed when inserting a space-based visual search task (Experiment 6). We conclude that object-based attention underlies the rehearsal of binding representation in visual WM. (c) 2015 APA, all rights reserved.

  6. A formal theory of feature binding in object perception.

    PubMed

    Ashby, F G; Prinzmetal, W; Ivry, R; Maddox, W T

    1996-01-01

    Visual objects are perceived correctly only if their features are identified and then bound together. Illusory conjunctions result when feature identification is correct but an error occurs during feature binding. A new model is proposed that assumes feature binding errors occur because of uncertainty about the location of visual features. This model accounted for data from 2 new experiments better than a model derived from A. M. Treisman and H. Schmidt's (1982) feature integration theory. The traditional method for detecting the occurrence of true illusory conjunctions is shown to be fundamentally flawed. A reexamination of 2 previous studies provided new insights into the role of attention and location information in object perception and a reinterpretation of the deficits in patients who exhibit attentional disorders.

  7. The mechanisms of feature inheritance as predicted by a systems-level model of visual attention and decision making.

    PubMed

    Hamker, Fred H

    2008-07-15

    Feature inheritance provides evidence that properties of an invisible target stimulus can be attached to a following mask. We apply a systems-level model of attention and decision making to explore the influence of memory and feedback connections in feature inheritance. We find that the presence of feedback loops alone is sufficient to account for feature inheritance. Although our simulations do not cover all experimental variations and focus only on the general principle, our result appears of particular interest since the model was designed for a completely different purpose than to explain feature inheritance. We suggest that feedback is an important property in visual perception and provide a description of its mechanism and its role in perception.

  8. Structural Model of the Relationships among Cognitive Processes, Visual Motor Integration, and Academic Achievement in Students with Mild Intellectual Disability (MID)

    ERIC Educational Resources Information Center

    Taha, Mohamed Mostafa

    2016-01-01

    This study aimed to test a proposed structural model of the relationships and existing paths among cognitive processes (attention and planning), visual motor integration, and academic achievement in reading, writing, and mathematics. The study sample consisted of 50 students with mild intellectual disability or MID. The average age of these…

  9. Attending to Eye Movements and Retinal Eccentricity: Evidence for the Activity Distribution Model of Attention Reconsidered

    ERIC Educational Resources Information Center

    Turk-Browne, Nicholas B.; Pratt, Jay

    2005-01-01

    When testing between spotlight and activity distribution models of visual attention, D. LaBerge, R. L. Carlson, J. K. Williams, and B. G. Bunney (1997) used an experimental paradigm in which targets are embedded in 3 brief displays. This paradigm, however, may be confounded by retinal eccentricity effects and saccadic eye movements. When the…

  10. The effect of spatial organization of targets and distractors on the capacity to selectively memorize objects in visual short-term memory.

    PubMed

    Abbes, Aymen Ben; Gavault, Emmanuelle; Ripoll, Thierry

    2014-01-01

    We conducted a series of experiments to explore how the spatial configuration of objects influences the selection and the processing of these objects in a visual short-term memory task. We designed a new experiment in which participants had to memorize 4 targets presented among 4 distractors. Targets were cued during the presentation of distractor objects. Their locations varied according to 4 spatial configurations. From the first to the last configuration, the distance between targets' locations was progressively increased. The results revealed a high capacity to select and memorize targets embedded among distractors even when targets were extremely distant from each other. This capacity is discussed in relation to the unitary conception of attention, models of split attention, and the competitive interaction model. Finally, we propose that the spatial dispersion of objects has different effects on attentional allocation and processing stages. Thus, when targets are extremely distant from each other, attentional allocation becomes more difficult while processing becomes easier. This finding implies that these 2 aspects of attention need to be more clearly distinguished in future research.

  11. Behavioral model of visual perception and recognition

    NASA Astrophysics Data System (ADS)

    Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.

    1993-09-01

    In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separated processing of 'what' (object features) and 'where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using 'where' information; (3) representation of 'what' information in an object-based frame of reference (OFR). However, most recent models of vision based on the OFR have demonstrated the ability of invariant recognition of only simple objects like letters or binary objects without background, i.e. objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This provides our model with the ability to represent complex objects in gray-level images invariantly, but demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision, which extracts a set of primary features (edges) in each fixation, and a high-level subsystem consisting of 'what' (Sensory Memory) and 'where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory. Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention, and successive verification of the expected sets of features (stored in Sensory Memory). The model shows the ability to recognize complex objects (such as faces) in gray-level images invariantly with respect to shift, rotation, and scale.
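
    The recall-and-verify scheme described in this abstract can be sketched in miniature. This is an illustrative toy, not the authors' neural network model: the dict-based image, the feature extractor, and all names are hypothetical stand-ins.

```python
# Illustrative toy of the behavioral scheme: "where" information is stored
# as attention shifts (Motor Memory) and "what" information as expected
# local features (Sensory Memory). Recognition replays the shifts and
# scores feature verifications.

def memorize(image, fixations, extract):
    """Store (shift, expected_features) pairs along a scanpath."""
    scanpath = []
    for prev, cur in zip(fixations, fixations[1:]):
        shift = (cur[0] - prev[0], cur[1] - prev[1])    # Motor Memory
        scanpath.append((shift, extract(image, cur)))   # Sensory Memory
    return fixations[0], scanpath

def recognize(image, start, scanpath, extract):
    """Replay shifts from `start`; return the fraction of verified features."""
    pos, matches = start, 0
    for shift, expected in scanpath:
        pos = (pos[0] + shift[0], pos[1] + shift[1])
        if extract(image, pos) == expected:
            matches += 1
    return matches / len(scanpath)

# Toy "image": local features are just labels at fixation points.
image = {(0, 0): "edge-45", (2, 1): "edge-90", (3, 3): "corner"}
extract = lambda img, p: img.get(p)

start, path = memorize(image, [(0, 0), (2, 1), (3, 3)], extract)
print(recognize(image, start, path, extract))   # → 1.0
```

    Because only relative shifts are stored, the same scanpath would verify a shifted copy of the object given the corresponding starting fixation; rotation and scale invariance would additionally require the feature-based frame of reference described in the abstract.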

  12. Changes in the distribution of sustained attention alter the perceived structure of visual space.

    PubMed

    Fortenbaugh, Francesca C; Robertson, Lynn C; Esterman, Michael

    2017-02-01

    Visual spatial attention is a critical process that allows for the selection and enhanced processing of relevant objects and locations. While studies have shown attentional modulations of perceived location and the representation of distance information across multiple objects, there remains disagreement regarding what influence spatial attention has on the underlying structure of visual space. The present study utilized a method of magnitude estimation in which participants must judge the location of briefly presented targets within the boundaries of their individual visual fields in the absence of any other objects or boundaries. Spatial uncertainty of target locations was used to assess perceived locations across distributed and focused attention conditions without the use of external stimuli, such as visual cues. Across two experiments we tested locations along the cardinal and 45° oblique axes. We demonstrate that focusing attention within a region of space can expand the perceived size of visual space, even in cases where doing so makes performance less accurate. Moreover, the results of the present studies show that when fixation is actively maintained, focusing attention along a visual axis leads to an asymmetrical stretching of visual space that is predominantly focused across the central half of the visual field, consistent with an expansive gradient along the focus of voluntary attention. These results demonstrate that focusing sustained attention peripherally during active fixation leads to an asymmetrical expansion of visual space within the central visual field. Published by Elsevier Ltd.

  13. Higher dietary diversity is related to better visual and auditory sustained attention.

    PubMed

    Shiraseb, Farideh; Siassi, Fereydoun; Qorbani, Mostafa; Sotoudeh, Gity; Rostami, Reza; Narmaki, Elham; Yavari, Parvaneh; Aghasi, Mohadeseh; Shaibu, Osman Mohammed

    2016-04-01

    Attention is a complex cognitive function that is necessary for learning, for following social norms of behaviour and for effective performance of responsibilities and duties. It is especially important in sensitive occupations requiring sustained attention. Improvement of dietary diversity (DD) is recognised as an important factor in health promotion, but its association with sustained attention is unknown. The aim of this study was to determine the association between auditory and visual sustained attention and DD. A cross-sectional study was carried out on 400 women aged 20-50 years who attended sports clubs at Tehran Municipality. Sustained attention was evaluated on the basis of the Integrated Visual and Auditory Continuous Performance Test using Integrated Visual and Auditory software. A single 24-h dietary recall questionnaire was used for DD assessment. Dietary diversity scores (DDS) were determined using the FAO guidelines. The mean visual and auditory sustained attention scores were 40·2 (sd 35·2) and 42·5 (sd 38), respectively. The mean DDS was 4·7 (sd 1·5). After adjusting for age, education years, physical activity, energy intake and BMI, mean visual and auditory sustained attention showed a significant increase as the quartiles of DDS increased (P=0·001). In addition, the mean subscales of attention, including auditory consistency and vigilance, visual persistence, visual and auditory focus, speed, comprehension and full attention, increased significantly with increasing DDS (P<0·05). In conclusion, higher DDS is associated with better visual and auditory sustained attention.

  14. Visual search and attention: an overview.

    PubMed

    Davis, Elizabeth T; Palmer, John

    2004-01-01

    This special feature issue is devoted to attention and visual search. Attention is a central topic in psychology and visual search is both a versatile paradigm for the study of visual attention and a topic of study in itself. Visual search depends on sensory, perceptual, and cognitive processes. As a result, the search paradigm has been used to investigate a diverse range of phenomena. Manipulating the search task can vary the demands on attention. In turn, attention modulates visual search by selecting and limiting the information available at various levels of processing. Focusing on the intersection of attention and search provides a relatively structured window into the wide world of attentional phenomena. In particular, the effects of divided attention are illustrated by the effects of set size (the number of stimuli in a display) and the effects of selective attention are illustrated by cueing subsets of stimuli within the display. These two phenomena provide the starting point for the articles in this special issue. The articles are organized into four general topics to help structure the issues of attention and search.

  15. Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study

    ERIC Educational Resources Information Center

    Bulf, Hermann; Valenza, Eloisa

    2013-01-01

    Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…

  16. Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations

    DOE PAGES

    Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; ...

    2017-08-29

    Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
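
    As a concrete illustration of how a saliency map can be compared with eye-tracking data, the sketch below scores a toy saliency map against a fixation-density map with the Pearson correlation coefficient (CC), one of several standard saliency metrics. The tiny maps and values are invented; they are not output of the DVS model.

```python
# Illustrative saliency-model scoring: Pearson correlation (CC) between a
# model saliency map and a fixation density map. Maps are invented toys.

from statistics import mean, pstdev

def correlation(map_a, map_b):
    """Pearson correlation between two equal-sized 2-D maps."""
    a = [v for row in map_a for v in row]
    b = [v for row in map_b for v in row]
    ma, mb = mean(a), mean(b)
    cov = mean((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (pstdev(a) * pstdev(b))

saliency = [[0.1, 0.9, 0.2],
            [0.1, 0.8, 0.1],
            [0.0, 0.2, 0.1]]    # model predicts a hot spot in the middle column
fixations = [[0.0, 1.0, 0.0],
             [0.0, 1.0, 0.0],
             [0.0, 0.0, 0.0]]   # viewers mostly fixated that column

print(round(correlation(saliency, fixations), 2))   # close to 1: good agreement
```

    Benchmark evaluations typically report several complementary metrics (e.g. CC, AUC, NSS) over many images; CC alone is shown here for brevity.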

  18. Visuospatial selective attention in chickens.

    PubMed

    Sridharan, Devarajan; Ramamurthy, Deepa L; Schwarz, Jason S; Knudsen, Eric I

    2014-05-13

    Voluntary control of attention promotes intelligent, adaptive behaviors by enabling the selective processing of information that is most relevant for making decisions. Despite extensive research on attention in primates, the capacity for selective attention in nonprimate species has never been quantified. Here we demonstrate selective attention in chickens by applying protocols that have been used to characterize visual spatial attention in primates. Chickens were trained to localize and report the vertical position of a target in the presence of task-relevant distracters. A spatial cue, the location of which varied across individual trials, indicated the horizontal, but not vertical, position of the upcoming target. Spatial cueing improved localization performance: accuracy (d') increased and reaction times decreased in a space-specific manner. Distracters severely impaired perceptual performance, and this impairment was greatly reduced by spatial cueing. Signal detection analysis with an "indecision" model demonstrated that spatial cueing significantly increased choice certainty in localizing targets. By contrast, error-aversion certainty (certainty of not making an error) remained essentially constant across cueing protocols, target contrasts, and individuals. The results show that chickens shift spatial attention rapidly and dynamically, following principles of stimulus selection that closely parallel those documented in primates. The findings suggest that the mechanisms that control attention have been conserved through evolution, and establish chickens--a highly visual species that is easily trained and amenable to cutting-edge experimental technologies--as an attractive model for linking behavior to neural mechanisms of selective attention.
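
    For readers unfamiliar with the sensitivity measure d' used in this abstract, it can be computed from hit and false-alarm rates as in the sketch below. This is a generic signal-detection illustration with made-up rates, not the paper's "indecision" model analysis.

```python
# Generic signal detection theory illustration (made-up rates): the
# sensitivity index d' is the difference of the z-transformed hit and
# false-alarm rates.

from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A cued (attended) location typically yields more hits and fewer false
# alarms than an uncued one, so cueing raises d':
cued = d_prime(0.90, 0.10)
uncued = d_prime(0.70, 0.30)
print(round(cued, 2), round(uncued, 2))
```

    Spatial cueing effects like those reported above would appear as a larger d' in the cued condition than in the uncued condition.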

  19. Proceedings of the Lake Wilderness Attention Conference Held at Seattle Washington, 22-24 September 1980.

    DTIC Science & Technology

    1981-07-10

    Pohlmann, L. D. Some models of observer behavior in two-channel auditory signal detection. Perception and Psychophysics, 1973, 14, 101-109. Spelke...spatial), and processing modalities (auditory versus visual input, vocal versus manual response). If validated, this configuration has both theoretical...conclusion that auditory and visual processes will compete, as will spatial and verbal (albeit to a lesser extent than auditory-auditory, visual-visual

  20. Spatial Scaling of the Profile of Selective Attention in the Visual Field.

    PubMed

    Gannon, Matthew A; Knapp, Ashley A; Adams, Thomas G; Long, Stephanie M; Parks, Nathan A

    2016-01-01

    Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provides insights into the attentional mechanisms associated with such spatial scaling.

  1. Visual-spatial abilities relate to mathematics achievement in children with heavy prenatal alcohol exposure

    PubMed Central

    Crocker, N.; Riley, E.P.; Mattson, S.N.

    2014-01-01

    Objective: The current study examined the relationship between mathematics and attention, working memory, and visual memory in children with heavy prenatal alcohol exposure and controls. Method: Fifty-six children (29 AE, 27 CON) were administered measures of global mathematics achievement (WRAT-3 Arithmetic & WISC-III Written Arithmetic), attention (WISC-III Digit Span forward and Spatial Span forward), working memory (WISC-III Digit Span backward and Spatial Span backward), and visual memory (CANTAB Spatial Recognition Memory and Pattern Recognition Memory). The contribution of cognitive domains to mathematics achievement was analyzed using linear regression techniques. Attention, working memory and visual memory data were entered together on step 1 followed by group on step 2, and the interaction terms on step 3. Results: Model 1 accounted for a significant amount of variance in both mathematics achievement measures; however, model fit improved with the addition of group on step 2. Significant predictors of mathematics achievement were Spatial Span forward and backward and Spatial Recognition Memory. Conclusions: These findings suggest that deficits in spatial processing may be related to math impairments seen in FASD. In addition, prenatal alcohol exposure was associated with deficits in mathematics achievement, above and beyond the contribution of general cognitive abilities. PMID:25000323

  2. Visual-spatial abilities relate to mathematics achievement in children with heavy prenatal alcohol exposure.

    PubMed

    Crocker, Nicole; Riley, Edward P; Mattson, Sarah N

    2015-01-01

    The current study examined the relationship between mathematics and attention, working memory, and visual memory in children with heavy prenatal alcohol exposure and controls. Subjects were 56 children (29 AE, 27 CON) who were administered measures of global mathematics achievement (WRAT-3 Arithmetic & WISC-III Written Arithmetic), attention (WISC-III Digit Span forward and Spatial Span forward), working memory (WISC-III Digit Span backward and Spatial Span backward), and visual memory (CANTAB Spatial Recognition Memory and Pattern Recognition Memory). The contribution of cognitive domains to mathematics achievement was analyzed using linear regression techniques. Attention, working memory, and visual memory data were entered together on Step 1 followed by group on Step 2, and the interaction terms on Step 3. Model 1 accounted for a significant amount of variance in both mathematics achievement measures; however, model fit improved with the addition of group on Step 2. Significant predictors of mathematics achievement were Spatial Span forward and backward and Spatial Recognition Memory. These findings suggest that deficits in spatial processing may be related to math impairments seen in FASD. In addition, prenatal alcohol exposure was associated with deficits in mathematics achievement, above and beyond the contribution of general cognitive abilities. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  3. The role of early visual cortex in visual short-term memory and visual attention.

    PubMed

    Offen, Shani; Schluppeck, Denis; Heeger, David J

    2009-06-01

    We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.

  4. [Attention and eye movements in humans: psychophysiological concepts, neurophysiological models and EEG correlates].

    PubMed

    Slavutskaia, M V; Moiseeva, V V; Shul'govskiĭ, V V

    2008-01-01

    This review discusses recently published articles on the problem of attention, describes the most popular psychophysiological concepts and neurophysiological models of attention, and shows the correlation between spatial attention and saccadic eye movements. Evidence is presented for the reflection of attention mechanisms and saccade preparation in the intensity and topography of visual evoked potentials and event-related potentials. On the basis of the authors' own results and literature data, the contribution of attention to saccade preparation and programming is shown. Different kinds of attention are reflected in a complex of EEG potentials of various durations and polarities. Analysis of the parameters and topography of these potentials can serve as a tool for investigating the mechanisms of attention.

  5. Characterizing the effects of feature salience and top-down attention in the early visual system.

    PubMed

    Poltoratski, Sonia; Ling, Sam; McCormack, Devin; Tong, Frank

    2017-07-01

    The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. We used high-resolution functional MRI (fMRI) at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or nonsalient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, whereas the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas. NEW & NOTEWORTHY While spatial attention allows for specific, goal-driven enhancement of stimuli, salient items outside of the current focus of attention must also be prioritized. We used 7T fMRI to compare salience and spatial attentional enhancement along the early visual hierarchy. 
We report additive effects of attention and bottom-up salience in early visual areas, suggesting that salience enhancement is not contingent on the observer's attentional state. Copyright © 2017 the American Physiological Society.

  6. More insight into the interplay of response selection and visual attention in dual-tasks: masked visual search and response selection are performed in parallel.

    PubMed

    Reimer, Christina B; Schubert, Torsten

    2017-09-15

    Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common or on distinct capacity limitations. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time, that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. 
Task 1 was the same as in Experiments 1 and 2, respectively. In both experiments, the d' analysis revealed that response selection did not affect target detection. Overall, Experiments 1-4 indicated that neither the response selection difficulty in the auditory Task 1 (i.e., two-choice vs. four-choice) nor the type of presentation of the search display in Task 2 (i.e., not masked vs. masked) impaired parallel processing of response selection and conjunction search. We concluded that in general, response selection and visual attention (i.e., feature binding) rely on distinct capacity limitations.
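    The locus-of-slack logic applied in Experiments 1 and 2 reduces to comparing set size effects across SOAs. A minimal sketch with made-up mean reaction times (the numbers below are invented, not the experiments' data):

    ```python
    # Hypothetical mean Task 2 search times (ms) by SOA and set size
    rt = {
        ("short", 4): 820, ("short", 8): 860,   # part of the search runs in the slack
        ("long", 4): 620,  ("long", 8): 700,    # full set size effect visible
    }

    effect_short = rt[("short", 8)] - rt[("short", 4)]   # 40 ms
    effect_long = rt[("long", 8)] - rt[("long", 4)]      # 80 ms

    # Underadditive interaction: a smaller set size effect at short SOA implies the
    # search stage proceeded in parallel with Task 1 response selection, because
    # part of the serial search was absorbed into the waiting time ("slack")
    parallel = effect_short < effect_long
    print(effect_short, effect_long, parallel)  # 40 80 True
    ```

    Additive effects (equal set size effects at both SOAs) would instead have indicated that the search stage was queued behind response selection.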

  7. Training of attention functions in children with attention deficit hyperactivity disorder.

    PubMed

    Tucha, Oliver; Tucha, Lara; Kaumann, Gesa; König, Sebastian; Lange, Katharina M; Stasik, Dorota; Streather, Zoe; Engelschalk, Tobias; Lange, Klaus W

    2011-09-01

    Pharmacological treatment of children with ADHD has been shown to be successful; however, medication may not normalize attention functions. The present study was based on a neuropsychological model of attention and assessed the effect of an attention training program on attentional functioning of children with ADHD. Thirty-two children with ADHD and 16 healthy children participated in the study. Children with ADHD were randomly assigned to one of the two conditions, i.e., an attention training program which trained aspects of vigilance, selective attention and divided attention, or a visual perception training which trained perceptual skills, such as perception of figure and ground, form constancy and position in space. The training programs were applied in individual sessions, twice a week, for a period of four consecutive weeks. Healthy children did not receive any training. Alertness, vigilance, selective attention, divided attention, and flexibility were examined prior to and following the interventions. Children with ADHD were assessed and trained while on ADHD medications. Data analysis revealed that the attention training used in the present study led to significant improvements of various aspects of attention, including vigilance, divided attention, and flexibility, while the visual perception training had no specific effects. The findings indicate that attention training programs have the potential to facilitate attentional functioning in children with ADHD treated with ADHD drugs.

  8. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    PubMed

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al., Journal of Neuroscience, 20(17), 6594-6611, 2000; Qiu et al., Nature Neuroscience, 10(11), 1492-1499, 2007; Chen et al., Neuron, 82(3), 682-694, 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.

  9. Great expectations: top-down attention modulates the costs of clutter and eccentricity.

    PubMed

    Steelman, Kelly S; McCarley, Jason S; Wickens, Christopher D

    2013-12-01

    An experiment and modeling effort examined interactions between bottom-up and top-down attentional control in visual alert detection. Participants performed a manual tracking task while monitoring peripheral display channels for alerts of varying salience, eccentricity, and spatial expectancy. Spatial expectancy modulated the influence of salience and eccentricity; alerts in low-probability locations engendered higher miss rates, longer detection times, and larger costs of visual clutter and eccentricity, indicating that top-down attentional control offset the costs of poor bottom-up stimulus quality. Data were compared to the predictions of a computational model of scanning and noticing that incorporates bottom-up and top-down sources of attentional control. The model accounted well for the overall pattern of miss rates and response times, predicting each of the observed main effects and interactions. Empirical results suggest that designers should expect the costs of poor bottom-up visibility to be greater for low expectancy signals, and that the placement of alerts within a display should be determined based on the combination of alert expectancy and response priority. Model fits suggest that the current model can serve as a useful tool for exploring a design space as a precursor to empirical data collection and for generating hypotheses for future experiments. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  10. Object selection costs in visual working memory: A diffusion model analysis of the focus of attention.

    PubMed

    Sewell, David K; Lilburn, Simon D; Smith, Philip L

    2016-11-01

    A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can occur. The need to orient the focus of attention implies that single-object accounts typically predict response time costs associated with object selection even when working memory is not full (i.e., memory load is less than 4 items). For other theories that assume storage of multiple items in the focus of attention, predictions depend on specific assumptions about the way resources are allocated among items held in the focus, and how this affects the time course of retrieval of items from the focus. These broad theoretical accounts have been difficult to distinguish because conventional analyses fail to separate components of empirical response times related to decision-making from components related to selection and retrieval processes associated with accessing information in working memory. To better distinguish these response time components from one another, we analyze data from a probed visual working memory task using extensions of the diffusion decision model. Analysis of model parameters revealed that increases in memory load resulted in (a) reductions in the quality of the underlying stimulus representations in a manner consistent with a sample size model of visual working memory capacity and (b) systematic increases in the time needed to selectively access a probed representation in memory. The results are consistent with single-object theories of the focus of attention. The results are also consistent with a subset of theories that assume a multiobject focus of attention in which resource allocation diminishes both the quality and accessibility of the underlying representations. 
(PsycINFO Database Record (c) 2016 APA, all rights reserved).
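    The diffusion-model intuition used in this analysis (higher memory load degrades the quality of the probed representation, lowering the drift rate and slowing retrieval) can be illustrated with a crude two-boundary simulation. All parameter values below are arbitrary placeholders, not the fitted values from the study:

    ```python
    import numpy as np

    def simulate_ddm(drift, threshold=1.0, ter=0.3, dt=0.005, sigma=1.0,
                     n_trials=500, seed=1):
        """Euler simulation of a two-boundary diffusion decision process.
        Returns (accuracy, mean RT in seconds). Parameter values are illustrative."""
        rng = np.random.default_rng(seed)
        n_correct, rts = 0, []
        for _ in range(n_trials):
            x, t = 0.0, 0.0
            while abs(x) < threshold:  # accumulate until a boundary is crossed
                x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                t += dt
            n_correct += x >= threshold       # upper boundary = correct response
            rts.append(ter + t)               # add non-decision time
        return n_correct / n_trials, float(np.mean(rts))

    # Lower drift stands in for lower-quality memory representations at higher load
    acc_low_load, rt_low_load = simulate_ddm(drift=2.0)
    acc_high_load, rt_high_load = simulate_ddm(drift=1.0)
    ```

    In the study's analysis, load additionally lengthened the time to orient to the probed item, which in this framework would appear as an increase in the non-decision time `ter` rather than in drift.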

  11. Concurrent deployment of visual attention and response selection bottleneck in a dual-task: Electrophysiological and behavioural evidence.

    PubMed

    Reimer, Christina B; Strobach, Tilo; Schubert, Torsten

    2017-12-01

    Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features, resulting in a serial search across the items in the search display; search time therefore increases with set size. We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP) that reflects lateralized visual attention processes. If the response selection processes in Task 1 influenced the visual attention processes in Task 2, N2pc latency and amplitude would be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was deployed concurrently with response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.

  12. Extracting alpha band modulation during visual spatial attention without flickering stimuli using common spatial pattern.

    PubMed

    Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka

    2008-01-01

    In this paper, we focus on alpha band modulation during visual spatial attention in the absence of visual stimuli. Visual spatial attention has been expected to provide a new channel for non-invasive independent brain-computer interfaces (BCI), but little work has been done on this interfacing method. The flickering stimuli used in previous work reduce independence and are difficult to use in practice. We therefore investigated whether visual spatial attention can be detected without such stimuli. Furthermore, common spatial patterns (CSP) were applied for the first time to brain states during visual spatial attention. The performance evaluation was based on three brain states: attention to the left, right, and center directions. Thirty-channel scalp electroencephalographic (EEG) signals over occipital cortex were recorded from five subjects. Without CSP, the analyses achieved an average classification accuracy of 66.44% (range 55.42-72.27%) in discriminating the left- and right-attention classes. With CSP, the average classification accuracy was 75.39% (range 63.75-86.13%). These results suggest that CSP is useful in the context of visual spatial attention, and that alpha band modulation during visual spatial attention without flickering stimuli could provide a new channel for independent BCI, as motor imagery does.
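    A minimal numpy-only sketch of the CSP technique used here: spatial filters are obtained by whitening the composite covariance and eigendecomposing the whitened class-A covariance, which maximizes variance for one class while minimizing it for the other. The synthetic "EEG" data, channel counts, and trial counts below are invented for illustration:

    ```python
    import numpy as np

    def csp_filters(trials_a, trials_b):
        """Common spatial patterns via whitening + eigendecomposition.
        trials_*: arrays of shape (n_trials, n_channels, n_samples).
        Returns rows of spatial filters, most class-A-discriminative first."""
        cov_a = np.mean([np.cov(t) for t in trials_a], axis=0)
        cov_b = np.mean([np.cov(t) for t in trials_b], axis=0)
        d, u = np.linalg.eigh(cov_a + cov_b)
        whiten = u @ np.diag(d ** -0.5) @ u.T       # whitens the composite covariance
        vals, vecs = np.linalg.eigh(whiten @ cov_a @ whiten.T)
        return (vecs.T @ whiten)[::-1]              # rows in descending eigenvalue order

    # Synthetic two-class "EEG": class A carries extra power on channel 0
    rng = np.random.default_rng(0)
    a = rng.normal(size=(40, 8, 128))
    a[:, 0, :] *= 3.0
    b = rng.normal(size=(40, 8, 128))
    w = csp_filters(a, b)

    # Mean log-variance of the top CSP component separates the two classes
    feat = lambda trials: np.mean([np.log(np.var(w[0] @ t)) for t in trials])
    print(feat(a), feat(b))
    ```

    In a full pipeline, log-variance features from the first and last few CSP components would feed a simple classifier, which is presumably how the accuracies reported in the abstract were obtained.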

  13. Attention Increases Spike Count Correlations between Visual Cortical Areas.

    PubMed

    Ruff, Douglas A; Cohen, Marlene R

    2016-07-13

    Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. 
    We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys performed an attention task. We found that attention increased shared variability between neurons in the two areas and that attention increased the effect of microstimulation in V1 on the firing rates of MT neurons. Our results provide support for the hypothesis that attention increases communication between neurons in different brain areas on behaviorally relevant timescales. Copyright © 2016 the authors.

  14. Attention Increases Spike Count Correlations between Visual Cortical Areas

    PubMed Central

    Cohen, Marlene R.

    2016-01-01

    Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. SIGNIFICANCE STATEMENT Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. 
We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys performed an attention task. We found that attention increased shared variability between neurons in the two areas and that attention increased the effect of microstimulation in V1 on the firing rates of MT neurons. Our results provide support for the hypothesis that attention increases communication between neurons in different brain areas on behaviorally relevant timescales. PMID:27413161

  15. Emotional metacontrol of attention: Top-down modulation of sensorimotor processes in a robotic visual search task.

    PubMed

    Belkaid, Marwen; Cuperlier, Nicolas; Gaussier, Philippe

    2017-01-01

    Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call 'Emotional Metacontrol'. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot's visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot's performance and fosters exploratory behavior that avoids deadlocks.

  16. Selective attention in anxiety: distraction and enhancement in visual search.

    PubMed

    Rinck, Mike; Becker, Eni S; Kellermann, Jana; Roth, Walton T

    2003-01-01

    According to cognitive models of anxiety, anxiety patients exhibit an attentional bias towards threat, manifested as greater distractibility by threat stimuli and enhanced detection of them. Both phenomena were studied in two experiments, using a modified visual search task, in which participants were asked to find single target words (GAD-related, speech-related, neutral, or positive) hidden in matrices made up of distractor words (also GAD-related, speech-related, neutral, or positive). Generalized anxiety disorder (GAD) patients, social phobia (SP) patients afraid of giving speeches, and healthy controls participated in the visual search task. GAD patients were slowed by GAD-related distractor words but did not show statistically reliable evidence of enhanced detection of GAD-related target words. SP patients showed neither distraction nor enhancement effects. These results extend previous findings of attentional biases observed with other experimental paradigms. Copyright 2003 Wiley-Liss, Inc.

  17. Perceptual organization and visual attention.

    PubMed

    Kimchi, Ruth

    2009-01-01

    Perceptual organization (the processes structuring visual information into coherent units) and visual attention (the processes by which some visual information in a scene is selected) are crucial for the perception of our visual environment and for visuomotor behavior. Recent research points to important relations between attentional and organizational processes. Several studies demonstrated that perceptual organization constrains attentional selectivity, and other studies suggest that attention can also constrain perceptual organization. In this chapter I focus on two aspects of the relationship between perceptual organization and attention. The first addresses the question of whether or not perceptual organization can take place without attention. I present findings demonstrating that some forms of grouping and figure-ground segmentation can occur without attention, whereas others require controlled attentional processing, depending on the processes involved and the conditions prevailing for each process. These findings challenge the traditional view, which assumes that perceptual organization is a unitary entity that operates preattentively. The second issue addresses the question of whether perceptual organization can affect the automatic deployment of attention. I present findings showing that the mere organization of some elements in the visual field by Gestalt factors into a coherent perceptual unit (an "object"), with no abrupt onset or any other unique transient, can capture attention automatically in a stimulus-driven manner. Taken together, the findings discussed in this chapter demonstrate the multifaceted, interactive relations between perceptual organization and visual attention.

  18. Visual Hybrid Development Learning System (VHDLS) framework for children with autism.

    PubMed

    Banire, Bilikis; Jomhari, Nazean; Ahmad, Rodina

    2015-10-01

    Education can partially remediate the deficits of children with autism. These children therefore require special techniques to gain their attention and interest in learning, compared to typically developing children. Several studies have shown that these children are visual learners. In this study, we proposed a Visual Hybrid Development Learning System (VHDLS) framework that is based on an instructional design model, multimedia cognitive learning theory, and learning styles, in order to guide software developers in developing learning systems for children with autism. The results from this study showed that the attention of children with autism increased more with the proposed VHDLS framework.

  19. The attentional drift-diffusion model extends to simple purchasing decisions.

    PubMed

    Krajbich, Ian; Lu, Dingchao; Camerer, Colin; Rangel, Antonio

    2012-01-01

    How do we make simple purchasing decisions (e.g., whether or not to buy a product at a given price)? Previous work has shown that the attentional drift-diffusion model (aDDM) can provide accurate quantitative descriptions of the psychometric data for binary and trinary value-based choices, and of how the choice process is guided by visual attention. Here we extend the aDDM to the case of purchasing decisions, and test it using an eye-tracking experiment. We find that the model also provides a reasonably accurate quantitative description of the relationship between choice, reaction time, and visual fixations using parameters that are very similar to those that best fit the previous data. The only critical difference is that the choice biases induced by the fixations are about half as big in purchasing decisions as in binary choices. This suggests that a similar computational process is used to make binary choices, trinary choices, and simple purchasing decisions.
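    The aDDM mechanism described here (evidence drifts toward the currently fixated option, with the unattended value discounted) can be sketched for a buy/no-buy decision. All parameter values, the alternating-fixation scheme, and the value/price numbers below are illustrative assumptions, not the paper's fitted parameters:

    ```python
    import numpy as np

    def simulate_addm(v_item, price, theta=0.3, d=0.002, sigma=0.02,
                      barrier=1.0, fix_dur=400, rng=None):
        """One trial of a simplified aDDM for a buy / no-buy decision.
        Gaze alternates between the item and its price every fix_dur ms; the
        unattended value is discounted by theta. Parameters are illustrative."""
        rng = np.random.default_rng() if rng is None else rng
        x, t, looking_at_item = 0.0, 0, True
        while abs(x) < barrier:
            if looking_at_item:
                x += d * (v_item - theta * price) + sigma * rng.standard_normal()
            else:
                x += d * (theta * v_item - price) + sigma * rng.standard_normal()
            t += 1
            if t % fix_dur == 0:
                looking_at_item = not looking_at_item
        return ("buy" if x > 0 else "pass"), t  # choice and RT (ms)

    rng = np.random.default_rng(2)
    choices = [simulate_addm(v_item=5.0, price=3.0, rng=rng)[0] for _ in range(200)]
    p_buy = choices.count("buy") / len(choices)
    ```

    In this framework, the size of the fixation-driven choice bias is governed by the discount parameter theta, which is how the model can accommodate the roughly halved bias the authors report for purchasing decisions relative to binary choice.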

  20. The Attentional Drift-Diffusion Model Extends to Simple Purchasing Decisions

    PubMed Central

    Krajbich, Ian; Lu, Dingchao; Camerer, Colin; Rangel, Antonio

    2012-01-01

    How do we make simple purchasing decisions (e.g., whether or not to buy a product at a given price)? Previous work has shown that the attentional drift-diffusion model (aDDM) can provide accurate quantitative descriptions of the psychometric data for binary and trinary value-based choices, and of how the choice process is guided by visual attention. Here we extend the aDDM to the case of purchasing decisions, and test it using an eye-tracking experiment. We find that the model also provides a reasonably accurate quantitative description of the relationship between choice, reaction time, and visual fixations using parameters that are very similar to those that best fit the previous data. The only critical difference is that the choice biases induced by the fixations are about half as big in purchasing decisions as in binary choices. This suggests that a similar computational process is used to make binary choices, trinary choices, and simple purchasing decisions. PMID:22707945

  1. The Speed of Serial Attention Shifts in Visual Search: Evidence from the N2pc Component.

    PubMed

    Grubert, Anna; Eimer, Martin

    2016-02-01

    Finding target objects among distractors in a visual search display is often assumed to be based on sequential movements of attention between different objects. However, the speed of such serial attention shifts is still under dispute. We employed a search task that encouraged the successive allocation of attention to two target objects in the same search display and measured N2pc components to determine how fast attention moved between these objects. Each display contained one digit in a known color (fixed-color target) and another digit whose color changed unpredictably across trials (variable-color target) together with two gray distractor digits. Participants' task was to find the fixed-color digit and compare its numerical value with that of the variable-color digit. N2pc components to fixed-color targets preceded N2pc components to variable-color digits, demonstrating that these two targets were indeed selected in a fixed serial order. The N2pc to variable-color digits emerged approximately 60 msec after the N2pc to fixed-color digits, which shows that attention can be reallocated very rapidly between different target objects in the visual field. When search display durations were increased, thereby relaxing the temporal demands on serial selection, the two N2pc components to fixed-color and variable-color targets were elicited within 90 msec of each other. Results demonstrate that sequential shifts of attention between different target locations can operate very rapidly at speeds that are in line with the assumptions of serial selection models of visual search.

  2. Neuroelectrical signs of selective attention to color in boys with attention-deficit hyperactivity disorder.

    PubMed

    van der Stelt, O; van der Molen, M; Boudewijn Gunning, W; Kok, A

    2001-10-01

    In order to gain insight into the functional and macroanatomical loci of visual selective processing deficits that may be basic to attention-deficit hyperactivity disorder (ADHD), the present study examined multi-channel event-related potentials (ERPs) recorded from 7- to 11-year-old boys clinically diagnosed as having ADHD (n=24) and age-matched healthy control boys (n=24) while they performed a visual (color) selective attention task. The spatio-temporal dynamics of several ERP components related to attention to color were characterized using topographic profile analysis, topographic mapping of the ERP and associated scalp current density distributions, and spatio-temporal source potential modeling. Boys with ADHD showed a lower target hit rate, a higher false-alarm rate, and a lower perceptual sensitivity than controls. Also, whereas color attention induced in the ERPs from controls a characteristic early frontally maximal selection positivity (FSP), ADHD boys displayed little or no FSP. Similarly, ADHD boys manifested P3b amplitude decrements that were partially lateralized (i.e., maximal at left temporal scalp locations) as well as affected by maturation. These results indicate that ADHD boys suffer from deficits at both relatively early (sensory) and late (semantic) levels of visual selective information processing. The data also support the hypothesis that the visual selective processing deficits observed in the ADHD boys originate from deficits in the strength of activation of a neural network comprising prefrontal and occipito-temporal brain regions. This network seems to be actively engaged during attention to color and may contain the major intracerebral generating sources of the associated scalp-recorded ERP components.

  3. Acute exercise and aerobic fitness influence selective attention during visual search.

    PubMed

    Bullock, Tom; Giesbrecht, Barry

    2014-01-01

    Successful goal-directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention, and whether these effects are mediated by aerobic capacity, remain unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response competition task in an initial baseline session and then at 17-min intervals over a 2 h 16 min test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between completing blocks of the visual task, whereas participants assigned to the control group rested in between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that those individuals who were more fit performed the search task more quickly. Critically, this relationship only emerged in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding about the relationship between exercise and cognition, and also inform current models of selective attention.

  5. A feedback model of visual attention.

    PubMed

    Spratling, M W; Johnson, M H

    2004-03-01

    Feedback connections are a prominent feature of cortical anatomy and are likely to have a significant functional role in neural information processing. We present a neural network model of cortical feedback that successfully simulates neurophysiological data associated with attention. In this domain, our model can be considered a more detailed, and biologically plausible, implementation of the biased competition model of attention. However, our model is more general as it can also explain a variety of other top-down processes in vision, such as figure/ground segmentation and contextual cueing. This model thus suggests that a common mechanism, involving cortical feedback pathways, is responsible for a range of phenomena and provides a unified account of currently disparate areas of research.
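
    The biased-competition principle this model builds on can be illustrated with a toy simulation. The following is a minimal sketch under assumed dynamics, not the Spratling and Johnson implementation: top-down feedback scales the input drive of the behaviorally relevant unit, and divisive lateral inhibition then lets the biased unit suppress its competitors even when bottom-up inputs are equal.

```python
import numpy as np

def biased_competition(inputs, bias, n_iter=50, inhibition=1.0):
    """Toy biased-competition dynamics (illustrative only): each unit's
    response is its top-down-biased input divided by the pooled activity
    of all competing units."""
    r = np.array(inputs, dtype=float)
    drive = np.array(inputs, dtype=float) * np.array(bias, dtype=float)
    for _ in range(n_iter):
        # divisive lateral inhibition from all other units
        r = drive / (1.0 + inhibition * (r.sum() - r))
    return r

# Two equally strong bottom-up stimuli; feedback biases unit 0.
resp = biased_competition([1.0, 1.0], bias=[1.5, 1.0])
```

    With equal inputs, the unit receiving feedback settles at a higher steady-state response, mimicking the attentional modulation the model simulates.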

  6. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using the attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.
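
    The window mechanism described above can be caricatured in a few lines. This is an illustrative sketch, not the University of Tokyo hardware: each window confines processing to a movable, resizable subregion of the frame sampled at its own pixel stride, so a motion measure is computed only where the window looks.

```python
import numpy as np

class LocalWindow:
    """Toy 'local area window' in the spirit of the multi-window idea
    (illustrative only): processing is restricted to a movable subregion
    of the frame, sampled at a configurable pixel stride."""
    def __init__(self, x, y, w, h, stride=1):
        self.x, self.y, self.w, self.h, self.stride = x, y, w, h, stride

    def sample(self, frame):
        return frame[self.y:self.y + self.h:self.stride,
                     self.x:self.x + self.w:self.stride]

    def motion_energy(self, prev_frame, frame):
        """Mean absolute frame difference, computed inside the window only."""
        return np.abs(self.sample(frame).astype(float)
                      - self.sample(prev_frame).astype(float)).mean()

# A bright "ball" appears between two frames; only the window covering
# the event registers motion.
f0 = np.zeros((64, 64))
f1 = np.zeros((64, 64))
f1[30:34, 40:44] = 1.0
win_on = LocalWindow(36, 26, 12, 12)   # window over the event
win_off = LocalWindow(0, 0, 12, 12)    # window elsewhere
```

    Because each window touches only a small pixel region, many such windows can be evaluated per frame, which is the source of the high-speed processing the paper exploits.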

  7. Occipitoparietal alpha-band responses to the graded allocation of top-down spatial attention.

    PubMed

    Dombrowe, Isabel; Hilgetag, Claus C

    2014-09-15

    The voluntary, top-down allocation of visual spatial attention has been linked to changes in the alpha-band of the electroencephalogram (EEG) signal measured over occipital and parietal lobes. In the present study, we investigated how occipitoparietal alpha-band activity changes when people allocate their attentional resources in a graded fashion across the visual field. We asked participants to either completely shift their attention into one hemifield, to balance their attention equally across the entire visual field, or to attribute more attention to one-half of the visual field than to the other. As expected, we found that alpha-band amplitudes decreased more strongly contralaterally than ipsilaterally to the attended side when attention was shifted completely. Alpha-band amplitudes decreased bilaterally when attention was balanced equally across the visual field. However, when participants allocated more attentional resources to one-half of the visual field, this was not reflected in the alpha-band amplitudes, which simply decreased bilaterally. We found that the performance of the participants was more strongly reflected in the coherence between frontal and occipitoparietal brain regions. We conclude that low alpha-band amplitudes seem to be necessary for stimulus detection. Furthermore, complete shifts of attention are directly reflected in the lateralization of alpha-band amplitudes. In the present study, a gradual allocation of visual attention across the visual field was only indirectly reflected in the alpha-band activity over occipital and parietal cortices. Copyright © 2014 the American Physiological Society.

  8. Neural Mechanisms of Selective Visual Attention.

    PubMed

    Moore, Tirin; Zirnsak, Marc

    2017-01-03

    Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.

  9. The Interplay Among Children's Negative Family Representations, Visual Processing of Negative Emotions, and Externalizing Symptoms.

    PubMed

    Davies, Patrick T; Coe, Jesse L; Hentges, Rochelle F; Sturge-Apple, Melissa L; van der Kloet, Erika

    2018-03-01

    This study examined the transactional interplay among children's negative family representations, visual processing of negative emotions, and externalizing symptoms in a sample of 243 preschool children (M age = 4.60 years). Children participated in three annual measurement occasions. Cross-lagged autoregressive models were fit with multimethod, multi-informant data to identify mediational pathways. Consistent with schema-based top-down models, children's negative family representations were associated with their attention to negative faces in an eye-tracking task and with their externalizing symptoms. Children's negative representations of family relationships specifically predicted decreases in their attention to negative emotions, which, in turn, were associated with subsequent increases in their externalizing symptoms. Follow-up analyses indicated that the mediational role of diminished attention to negative emotions was particularly pronounced for angry faces. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  10. Modulation of auditory spatial attention by visual emotional cues: differential effects of attentional engagement and disengagement for pleasant and unpleasant cues.

    PubMed

    Harrison, Neil R; Woodhouse, Rob

    2016-05-01

    Previous research has demonstrated that threatening, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.

  11. Shared and distinct factors driving attention and temporal processing across modalities

    PubMed Central

    Berry, Anne S.; Li, Xu; Lin, Ziyong; Lustig, Cindy

    2013-01-01

    In addition to the classic finding that “sounds are judged longer than lights,” the timing of auditory stimuli is often more precise and accurate than is the timing of visual stimuli. In cognitive models of temporal processing, these modality differences are explained by positing that auditory stimuli more automatically capture and hold attention, more efficiently closing an attentional switch that allows the accumulation of pulses marking the passage of time (Block & Zakay, 1997; Meck, 1991; Penney, 2003). However, attention is a multifaceted construct, and there has been little attempt to determine which aspects of attention may be related to modality effects. We used visual and auditory versions of the Continuous Temporal Expectancy Task (CTET; O'Connell et al., 2009), a timing task previously linked to behavioral and electrophysiological measures of mind-wandering and attention lapses, and tested participants with or without the presence of a video distractor. Performance in the auditory condition was generally superior to that in the visual condition, replicating standard results in the timing literature. The auditory modality was also less affected by declines in sustained attention indexed by declines in performance over time. In contrast, distraction had an equivalent impact on performance in the two modalities. Analysis of individual differences in performance revealed further differences between the two modalities: Poor performance in the auditory condition was primarily related to boredom whereas poor performance in the visual condition was primarily related to distractibility. These results suggest that: 1) challenges to different aspects of attention reveal both modality-specific and nonspecific effects on temporal processing, and 2) different factors drive individual differences when testing across modalities. PMID:23978664

  12. Visual Field Asymmetries in Attention Vary with Self-Reported Attention Deficits

    ERIC Educational Resources Information Center

    Poynter, William; Ingram, Paul; Minor, Scott

    2010-01-01

    The purpose of this study was to determine whether an index of self-reported attention deficits predicts the pattern of visual field asymmetries observed in behavioral measures of attention. Studies of "normal" subjects do not present a consistent pattern of asymmetry in attention functions, with some studies showing better left visual field (LVF)…

  13. The compensatory dynamic of inter-hemispheric interactions in visuospatial attention revealed using rTMS and fMRI

    PubMed Central

    Plow, Ela B.; Cattaneo, Zaira; Carlson, Thomas A.; Alvarez, George A.; Pascual-Leone, Alvaro; Battelli, Lorella

    2014-01-01

    A balance of mutual tonic inhibition between bi-hemispheric posterior parietal cortices is believed to play an important role in bilateral visual attention. However, experimental support for this notion has been mainly drawn from clinical models of unilateral damage. We have previously shown that low-frequency repetitive TMS (rTMS) over the intraparietal sulcus (IPS) generates a contralateral attentional deficit in bilateral visual tracking. Here, we used functional magnetic resonance imaging (fMRI) to study whether rTMS temporarily disrupts the inter-hemispheric balance between bilateral IPS in visual attention. Following application of 1 Hz rTMS over the left IPS, subjects performed a bilateral visual tracking task while their brain activity was recorded using fMRI. Behaviorally, tracking accuracy was reduced immediately following rTMS. Areas ventro-lateral to left IPS, including inferior parietal lobule (IPL), lateral IPS (LIPS), and middle occipital gyrus (MoG), showed decreased activity following rTMS, while dorsomedial areas, such as Superior Parietal Lobule (SPL), Superior occipital gyrus (SoG), and lingual gyrus, as well as middle temporal areas (MT+), showed higher activity. The brain activity of the homologues of these regions in the un-stimulated, right hemisphere was reversed. Interestingly, the evolution of network-wide activation related to attentional behavior following rTMS showed that activation of most occipital synergists adaptively compensated for contralateral and ipsilateral decrement after rTMS, while activation of parietal synergists, and SoG remained competing. This pattern of ipsilateral and contralateral activations empirically supports the hypothesized loss of inter-hemispheric balance that underlies clinical manifestation of visual attentional extinction. PMID:24860462

  14. Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention

    PubMed Central

    Noppeney, Uta

    2018-01-01

    Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
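
    The MLE rule underlying the psychometric and neurometric fits here is the standard cue-combination formula: each signal is weighted by its inverse variance (its reliability), and the fused estimate is never less reliable than either cue alone. A minimal sketch (variable names are ours, not the paper's):

```python
def mle_fuse(x_a, var_a, x_v, var_v):
    """Reliability-weighted fusion of an auditory (x_a) and a visual (x_v)
    location estimate, per the standard MLE cue-combination rule:
    weights are proportional to inverse variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    x_hat = w_a * x_a + w_v * x_v
    var_hat = 1.0 / (1.0 / var_a + 1.0 / var_v)  # fused variance <= each cue's
    return x_hat, var_hat

# A reliable visual cue (variance 1) dominates a noisy auditory cue
# (variance 4), producing the classic ventriloquist pull toward vision.
x_hat, var_hat = mle_fuse(x_a=10.0, var_a=4.0, x_v=0.0, var_v=1.0)
```

    The study's finding is that the empirical weights deviate from this bottom-up prediction when modality-specific attention and report are manipulated.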

  15. Memory Performance for Everyday Motivational and Neutral Objects Is Dissociable from Attention

    PubMed Central

    Schomaker, Judith; Wittmann, Bianca C.

    2017-01-01

    Episodic memory is typically better for items coupled with monetary reward or punishment during encoding. It is yet unclear whether memory is also enhanced for everyday objects with appetitive or aversive values learned through a lifetime of experience, and to what extent episodic memory enhancement for motivational and neutral items is attributable to attention. In a first experiment, we investigated attention to everyday motivational objects using eye-tracking during free-viewing and subsequently tested episodic memory using a remember/know procedure. Attention was directed more to aversive stimuli, as evidenced by longer viewing durations, whereas recollection was higher for both appetitive and aversive objects. In the second experiment, we manipulated the visual contrast of neutral objects to further dissociate attention and memory encoding. While objects presented with high visual contrast were looked at longer, recollection was best for objects presented in unmodified, medium contrast. Generalized logistic mixed models on recollection performance showed that attention as measured by eye movements did not enhance subsequent memory, while motivational value (Experiment 1) and visual contrast (Experiment 2) had quadratic effects in opposite directions. Our findings suggest that an enhancement of incidental memory encoding for appetitive items can occur without an increase in attention and, vice versa, that enhanced attention towards salient neutral objects is not necessarily associated with memory improvement. Together, our results provide evidence for a double dissociation of attention and memory effects under certain conditions. PMID:28694774

  16. [Attention characteristics of children with different clinical subtypes of attention deficit hyperactivity disorder].

    PubMed

    Liu, Wen-Long; Zhao, Xu; Tan, Jian-Hui; Wang, Juan

    2014-09-01

    To explore the attention characteristics of children with different clinical subtypes of attention deficit hyperactivity disorder (ADHD) and to provide a basis for clinical intervention. A total of 345 children diagnosed with ADHD were selected and the subtypes were identified. Attention assessment was performed by the intermediate visual and auditory continuous performance test at diagnosis, and the visual and auditory attention characteristics were compared between children with different subtypes. A total of 122 normal children were recruited in the control group and their attention characteristics were compared with those of children with ADHD. The scores of full scale attention quotient (AQ) and full scale response control quotient (RCQ) of children with all three subtypes of ADHD were significantly lower than those of normal children (P<0.01). The score of auditory RCQ was significantly lower than that of visual RCQ in children with ADHD-hyperactive/impulsive subtype (P<0.05). The scores of auditory AQ and speed quotient (SQ) were significantly higher than those of visual AQ and SQ in three subtypes of ADHD children (P<0.01), while the score of visual precaution quotient (PQ) was significantly higher than that of auditory PQ (P<0.01). No significant differences in auditory or visual AQ were observed between the three subtypes of ADHD. The attention function of children with ADHD is worse than that of normal children, and the impairment of visual attention function is more severe than that of auditory attention function. The degree of functional impairment of visual or auditory attention shows no significant differences among the three subtypes of ADHD.

  17. Attention and Visual Motor Integration in Young Children with Uncorrected Hyperopia.

    PubMed

    Kulp, Marjean Taylor; Ciner, Elise; Maguire, Maureen; Pistilli, Maxwell; Candy, T Rowan; Ying, Gui-Shuang; Quinn, Graham; Cyert, Lynn; Moore, Bruce

    2017-10-01

    Among 4- and 5-year-old children, deficits in measures of attention, visual-motor integration (VMI) and visual perception (VP) are associated with moderate, uncorrected hyperopia (3 to 6 diopters [D]) accompanied by reduced near visual function (near visual acuity worse than 20/40 or stereoacuity worse than 240 seconds of arc). To compare attention, visual motor, and visual perceptual skills in uncorrected hyperopes and emmetropes attending preschool or kindergarten and evaluate their associations with visual function. Participants were 4 and 5 years of age with either hyperopia (≥3 to ≤6 D, astigmatism ≤1.5 D, anisometropia ≤1 D) or emmetropia (hyperopia ≤1 D; astigmatism, anisometropia, and myopia each <1 D), without amblyopia or strabismus. Examiners masked to refractive status administered tests of attention (sustained, receptive, and expressive), VMI, and VP. Binocular visual acuity, stereoacuity, and accommodative accuracy were also assessed at near. Analyses were adjusted for age, sex, race/ethnicity, and parent's/caregiver's education. Two hundred forty-four hyperopes (mean, +3.8 ± [SD] 0.8 D) and 248 emmetropes (+0.5 ± 0.5 D) completed testing. Mean sustained attention score was worse in hyperopes compared with emmetropes (mean difference, -4.1; P < .001 for 3 to 6 D). Mean Receptive Attention score was worse in 4 to 6 D hyperopes compared with emmetropes (by -2.6, P = .01). Hyperopes with reduced near visual acuity (20/40 or worse) had worse scores than emmetropes (-6.4, P < .001 for sustained attention; -3.0, P = .004 for Receptive Attention; -0.7, P = .006 for VMI; -1.3, P = .008 for VP). Hyperopes with stereoacuity of 240 seconds of arc or worse scored significantly worse than emmetropes (-6.7, P < .001 for sustained attention; -3.4, P = .03 for Expressive Attention; -2.2, P = .03 for Receptive Attention; -0.7, P = .01 for VMI; -1.7, P < .001 for VP). Overall, hyperopes with better near visual function generally performed similarly to emmetropes. 
    Moderately hyperopic children were found to have deficits in measures of attention. Hyperopic children with reduced near visual function also had lower scores on VMI and VP than emmetropic children.

  18. Structures and Functions of Selective Attention.

    ERIC Educational Resources Information Center

    Posner, Michael I.

    While neuropsychology relates the neural structures damaged in traumatic brain injury with their cognitive functions in daily life, this report reviews evidence that elementary operations of cognition as defined by cognitive studies are the level at which the brain localizes its computations. Orienting of visual attention is used as a model task.…

  19. A Split-Attention Effect in Multimedia Learning: Evidence for Dual Processing Systems in Working Memory.

    ERIC Educational Resources Information Center

    Mayer, Richard E.; Moreno, Roxana

    1998-01-01

    Multimedia learners (n=146 college students) were able to integrate words and computer-presented pictures more easily when the words were presented aurally rather than visually. This split-attention effect is consistent with a dual-processing model of working memory. (SLD)

  20. Deep hierarchical attention network for video description

    NASA Astrophysics Data System (ADS)

    Li, Shuohao; Tang, Min; Zhang, Jun

    2018-03-01

    Pairing video with natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model for reducing a visual scene to a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on the standard datasets show that our model outperforms state-of-the-art techniques.
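
    The core operation in such attention decoders is a softmax-weighted sum over per-frame encoder features, letting the decoder attend to different frames at each output step. A minimal numpy sketch of temporal soft attention (the scoring weights here are random placeholders, not the paper's trained network):

```python
import numpy as np

def soft_attention(frame_feats, query, W):
    """Generic temporal soft attention (illustrative sketch).
    frame_feats: (T, D) per-frame encoder features
    query:       (H,)   current decoder state
    W:           (H, D) bilinear scoring matrix (placeholder)."""
    scores = frame_feats @ (W.T @ query)   # (T,) relevance of each frame
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over time steps
    context = weights @ frame_feats        # (D,) attention-weighted summary
    return context, weights

rng = np.random.default_rng(0)
T, D, H = 5, 8, 4
context, weights = soft_attention(rng.normal(size=(T, D)),
                                  rng.normal(size=H),
                                  rng.normal(size=(H, D)))
```

    A hierarchical decoder, as proposed in the paper, stacks such attention layers so that lower layers attend within local frame groups and higher layers attend across them, which is where the claimed global-context advantage comes from.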

  1. Auditory and visual capture during focused visual attention.

    PubMed

    Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan

    2009-10-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention was not strongly focused. These benefits disappeared when the attentional focus was increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control. PsycINFO Database Record (c) 2009 APA, all rights reserved.

  2. Toward the influence of temporal attention on the selection of targets in a visual search task: An ERP study.

    PubMed

    Rolke, Bettina; Festl, Freya; Seibold, Verena C

    2016-11-01

    We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.

  3. Retrospective Attention Gates Discrete Conscious Access to Past Sensory Stimuli.

    PubMed

    Thibault, Louis; van den Berg, Ronald; Cavanagh, Patrick; Sergent, Claire

    2016-01-01

    Cueing attention after the disappearance of visual stimuli biases which items will be remembered best. This observation has historically been attributed to the influence of attention on memory as opposed to subjective visual experience. We recently challenged this view by showing that cueing attention after the stimulus can improve the perception of a single Gabor patch at threshold levels of contrast. Here, we test whether this retro-perception actually increases the frequency of consciously perceiving the stimulus, or simply allows for a more precise recall of its features. We used retro-cues in an orientation-matching task and performed mixture-model analysis to independently estimate the proportion of guesses and the precision of non-guess responses. We find that the improvements in performance conferred by retrospective attention are overwhelmingly determined by a reduction in the proportion of guesses, providing strong evidence that attracting attention to the target's location after its disappearance increases the likelihood of perceiving it consciously.
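
    The mixture-model analysis mentioned here is commonly implemented by fitting response errors with a uniform "guess" component plus a von Mises "memory" component, so that guess rate and precision can be estimated separately. A rough sketch under those standard assumptions (a coarse grid-search fit for brevity; the authors' exact fitting procedure may differ):

```python
import numpy as np

def mixture_nll(errors, guess_rate, kappa):
    """Negative log-likelihood of response errors (radians in [-pi, pi])
    under the standard guess + von Mises mixture."""
    vm = np.exp(kappa * np.cos(errors)) / (2 * np.pi * np.i0(kappa))
    like = guess_rate / (2 * np.pi) + (1 - guess_rate) * vm
    return -np.log(like).sum()

def fit_mixture(errors,
                guesses=np.linspace(0, 0.95, 20),
                kappas=np.linspace(0.5, 20, 40)):
    """Coarse grid search over guess rate and concentration (illustrative)."""
    best = min((mixture_nll(errors, g, k), g, k)
               for g in guesses for k in kappas)
    return best[1], best[2]  # (guess_rate, kappa)

# Synthetic data: 30% random guesses, precise non-guess responses.
rng = np.random.default_rng(1)
n = 1000
is_guess = rng.random(n) < 0.3
errs = np.where(is_guess,
                rng.uniform(-np.pi, np.pi, n),
                rng.vonmises(0.0, 8.0, n))
g_hat, k_hat = fit_mixture(errs)
```

    In this framework, the paper's key result corresponds to retrospective attention lowering the fitted guess rate rather than raising the fitted concentration.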

  4. Is goal-directed attentional guidance just intertrial priming? A review.

    PubMed

    Lamy, Dominique F; Kristjánsson, Arni

    2013-07-01

    According to most models of selective visual attention, our goals at any given moment and saliency in the visual field determine attentional priority. But selection is not carried out in isolation--we typically track objects through space and time. This is not well captured within the distinction between goal-directed and saliency-based attentional guidance. Recent studies have shown that selection is strongly facilitated when the characteristics of the objects to be attended and of those to be ignored remain constant between consecutive selections. These studies have generated the proposal that goal-directed or top-down effects are best understood as intertrial priming effects. Here, we provide a detailed overview and critical appraisal of the arguments, experimental strategies, and findings that have been used to promote this idea, along with a review of studies providing potential counterarguments. We divide this review according to different types of attentional control settings that observers are thought to adopt during visual search: feature-based settings, dimension-based settings, and singleton detection mode. We conclude that priming accounts for considerable portions of effects attributed to top-down guidance, but that top-down guidance can be independent of intertrial priming.

  5. Spatial and object-based attention modulates broadband high-frequency responses across the human visual cortical hierarchy.

    PubMed

    Davidesco, Ido; Harel, Michal; Ramot, Michal; Kramer, Uri; Kipervasser, Svetlana; Andelman, Fani; Neufeld, Miri Y; Goelman, Gadi; Fried, Itzhak; Malach, Rafael

    2013-01-16

One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation of both spatial and object-based attention. They were presented with composite stimuli, consisting of a small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one of the objects. We found a consistent increase in broadband high-frequency (30-90 Hz) power, but not in visual evoked potentials, associated with spatial attention starting with V1/V2 and continuing throughout the visual hierarchy. The magnitude of the attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed effects of both spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties.

  6. Degraded attentional modulation of cortical neural populations in strabismic amblyopia

    PubMed Central

    Hou, Chuan; Kim, Yee-Joon; Lai, Xin Jie; Verghese, Preeti

    2016-01-01

    Behavioral studies have reported reduced spatial attention in amblyopia, a developmental disorder of spatial vision. However, the neural populations in the visual cortex linked with these behavioral spatial attention deficits have not been identified. Here, we use functional MRI–informed electroencephalography source imaging to measure the effect of attention on neural population activity in the visual cortex of human adult strabismic amblyopes who were stereoblind. We show that compared with controls, the modulatory effects of selective visual attention on the input from the amblyopic eye are substantially reduced in the primary visual cortex (V1) as well as in extrastriate visual areas hV4 and hMT+. Degraded attentional modulation is also found in the normal-acuity fellow eye in areas hV4 and hMT+ but not in V1. These results provide electrophysiological evidence that abnormal binocular input during a developmental critical period may impact cortical connections between the visual cortex and higher level cortices beyond the known amblyopic losses in V1 and V2, suggesting that a deficit of attentional modulation in the visual cortex is an important component of the functional impairment in amblyopia. Furthermore, we find that degraded attentional modulation in V1 is correlated with the magnitude of interocular suppression and the depth of amblyopia. These results support the view that the visual suppression often seen in strabismic amblyopia might be a form of attentional neglect of the visual input to the amblyopic eye. PMID:26885628

  8. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli

    PubMed Central

    Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.

    2009-01-01

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778

  10. Attentional Processes in Young Children with Congenital Visual Impairment

    ERIC Educational Resources Information Center

    Tadic, Valerie; Pring, Linda; Dale, Naomi

    2009-01-01

    The study investigated attentional processes of 32 preschool children with congenital visual impairment (VI). Children with profound visual impairment (PVI) and severe visual impairment (SVI) were compared to a group of typically developing sighted children in their ability to respond to adult directed attention in terms of establishing,…

  11. Visual Attention to Antismoking PSAs: Smoking Cues versus Other Attention-Grabbing Features

    ERIC Educational Resources Information Center

    Sanders-Jackson, Ashley N.; Cappella, Joseph N.; Linebarger, Deborah L.; Piotrowski, Jessica Taylor; O'Keeffe, Moira; Strasser, Andrew A.

    2011-01-01

This study examines how addicted adult smokers attend visually to smoking-related public service announcements (PSAs). Smokers' onscreen visual fixation is an indicator of cognitive resources allocated to visual attention. Characteristic of individuals with addictive tendencies, smokers are expected to be appetitively activated by…

  12. Attention versus consciousness in the visual brain: differences in conception, phenomenology, behavior, neuroanatomy, and physiology.

    PubMed

    Baars, B J

    1999-07-01

    A common confound between consciousness and attention makes it difficult to think clearly about recent advances in the understanding of the visual brain. Visual consciousness involves phenomenal experience of the visual world, but visual attention is more plausibly treated as a function that selects and maintains the selection of potential conscious contents, often unconsciously. In the same sense, eye movements select conscious visual events, which are not the same as conscious visual experience. According to common sense, visual experience is consciousness, and selective processes are labeled as attention. The distinction is reflected in very different behavioral measures and in very different brain anatomy and physiology. Visual consciousness tends to be associated with the "what" stream of visual feature neurons in the ventral temporal lobe. In contrast, attentional selection and maintenance are mediated by other brain regions, ranging from superior colliculi to thalamus, prefrontal cortex, and anterior cingulate. The author applied the common-sense distinction between attention and consciousness to the theoretical positions of M. I. Posner (1992, 1994) and D. LaBerge (1997, 1998) to show how it helps to clarify the evidence. He concluded that clarity of thought is served by calling a thing by its proper name.

  13. A Comparative Study on the Visual Perceptions of Children with Attention Deficit Hyperactivity Disorder

    NASA Astrophysics Data System (ADS)

    Ahmetoglu, Emine; Aral, Neriman; Butun Ayhan, Aynur

This study was conducted in order to (a) compare the visual perceptions of seven-year-old children diagnosed with attention deficit hyperactivity disorder with those of normally developing children of the same age and development level and (b) determine whether the visual perceptions of children with attention deficit hyperactivity disorder vary with respect to gender, having received preschool education and parents' educational level. A total of 60 children, 30 with attention deficit hyperactivity disorder and 30 with normal development, were assigned to the study. Data about children with attention deficit hyperactivity disorder and their families was collected by using a General Information Form and the visual perception of children was examined through the Frostig Developmental Test of Visual Perception. The Mann-Whitney U-test and Kruskal-Wallis variance analysis were used to determine whether there was a difference between the visual perceptions of children with normal development and those diagnosed with attention deficit hyperactivity disorder and to discover whether the variables of gender, preschool education and parents' educational status affected the visual perceptions of children with attention deficit hyperactivity disorder. The results showed that there was a statistically meaningful difference between the visual perceptions of the two groups and that the visual perceptions of children with attention deficit hyperactivity disorder were affected meaningfully by gender, preschool education and parents' educational status.

  14. The role of visual attention in multiple object tracking: evidence from ERPs.

    PubMed

    Doran, Matthew M; Hoffman, James E

    2010-01-01

We examined the role of visual attention in the multiple object tracking (MOT) task by measuring the amplitude of the N1 component of the event-related potential (ERP) to probe flashes presented on targets, distractors, or empty background areas. We found evidence that visual attention enhances targets and suppresses distractors (Experiments 1 and 3). However, we also found that when tracking load was light (two targets and two distractors), accurate tracking could be carried out without any apparent contribution from the visual attention system (Experiment 2). Our results suggest that attentional selection during MOT is flexibly determined by task demands as well as tracking load and that visual attention may not always be necessary for accurate tracking.

  15. Top-down attention based on object representation and incremental memory for knowledge building and inference.

    PubMed

    Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho

    2013-10-01

Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task-specific top-down attention model to locate a target object based on its form and color representation, along with a bottom-up saliency based on relativity of primitive visual features and some memory modules. In the proposed model, top-down bias signals corresponding to the target form and color features are generated, which draw preferential attention to the desired object by the proposed selective attention model in concomitance with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network, which plays two important roles in object color and form biased attention: one is to incrementally learn and memorize color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference, which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. The involvement of central attention in visual search is determined by task demands.

    PubMed

    Han, Suk Won

    2017-04-01

Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signals at the perceptual stage, while it serves to select appropriate responses or consolidate sensory representations into short-term memory at the central stage. This study investigated the independence and interaction between perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as long as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of the concurrent visual search process plays a crucial role in the functional interaction between two different types of attention.

  17. Choice reaching with a LEGO arm robot (CoRLEGO): The motor system guides visual attention to movement-relevant information

    PubMed Central

    Strauss, Soeren; Woodgate, Philip J.W.; Sami, Saber A.; Heinke, Dietmar

    2015-01-01

We present an extension of a neurobiologically inspired robotics model, termed CoRLEGO (Choice reaching with a LEGO arm robot). CoRLEGO models experimental evidence from choice reaching tasks (CRTs). In a CRT, participants are asked to rapidly reach and touch an item presented on the screen. These experiments show that non-target items can divert the reaching movement away from the ideal trajectory to the target item. This is seen as evidence that attentional selection of reaching targets can leak into the motor system. Using competitive target selection and topological representations of motor parameters (dynamic neural fields), CoRLEGO is able to mimic this leakage effect. Furthermore, if the reaching target is determined by its colour oddity (i.e. a green square among red squares or vice versa), the reaching trajectories become straighter with repetitions of the target colour (colour streaks). This colour priming effect can also be modelled with CoRLEGO. The paper also presents an extension of CoRLEGO. This extension mimics findings that transcranial direct current stimulation (tDCS) over the motor cortex modulates the colour priming effect (Woodgate et al., 2015). The results with the new CoRLEGO suggest that feedback connections from the motor system to the brain's attentional system (parietal cortex) guide visual attention to extract movement-relevant information (i.e. colour) from visual stimuli. This paper adds to growing evidence that there is a close interaction between the motor system and the attention system. This evidence contradicts the traditional conceptualization of the motor system as the endpoint of a serial chain of processing stages. At the end of the paper we discuss CoRLEGO's predictions and also lessons for neurobiologically inspired robotics emerging from this work. PMID:26667353

  18. Conscious visual memory with minimal attention.

    PubMed

    Pinto, Yair; Vandenbroucke, Annelinde R; Otten, Marte; Sligte, Ilja G; Seth, Anil K; Lamme, Victor A F

    2017-02-01

    Is conscious visual perception limited to the locations that a person attends? The remarkable phenomenon of change blindness, which shows that people miss nearly all unattended changes in a visual scene, suggests the answer is yes. However, change blindness is found after visual interference (a mask or a new scene), so that subjects have to rely on working memory (WM), which has limited capacity, to detect the change. Before such interference, however, a much larger capacity store, called fragile memory (FM), which is easily overwritten by newly presented visual information, is present. Whether these different stores depend equally on spatial attention is central to the debate on the role of attention in conscious vision. In 2 experiments, we found that minimizing spatial attention almost entirely erases visual WM, as expected. Critically, FM remains largely intact. Moreover, minimally attended FM responses yield accurate metacognition, suggesting that conscious memory persists with limited spatial attention. Together, our findings help resolve the fundamental issue of how attention affects perception: Both visual consciousness and memory can be supported by only minimal attention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Attentional gain and processing capacity limits predict the propensity to neglect unexpected visual stimuli.

    PubMed

    Papera, Massimiliano; Richards, Anne

    2016-05-01

Exogenous allocation of attentional resources allows the visual system to encode and maintain representations of stimuli in visual working memory (VWM). However, limits in the processing capacity to allocate resources can prevent unexpected visual stimuli from gaining access to VWM and thereby to consciousness. Using a novel approach to create unbiased stimuli of increasing saliency, we investigated visual processing during a visual search task in individuals who show a high or low propensity to neglect unexpected stimuli. When propensity to inattention is high, ERP recordings show a diminished amplification concomitantly with a decrease in theta band power during the N1 latency, followed by poor target enhancement during the N2 latency. Furthermore, a later modulation in the P3 latency was also found in individuals showing propensity to visual neglect, suggesting that more effort is required for conscious maintenance of visual information in VWM. Effects during early stages of processing (N80 and P1) were also observed, suggesting that sensitivity to contrasts and medium-to-high spatial frequencies may be modulated by low-level saliency (albeit no statistical group differences were found). In accordance with the Global Workspace Model, our data indicate that a lack of resources in low-level processors and visual attention may be responsible for the failure to "ignite" a state of high-level activity spread across several brain areas that is necessary for stimuli to access awareness. These findings may aid in the development of diagnostic tests and interventions to detect/reduce the propensity to visually neglect unexpected stimuli. © 2016 Society for Psychophysiological Research.

  20. Peripheral Visual Cues: Their Fate in Processing and Effects on Attention and Temporal-Order Perception.

    PubMed

    Tünnermann, Jan; Scharlau, Ingrid

    2016-01-01

    Peripheral visual cues lead to large shifts in psychometric distributions of temporal-order judgments. In one view, such shifts are attributed to attention speeding up processing of the cued stimulus, so-called prior entry. However, sometimes these shifts are so large that it is unlikely that they are caused by attention alone. Here we tested the prevalent alternative explanation that the cue is sometimes confused with the target on a perceptual level, bolstering the shift of the psychometric function. We applied a novel model of cued temporal-order judgments, derived from Bundesen's Theory of Visual Attention. We found that cue-target confusions indeed contribute to shifting psychometric functions. However, cue-induced changes in the processing rates of the target stimuli play an important role, too. At smaller cueing intervals, the cue increased the processing speed of the target. At larger intervals, inhibition of return was predominant. Earlier studies of cued TOJs were insensitive to these effects because in psychometric distributions they are concealed by the conjoint effects of cue-target confusions and processing rate changes.
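Models in the tradition of Bundesen's Theory of Visual Attention treat each stimulus as accruing evidence at an encoding rate, and a temporal-order judgment as the outcome of a race between the two. The sketch below assumes independent exponential encoding stages, a common simplification; the rate values are illustrative, not parameters from the study above.

```python
import math

def p_first(v_probe, v_ref, soa):
    """Probability that the probe is encoded before the reference.

    v_probe, v_ref: encoding rates (events per second);
    soa: probe onset minus reference onset in seconds
    (negative soa = probe shown first). The two stimuli race
    as independent exponential processes.
    """
    if soa <= 0:  # probe leads by -soa; it may finish during its head start
        head = -soa
        return 1.0 - (v_ref / (v_probe + v_ref)) * math.exp(-v_probe * head)
    else:         # reference leads by soa; symmetric case
        return (v_probe / (v_probe + v_ref)) * math.exp(-v_ref * soa)

# Equal rates: no bias, P = 0.5 at simultaneity. A cue that raises the
# probe's encoding rate shifts the whole psychometric function -- the
# prior-entry pattern discussed above.
neutral = p_first(30.0, 30.0, 0.0)
cued = p_first(60.0, 30.0, 0.0)
```

Sweeping `soa` traces out the psychometric function; in such a model, cue-target confusions would add an extra shift on top of the rate-driven one.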

  1. Low-level visual attention and its relation to joint attention in autism spectrum disorder.

    PubMed

    Jaworski, Jessica L Bean; Eigsti, Inge-Marie

    2017-04-01

    Visual attention is integral to social interaction and is a critical building block for development in other domains (e.g., language). Furthermore, atypical attention (especially joint attention) is one of the earliest markers of autism spectrum disorder (ASD). The current study assesses low-level visual attention and its relation to social attentional processing in youth with ASD and typically developing (TD) youth, aged 7 to 18 years. The findings indicate difficulty overriding incorrect attentional cues in ASD, particularly with non-social (arrow) cues relative to social (face) cues. The findings also show reduced competition in ASD from cues that remain on-screen. Furthermore, social attention, autism severity, and age were all predictors of competing cue processing. The results suggest that individuals with ASD may be biased towards speeded rather than accurate responding, and further, that reduced engagement with visual information may impede responses to visual attentional cues. Once attention is engaged, individuals with ASD appear to interpret directional cues as meaningful. These findings from a controlled, experimental paradigm were mirrored in results from an ecologically valid measure of social attention. Attentional difficulties may be exacerbated during the complex and dynamic experience of actual social interaction. Implications for intervention are discussed.

  2. Attention mediates the flexible allocation of visual working memory resources.

    PubMed

    Emrich, Stephen M; Lockhart, Holly A; Al-Aidroos, Naseem

    2017-07-01

    Though it is clear that it is impossible to store an unlimited amount of information in visual working memory (VWM), the limiting mechanisms remain elusive. While several models of VWM limitations exist, these typically characterize changes in performance as a function of the number of to-be-remembered items. Here, we examine whether changes in spatial attention could better account for VWM performance, independent of load. Across 2 experiments, performance was better predicted by the prioritization of memory items (i.e., attention) than by the number of items to be remembered (i.e., memory load). This relationship followed a power law, and held regardless of whether performance was assessed based on overall precision or any of 3 measures in a mixture model. Moreover, at large set sizes, even minimally attended items could receive a small proportion of resources, without any evidence for a discrete-capacity on the number of items that could be maintained in VWM. Finally, the observed data were best fit by a variable-precision model in which response error was related to the proportion of resources allocated to each item, consistent with a model of VWM in which performance is determined by the continuous allocation of attentional resources during encoding. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
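The power-law relationship described above can be sketched as a mapping from each item's attentional priority to its predicted recall error. The total precision and exponent below are illustrative assumptions, not estimates from the study.

```python
def allocated_sd(priorities, j_total=50.0, exponent=0.75):
    """Predicted recall error (circular SD, arbitrary units) per item.

    Each item's precision J (inverse variance) is assumed to follow a
    power law of the proportion of resources it receives:
    J_i = j_total * share_i ** exponent. The parameter values are
    hypothetical, chosen only to illustrate the shape of the model.
    """
    total = sum(priorities)
    out = []
    for p in priorities:
        j = j_total * (p / total) ** exponent
        out.append((1.0 / j) ** 0.5)  # SD falls as allocated precision rises
    return out

# Four items, one cued with 70% priority: the cued item is recalled more
# precisely (smaller SD), while minimally attended items still receive a
# small share of resources rather than being dropped entirely.
cued_sd = allocated_sd([0.7, 0.1, 0.1, 0.1])
even_sd = allocated_sd([0.25, 0.25, 0.25, 0.25])
```

Note that every item with nonzero priority gets nonzero precision, mirroring the paper's claim that there is no discrete item limit.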

  3. Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.

    PubMed

    Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D

    2013-10-01

    Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Cognitive Control Network Contributions to Memory-Guided Visual Attention

    PubMed Central

    Rosen, Maya L.; Stern, Chantal E.; Michalka, Samantha W.; Devaney, Kathryn J.; Somers, David C.

    2016-01-01

    Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. PMID:25750253

  5. Coding of spatial attention priorities and object features in the macaque lateral intraparietal cortex.

    PubMed

    Levichkina, Ekaterina; Saalmann, Yuri B; Vidyasagar, Trichur R

    2017-03-01

    Primate posterior parietal cortex (PPC) is known to be involved in controlling spatial attention. Neurons in one part of the PPC, the lateral intraparietal area (LIP), show enhanced responses to objects at attended locations. Although many are selective for object features, such as the orientation of a visual stimulus, it is not clear how LIP circuits integrate feature-selective information when providing attentional feedback about behaviorally relevant locations to the visual cortex. We studied the relationship between object feature and spatial attention properties of LIP cells in two macaques by measuring the cells' orientation selectivity and the degree of attentional enhancement while performing a delayed match-to-sample task. Monkeys had to match both the location and orientation of two visual gratings presented separately in time. We found a wide range in orientation selectivity and degree of attentional enhancement among LIP neurons. However, cells with significant attentional enhancement had much less orientation selectivity in their response than cells which showed no significant modulation by attention. Additionally, orientation-selective cells showed working memory activity for their preferred orientation, whereas cells showing attentional enhancement also synchronized with local neuronal activity. These results are consistent with models of selective attention incorporating two stages, where an initial feature-selective process guides a second stage of focal spatial attention. We suggest that LIP contributes to both stages, where the first stage involves orientation-selective LIP cells that support working memory of the relevant feature, and the second stage involves attention-enhanced LIP cells that synchronize to provide feedback on spatial priorities. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.

  6. Attention enhances contrast appearance via increased input baseline of neural responses

    PubMed Central

    Cutrone, Elizabeth K.; Heeger, David J.; Carrasco, Marisa

    2014-01-01

    Covert spatial attention increases the perceived contrast of stimuli at attended locations, presumably via enhancement of visual neural responses. However, the relation between perceived contrast and the underlying neural responses has not been characterized. In this study, we systematically varied stimulus contrast, using a two-alternative, forced-choice comparison task to probe the effect of attention on appearance across the contrast range. We modeled performance in the task as a function of underlying neural contrast-response functions. Fitting this model to the observed data revealed that an increased input baseline in the neural responses accounted for the enhancement of apparent contrast with spatial attention. PMID:25549920

  7. Visual attention during the evaluation of facial attractiveness is influenced by facial angles and smile.

    PubMed

    Kim, Seol Hee; Hwang, Soonshin; Hong, Yeon-Ju; Kim, Jae-Jin; Kim, Kyung-Ho; Chung, Chooryung J

    2018-05-01

To examine the changes in visual attention influenced by facial angles and smile during the evaluation of facial attractiveness. Thirty-three young adults were asked to rate overall facial attractiveness (tasks 1 and 3) or to select the most attractive face (task 2) by looking at multiple panel stimuli consisting of 0°, 15°, 30°, 45°, 60°, and 90° rotated facial photos, with or without a smile, for three model face photos and a self-photo (self-face). Eye gaze and fixation time (FT) were monitored by an eye-tracking device during the tasks. Participants were also asked to fill out a subjective questionnaire asking, "Which face was primarily looked at when evaluating facial attractiveness?" When rating overall facial attractiveness (task 1) for model faces, FT was highest for the 0° face and lowest for the 90° face regardless of the smile (P < .01). However, when the most attractive face was to be selected (task 2), the FT of the 0° face decreased, while it significantly increased for the 45° face (P < .001). When facial attractiveness was evaluated with the simplified panels combining facial angles and smile (task 3), the FT of the 0° smiling face was the highest (P < .01). While most participants reported that they looked mainly at the 0° smiling face when rating facial attractiveness, visual attention was broadly distributed across facial angles. Laterally rotated faces and the presence of a smile strongly influence visual attention during the evaluation of facial esthetics.

  8. Application of Visual Attention in Seismic Attribute Analysis

    NASA Astrophysics Data System (ADS)

    He, M.; Gu, H.; Wang, F.

    2016-12-01

Seismic attributes have been shown to be useful for reservoir prediction, and combining multi-attribute analysis with geostatistics, data mining, and artificial intelligence has further advanced seismic attribute analysis. However, existing methods tend to suffer from non-unique solutions and insufficient generalization ability, owing mainly to the complex relationship between seismic data and geological information, and partly to the methods applied. Visual attention is a mechanism of the human visual system that can rapidly concentrate on a few significant visual objects, even in a cluttered scene, and models of it have good target detection and recognition ability. In our study, the targets to be predicted are treated as visual objects, and an object representation is built from well data in the attribute dimensions. This representation then serves as a criterion, in the same attribute space, for searching for potential targets away from the wells. The method does not predict properties by building a complicated relation between attributes and reservoir properties; instead, it matches candidates against the predetermined standard. It therefore has good generalization ability, and the problem of non-unique solutions can be mitigated by defining a similarity threshold.
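The search procedure described in this abstract can be sketched roughly as follows (hypothetical names and toy data; the actual attributes and similarity measure are not specified in the record). Samples in attribute space are compared against a criterion vector built from well data, and those exceeding a similarity threshold are flagged as potential targets.

```python
import numpy as np

def find_targets(well_attributes, volume_attributes, threshold=0.9):
    """Flag samples whose attribute vectors resemble the well-derived
    target representation, using cosine similarity as the criterion."""
    # Mean attribute vector of the known (well) targets acts as the standard.
    criterion = well_attributes.mean(axis=0)
    norms = np.linalg.norm(volume_attributes, axis=1) * np.linalg.norm(criterion)
    sims = volume_attributes @ criterion / np.where(norms == 0, 1.0, norms)
    return sims >= threshold

# Toy data: 3 seismic attributes per sample.
wells = np.array([[1.0, 0.5, 0.2],
                  [0.9, 0.6, 0.1]])          # known targets at wells
volume = np.array([[0.95, 0.55, 0.15],       # similar to the well targets
                   [-1.0, 0.2, 0.9]])        # dissimilar sample
mask = find_targets(wells, volume, threshold=0.95)
```

Raising or lowering `threshold` trades off false positives against missed targets, which is how the abstract's "threshold of similarity" weakens the non-uniqueness problem.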

  9. Detection of emotional faces: salient physical features guide effective visual search.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2008-08-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  10. The effect of spatial organization of targets and distractors on the capacity to selectively memorize objects in visual short-term memory

    PubMed Central

    Abbes, Aymen Ben; Gavault, Emmanuelle; Ripoll, Thierry

    2014-01-01

We conducted a series of experiments to explore how the spatial configuration of objects influences the selection and the processing of these objects in a visual short-term memory task. We designed a new experiment in which participants had to memorize 4 targets presented among 4 distractors. Targets were cued during the presentation of distractor objects. Their locations varied according to 4 spatial configurations. From the first to the last configuration, the distance between targets’ locations was progressively increased. The results revealed a high capacity to select and memorize targets embedded among distractors even when targets were extremely distant from each other. This capacity is discussed in relation to the unitary conception of attention, models of split attention, and the competitive interaction model. Finally, we propose that the spatial dispersion of objects has different effects on attentional allocation and processing stages. Thus, when targets are extremely distant from each other, attentional allocation becomes more difficult while processing becomes easier. This finding implies that these two aspects of attention need to be more clearly distinguished in future research. PMID:25339978

  11. Selective attention in an insect visual neuron.

    PubMed

    Wiederman, Steven D; O'Carroll, David C

    2013-01-21

Animals need attention to focus on one target amid alternative distracters. Dragonflies, for example, capture flies in swarms comprising prey and conspecifics, a feat that requires neurons to select one moving target from competing alternatives. Diverse evidence, from functional imaging and physiology to psychophysics, highlights the importance of such "competitive selection" in attention for vertebrates. Analogous mechanisms have been proposed in artificial intelligence and even in invertebrates, yet direct neural correlates of attention are scarce from all animal groups. Here, we demonstrate responses from an identified dragonfly visual neuron that perfectly match a model for competitive selection within limits of neuronal variability (r² = 0.83). Responses to individual targets moving at different locations within the receptive field differ in both magnitude and time course. However, responses to two simultaneous targets exclusively track those for one target alone rather than any combination of the pair. Irrespective of target size, contrast, or separation, this neuron selects one target from the pair and perfectly preserves the response, regardless of whether the "winner" is the stronger stimulus if presented alone. This neuron is amenable to electrophysiological recordings, providing neuroscientists with a new model system for studying selective attention. Copyright © 2013 Elsevier Ltd. All rights reserved.
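The winner-take-all behavior this record describes can be illustrated with a minimal sketch (hypothetical code, not the authors' model): the response to a pair of targets equals one target's single-target response exactly, never a weighted mixture of the two.

```python
import numpy as np

def competitive_selection(responses, rng=None):
    """Given per-target response time courses (targets x time), return the
    response to the simultaneous pair under pure competitive selection:
    the output locks onto exactly one target's response."""
    rng = np.random.default_rng(0) if rng is None else rng
    # The winner is chosen here at random, mirroring the finding that the
    # selected target need not be the stronger stimulus presented alone.
    winner = int(rng.integers(responses.shape[0]))
    return responses[winner]

# Two hypothetical single-target response time courses.
target_a = np.array([0.1, 0.5, 0.9, 0.7])
target_b = np.array([0.3, 0.2, 0.4, 0.8])
pair_response = competitive_selection(np.stack([target_a, target_b]))
# pair_response matches one single-target trace exactly, not an average.
```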

  12. Changes in White Matter Microstructure Impact Cognition by Disrupting the Ability of Neural Assemblies to Synchronize.

    PubMed

    Bells, Sonya; Lefebvre, Jérémie; Prescott, Steven A; Dockstader, Colleen; Bouffet, Eric; Skocic, Jovanka; Laughlin, Suzanne; Mabbott, Donald J

    2017-08-23

    Cognition is compromised by white matter (WM) injury but the neurophysiological alterations linking them remain unclear. We hypothesized that reduced neural synchronization caused by disruption of neural signal propagation is involved. To test this, we evaluated group differences in: diffusion tensor WM microstructure measures within the optic radiations, primary visual area (V1), and cuneus; neural phase synchrony to a visual attention cue during visual-motor task; and reaction time to a response cue during the same task between 26 pediatric patients (17/9: male/female) treated with cranial radiation treatment for a brain tumor (12.67 ± 2.76 years), and 26 healthy children (16/10: male/female; 12.01 ± 3.9 years). We corroborated our findings using a corticocortical computational model representing perturbed signal conduction from myelin. Patients show delayed reaction time, WM compromise, and reduced phase synchrony during visual attention compared with healthy children. Notably, using partial least-squares-path modeling we found that WM insult within the optic radiations, V1, and cuneus is a strong predictor of the slower reaction times via disruption of neural synchrony in visual cortex. Observed changes in synchronization were reproduced in a computational model of WM injury. These findings provide new evidence linking cognition with WM via the reliance of neural synchronization on propagation of neural signals. SIGNIFICANCE STATEMENT By comparing brain tumor patients to healthy children, we establish that changes in the microstructure of the optic radiations and neural synchrony during visual attention predict reaction time. Furthermore, by testing the directionality of these links through statistical modeling and verifying our findings with computational modeling, we infer a causal relationship, namely that changes in white matter microstructure impact cognition in part by disturbing the ability of neural assemblies to synchronize. 
Together, our human imaging data and computer simulations show a fundamental connection between WM microstructure and neural synchronization that is critical for cognitive processing. Copyright © 2017 the authors 0270-6474/17/378227-12$15.00/0.

  13. Rules infants look by: Testing the assumption of transitivity in visual salience.

    PubMed

    Kibbe, Melissa M; Kaldy, Zsuzsa; Blaser, Erik

    2018-01-01

    What drives infants' attention in complex visual scenes? Early models of infant attention suggested that the degree to which different visual features were detectable determines their attentional priority. Here, we tested this by asking whether two targets - defined by different features, but each equally salient when evaluated independently - would drive attention equally when pitted head-to-head. In Experiment 1, we presented 6-month-old infants with an array of gabor patches in which a target region varied either in color or spatial frequency from the background. Using a forced-choice preferential-looking method, we measured how readily infants fixated the target as its featural difference from the background was parametrically increased. Then, in Experiment 2, we used these psychometric preference functions to choose values for color and spatial frequency targets that were equally salient (preferred), and pitted them against each other within the same display. We reasoned that, if salience is transitive, then the stimuli should be iso-salient and infants should therefore show no systematic preference for either stimulus. On the contrary, we found that infants consistently preferred the color-defined stimulus. This suggests that computing visual salience in more complex scenes needs to include factors above and beyond local salience values.

  14. A Neural Theory of Visual Attention: Bridging Cognition and Neurophysiology

    ERIC Educational Resources Information Center

    Bundesen, Claus; Habekost, Thomas; Kyllingsbaek, Soren

    2005-01-01

    A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally…

  15. Is There a Common Linkage among Reading Comprehension, Visual Attention, and Magnocellular Processing?

    ERIC Educational Resources Information Center

    Solan, Harold A.; Shelley-Tremblay, John F.; Hansen, Peter C.; Larson, Steven

    2007-01-01

    The authors examined the relationships between reading comprehension, visual attention, and magnocellular processing in 42 Grade 7 students. The goal was to quantify the sensitivity of visual attention and magnocellular visual processing as concomitants of poor reading comprehension in the absence of either vision therapy or cognitive…

  16. Spatial Working Memory Interferes with Explicit, but Not Probabilistic Cuing of Spatial Attention

    ERIC Educational Resources Information Center

    Won, Bo-Yeong; Jiang, Yuhong V.

    2015-01-01

    Recent empirical and theoretical work has depicted a close relationship between visual attention and visual working memory. For example, rehearsal in spatial working memory depends on spatial attention, whereas adding a secondary spatial working memory task impairs attentional deployment in visual search. These findings have led to the proposal…

  17. Selective visual attention to emotional words: Early parallel frontal and visual activations followed by interactive effects in visual cortex.

    PubMed

    Schindler, Sebastian; Kissler, Johanna

    2016-10-01

Human brains spontaneously differentiate between various emotional and neutral stimuli, including written words whose emotional quality is symbolic. In the electroencephalogram (EEG), emotional-neutral processing differences are typically reflected in the early posterior negativity (EPN, 200-300 ms) and the late positive potential (LPP, 400-700 ms). These components are also enlarged by task-driven visual attention, supporting the assumption that emotional content naturally drives attention. Still, the spatio-temporal dynamics of interactions between emotional stimulus content and task-driven attention remain to be specified. Here, we examine this issue in visual word processing. Participants attended to negative, neutral, or positive nouns while high-density EEG was recorded. Emotional content and top-down attention both amplified the EPN component in parallel. On the LPP, by contrast, emotion and attention interacted: Explicit attention to emotional words led to a substantially larger amplitude increase than did explicit attention to neutral words. Source analysis revealed early parallel effects of emotion and attention in bilateral visual cortex and a later interaction of both in right visual cortex. Distinct effects of attention were found in inferior, middle and superior frontal, paracentral, and parietal areas, as well as in the anterior cingulate cortex (ACC). Results specify separate and shared mechanisms of emotion and attention at distinct processing stages. Hum Brain Mapp 37:3575-3587, 2016. © 2016 Wiley Periodicals, Inc.

  18. Haptic over visual information in the distribution of visual attention after tool-use in near and far space.

    PubMed

    Park, George D; Reed, Catherine L

    2015-10-01

Despite attentional prioritization for grasping space near the hands, tool-use appears to transfer attentional bias to the tool's end/functional part. The contributions of haptic and visual inputs to attentional distribution along a tool were investigated as a function of tool-use in near (Experiment 1) and far (Experiment 2) space. Visual attention was assessed with a 50/50, go/no-go, target discrimination task, while a tool was held next to targets appearing near the tool-occupied hand or tool-end. Target response times (RTs) and sensitivity (d-prime) were measured at target locations, before and after functional tool practice, for three conditions: (1) open-tool: tool-end visible (visual + haptic inputs), (2) hidden-tool: tool-end visually obscured (haptic input only), and (3) short-tool: stick missing the tool's length/end (control condition: hand occupied but no visual/haptic input). In near space, both open- and hidden-tool groups showed a tool-end attentional bias (faster RTs toward the tool-end) before practice; after practice, RTs near the hand improved. In far space, the open-tool group showed no bias before practice; after practice, target RTs near the tool-end improved. However, the hidden-tool group showed a consistent tool-end bias despite practice. The absence of comparable effects in the short-tool group suggested that the hidden-tool group's results were specific to haptic input. In conclusion, (1) the allocation of visual attention along a tool due to tool practice differs in near and far space, and (2) visual attention is drawn toward the tool's end even when it is visually obscured, suggesting that haptic input provides sufficient information for directing attention along the tool.

  19. Cross-modal illusory conjunctions between vision and touch.

    PubMed

    Cinel, Caterina; Humphreys, Glyn W; Poli, Riccardo

    2002-10-01

    Cross-modal illusory conjunctions (ICs) happen when, under conditions of divided attention, felt textures are reported as being seen or vice versa. Experiments provided evidence for these errors, demonstrated that ICs are more frequent if tactile and visual stimuli are in the same hemispace, and showed that ICs still occur under forced-choice conditions but do not occur when attention to the felt texture is increased. Cross-modal ICs were also found in a patient with parietal damage even with relatively long presentations of visual stimuli. The data are consistent with there being cross-modal integration of sensory information, with the modality of origin sometimes being misattributed when attention is constrained. The empirical conclusions from the experiments are supported by formal models.

  20. Invariant visual object recognition: a model, with lighting invariance.

    PubMed

    Rolls, Edmund T; Stringer, Simon M

    2006-01-01

How are invariant representations of objects formed in the visual cortex? We describe a neurophysiological and computational approach which focuses on a feature hierarchy model in which invariant representations can be built by self-organizing learning based on the statistics of the visual input. The model can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Transformation learning. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size and, as we show in this paper, lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene.

  1. Visual perception and social foraging in birds.

    PubMed

    Fernández-Juricic, Esteban; Erichsen, Jonathan T; Kacelnik, Alex

    2004-01-01

    Birds gather information about their environment mainly through vision by scanning their surroundings. Many prevalent models of social foraging assume that foraging and scanning are mutually exclusive. Although this assumption is valid for birds with narrow visual fields, these models have also been applied to species with wide fields. In fact, available models do not make precise predictions for birds with large visual fields, in which the head-up, head-down dichotomy is not accurate and, moreover, do not consider the effects of detection distance and limited attention. Studies of how different types of visual information are acquired as a function of body posture and of how information flows within flocks offer new insights into the costs and benefits of living in groups.

  2. Video game experience and its influence on visual attention parameters: an investigation using the framework of the Theory of Visual Attention (TVA).

    PubMed

    Schubert, Torsten; Finke, Kathrin; Redel, Petra; Kluckow, Steffen; Müller, Hermann; Strobach, Tilo

    2015-05-01

Experts with video game experience, in contrast to non-experienced persons, are superior in multiple domains of visual attention. However, it is an open question which basic aspects of attention underlie this superiority. We approached this question using the framework of the Theory of Visual Attention (TVA), with tools that allowed us to assess various parameters related to different aspects of visual attention (e.g., perception threshold, processing speed, visual short-term memory storage capacity, top-down control, spatial distribution of attention), all measurable on the same experimental basis. In Experiment 1, we found advantages of video game experts in perception threshold and visual processing speed, the latter restricted to the lower positions of the computer display used. The observed advantages were not significantly moderated by general person-related characteristics such as personality traits, sensation seeking, intelligence, social anxiety, or health status. Experiment 2 tested a potential causal link between the expert advantages and video game practice with an intervention protocol. It found no effects of action video gaming on perception threshold, visual short-term memory storage capacity, iconic memory storage, top-down control, or spatial distribution of attention after 15 days of training. However, the observed selective improvement of processing speed at the lower positions of the computer screen after video game training, together with retest effects, suggests that the scope for improving basic aspects of visual attention (TVA) through practice is limited. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Visual modeling in an analysis of multidimensional data

    NASA Astrophysics Data System (ADS)

    Zakharova, A. A.; Vekhter, E. V.; Shklyar, A. V.; Pak, A. J.

    2018-01-01

The article proposes an approach to visualization problems and the subsequent analysis of multidimensional data. Requirements for the properties of visual models created for analysis tasks are described. The active use of subjective-perception factors and dynamic visualization is suggested as a promising direction for developing visual analysis tools for multidimensional and voluminous data. Practical results of multidimensional data analysis are shown using the example of a visual model of empirical data on the current state of research into producing silicon carbide by the electric arc method. Solving this problem yielded several results: first, an idea of how a development strategy for the domain might be determined; second, an assessment of the reliability of the published data on this subject and of changes in researchers' areas of attention over time.

  4. Sex-Role Learning: A Test of the Selective Attention Hypothesis

    ERIC Educational Resources Information Center

    Bryan, Janice Westlund; Luria, Zella

    1978-01-01

    Describes 2 experiments in which children ages 5-6 and 9-10 years viewed slides of male and female models performing matched acts which were sex-appropriate, sex-inappropriate, or sex-neutral. Visual attention was assessed by the method of feedback electroencephalography. Recall and preference for the slides were also measured. (Author/JMB)

  5. Selection and response bias as determinants of priming of pop-out search: Revelations from diffusion modeling.

    PubMed

    Burnham, Bryan R

    2018-05-03

    During visual search, both top-down factors and bottom-up properties contribute to the guidance of visual attention, but selection history can influence attention independent of bottom-up and top-down factors. For example, priming of pop-out (PoP) is the finding that search for a singleton target is faster when the target and distractor features repeat than when those features trade roles between trials. Studies have suggested that such priming (selection history) effects on pop-out search manifest either early, by biasing the selection of the preceding target feature, or later in processing, by facilitating response and target retrieval processes. The present study was designed to examine the influence of selection history on pop-out search by introducing a speed-accuracy trade-off manipulation in a pop-out search task. Ratcliff diffusion modeling (RDM) was used to examine how selection history influenced both attentional bias and response execution processes. The results support the hypothesis that selection history biases attention toward the preceding target's features on the current trial and also influences selection of the response to the target.
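The diffusion-model logic in this record can be illustrated with a minimal drift-diffusion simulation (an illustrative sketch, not the Ratcliff fitting procedure used in the study): a higher drift rate, as when selection history biases attention toward a repeated target feature, produces faster and more accurate responses.

```python
import numpy as np

def simulate_ddm(drift, boundary, ndt, n_trials=500, dt=0.005, seed=0):
    """Simulate a simple two-boundary drift-diffusion process.
    Returns response times and choices (1 = upper/correct boundary)."""
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            # Evidence accumulates at mean rate `drift` with unit-variance noise.
            x += drift * dt + np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ndt)  # add non-decision time (encoding + motor)
        choices.append(1 if x > 0 else 0)
    return np.array(rts), np.array(choices)

# Repeated (primed) target feature -> higher drift rate than a swapped feature.
rt_primed, ch_primed = simulate_ddm(drift=2.0, boundary=1.0, ndt=0.3)
rt_swapped, ch_swapped = simulate_ddm(drift=1.0, boundary=1.0, ndt=0.3)
```

In a full RDM analysis, drift rate, boundary separation, starting point, and non-decision time would be fitted to observed RT distributions; comparing fitted parameters across repeat and swap trials is what separates early attentional biasing (drift or starting point) from later response processes (non-decision time).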

  6. Adolescent fluoxetine exposure produces enduring, sex-specific alterations of visual discrimination and attention in rats.

    PubMed

    LaRoche, Ronee B; Morgan, Russell E

    2007-01-01

Over the past two decades the use of selective serotonin reuptake inhibitors (SSRIs) to treat behavioral disorders in children has grown rapidly, despite little evidence regarding the safety and efficacy of these drugs for use in children. Utilizing a rat model, this study investigated whether post-weaning exposure to a prototype SSRI, fluoxetine (FLX), influenced performance on visual tasks designed to measure discrimination learning, sustained attention, inhibitory control, and reaction time. Additionally, sex differences in response to varying doses of fluoxetine were examined. In Experiment 1, female rats were administered (P.O.) fluoxetine (10 mg/kg) or vehicle (apple juice) from PND 25 through PND 49. After a 14-day washout period, subjects were trained to perform a simultaneous visual discrimination task. Subjects were then tested for 20 sessions on a visual attention task that consisted of varied stimulus delays (0, 3, 6, or 9 s) and cue durations (200, 400, or 700 ms). In Experiment 2, both male and female Long-Evans rats (24 F, 24 M) were administered fluoxetine (0, 5, 10, or 15 mg/kg) then tested in the same visual tasks used in Experiment 1, with the addition of open-field and elevated plus-maze testing. Few FLX-related differences were seen in the visual discrimination, open-field, or plus-maze tasks. However, results from the visual attention task indicated a dose-dependent reduction in the performance of fluoxetine-treated males, whereas fluoxetine-treated females tended to improve over baseline. These findings indicate that enduring, behaviorally relevant alterations of the CNS can occur following pharmacological manipulation of the serotonin system during postnatal development.

  7. An investigation of visual selection priority of objects with texture and crossed and uncrossed disparities

    NASA Astrophysics Data System (ADS)

    Khaustova, Dar'ya; Fournier, Jérôme; Wyckens, Emmanuel; Le Meur, Olivier

    2014-02-01

The aim of this research is to understand differences in visual attention to 2D and 3D content depending on texture and amount of depth. Two experiments were conducted using an eye tracker and a 3DTV display. Collected fixation data were used to build saliency maps and to analyze differences between the 2D and 3D conditions. Fifty-one observers participated in the first experiment. Using scenes containing objects with crossed disparity, we found that such objects are the most salient, even when observers experience discomfort due to the high level of disparity. The goal of the second experiment was to determine whether depth is a determinative factor for visual attention. During this experiment, 28 observers watched scenes containing objects with crossed and uncrossed disparities. We evaluated features influencing the saliency of objects in stereoscopic conditions using content with low-level visual features. Univariate tests of significance (MANOVA) showed that texture is more important than depth for the selection of objects. Objects with crossed disparity are significantly more important for selection processes than in 2D, whereas objects with uncrossed disparity influence visual attention in the same way as 2D objects. Analysis of eye movements indicated no difference in saccade length, but fixation durations were significantly higher in stereoscopic conditions for low-level stimuli than in 2D. We believe these experiments can help refine existing models of visual attention for 3D content.

  8. From attentional gating in macaque primary visual cortex to dyslexia in humans.

    PubMed

    Vidyasagar, T R

    2001-01-01

Selective attention is an important aspect of brain function that we need in coping with the immense and constant barrage of sensory information. One model of attention (Feature Integration Theory) that suggests an early selection of spatial locations of objects via an attentional spotlight would also solve the 'binding problem' (that is, how do the different attributes of each object get correctly bound together?). Our experiments have demonstrated modulation of specific locations of interest at the level of the primary visual cortex, both in visual discrimination and memory tasks, where the actual locations of the targets were also important for performing the task. It is suggested that the feedback mediating this modulation arises from the posterior parietal cortex, which would also be consistent with its known role in attentional control. In primates, the magnocellular (M) and parvocellular (P) pathways are the two major streams of inputs from the retina, carrying distinctly different types of information, and they remain fairly segregated in their projections to the primary visual cortex and further into the extra-striate regions. The P inputs go mainly into the ventral (temporal) stream, while the dorsal (parietal) stream is dominated by M inputs. A theory of attentional gating is proposed here in which the M-dominated dorsal stream gates the P inputs into the ventral stream. This framework is used to provide a neural explanation of the processes involved in reading and in learning to read. The scheme also explains how a magnocellular deficit could cause the common reading impairment, dyslexia.

  9. Visual Attention to Radar Displays

    NASA Technical Reports Server (NTRS)

    Moray, N.; Richards, M.; Brophy, C.

    1984-01-01

    A model is described which predicts the allocation of attention to the features of a radar display. It uses the growth of uncertainty and the probability of near collision to call the eye to a feature of the display. The main source of uncertainty is forgetting following a fixation, which is modelled as a two-dimensional diffusion process. The model was used to predict information overload in intercept controllers, and preliminary validation was obtained by recording the eye movements of intercept controllers in simulated and live (practice) interceptions.
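
    The core idea above, that positional uncertainty about a display feature grows as a two-dimensional diffusion after the last fixation, can be sketched as follows. The diffusion coefficient, baseline noise, and threshold are illustrative assumptions, not the paper's fitted values.

```python
# Hedged sketch: uncertainty about a remembered position grows as a 2-D
# diffusion, so positional variance increases linearly with time since
# the last fixation. All parameter values are illustrative only.
import math

def positional_std(t_since_fixation, sigma0=0.1, diffusion=0.05):
    """Std. dev. of a remembered position after t seconds without a look."""
    return math.sqrt(sigma0 ** 2 + 2.0 * diffusion * t_since_fixation)

def next_fixation(times_since_look, threshold=0.5):
    """Call the eye to the feature with the largest uncertainty, provided
    it exceeds a threshold; otherwise keep the current fixation (None)."""
    stds = [positional_std(t) for t in times_since_look]
    i = max(range(len(stds)), key=lambda j: stds[j])
    return i if stds[i] > threshold else None

# Three radar tracks, last looked at 1 s, 6 s, and 0.2 s ago: the stale
# middle track accumulates the most uncertainty and attracts the eye.
choice = next_fixation([1.0, 6.0, 0.2])
```

Under this kind of scheme, overload occurs when uncertainty for several features crosses threshold faster than fixations can be scheduled.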

  10. Functional evolution of new and expanded attention networks in humans

    PubMed Central

    Patel, Gaurav H.; Yang, Danica; Jamerson, Emery C.; Snyder, Lawrence H.; Corbetta, Maurizio; Ferrera, Vincent P.

    2015-01-01

    Macaques are often used as a model system for invasive investigations of the neural substrates of cognition. However, 25 million years of evolution separate humans and macaques from their last common ancestor, and this has likely substantially impacted the function of the cortical networks underlying cognitive processes, such as attention. We examined the homology of frontoparietal networks underlying attention by comparing functional MRI data from macaques and humans performing the same visual search task. Although there are broad similarities, we found fundamental differences between the species. First, humans have more dorsal attention network areas than macaques, indicating that in the course of evolution the human attention system has expanded compared with macaques. Second, potentially homologous areas in the dorsal attention network have markedly different biases toward representing the contralateral hemifield, indicating that the underlying neural architecture of these areas may differ in the most basic of properties, such as receptive field distribution. Third, despite clear evidence of the temporoparietal junction node of the ventral attention network in humans as elicited by this visual search task, we did not find functional evidence of a temporoparietal junction in macaques. None of these differences were the result of differences in training, experimental power, or anatomical variability between the two species. The results of this study indicate that macaque data should be applied to human models of cognition cautiously, and demonstrate how evolution may shape cortical networks. PMID:26170314

  11. Combined Electrophysiological and Behavioral Evidence for the Suppression of Salient Distractors.

    PubMed

    Gaspelin, Nicholas; Luck, Steven J

    2018-05-15

    Researchers have long debated how salient-but-irrelevant features guide visual attention. Pure stimulus-driven theories claim that salient stimuli automatically capture attention irrespective of goals, whereas pure goal-driven theories propose that an individual's attentional control settings determine whether salient stimuli capture attention. However, recent studies have suggested a hybrid model in which salient stimuli attract visual attention but can be actively suppressed by top-down attentional mechanisms. Support for this hybrid model has primarily come from ERP studies demonstrating that salient stimuli, which fail to capture attention, also elicit a distractor positivity (P D ) component, a putative neural index of suppression. Other support comes from a handful of behavioral studies showing that processing at the salient locations is inhibited compared with other locations. The current study was designed to link the behavioral and neural evidence by combining ERP recordings with an experimental paradigm that provides a behavioral measure of suppression. We found that, when a salient distractor item elicited the P D component, processing at the location of this distractor was suppressed below baseline levels. Furthermore, the magnitude of behavioral suppression and the magnitude of the P D component covaried across participants. These findings provide a crucial connection between the behavioral and neural measures of suppression, which opens the door to using the P D component to assess the timing and neural substrates of the behaviorally observed suppression.

  12. Functional evolution of new and expanded attention networks in humans.

    PubMed

    Patel, Gaurav H; Yang, Danica; Jamerson, Emery C; Snyder, Lawrence H; Corbetta, Maurizio; Ferrera, Vincent P

    2015-07-28

    Macaques are often used as a model system for invasive investigations of the neural substrates of cognition. However, 25 million years of evolution separate humans and macaques from their last common ancestor, and this has likely substantially impacted the function of the cortical networks underlying cognitive processes, such as attention. We examined the homology of frontoparietal networks underlying attention by comparing functional MRI data from macaques and humans performing the same visual search task. Although there are broad similarities, we found fundamental differences between the species. First, humans have more dorsal attention network areas than macaques, indicating that in the course of evolution the human attention system has expanded compared with macaques. Second, potentially homologous areas in the dorsal attention network have markedly different biases toward representing the contralateral hemifield, indicating that the underlying neural architecture of these areas may differ in the most basic of properties, such as receptive field distribution. Third, despite clear evidence of the temporoparietal junction node of the ventral attention network in humans as elicited by this visual search task, we did not find functional evidence of a temporoparietal junction in macaques. None of these differences were the result of differences in training, experimental power, or anatomical variability between the two species. The results of this study indicate that macaque data should be applied to human models of cognition cautiously, and demonstrate how evolution may shape cortical networks.

  13. Attention Effects During Visual Short-Term Memory Maintenance: Protection or Prioritization?

    PubMed Central

    Matsukura, Michi; Luck, Steven J.; Vecera, Shaun P.

    2007-01-01

    Interactions between visual attention and visual short-term memory (VSTM) play a central role in cognitive processing. For example, attention can assist in selectively encoding items into visual memory. Attention appears to be able to influence items already stored in visual memory as well; cues that appear long after the presentation of an array of objects can affect memory for those objects (Griffin & Nobre, 2003). In five experiments, we distinguished two possible mechanisms for the effects of cues on items currently stored in VSTM. A protection account proposes that attention protects the cued item from becoming degraded during the retention interval. By contrast, a prioritization account suggests that attention increases a cued item’s priority during the comparison process that occurs when memory is tested. The results of the experiments were consistent with the first of these possibilities, suggesting that attention can serve to protect VSTM representations while they are being maintained. PMID:18078232

  14. Attention distributed across sensory modalities enhances perceptual performance

    PubMed Central

    Mishra, Jyoti; Gazzaley, Adam

    2012-01-01

    This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811

  15. A Comparison of the Visual Attention Patterns of People with Aphasia and Adults without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes

    ERIC Educational Resources Information Center

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-01-01

    Purpose: The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Method: Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological…

  16. Attention modulates perception of visual space

    PubMed Central

    Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng; He, Zijiang J.

    2017-01-01

    Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one’s ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche. PMID:29177198

  17. Automatic Guidance of Visual Attention from Verbal Working Memory

    ERIC Educational Resources Information Center

    Soto, David; Humphreys, Glyn W.

    2007-01-01

    Previous studies have shown that visual attention can be captured by stimuli matching the contents of working memory (WM). Here, the authors assessed the nature of the representation that mediates the guidance of visual attention from WM. Observers were presented with either verbal or visual primes (to hold in memory, Experiment 1; to verbalize,…

  18. Visual Spatial Attention to Multiple Locations At Once: The Jury Is Still Out

    ERIC Educational Resources Information Center

    Jans, Bert; Peters, Judith C.; De Weerd, Peter

    2010-01-01

    Although in traditional attention research the focus of visual spatial attention has been considered as indivisible, many studies in the last 15 years have claimed the contrary. These studies suggest that humans can direct their attention simultaneously to multiple noncontiguous regions of the visual field upon mere instruction. The notion that…

  19. Television Viewing at Home: Age Trends in Visual Attention and Time with TV.

    ERIC Educational Resources Information Center

    Anderson, Daniel R.; And Others

    1986-01-01

    Describes age trends in television viewing time and visual attention of children and adults videotaped in their homes for 10-day periods. Shows that the increase in visual attention to television during the preschool years is consistent with the theory that television program comprehensibility is a major determinant of attention in young children.…

  20. Slow perceptual processing at the core of developmental dyslexia: a parameter-based assessment of visual attention.

    PubMed

    Stenneken, Prisca; Egetemeir, Johanna; Schulte-Körne, Gerd; Müller, Hermann J; Schneider, Werner X; Finke, Kathrin

    2011-10-01

    The cognitive causes as well as the neurological and genetic basis of developmental dyslexia, a complex disorder of written language acquisition, are intensely discussed with regard to multiple-deficit models. Accumulating evidence has revealed dyslexics' impairments in a variety of tasks requiring visual attention. The heterogeneity of these experimental results, however, points to the need for measures that are sufficiently sensitive to differentiate between impaired and preserved attentional components within a unified framework. This first parameter-based group study of attentional components in developmental dyslexia addresses those components that have recently been associated with parietal dysfunctions in dyslexia. We aimed to isolate the general attentional resources that might underlie reduced span performance, i.e., either a deficient working memory storage capacity, or a slowing in visual perceptual processing speed, or both. Furthermore, by analysing attentional selectivity in dyslexia, we addressed a potential lateralized abnormality of visual attention, i.e., a previously suggested rightward spatial deviation compared to normal readers. We investigated a group of high-achieving young adults with persisting dyslexia and matched normal readers in an experimental whole report and a partial report of briefly presented letter arrays. Possible deviations in the parametric values of the dyslexic compared to the control group were taken as markers for the underlying deficit. The dyslexic group showed a striking reduction in perceptual processing speed (by 26% compared to controls), while their working memory storage capacity was in the normal range. In addition, a spatial deviation of attentional weighting compared to the control group was confirmed in dyslexic readers, which was larger in participants with a more severe dyslexic disorder. In general, the present study supports the relevance of perceptual processing speed in disorders of written language acquisition and demonstrates that the parametric assessment provides a suitable tool for specifying the underlying deficit within a unitary framework. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. The Tölz Temporal Topography Study: mapping the visual field across the life span. Part II: cognitive factors shaping visual field maps.

    PubMed

    Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans

    2012-08-01

    Part I described the topography of visual performance over the life span. Performance decline was explained only partly by deterioration of the optical apparatus. Part II therefore examines the influence of higher visual and cognitive functions. Visual field maps of static perimetry, double-pulse resolution (DPR), reaction times, and contrast thresholds for 95 healthy observers were correlated with measures of visual attention (alertness, divided attention, spatial cueing), visual search, and the size of the attention focus. Correlations with the attentional variables were substantial, particularly for variables of temporal processing. DPR thresholds depended on the size of the attention focus. The extraction of cognitive variables from the correlations between topographical variables and participant age substantially reduced those correlations. There is a systematic top-down influence on the aging of visual functions, particularly of temporal variables, that largely explains performance decline and the change of the topography over the life span.

  2. Vision in Flies: Measuring the Attention Span

    PubMed Central

    Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin

    2016-01-01

    A visual stimulus at a particular location of the visual field may elicit a behavior while at the same time equally salient stimuli in other parts do not. This property of visual systems is known as selective visual attention (SVA). The animal is said to have a focus of attention (FoA), which it has shifted to a particular location. Visual attention normally involves an attention span at the location to which the FoA has been shifted. Here the attention span is measured in Drosophila. The fly is tethered and hence has its eyes fixed in space. It can shift its FoA internally. This shift is revealed using two simultaneous test stimuli with characteristic responses at their particular locations. In tethered flight, a wild-type fly keeps its FoA at a certain location for up to 4 s. Flies with a mutation in the radish gene, which has been suggested to be involved in attention-like mechanisms, display a reduced attention span of only 1 s. PMID:26848852

  3. Vision in Flies: Measuring the Attention Span.

    PubMed

    Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin

    2016-01-01

    A visual stimulus at a particular location of the visual field may elicit a behavior while at the same time equally salient stimuli in other parts do not. This property of visual systems is known as selective visual attention (SVA). The animal is said to have a focus of attention (FoA), which it has shifted to a particular location. Visual attention normally involves an attention span at the location to which the FoA has been shifted. Here the attention span is measured in Drosophila. The fly is tethered and hence has its eyes fixed in space. It can shift its FoA internally. This shift is revealed using two simultaneous test stimuli with characteristic responses at their particular locations. In tethered flight, a wild-type fly keeps its FoA at a certain location for up to 4 s. Flies with a mutation in the radish gene, which has been suggested to be involved in attention-like mechanisms, display a reduced attention span of only 1 s.

  4. Guidance of attention by information held in working memory.

    PubMed

    Calleja, Marissa Ortiz; Rich, Anina N

    2013-05-01

    Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object's image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object's image. The same pattern of results held when the target was invariant (Exps. 2-3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.

  5. Intensive video gaming improves encoding speed to visual short-term memory in young male adults.

    PubMed

    Wilms, Inge L; Petersen, Anders; Vangkilde, Signe

    2013-01-01

    The purpose of this study was to measure the effect of action video gaming on central elements of visual attention using Bundesen's (1990) Theory of Visual Attention. To examine the cognitive impact of action video gaming, we tested basic functions of visual attention in 42 young male adults. Participants were divided into three groups depending on the amount of time spent playing action video games: non-players (<2 h/month, N=12), casual players (4-8 h/month, N=10), and experienced players (>15 h/month, N=20). All participants were tested on three tasks that tap central functions of visual attention and short-term memory: a test based on the Theory of Visual Attention (TVA), an enumeration test, and the Attentional Network Test (ANT). The results show that action video gaming does not seem to affect the capacity of visual short-term memory. However, playing action video games does seem to improve the speed with which visual information is encoded into visual short-term memory, and the improvement appears to depend on the time devoted to gaming. This suggests that intense action video gaming improves basic attentional functioning and that this improvement generalizes to other activities. The implications of these findings for cognitive rehabilitation training are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
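
    In TVA-style analyses of the kind used above, the probability that a briefly shown item is encoded into visual short-term memory is commonly modelled as an exponential race: 1 - exp(-v(t - t0)) for exposures t longer than a perceptual threshold t0, where v is the encoding (processing) speed. The sketch below illustrates this standard relationship with purely illustrative parameter values, not the study's estimates.

```python
# Hedged illustration of the TVA encoding-probability curve:
# p(encoded) = 1 - exp(-v * (t - t0)) for t > t0, else 0.
# Parameter values are illustrative only.
import math

def p_encoded(t, v, t0):
    """Probability of encoding one item given exposure duration t (s),
    encoding speed v (items/s), and perceptual threshold t0 (s)."""
    return 1.0 - math.exp(-v * (t - t0)) if t > t0 else 0.0

# A higher encoding speed (as reported for experienced players) raises
# the encoding probability at the same brief exposure:
p_slow = p_encoded(0.1, v=20.0, t0=0.02)
p_fast = p_encoded(0.1, v=35.0, t0=0.02)
```

This separation of v from storage capacity is what lets such analyses say that gaming changed encoding speed while leaving capacity unchanged.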

  6. Cholinergic enhancement of visual attention and neural oscillations in the human brain.

    PubMed

    Bauer, Markus; Kluge, Christian; Bach, Dominik; Bradbury, David; Heinze, Hans Jochen; Dolan, Raymond J; Driver, Jon

    2012-03-06

    Cognitive processes such as visual perception and selective attention induce specific patterns of brain oscillations. The neurochemical bases of these spectral changes in neural activity are largely unknown, but neuromodulators are thought to regulate processing. The cholinergic system is linked to attentional function in vivo, whereas separate in vitro studies show that cholinergic agonists induce high-frequency oscillations in slice preparations. This has led to theoretical proposals that cholinergic enhancement of visual attention might operate via gamma oscillations in visual cortex, although low-frequency alpha/beta modulation may also play a key role. Here we used MEG to record cortical oscillations in the context of administration of a cholinergic agonist (physostigmine) during a spatial visual attention task in humans. This cholinergic agonist enhanced spatial attention effects on low-frequency alpha/beta oscillations in visual cortex, an effect correlating with a drug-induced speeding of performance. By contrast, the cholinergic agonist did not alter high-frequency gamma oscillations in visual cortex. Thus, our findings show that cholinergic neuromodulation enhances attentional selection via an impact on oscillatory synchrony in visual cortex, for low rather than high frequencies. We discuss this dissociation between high- and low-frequency oscillations in relation to proposals that lower-frequency oscillations are generated by feedback pathways within visual cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Eye movements and attention: The role of pre-saccadic shifts of attention in perception, memory and the control of saccades

    PubMed Central

    Gersch, Timothy M.; Schnitzer, Brian S.; Dosher, Barbara A.; Kowler, Eileen

    2012-01-01

    Saccadic eye movements and perceptual attention work in a coordinated fashion to allow selection of the objects, features or regions with the greatest momentary need for limited visual processing resources. This study investigates perceptual characteristics of pre-saccadic shifts of attention during a sequence of saccades using the visual manipulations employed to study mechanisms of attention during maintained fixation. The first part of this paper reviews studies of the connections between saccades and attention, and their significance for both saccadic control and perception. The second part presents three experiments that examine the effects of pre-saccadic shifts of attention on vision during sequences of saccades. Perceptual enhancements at the saccadic goal location relative to non-goal locations were found across a range of stimulus contrasts, with either perceptual discrimination or detection tasks, with either single or multiple perceptual targets, and regardless of the presence of external noise. The results show that the preparation of saccades can evoke a variety of attentional effects, including attentionally-mediated changes in the strength of perceptual representations, selection of targets for encoding in visual memory, exclusion of external noise, or changes in the levels of internal visual noise. The visual changes evoked by saccadic planning make it possible for the visual system to effectively use saccadic eye movements to explore the visual environment. PMID:22809798

  8. Scene segmentation by spike synchronization in reciprocally connected visual areas. II. Global assemblies and synchronization on larger space and time scales.

    PubMed

    Knoblauch, Andreas; Palm, Günther

    2002-09-01

    We present further simulation results of the model of two reciprocally connected visual areas proposed in the first paper [Knoblauch and Palm (2002) Biol Cybern 87:151-167]. One area corresponds to the orientation-selective subsystem of the primary visual cortex, the other is modeled as an associative memory representing stimulus objects according to Hebbian learning. We examine the scene-segmentation capability of our model on larger time and space scales, and relate it to experimental findings. Scene segmentation is achieved by attention switching on a time-scale longer than the gamma range. We find that the time-scale can vary depending on habituation parameters in the range of tens to hundreds of milliseconds. The switching process can be related to findings concerning attention and biased competition, and we reproduce experimental poststimulus time histograms (PSTHs) of single neurons under different stimulus and attentional conditions. In a larger variant the model exhibits traveling waves of activity on both slow and fast time-scales, with properties similar to those found in experiments. An apparent weakness of our standard model is the tendency to produce anti-phase correlations for fast activity from the two areas. Increasing the inter-areal delays in our model produces alternations of in-phase and anti-phase oscillations. The experimentally observed in-phase correlations can most naturally be obtained by the involvement of both fast and slow inter-areal connections; e.g., by two axon populations corresponding to fast-conducting myelinated and slow-conducting unmyelinated axons.

  9. Focused and divided attention abilities in the acute phase of recovery from moderate to severe traumatic brain injury.

    PubMed

    Robertson, Kayela; Schmitter-Edgecombe, Maureen

    2017-01-01

    Impairments in attention following traumatic brain injury (TBI) can significantly impact recovery and rehabilitation effectiveness. This study investigated the multi-faceted construct of selective attention following TBI, highlighting the differences on visual nonsearch (focused attention) and search (divided attention) tasks. Participants were 30 individuals with moderate to severe TBI who were tested acutely (i.e., following emergence from post-traumatic amnesia) and 30 age- and education-matched controls. Participants were presented with visual displays that contained either two or eight items. In the focused attention, nonsearch condition, the location of the target (if present) was cued with a peripheral arrow prior to presentation of the visual displays. In the divided attention, search condition, no spatial cue was provided prior to presentation of the visual displays. The results revealed intact focused, nonsearch, attention abilities in the acute phase of TBI recovery. In contrast, when no spatial cue was provided (divided attention condition), participants with TBI demonstrated slower visual search compared to the control group. The results of this study suggest that capitalizing on intact focused attention abilities by allocating attention during cognitively demanding tasks may help to reduce mental workload and improve rehabilitation effectiveness.

  10. Visual attention and academic performance in children with developmental disabilities and behavioural attention deficits.

    PubMed

    Kirk, Hannah E; Gray, Kylie; Riby, Deborah M; Taffe, John; Cornish, Kim M

    2017-11-01

    Despite well-documented attention deficits in children with intellectual and developmental disabilities (IDD), distinctions across types of attention problems and their associations with academic attainment have not been fully explored. This study examines visual attention capacities and inattentive/hyperactive behaviours in 77 children aged 4 to 11 years with IDD and elevated behavioural attention difficulties. Children with autism spectrum disorder (ASD; n = 23), Down syndrome (DS; n = 22), and non-specific intellectual disability (NSID; n = 32) completed computerized visual search and vigilance paradigms. In addition, parents and teachers completed rating scales of inattention and hyperactivity. Concurrent associations between attention abilities and early literacy and numeracy skills were also examined. Children completed measures of receptive vocabulary, phonological abilities and cardinality skills. As expected, the results indicated that all groups had relatively comparable levels of inattentive/hyperactive behaviours as rated by parents and teachers. However, the extent of visual attention deficits varied by group; namely, children with DS had poorer visual search and vigilance abilities than children with ASD and NSID. Further, significant associations between visual attention difficulties and poorer literacy and numeracy skills were observed, regardless of group. Collectively the findings demonstrate that in children with IDD who present with homogeneous behavioural attention difficulties, subtle profiles of attentional problems can be delineated at the cognitive level. © 2016 John Wiley & Sons Ltd.

  11. Is attentional prioritisation of infant faces unique in humans?: Comparative demonstrations by modified dot-probe task in monkeys.

    PubMed

    Koda, Hiroki; Sato, Anna; Kato, Akemi

    2013-09-01

    Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations on human participants have shown visual attentional prioritisation of newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on the dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in the subject monkeys, our results, unlike those for humans, showed no evidence of attentional prioritisation of newborn faces by monkeys. Our demonstrations showed the validity of the dot-probe task for visual attention studies in monkeys and proposed a novel approach to bridging the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not common to macaques, but it is unclear whether nursing experiences influence their perception and recognition of infantile appraisal stimuli. We need additional comparative studies to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.
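
    Attentional capture in dot-probe designs like the one above is typically scored as a reaction-time bias: mean RT when the probe replaces the neutral stimulus minus mean RT when it replaces the stimulus of interest, with positive scores indicating prioritisation. The trial format below is an illustrative assumption, not the authors' actual data structure.

```python
# Hedged sketch of the standard dot-probe bias score (illustrative trial
# format; positive scores mean faster probe detection at the face of
# interest, i.e., attentional prioritisation of that face).
from statistics import mean

def bias_score(trials):
    """trials: list of (probe_at_face: bool, rt_ms: float) tuples."""
    at_face = [rt for at_f, rt in trials if at_f]
    at_neutral = [rt for at_f, rt in trials if not at_f]
    return mean(at_neutral) - mean(at_face)

# Toy data: probes at the face location are detected ~30 ms faster.
trials = [(True, 310.0), (True, 305.0), (False, 340.0), (False, 335.0)]
score = bias_score(trials)
```

A score near zero, as the monkey results above suggest, indicates no attentional prioritisation of the cued face category.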

  12. Focused and shifting attention in children with heavy prenatal alcohol exposure.

    PubMed

    Mattson, Sarah N; Calarco, Katherine E; Lang, Aimée R

    2006-05-01

    Attention deficits are a hallmark of the teratogenic effects of alcohol. However, characterization of these deficits remains inconclusive. Children with heavy prenatal alcohol exposure and nonexposed controls were evaluated using a paradigm consisting of three conditions: visual focus, auditory focus, and auditory-visual shift of attention. For the focus conditions, participants responded manually to visual or auditory targets. For the shift condition, participants alternated responses between visual targets and auditory targets. For the visual focus condition, alcohol-exposed children had lower accuracy and slower reaction time for all intertarget intervals (ITIs), while on the auditory focus condition, alcohol-exposed children were less accurate but displayed slower reaction time only on the longest ITI. Finally, for the shift condition, the alcohol-exposed group was as accurate as controls but had slower reaction times. These results indicate that children with heavy prenatal alcohol exposure have pervasive deficits in visual focused attention and deficits in maintaining auditory attention over time. However, no deficits were noted in the ability to disengage and reengage attention when required to shift attention between visual and auditory stimuli, although reaction times to shift were slower. Copyright (c) 2006 APA, all rights reserved.

  13. The effect of search condition and advertising type on visual attention to Internet advertising.

    PubMed

    Kim, Gho; Lee, Jang-Han

    2011-05-01

    This research was conducted to examine the level of consumers' visual attention to Internet advertising. It was predicted that consumers' search type would influence visual attention to advertising. Specifically, it was predicted that advertising would attract more attention in the exploratory search condition than in the goal-directed search condition. It was also predicted that visual attention would differ by advertising type (text vs. pictorial advertising). An eye tracker was used for measurement. Results revealed that both search condition and advertising type influenced advertising effectiveness.

  14. Body posture differentially impacts on visual attention towards tool, graspable, and non-graspable objects.

    PubMed

    Ambrosini, Ettore; Costantini, Marcello

    2017-02-01

    Viewed objects have been shown to afford suitable actions, even in the absence of any intention to act. However, little is known as to whether gaze behavior (i.e., the way we simply look at objects) is sensitive to the actions afforded by the seen object and how our actual motor possibilities affect this behavior. We recorded participants' eye movements during the observation of tools, graspable and ungraspable objects, while their hands were either freely resting on the table or tied behind their back. The effects of the observed object and hand posture on gaze behavior were measured by comparing the actual fixation distribution with that predicted by two widely supported models of visual attention, namely the Graph-Based Visual Saliency and the Adaptive Whitening Salience models. Results showed that the saliency models did not accurately predict participants' fixation distributions for tools. Indeed, participants mostly fixated the action-related, functional part of the tools, regardless of its visual saliency. Critically, restricting the participants' action possibilities led to a significant reduction of this effect and significantly improved the models' prediction of the participants' gaze behavior. We suggest, first, that action-relevant object information at least in part guides gaze behavior. Second, postural information interacts with visual information in generating the priority maps that guide fixation behavior. We support the view that the kind of information we access from the environment is constrained by our readiness to act. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Salience-Based Selection: Attentional Capture by Distractors Less Salient Than the Target

    PubMed Central

    Goschy, Harriet; Müller, Hermann Joseph

    2013-01-01

    Current accounts of attentional capture predict the most salient stimulus to be invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported and salience-based selection accounts claim that the distractor has to be more salient in order to capture attention. We tested this prediction using an empirical and modeling approach of the visual search distractor paradigm. For the empirical part, we manipulated salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection including noise. We were able to replicate the result pattern we obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature and attentional capture occurs with a certain probability depending on relative salience. PMID:23382820
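    The stochastic selection account argued for here lends itself to a small simulation. The sketch below (illustrative, not the authors' model code) treats each stimulus's selection time as inversely related to its salience plus Gaussian noise, so a distractor less salient than the target can still win the race to first selection on some fraction of trials:

```python
import random

def capture_probability(target_salience, distractor_salience,
                        noise_sd=1.0, n_trials=10000, seed=0):
    """Race model: each stimulus gets a selection time that shrinks
    with salience plus Gaussian noise; the distractor captures
    attention whenever it wins the race to first selection."""
    rng = random.Random(seed)
    captures = 0
    for _ in range(n_trials):
        t_target = 1.0 / target_salience + rng.gauss(0.0, noise_sd)
        t_distractor = 1.0 / distractor_salience + rng.gauss(0.0, noise_sd)
        if t_distractor < t_target:
            captures += 1
    return captures / n_trials

# A distractor less salient than the target still captures attention
# on a substantial fraction of trials when selection is noisy.
p = capture_probability(target_salience=2.0, distractor_salience=1.5)
```

    With these toy numbers, capture probability stays below one half but remains well above zero, and it grows with the distractor's salience relative to the target, mirroring the graded interference reported above.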

  16. TVA-Based Assessment of Visual Attention Using Line-Drawings of Fruits and Vegetables

    PubMed Central

    Wang, Tianlu; Gillebert, Celine R.

    2018-01-01

    Visuospatial attention and short-term memory allow us to prioritize, select, and briefly maintain part of the visual information that reaches our senses. These cognitive abilities are quantitatively accounted for by Bundesen’s theory of visual attention (TVA; Bundesen, 1990). Previous studies have suggested that TVA-based assessments are sensitive to inter-individual differences in spatial bias, visual short-term memory capacity, top-down control, and processing speed in healthy volunteers as well as in patients with various neurological and psychiatric conditions. However, most neuropsychological assessments of attention and executive functions, including TVA-based assessment, make use of alphanumeric stimuli and/or are performed verbally, which can pose difficulties for individuals who have trouble processing letters or numbers. Here we examined the reliability of TVA-based assessments when stimuli are used that are not alphanumeric, but instead based on line-drawings of fruits and vegetables. We compared five TVA parameters quantifying the aforementioned cognitive abilities, obtained by modeling accuracy data on a whole/partial report paradigm using conventional alphabet stimuli versus the food stimuli. Significant correlations were found for all TVA parameters, indicating a high parallel-form reliability. Split-half correlations (assessing internal reliability) and correlations between predicted and observed data (assessing goodness of fit) were both significant. Our results provide an indication that line-drawings of fruits and vegetables can be used for a reliable assessment of attention and short-term memory. PMID:29535660
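    The parameters such assessments fit derive from Bundesen's (1990) TVA rate equation, in which each object's share of processing capacity is set by its attentional weight and a perceptual bias. A minimal sketch (data structures and names are illustrative, not the study's fitting code):

```python
def tva_rates(etas, betas, weights):
    """Bundesen's (1990) TVA rate equation:
        v(x, i) = eta(x, i) * beta_i * w_x / sum_z(w_z)
    etas: dict mapping (object, category) -> sensory evidence eta(x, i)
    betas: dict mapping category -> perceptual decision bias beta_i
    weights: dict mapping object -> attentional weight w_x
    Returns the processing rate v(x, i) for each (object, category)."""
    total_w = sum(weights.values())
    return {(x, i): eta * betas[i] * weights[x] / total_w
            for (x, i), eta in etas.items()}

# Toy example: two equally visible stimuli competing for one category;
# the higher-weight item claims two thirds of the processing capacity.
rates = tva_rates(etas={("left", "A"): 1.0, ("right", "A"): 1.0},
                  betas={"A": 1.0},
                  weights={"left": 2.0, "right": 1.0})
```

    Because the weights only enter as normalized ratios, the equation directly formalizes the capacity-sharing idea behind the whole/partial report paradigm used above.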

  17. Sustained visual-spatial attention produces costs and benefits in response time and evoked neural activity.

    PubMed

    Mangun, G R; Buck, L A

    1998-03-01

    This study investigated the simple reaction time (RT) and event-related potential (ERP) correlates of biasing attention towards a location in the visual field. RTs and ERPs were recorded to stimuli flashed randomly and with equal probability to the left and right visual hemifields in three blocked covert-attention conditions: (i) attention divided equally between the left and right hemifield locations; (ii) attention biased towards the left location; or (iii) attention biased towards the right location. Attention was biased towards left or right by instructions to the subjects, and responses were required to all stimuli. Relative to the divided attention condition, RTs were significantly faster for targets occurring where more attention was allocated (benefits), and slower for targets where less attention was allocated (costs). The early P1 (100-140 msec) component over the lateral occipital scalp regions showed attentional benefits. There were no amplitude modulations of the occipital N1 (125-180 msec) component with attention. Between 200 and 500 msec latency, a late positive deflection (LPD) showed both attentional costs and benefits. The behavioral findings show that when sufficiently induced to bias attention, human observers demonstrate RT benefits as well as costs. The corresponding P1 benefits suggest that the RT benefits of spatial attention may arise as the result of modulations of visual information processing in the extrastriate visual cortex.
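    The cost-benefit logic used in this paradigm reduces to two subtractions against the divided-attention baseline; a minimal sketch (the RT values below are made up for illustration):

```python
def rt_costs_benefits(rt_divided, rt_attended, rt_unattended):
    """Cost-benefit analysis relative to a divided-attention baseline:
    benefit = speed-up at the attended location,
    cost    = slow-down at the unattended location (RTs in ms)."""
    return {"benefit": rt_divided - rt_attended,
            "cost": rt_unattended - rt_divided}

# Hypothetical mean RTs for one subject, not data from the study.
effects = rt_costs_benefits(rt_divided=320, rt_attended=295, rt_unattended=350)
```

    Positive values for both entries correspond to the pattern reported above: biasing attention simultaneously speeds attended targets and slows unattended ones.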

  18. Attention restores discrete items to visual short-term memory.

    PubMed

    Murray, Alexandra M; Nobre, Anna C; Clark, Ian A; Cravo, André M; Stokes, Mark G

    2013-04-01

    When a memory is forgotten, is it lost forever? Our study shows that selective attention can restore forgotten items to visual short-term memory (VSTM). In our two experiments, all stimuli presented in a memory array were designed to be equally task relevant during encoding. During the retention interval, however, participants were sometimes given a cue predicting which of the memory items would be probed at the end of the delay. This shift in task relevance improved recall for that item. We found that this type of cuing improved recall for items that otherwise would have been irretrievable, providing critical evidence that attention can restore forgotten information to VSTM. Psychophysical modeling of memory performance has confirmed that restoration of information in VSTM increases the probability that the cued item is available for recall but does not improve the representational quality of the memory. We further suggest that attention can restore discrete items to VSTM.
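    The psychophysical modeling referred to here separates the probability that an item is available for recall from the quality (precision) of its representation. A common way to formalize this is a two-component mixture; the sketch below is a generic version of that idea, not necessarily the authors' exact model, with a Gaussian standing in for the usual circular error distribution:

```python
import math

def mixture_pdf(error, p_mem, sd):
    """Density of a recall error under a two-component mixture:
    with probability p_mem the item is in memory and the error is
    roughly Gaussian with width sd (representational precision);
    otherwise the response is a uniform guess over -pi..pi."""
    in_memory = math.exp(-error ** 2 / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))
    guess = 1.0 / (2 * math.pi)
    return p_mem * in_memory + (1 - p_mem) * guess
```

    In this framework, the cuing effect reported above corresponds to an increase in p_mem with sd unchanged: the item becomes more likely to be retrievable without its representation becoming any more precise.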

  19. Temporal production and visuospatial processing.

    PubMed

    Benuzzi, Francesca; Basso, Gianpaolo; Nichelli, Paolo

    2005-12-01

    Current models of prospective timing hypothesize that estimated duration is influenced either by the attentional load or by the short-term memory requirements of a concurrent nontemporal task. In the present study, we addressed this issue with four dual-task experiments. In Exp. 1, the effect of memory load on both reaction time and temporal production was proportional to the number of items of the visuospatial pattern held in memory. In Exps. 2, 3, and 4, a temporal production task was combined with two visual search tasks involving either pre-attentive or attentional processing. Visual tasks interfered with temporal production: produced intervals were lengthened proportionally to the display size. In contrast, reaction times increased with display size only when a serial, effortful search was required. It appears that memory and perceptual set size, rather than nonspecific attentional or short-term memory load, can influence prospective timing.

  20. Contingent capture of involuntary visual spatial attention does not differ between normally hearing children and proficient cochlear implant users.

    PubMed

    Kamke, Marc R; Van Luyn, Jeanette; Constantinescu, Gabriella; Harris, Jill

    2014-01-01

    Evidence suggests that deafness-induced changes in visual perception, cognition and attention may compensate for a hearing loss. Such alterations, however, may also negatively influence adaptation to a cochlear implant. This study investigated whether involuntary attentional capture by salient visual stimuli is altered in children who use a cochlear implant. Thirteen experienced implant users (aged 8-16 years) and age-matched normally hearing children were presented with a rapid sequence of simultaneous visual and auditory events. Participants were tasked with detecting numbers presented in a specified color and identifying a change in the tonal frequency whilst ignoring irrelevant visual distractors. Compared to visual distractors that did not possess the target-defining characteristic, target-colored distractors were associated with a decrement in visual performance (response time and accuracy), demonstrating a contingent capture of involuntary attention. Visual distractors did not, however, impair auditory task performance. Importantly, detection performance for the visual and auditory targets did not differ between the groups. These results suggest that proficient cochlear implant users demonstrate normal capture of visuospatial attention by stimuli that match top-down control settings.

  1. Visual attention modulates brain activation to angry voices.

    PubMed

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  2. Featural and temporal attention selectively enhance task-appropriate representations in human V1

    PubMed Central

    Warren, Scott; Yacoub, Essa; Ghose, Geoffrey

    2015-01-01

    Our perceptions are often shaped by focusing our attention toward specific features or periods of time irrespective of location. We explore the physiological bases of these non-spatial forms of attention by imaging brain activity while subjects perform a challenging change detection task. The task employs a continuously varying visual stimulus that, for any moment in time, selectively activates functionally distinct subpopulations of primary visual cortex (V1) neurons. When subjects are cued to the timing and nature of the change, the mapping of orientation preference across V1 systematically shifts toward the cued stimulus just prior to its appearance. A simple linear model can explain this shift: attentional changes are selectively targeted toward neural subpopulations representing the attended feature at the times the feature was anticipated. Our results suggest that featural attention is mediated by a linear change in the responses of task-appropriate neurons across cortex during appropriate periods of time. PMID:25501983

  3. Emotional metacontrol of attention: Top-down modulation of sensorimotor processes in a robotic visual search task

    PubMed Central

    Cuperlier, Nicolas; Gaussier, Philippe

    2017-01-01

    Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call ‘Emotional Metacontrol’. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot performance and fosters the exploratory behavior to avoid deadlocks. PMID:28934291

  4. Splitting attention across the two visual fields in visual short-term memory.

    PubMed

    Delvenne, Jean-Francois; Holt, Jessica L

    2012-02-01

    Humans have the ability to attentionally select the most relevant visual information from their extrapersonal world and to retain it in a temporary buffer, known as visual short-term memory (VSTM). Research suggests that at least two non-contiguous items can be selected simultaneously when they are distributed across the two visual hemifields. In two experiments, we show that attention can also be split between the left and right sides of internal representations held in VSTM. Participants were asked to remember several colors, while cues presented during the delay instructed them to orient their attention to a subset of memorized colors. Experiment 1 revealed that orienting attention to one or two colors equally strengthened participants' memory for those colors, but only when they were from separate hemifields. Experiment 2 showed that in the absence of attentional cues the distribution of the items in the visual field per se had no effect on memory. These findings strongly suggest the existence of independent attentional resources in the two hemifields for selecting and/or consolidating information in VSTM. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Parameter-Based Assessment of Disturbed and Intact Components of Visual Attention in Children with Developmental Dyslexia

    ERIC Educational Resources Information Center

    Bogon, Johanna; Finke, Kathrin; Schulte-Körne, Gerd; Müller, Hermann J.; Schneider, Werner X.; Stenneken, Prisca

    2014-01-01

    People with developmental dyslexia (DD) have been shown to be impaired in tasks that require the processing of multiple visual elements in parallel. It has been suggested that this deficit originates from disturbed visual attentional functions. The parameter-based assessment of visual attention based on Bundesen's (1990) theory of visual…

  6. The Influence of Stimulus Material on Attention and Performance in the Visual Expectation Paradigm: A Longitudinal Study with 3- And 6-Month-Old Infants

    ERIC Educational Resources Information Center

    Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vierhaus, Marc; Spangler, Sibylle; Borchert, Sonja; Freitag, Claudia; Goertz, Claudia; Graf, Frauke; Gudi, Helene; Kolling, Thorsten; Lamm, Bettina; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun

    2012-01-01

    This longitudinal study examined the influence of stimulus material on attention and expectation learning in the visual expectation paradigm. Female faces were used as attention-attracting stimuli, and non-meaningful visual stimuli of comparable complexity (Greebles) were used as low attention-attracting stimuli. Expectation learning performance…

  7. Attention biases visual activity in visual short-term memory.

    PubMed

    Kuo, Bo-Cheng; Stokes, Mark G; Murray, Alexandra M; Nobre, Anna Christina

    2014-07-01

    In the current study, we tested whether representations in visual STM (VSTM) can be biased via top-down attentional modulation of visual activity in retinotopically specific locations. We manipulated attention using retrospective cues presented during the retention interval of a VSTM task. Retrospective cues triggered activity in a large-scale network implicated in attentional control and led to retinotopically specific modulation of activity in early visual areas V1-V4. Importantly, shifts of attention during VSTM maintenance were associated with changes in functional connectivity between pFC and retinotopic regions within V4. Our findings provide new insights into top-down control mechanisms that modulate VSTM representations for flexible and goal-directed maintenance of the most relevant memoranda.

  8. Markers of preparatory attention predict visual short-term memory performance.

    PubMed

    Murray, Alexandra M; Nobre, Anna C; Stokes, Mark G

    2011-05-01

    Visual short-term memory (VSTM) is limited in capacity. Therefore, it is important to encode only visual information that is most likely to be relevant to behaviour. Here we asked which aspects of selective biasing of VSTM encoding predict subsequent memory-based performance. We measured EEG during a selective VSTM encoding task, in which we varied parametrically the memory load and the precision of recall required to compare a remembered item to a subsequent probe item. On half the trials, a spatial cue indicated that participants only needed to encode items from one hemifield. We observed a typical sequence of markers of anticipatory spatial attention: the early directing attention negativity (EDAN), the anterior directing attention negativity (ADAN), and the late directing attention positivity (LDAP), as well as a marker of VSTM maintenance, the contralateral delay activity (CDA). We found that individual differences in preparatory brain activity (EDAN/ADAN) predicted cue-related changes in recall accuracy, indexed by memory-probe discrimination sensitivity (d'). Importantly, our parametric manipulation of memory-probe similarity also allowed us to model the behavioural data for each participant, providing estimates for the quality of the memory representation and the probability that an item could be retrieved. We found that selective encoding primarily increased the probability of accurate memory recall, and that ERP markers of preparatory attention predicted the cue-related changes in recall probability. Copyright © 2011. Published by Elsevier Ltd.
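    The memory-probe discrimination sensitivity (d') used here is the standard signal-detection quantity, computed from hit and false-alarm rates; a minimal sketch:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Discrimination sensitivity d' = z(hits) - z(false alarms),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A participant with 80% hits and 20% false alarms has d' of about 1.68;
# equal hit and false-alarm rates give d' = 0 (no sensitivity).
sensitivity = d_prime(0.8, 0.2)
```

    In practice, rates of exactly 0 or 1 are first adjusted (e.g. by a small correction) because the inverse normal CDF is unbounded at those values.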

  9. Early Visual Cortex Dynamics during Top-Down Modulated Shifts of Feature-Selective Attention.

    PubMed

    Müller, Matthias M; Trautmann, Mireille; Keitel, Christian

    2016-04-01

    Shifting attention from one color to another color or from color to another feature dimension such as shape or orientation is imperative when searching for a certain object in a cluttered scene. Most attention models that emphasize feature-based selection implicitly assume that all shifts in feature-selective attention follow identical temporal dynamics. Here, we recorded time courses of behavioral data and steady-state visual evoked potentials (SSVEPs), an objective electrophysiological measure of neural dynamics in early visual cortex, to investigate temporal dynamics when participants shifted attention from color or orientation toward color or orientation, respectively. SSVEPs were elicited by four random dot kinematograms that flickered at different frequencies. Each random dot kinematogram was composed of dashes that uniquely combined two features from the dimensions color (red or blue) and orientation (slash or backslash). Participants were cued to attend to one feature (such as color or orientation) and respond to coherent motion targets of the to-be-attended feature. We found that shifts toward color occurred earlier after the shifting cue compared with shifts toward orientation, regardless of the original feature (i.e., color or orientation). This was paralleled in SSVEP amplitude modulations as well as in the time course of behavioral data. Overall, our results suggest different neural dynamics during shifts of attention from color and orientation and the respective shifting destinations, namely, either toward color or toward orientation.

  10. Neural mechanisms of human perceptual choice under focused and divided attention.

    PubMed

    Wyart, Valentin; Myers, Nicholas E; Summerfield, Christopher

    2015-02-25

    Perceptual decisions occur after the evaluation and integration of momentary sensory inputs, and dividing attention between spatially disparate sources of information impairs decision performance. However, it remains unknown whether dividing attention degrades the precision of sensory signals, precludes their conversion into decision signals, or dampens the integration of decision information toward an appropriate response. Here we recorded human electroencephalographic (EEG) activity while participants categorized one of two simultaneous and independent streams of visual gratings according to their average tilt. By analyzing trial-by-trial correlations between EEG activity and the information offered by each sample, we obtained converging behavioral and neural evidence that dividing attention between left and right visual fields does not dampen the encoding of sensory or decision information. Under divided attention, momentary decision information from both visual streams was encoded in slow parietal signals without interference but was lost downstream during their integration as reflected in motor mu- and beta-band (10-30 Hz) signals, resulting in a "leaky" accumulation process that conferred greater behavioral influence to more recent samples. By contrast, sensory inputs that were explicitly cued as irrelevant were not converted into decision signals. These findings reveal that a late cognitive bottleneck on information integration limits decision performance under divided attention, and place new capacity constraints on decision-theoretic models of information integration under cognitive load. Copyright © 2015 the authors 0270-6474/15/353485-14$15.00/0.
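    The "leaky" accumulation process described above can be sketched in a few lines: the running total decays before each new sample arrives, so recent samples carry more weight in the final decision variable (the leak value below is illustrative, not the paper's estimate):

```python
def leaky_accumulation(samples, leak=0.3):
    """Leaky evidence integration: the running total decays by `leak`
    before each new sample is added, so a sample k steps from the end
    is effectively weighted by (1 - leak) ** k (a recency effect)."""
    total = 0.0
    for s in samples:
        total = (1.0 - leak) * total + s
    return total
```

    With leak = 0 this reduces to perfect integration (a plain sum); with leak > 0, identical evidence arriving late in the stream contributes more than evidence arriving early, matching the recency weighting reported above.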

  12. Association of blood antioxidants status with visual and auditory sustained attention.

    PubMed

    Shiraseb, Farideh; Siassi, Fereydoun; Sotoudeh, Gity; Qorbani, Mostafa; Rostami, Reza; Sadeghi-Firoozabadi, Vahid; Narmaki, Elham

    2015-01-01

    A low antioxidant status has been shown to result in oxidative stress and cognitive impairment. Because antioxidants can protect the nervous system, it is expected that a better blood antioxidant status might be related to sustained attention. However, the relationship between the blood antioxidant status and visual and auditory sustained attention has not been investigated. The aim of this study was to evaluate the association of fruit and vegetable intake and the blood antioxidant status with visual and auditory sustained attention in women. This cross-sectional study was performed on 400 healthy women (20-50 years) who attended the sports clubs of Tehran Municipality. Sustained attention was evaluated based on the Integrated Visual and Auditory Continuous Performance Test using the Integrated Visual and Auditory (IVA) software. The 24-hour food recall questionnaire was used for estimating fruit and vegetable intake. Serum total antioxidant capacity (TAC), and erythrocyte superoxide dismutase (SOD) and glutathione peroxidase (GPx) activities were measured in 90 participants. After adjusting for energy intake, age, body mass index (BMI), years of education and physical activity, higher reported fruit and vegetable intake was associated with better visual and auditory sustained attention (P < 0.001). A high intake of some subgroups of fruits and vegetables (i.e. berries, cruciferous vegetables, green leafy vegetables, and other vegetables) was also associated with better sustained attention (P < 0.02). Serum TAC, and erythrocyte SOD and GPx activities increased with the increase in the tertiles of visual and auditory sustained attention after adjusting for age, years of education, physical activity, energy, BMI, and caffeine intake (P < 0.05). Improved visual and auditory sustained attention is associated with a better blood antioxidant status. Therefore, improvement of the antioxidant status through an appropriate dietary intake can possibly enhance sustained attention.

  13. Short-term retention of visual information: Evidence in support of feature-based attention as an underlying mechanism.

    PubMed

    Sneve, Markus H; Sreenivasan, Kartik K; Alnæs, Dag; Endestad, Tor; Magnussen, Svein

    2015-01-01

    Retention of features in visual short-term memory (VSTM) involves maintenance of sensory traces in early visual cortex. However, the mechanism through which this is accomplished is not known. Here, we formulate specific hypotheses derived from studies on feature-based attention to test the prediction that visual cortex is recruited by attentional mechanisms during VSTM of low-level features. Functional magnetic resonance imaging (fMRI) of human visual areas revealed that neural populations coding for task-irrelevant feature information are suppressed during maintenance of detailed spatial frequency memory representations. The narrow spectral extent of this suppression agrees well with known effects of feature-based attention. Additionally, analyses of effective connectivity during maintenance between retinotopic areas in visual cortex show that the observed highlighting of task-relevant parts of the feature spectrum originates in V4, a visual area strongly connected with higher-level control regions and known to convey top-down influence to earlier visual areas during attentional tasks. In line with this property of V4 during attentional operations, we demonstrate that modulations of earlier visual areas during memory maintenance have behavioral consequences, and that these modulations are a result of influences from V4. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. An attentional drift diffusion model over binary-attribute choice.

    PubMed

    Fisher, Geoffrey

    2017-11-01

    In order to make good decisions, individuals need to identify and properly integrate information about various attributes associated with a choice. Since choices are often complex and made rapidly, they are typically affected by contextual variables that are thought to influence how much attention is paid to different attributes. I propose a modification of the attentional drift-diffusion model, the binary-attribute attentional drift diffusion model (baDDM), which describes the choice process over simple binary-attribute choices and how it is affected by fluctuations in visual attention. Using an eye-tracking experiment, I find the baDDM makes accurate quantitative predictions about several key variables including choices, reaction times, and how these variables are correlated with attention to two attributes in an accept-reject decision. Furthermore, I estimate an attribute-based fixation bias that suggests attention to an attribute increases its subjective weight by 5%, while the unattended attribute's weight is decreased by 10%. Copyright © 2017 Elsevier B.V. All rights reserved.
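The fixation-biased accumulation process described in this abstract can be sketched as a small simulation. This is a hedged illustration only: the drift scale, noise level, fixation-switching schedule, and decision threshold below are invented for the sketch; the only ingredient taken from the abstract is the roughly +5%/-10% weighting of attended versus unattended attributes.

```python
import random

def simulate_baddm_trial(attr1, attr2, w1=0.5, w2=0.5,
                         boost=1.05, discount=0.90,
                         drift_scale=0.002, noise_sd=0.02,
                         threshold=1.0, max_steps=10000, seed=None):
    """Toy binary-attribute attentional drift-diffusion (baDDM) trial.

    Evidence drifts toward 'accept' (+threshold) or 'reject' (-threshold).
    While an attribute is fixated, its subjective weight is multiplied by
    `boost`; the unattended attribute's weight is multiplied by `discount`
    (the approximately +5% / -10% fixation bias reported in the abstract).
    All other parameter values are illustrative assumptions.
    """
    rng = random.Random(seed)
    evidence = 0.0
    fixated = rng.choice([0, 1])        # which attribute is currently fixated
    for step in range(1, max_steps + 1):
        if step % 300 == 0:             # toy fixation-switching schedule
            fixated = 1 - fixated
        eff_w1 = w1 * (boost if fixated == 0 else discount)
        eff_w2 = w2 * (boost if fixated == 1 else discount)
        drift = drift_scale * (eff_w1 * attr1 + eff_w2 * attr2)
        evidence += drift + rng.gauss(0.0, noise_sd)
        if evidence >= threshold:
            return "accept", step
        if evidence <= -threshold:
            return "reject", step
    return "no decision", max_steps
```

In practice, models of this family are fit by simulating many such trials with empirically measured fixation sequences and comparing the predicted choice probabilities and reaction-time distributions to data.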

  15. Components of working memory and visual selective attention.

    PubMed

    Burnham, Bryan R; Sabia, Matthew; Langan, Catherine

    2014-02-01

    Load theory (Lavie, N., Hirst, A., De Fockert, J. W., & Viding, E. [2004]. Load theory of selective attention and cognitive control. Journal of Experimental Psychology: General, 133, 339-354.) proposes that control of attention depends on the amount and type of load that is imposed by current processing. Specifically, perceptual load should lead to efficient distractor rejection, whereas working memory load (dual-task coordination) should hinder distractor rejection. Studies support load theory's prediction that working memory load will lead to larger distractor effects; however, these studies used secondary tasks that required only verbal working memory and the central executive. The present study examined which other working memory components (visual, spatial, and phonological) influence visual selective attention. Subjects completed an attentional capture task alone (single-task) or while engaged in a working memory task (dual-task). Results showed that along with the central executive, visual and spatial working memory influenced selective attention, but phonological working memory did not. Specifically, attentional capture was larger when visual or spatial working memory was loaded, but phonological working memory load did not affect attentional capture. The results are consistent with load theory and suggest specific components of working memory influence visual selective attention. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  16. Asymmetrical brain activity induced by voluntary spatial attention depends on the visual hemifield: a functional near-infrared spectroscopy study.

    PubMed

    Harasawa, Masamitsu; Shioiri, Satoshi

    2011-04-01

    The effect of the visual hemifield to which spatial attention was oriented on the activities of the posterior parietal and occipital visual cortices was examined using functional near-infrared spectroscopy in order to investigate the neural substrates of voluntary visuospatial attention. Our brain imaging data support the theory put forth in a previous psychophysical study, namely, the attentional resources for the left and right visual hemifields are distinct. Increasing the attentional load asymmetrically increased the brain activity. Increase in attentional load produced a greater increase in brain activity in the case of the left visual hemifield than in the case of the right visual hemifield. This asymmetry was observed in all the examined brain areas, including the right and left occipital and parietal cortices. These results suggest the existence of asymmetrical inhibitory interactions between the hemispheres and the presence of an extensive inhibitory network. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Complex regional pain syndrome (CRPS) or continuous unilateral distal experimental pain stimulation in healthy subjects does not bias visual attention towards one hemifield.

    PubMed

    Filippopulos, Filipp M; Grafenstein, Jessica; Straube, Andreas; Eggert, Thomas

    2015-11-01

    In natural life, pain automatically draws attention towards the painful body part, suggesting that it interacts with different attentional mechanisms such as visual attention. Complex regional pain syndrome (CRPS) patients, who typically report chronic, distally located pain of one extremity, may suffer from so-called neglect-like symptoms, which have also been linked to attentional mechanisms. The purpose of the study was to further evaluate how continuous pain conditions influence visual attention. Saccade latencies were recorded in two experiments using a common visual attention paradigm whereby orientating saccades to cued or uncued lateral visual targets had to be performed. In the first experiment saccade latencies of healthy subjects were measured under two conditions: one in which continuous experimental pain stimulation was applied to the index finger to imitate a continuous pain situation, and one without pain stimulation. In the second experiment saccade latencies of patients suffering from CRPS were compared to controls. The results showed that neither the continuous experimental pain stimulation during the experiment nor the chronic pain in CRPS led to a unilateral increase of saccade latencies or to a unilateral increase of the cue effect on latency. The results show that unilateral, continuously applied pain stimuli or chronic pain have no or only very limited influence on visual attention. Differently from patients with visual neglect, patients with CRPS did not show strong side asymmetries of saccade latencies or of cue effects on saccade latencies. Thus, neglect-like clinical symptoms of CRPS patients do not involve the allocation of visual attention.

  18. Differences in attentional strategies by novice and experienced operating theatre scrub nurses.

    PubMed

    Koh, Ranieri Y I; Park, Taezoon; Wickens, Christopher D; Ong, Lay Teng; Chia, Soon Noi

    2011-09-01

    This study investigated the effect of nursing experience on attention allocation and task performance during surgery. The prevention of cases of retained foreign bodies after surgery typically depends on scrub nurses, who are responsible for performing multiple tasks that impose heavy demands on the nurses' cognitive resources. However, the relationship between the level of experience and attention allocation strategies has not been extensively studied. Eye movement data were collected from 10 novice and 10 experienced scrub nurses in the operating theater for caesarean section surgeries. Visual scanning data, analyzed by dividing the workstation into four main areas and the surgery into four stages, were compared to the optimum expected value estimated by the SEEV (Salience, Effort, Expectancy, and Value) model. Both experienced and novice nurses showed significant correlations to the optimal percentage dwell time values, and significant differences were found in attention allocation optimality between experienced and novice nurses, with experienced nurses adhering significantly more to the optimal in the stages of high workload. Experienced nurses spent less time on the final count and encountered fewer interruptions during the count than novices, indicating better performance in task management, whereas novice nurses switched attention between areas of interest more than experienced nurses did. The results provide empirical evidence of a relationship between the application of optimal visual attention management strategies and performance, opening up possibilities for the development of visual attention and interruption training for better performance. (c) 2011 APA, all rights reserved.
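The SEEV comparison described in this abstract can be illustrated with a minimal sketch. In the SEEV framework, the predicted share of attention to each area of interest grows with its Salience, Expectancy, and Value and shrinks with the Effort of reaching it. The unit coefficients, the 0-3 ordinal scores, and the area names below are assumptions for illustration, not values from the study.

```python
def seev_attention_weights(areas):
    """Predict attention allocation across areas of interest (SEEV sketch).

    Each area is scored on (salience, effort, expectancy, value); effort
    enters with a minus sign because it deters attention. Raw scores are
    normalized so the predicted dwell-time shares sum to 1.
    """
    raw = {
        name: (salience - effort + expectancy + value)
        for name, (salience, effort, expectancy, value) in areas.items()
    }
    total = sum(raw.values())
    return {name: score / total for name, score in raw.items()}

# Hypothetical workstation areas, each scored 0-3 on
# (salience, effort, expectancy, value):
areas = {
    "surgical_field":  (2, 0, 3, 3),
    "instrument_tray": (1, 1, 2, 2),
    "back_table":      (1, 2, 1, 1),
    "surroundings":    (0, 1, 1, 0),
}
```

Comparing observed percentage dwell times against such model-derived shares is, in outline, how adherence to the optimal scanning strategy is quantified.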

  19. 75 FR 71534 - Airworthiness Directives; The Boeing Company Model 737-900ER Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ... the products listed above. This AD requires doing a one-time general visual inspection for a keyway in..., contact Boeing Commercial Airplanes, Attention: Data & Services Management, P.O. Box 3707, MC 2H-65... Register on August 10, 2010 (75 FR 48281). That NPRM proposed to require a general visual inspection for a...

  20. 76 FR 47427 - Airworthiness Directives; The Boeing Company Model 747-400 and -400F Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-05

    ... the products listed above. This AD requires a general visual inspection for cracks and holes of the... Boeing Commercial Airplanes, Attention: Data & Services Management, P.O. Box 3707, MC 2H-65, Seattle... published in the Federal Register on February 10, 2011 (76 FR 7513). The NPRM proposed a general visual...

  1. 75 FR 48281 - Airworthiness Directives; The Boeing Company Model 737-900ER Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-10

    ... a one-time general visual inspection for a keyway in two fuel tank access door cutouts, and related... proposed AD, contact Boeing Commercial Airplanes, Attention: Data & Services Management, P. O. Box 3707, MC... bulletin describes procedures for a general visual inspection for a keyway in the fuel tank access door...

  2. 76 FR 3561 - Airworthiness Directives; The Boeing Company Model 777-200 and -300 Series Airplanes Equipped...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-20

    ... proposed AD, contact Boeing Commercial Airplanes, Attention: Data & Services Management, P.O. Box 3707, MC... is cracked, has gaps, is loose, or is missing; repetitive general visual inspections of click bond... sealant Doing an NDT and general visual inspection for thermal degradation of the exposed T/R inner wall...

  3. The visual attention saliency map for movie retrospection

    NASA Astrophysics Data System (ADS)

    Rogalska, Anna; Napieralski, Piotr

    2018-04-01

    The visual saliency map is becoming important and challenging for many scientific disciplines (robotic systems, psychophysics, cognitive neuroscience, and computer science). The map created by the model indicates possible salient regions by taking into account face presence and motion, both of which are essential in motion pictures. By combining these two cues, the model can produce a credible saliency map at a low computational cost.
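A fusion step of the kind this abstract describes can be sketched as a linear combination of normalized per-pixel conspicuity maps. The equal weights, the normalization scheme, and the map contents below are illustrative assumptions, not the paper's actual method.

```python
def combine_saliency(motion_map, face_map, w_motion=0.5, w_face=0.5):
    """Fuse motion and face conspicuity maps into one saliency map (sketch).

    Each input is a 2-D list of non-negative activations. Maps are
    min-max normalized to [0, 1] and then linearly combined, a cheap
    operation consistent with the low computational cost mentioned above.
    """
    def normalize(grid):
        flat = [v for row in grid for v in row]
        lo, hi = min(flat), max(flat)
        span = hi - lo
        if span == 0:
            return [[0.0] * len(row) for row in grid]
        return [[(v - lo) / span for v in row] for row in grid]

    nm, nf = normalize(motion_map), normalize(face_map)
    return [[w_motion * m + w_face * f
             for m, f in zip(mrow, frow)]
            for mrow, frow in zip(nm, nf)]
```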

  4. Bottom-Up Guidance in Visual Search for Conjunctions

    ERIC Educational Resources Information Center

    Proulx, Michael J.

    2007-01-01

    Understanding the relative role of top-down and bottom-up guidance is crucial for models of visual search. Previous studies have addressed the role of top-down and bottom-up processes in search for a conjunction of features but with inconsistent results. Here, the author used an attentional capture method to address the role of top-down and…

  5. Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.

    PubMed

    Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M

    2010-01-01

    Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be more responsive to relevant motion (predators, prey, conspecifics) than to irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model consisting of a grid of correlation-type EMDs, with videos of natural motion patterns, including prey, predators and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle, and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under a full range of natural conditions.
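A single correlation-type EMD of the kind used in this model can be sketched minimally. The first-order low-pass filter as the delay element is a standard choice in Reichardt-style detectors; only the roughly 0.1 s time constant comes from the abstract, and the 0.3 degree spacing is implicit in how the two input points are sampled from the image. Everything else is an illustrative assumption.

```python
def emd_response(signal_a, signal_b, dt=0.01, tau=0.1):
    """One correlation-type elementary motion detector (EMD), sketched.

    Inputs are luminance time series from two neighbouring points A and B.
    Each input is low-pass filtered (time constant `tau`) to form a delayed
    copy, which is multiplied with the undelayed signal from the other
    point; the difference of the two mirror-symmetric half-detectors gives
    a direction-selective output (positive for motion from A to B).
    """
    alpha = dt / (tau + dt)              # first-order low-pass coefficient
    lp_a = lp_b = 0.0
    out = []
    for a, b in zip(signal_a, signal_b):
        lp_a += alpha * (a - lp_a)       # delayed (filtered) copy of A
        lp_b += alpha * (b - lp_b)       # delayed (filtered) copy of B
        out.append(lp_a * b - lp_b * a)  # opponent correlation
    return out
```

The full model is a grid of many such detectors; summing their rectified outputs over space and time yields the response measures that can be compared across motion classes.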

  6. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    PubMed

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation-two consecutive intervals of streams of visual letters-and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower-that is, auditory sensitivity was improved-for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.

  7. Multiple Sensory-Motor Pathways Lead to Coordinated Visual Attention

    PubMed Central

    Yu, Chen; Smith, Linda B.

    2016-01-01

    Joint attention has been extensively studied in the developmental literature because of overwhelming evidence that the ability to socially coordinate visual attention to an object is essential to healthy developmental outcomes, including language learning. The goal of the present study is to understand the complex system of sensory-motor behaviors that may underlie the establishment of joint attention between parents and toddlers. In an experimental task, parents and toddlers played together with multiple toys. We objectively measured joint attention – and the sensory-motor behaviors that underlie it – using a dual head-mounted eye-tracking system and frame-by-frame coding of manual actions. By tracking the momentary visual fixations and hand actions of each participant, we precisely determined just how often they fixated on the same object at the same time, the visual behaviors that preceded joint attention, and manual behaviors that preceded and co-occurred with joint attention. We found that multiple sequential sensory-motor patterns lead to joint attention. In addition, there are developmental changes in this multi-pathway system evidenced as variations in strength among multiple routes. We propose that coordinated visual attention between parents and toddlers is primarily a sensory-motor behavior. Skill in achieving coordinated visual attention in social settings – like skills in other sensory-motor domains – emerges from multiple pathways to the same functional end. PMID:27016038

  8. Multiple Sensory-Motor Pathways Lead to Coordinated Visual Attention.

    PubMed

    Yu, Chen; Smith, Linda B

    2017-02-01

    Joint attention has been extensively studied in the developmental literature because of overwhelming evidence that the ability to socially coordinate visual attention to an object is essential to healthy developmental outcomes, including language learning. The goal of this study was to understand the complex system of sensory-motor behaviors that may underlie the establishment of joint attention between parents and toddlers. In an experimental task, parents and toddlers played together with multiple toys. We objectively measured joint attention-and the sensory-motor behaviors that underlie it-using a dual head-mounted eye-tracking system and frame-by-frame coding of manual actions. By tracking the momentary visual fixations and hand actions of each participant, we precisely determined just how often they fixated on the same object at the same time, the visual behaviors that preceded joint attention and manual behaviors that preceded and co-occurred with joint attention. We found that multiple sequential sensory-motor patterns lead to joint attention. In addition, there are developmental changes in this multi-pathway system evidenced as variations in strength among multiple routes. We propose that coordinated visual attention between parents and toddlers is primarily a sensory-motor behavior. Skill in achieving coordinated visual attention in social settings-like skills in other sensory-motor domains-emerges from multiple pathways to the same functional end. Copyright © 2016 Cognitive Science Society, Inc.

  9. Categorical clustering of the neural representation of color.

    PubMed

    Brouwer, Gijs Joost; Heeger, David J

    2013-09-25

    Cortical activity was measured with functional magnetic resonance imaging (fMRI) while human subjects viewed 12 stimulus colors and performed either a color-naming or diverted attention task. A forward model was used to extract lower dimensional neural color spaces from the high-dimensional fMRI responses. The neural color spaces in two visual areas, human ventral V4 (V4v) and VO1, exhibited clustering (greater similarity between activity patterns evoked by stimulus colors within a perceptual category, compared to between-category colors) for the color-naming task, but not for the diverted attention task. Response amplitudes and signal-to-noise ratios were higher in most visual cortical areas for color naming compared to diverted attention. But only in V4v and VO1 did the cortical representation of color change to a categorical color space. A model is presented that induces such a categorical representation by changing the response gains of subpopulations of color-selective neurons.

  10. Categorical Clustering of the Neural Representation of Color

    PubMed Central

    Brouwer, Gijs Joost; Heeger, David J.

    2013-01-01

    Cortical activity was measured with functional magnetic resonance imaging (fMRI) while human subjects viewed 12 stimulus colors and performed either a color-naming or diverted attention task. A forward model was used to extract lower dimensional neural color spaces from the high-dimensional fMRI responses. The neural color spaces in two visual areas, human ventral V4 (V4v) and VO1, exhibited clustering (greater similarity between activity patterns evoked by stimulus colors within a perceptual category, compared to between-category colors) for the color-naming task, but not for the diverted attention task. Response amplitudes and signal-to-noise ratios were higher in most visual cortical areas for color naming compared to diverted attention. But only in V4v and VO1 did the cortical representation of color change to a categorical color space. A model is presented that induces such a categorical representation by changing the response gains of subpopulations of color-selective neurons. PMID:24068814

  11. Vegetarianism and food perception. Selective visual attention to meat pictures.

    PubMed

    Stockburger, Jessica; Renner, Britta; Weike, Almut I; Hamm, Alfons O; Schupp, Harald T

    2009-04-01

    Vegetarianism provides a model system to examine the impact of negative affect towards meat, based on ideational reasoning. It was hypothesized that meat stimuli are efficient attention catchers in vegetarians. Event-related brain potential recordings served to index selective attention processes at the level of initial stimulus perception. Consistent with the hypothesis, late positive potentials to meat pictures were enlarged in vegetarians compared to omnivores. This effect was specific for meat pictures and obtained during passive viewing and an explicit attention task condition. These findings demonstrate the attention capture of food stimuli, deriving affective salience from ideational reasoning and symbolic meaning.

  12. Visual Spatial Attention Training Improve Spatial Attention and Motor Control for Unilateral Neglect Patients.

    PubMed

    Wang, Wei; Ji, Xiangtong; Ni, Jun; Ye, Qian; Zhang, Sicong; Chen, Wenli; Bian, Rong; Yu, Cui; Zhang, Wenting; Shen, Guangyu; Machado, Sergio; Yuan, Tifei; Shan, Chunlei

    2015-01-01

    To compare the effect of visual spatial training on the spatial attention to that on motor control and to correlate the improvement of spatial attention to motor control progress after visual spatial training in subjects with unilateral spatial neglect (USN). 9 cases with USN after right cerebral stroke were randomly divided into Conventional treatment group + visual spatial attention and Conventional treatment group. The Conventional treatment group + visual spatial attention received conventional rehabilitation therapy (physical and occupational therapy) and visual spatial attention training (optokinetic stimulation and right half-field eye patching). The Conventional treatment group was only treated with conventional rehabilitation training (physical and occupational therapy). All patients were assessed by behavioral inattention test (BIT), Fugl-Meyer Assessment of motor function (FMA), equilibrium coordination test (ECT) and non-equilibrium coordination test (NCT) before and after 4 weeks treatment. Total scores in both groups (without visual spatial attention/with visual spatial attention) improved significantly (BIT: P=0.021/P=0.000, d=1.667/d=2.116, power=0.69/power=0.98, 95%CI[-0.8839,45.88]/95%CI=[16.96,92.64]; FMA: P=0.002/P=0.000, d=2.521/d=2.700, power=0.93/power=0.98, 95%CI[5.707,30.79]/95%CI=[16.06,53.94]; ECT: P=0.002/ P=0.000, d=2.031/d=1.354, power=0.90/power=0.17, 95%CI[3.380,42.61]/95%CI=[-1.478,39.08]; NCT: P=0.013/P=0.000, d=1.124/d=1.822, power=0.41/power=0.56, 95%CI[-7.980,37.48]/95%CI=[4.798,43.60],) after treatment. Among the 2 groups, the group with visual spatial attention significantly improved in BIT (P=0.003, d=3.103, power=1, 95%CI[15.68,48.92]), FMA of upper extremity (P=0.006, d=2.771, power=1, 95%CI[5.061,20.14]) and NCT (P=0.010, d=2.214, power=0.81-0.90, 95%CI[3.018,15.88]). 
Correlative analysis shows that the change of BIT scores is positively correlated to the change of FMA total score (r=0.77, P<0.01), FMA of upper extremity (r=0.81, P<0.01), and NCT (r=0.78, P<0.01). Four weeks of visual spatial training could improve spatial attention as well as motor control functions in hemineglect patients. The improvement of motor function is positively correlated to the progress of visual spatial functions after visual spatial attention training.

  13. Expectancy and surprise predict neural and behavioral measures of attention to threatening stimuli

    PubMed Central

    Browning, Michael; Harmer, Catherine J.

    2012-01-01

    Attention is preferentially deployed toward those stimuli which are threatening and those which are surprising. The current paper examines the intersection of these phenomena; how do expectations about the threatening nature of stimuli influence the deployment of attention? The predictions tested were that individuals would direct attention toward stimuli which were expected to be threatening (regardless of whether they were or not) and toward stimuli which were surprising. As anxiety has been associated with deficient control of attention to threat, it was additionally predicted that high levels of trait anxiety would be associated with deficits in the use of threat-expectation to guide attention. During fMRI scanning, 29 healthy volunteers completed a simple task in which threat-expectation was manipulated by altering the frequency with which fearful or neutral faces were presented. Individual estimates of threat-expectation and surprise were created using a Bayesian computational model. The degree to which the model derived estimates of threat-expectation and surprise were able to explain both a behavioral measure of attention to the faces and activity in the visual cortex and anterior attentional control areas was then tested. As predicted, increased threat-expectation and surprise were associated with increases in both the behavioral and neuroimaging measures of attention to the faces. Additionally, regions of the orbitofrontal cortex and left amygdala were found to covary with threat-expectation whereas anterior cingulate and lateral prefrontal cortices covaried with surprise. Individuals with higher levels of trait anxiety were less able to modify neuroimaging measures of attention in response to threat-expectation. These results suggest that continuously calculated estimates of the probability of threat may plausibly be used to influence the deployment of visual attention and that use of this information is perturbed in anxious individuals. PMID:21945791
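The per-trial estimates of threat-expectation and surprise described in this abstract can be illustrated with a simple Bayesian learner. The beta-Bernoulli form below is an illustrative stand-in, since the abstract does not specify the model's exact form: expectation is the posterior probability that the next face is fearful, and surprise is the Shannon surprise of the face actually observed.

```python
import math

def threat_learner(observations, a=1.0, b=1.0):
    """Track threat-expectation and surprise over a face sequence (sketch).

    `observations` is a list of 1 (fearful face) / 0 (neutral face).
    A beta-Bernoulli learner with prior Beta(a, b) predicts the probability
    of the next face being fearful; surprise for each face is -log2 of its
    predicted probability. Returns a list of (expectation, surprise) pairs.
    """
    history = []
    for is_fearful in observations:
        expectation = a / (a + b)              # P(next face is fearful)
        p_obs = expectation if is_fearful else 1.0 - expectation
        surprise = -math.log2(p_obs)
        history.append((expectation, surprise))
        a += is_fearful                        # posterior update
        b += 1 - is_fearful
    return history
```

Regressing behavioral and neural attention measures on trial-by-trial traces like these is, in outline, how such model-derived estimates are used in fMRI analyses.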

  14. Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.

    PubMed

    de Jong, Ritske; Toffanin, Paolo; Harbers, Marten

    2010-01-01

    Frequency tagging has been often used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: An attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  15. Dividing time: concurrent timing of auditory and visual events by young and elderly adults.

    PubMed

    McAuley, J Devin; Miller, Jonathan P; Wang, Mo; Pang, Kevin C H

    2010-07-01

    This article examines age differences in individuals' ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory targets. Age-related impairments were associated with a decrease in working memory span; moreover, the relationship between working memory and timing performance was largest for visual targets in divided attention conditions.

  16. An Issue of Learning: The Effect of Visual Split Attention in Classes for Deaf and Hard of Hearing Students

    ERIC Educational Resources Information Center

    Mather, Susan M.; Clark, M. Diane

    2012-01-01

    One of the ongoing challenges teachers of students who are deaf or hard of hearing face is managing the visual split attention implicit in multimedia learning. When a teacher presents various types of visual information at the same time, visual learners have no choice but to divide their attention among those materials and the teacher and…

  17. Evidence for an attentional component of inhibition of return in visual search.

    PubMed

    Pierce, Allison M; Crouse, Monique D; Green, Jessica J

    2017-11-01

    Inhibition of return (IOR) is typically described as an inhibitory bias against returning attention to a recently attended location as a means of promoting efficient visual search. Most studies examining IOR, however, either do not use visual search paradigms or do not effectively isolate attentional processes, making it difficult to conclusively link IOR to a bias in attention. Here, we recorded ERPs during a simple visual search task designed to isolate the attentional component of IOR to examine whether an inhibitory bias of attention is observed and, if so, how it influences visual search behavior. Across successive visual search displays, we found evidence of both a broad, hemisphere-wide inhibitory bias of attention along with a focal, target location-specific facilitation. When the target appeared in the same visual hemifield in successive searches, responses were slower and the N2pc component was reduced, reflecting a bias of attention away from the previously attended side of space. When the target occurred at the same location in successive searches, responses were facilitated and the P1 component was enhanced, likely reflecting spatial priming of the target. These two effects are combined in the response times, leading to a reduction in the IOR effect for repeated target locations. Using ERPs, however, these two opposing effects can be isolated in time, demonstrating that the inhibitory biasing of attention still occurs even when response-time slowing is ameliorated by spatial priming. © 2017 Society for Psychophysiological Research.

  18. Top-down alpha oscillatory network interactions during visuospatial attention orienting.

    PubMed

    Doesburg, Sam M; Bedo, Nicolas; Ward, Lawrence M

    2016-05-15

    Neuroimaging and lesion studies indicate that visual attention is controlled by a distributed network of brain areas. The covert control of visuospatial attention has also been associated with retinotopic modulation of alpha-band oscillations within early visual cortex, which are thought to underlie inhibition of ignored areas of visual space. The relation between distributed networks mediating attention control and more focal oscillatory mechanisms, however, remains unclear. The present study evaluated the hypothesis that alpha-band, directed, network interactions within the attention control network are systematically modulated by the locus of visuospatial attention. We localized brain areas involved in visuospatial attention orienting using magnetoencephalographic (MEG) imaging and investigated alpha-band Granger-causal interactions among activated regions using narrow-band transfer entropy. The deployment of attention to one side of visual space was indexed by lateralization of alpha power changes between about 400 ms and 700 ms post-cue onset. The changes in alpha power were associated, in the same time period, with lateralization of anterior-to-posterior information flow in the alpha band from various brain areas involved in attention control, including the anterior cingulate cortex, left middle and inferior frontal gyri, left superior temporal gyrus, right insula, and inferior parietal lobule, to early visual areas. We interpreted these results to indicate that distributed network interactions mediated by alpha oscillations exert top-down influences on early visual cortex to modulate inhibition of processing for ignored areas of visual space. Copyright © 2016. Published by Elsevier Inc.

  19. Selective maintenance in visual working memory does not require sustained visual attention.

    PubMed

    Hollingworth, Andrew; Maxcey-Richard, Ashleigh M

    2013-08-01

    In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change-detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention. 2013 APA, all rights reserved
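    Performance in color change-detection tasks like this one is commonly summarized with Cowan's K, an estimate of the number of items held in visual working memory. K is not reported in this abstract; the formula below is the standard single-probe estimate, and the numbers are hypothetical:

```python
def cowans_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K: estimated number of array items held in visual working memory."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical observer: 90% hits, 10% false alarms with 4-item color arrays.
print(round(cowans_k(0.90, 0.10, 4), 2))  # → 3.2
```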

  20. Evaluating the influence of motor control on selective attention through a stochastic model: the paradigm of motor control dysfunction in cerebellar patient.

    PubMed

    Veneri, Giacomo; Federico, Antonio; Rufa, Alessandra

    2014-01-01

    Attention allows us to selectively process the vast amount of information with which we are confronted, prioritizing some aspects of information and ignoring others by focusing on a certain location or aspect of the visual scene. Selective attention is guided by two cognitive mechanisms: saliency of the image (bottom-up) and endogenous mechanisms (top-down). These two mechanisms interact to direct attention and plan eye movements; the movement profile is then sent to the motor system, which must constantly update the command needed to produce the desired eye movement. A new approach is described here to study how eye motor control can influence this selection mechanism in clinical behavior: two groups of patients with well-known motor control deficits (SCA2 and late-onset cerebellar ataxia, LOCA) performed a cognitively demanding task, and their results were compared with those of a group of healthy subjects and with a stochastic model based on Monte Carlo simulations. The analytical procedure evaluated several energy functions to characterize the process. The implemented model suggested that patients performed an optimal visual search by reducing intrinsic noise sources. Our findings suggest a strict correlation between the "optimal motor system" and the "optimal stimulus encoders."
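    As a minimal illustration of the kind of Monte Carlo reasoning described above (this is not the authors' model; the gain, noise level, and termination rule are invented), one can simulate gaze stepping toward a target under noisy motor commands and observe that higher intrinsic motor noise lengthens the simulated search:

```python
import random

def search_trial(noise_sd, gain=0.5, start=10.0, tol=0.5, rng=None):
    """Step a simulated gaze toward a target at 0 with noisy motor commands;
    return the number of saccades needed to land within `tol` of the target."""
    rng = rng or random.Random()
    gaze, steps = start, 0
    while abs(gaze) > tol:
        gaze += -gain * gaze + rng.gauss(0.0, noise_sd)  # planned step + motor noise
        steps += 1
    return steps

rng = random.Random(42)
clean = sum(search_trial(0.0, rng=rng) for _ in range(500)) / 500
noisy = sum(search_trial(2.0, rng=rng) for _ in range(500)) / 500
print(clean, noisy)  # more intrinsic motor noise -> more saccades on average
```

Averaging many such simulated trials is the essence of a Monte Carlo comparison against patient behavior: the noise parameter can be tuned until simulated search times match the observed ones.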

  1. Effect of oculomotor vision rehabilitation on the visual-evoked potential and visual attention in mild traumatic brain injury.

    PubMed

    Yadav, Naveen K; Thiagarajan, Preethi; Ciuffreda, Kenneth J

    2014-01-01

    The purpose of the experiment was to investigate the effect of oculomotor vision rehabilitation (OVR) on the visual-evoked potential (VEP) and visual attention in the mild traumatic brain injury (mTBI) population. Subjects (n = 7) were adults with a history of mTBI. Each received 9 hours of OVR over a 6-week period. The effects of OVR on VEP amplitude and latency, the attention-related alpha band (8-13 Hz) power (µV²), and the clinical Visual Search and Attention Test (VSAT) were assessed before and after the OVR. After the OVR, the VEP amplitude increased and its variability decreased. There was no change in VEP latency, which was normal. Alpha band power increased, as did the VSAT score, following the OVR. The significant changes in most test parameters suggest that OVR affects the visual system at early visuo-cortical levels, as well as other pathways involved in visual attention.

  2. Visual spatial attention enhances the amplitude of positive and negative fMRI responses to visual stimulation in an eccentricity-dependent manner

    PubMed Central

    Bressler, David W.; Fortenbaugh, Francesca C.; Robertson, Lynn C.; Silver, Michael A.

    2013-01-01

    Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. PMID:23562388

  3. Perisaccadic Updating of Visual Representations and Attentional States: Linking Behavior and Neurophysiology

    PubMed Central

    Marino, Alexandria C.; Mazer, James A.

    2016-01-01

    During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron’s spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed. PMID:26903820

  4. Coupling between Theta Oscillations and Cognitive Control Network during Cross-Modal Visual and Auditory Attention: Supramodal vs Modality-Specific Mechanisms.

    PubMed

    Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T

    2016-01-01

    Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals, as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Theta amplitude correlated negatively with BOLD activity in cortical regions associated with the default mode network (DMN) and positively in ventral premotor areas. Modality-specific attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. During auditory attention, a positive correlation of theta and BOLD activity was observed in auditory cortex, and a negative correlation in visual cortex. The data support a supramodal interaction of theta activity with DMN function, and modality-specific processes within fronto-parietal networks related to top-down, theta-mediated cognitive control in cross-modal visual attention. In sensory cortices, by contrast, theta activity shows opposing effects during cross-modal auditory attention.

  5. Lightness computation by the human visual system

    NASA Astrophysics Data System (ADS)

    Rudd, Michael E.

    2017-05-01

    A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance-matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and attentional windowing of spatial integration. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2), while spatial integration windowing occurs in cortical area V4 or beyond.
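    The core edge-integration idea shared with Land and McCann's Retinex can be sketched in a few lines: the lightness of a target patch is recovered by summing weighted log luminance ratios across the edges along a path from the background, with separate gains for incremental and decremental steps. The weight values below are placeholders, not the paper's fitted parameters:

```python
import math

def edge_integrated_lightness(luminances, w_inc=1.0, w_dec=1.3):
    """Sum weighted log luminance ratios along a path of patches.

    `luminances` runs from the background to the target patch; incremental
    and decremental edges get different gains (the asymmetry is illustrative).
    """
    total = 0.0
    for l_from, l_to in zip(luminances, luminances[1:]):
        step = math.log(l_to / l_from)
        total += (w_inc if step > 0 else w_dec) * step
    return total

# With equal weights the path sum telescopes to log(target/background),
# exactly as in the original Retinex ratio product.
print(edge_integrated_lightness([100.0, 50.0, 80.0], w_inc=1.0, w_dec=1.0))
```

With unequal weights the sum no longer telescopes, which is what lets the model capture the incremental/decremental asymmetries in the matching data.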

  6. Saccade frequency response to visual cues during gait in Parkinson's disease: the selective role of attention.

    PubMed

    Stuart, Samuel; Lord, Sue; Galna, Brook; Rochester, Lynn

    2018-04-01

    Gait impairment is a core feature of Parkinson's disease (PD) with implications for falls risk. Visual cues improve gait in PD, but the underlying mechanisms are unclear. Evidence suggests that attention and vision play an important role; however, the relative contribution of each is unclear. Measurement of visual exploration (specifically saccade frequency) during gait allows for real-time measurement of attention and vision. Understanding how visual cues influence visual exploration may allow inferences about the mechanisms underlying the cue response, which could help in developing effective therapeutics. This study aimed to examine saccade frequency during gait in response to a visual cue in PD and older adults and to investigate the roles of attention and vision in visual cue response in PD. A mobile eye-tracker measured saccade frequency during gait in 55 people with PD and 32 age-matched controls. Participants walked in a straight line with and without a visual cue (50 cm transverse lines) presented under single-task and dual-task (concurrent digit span recall) conditions. Saccade frequency was reduced when walking in PD compared to controls; however, visual cues ameliorated this saccadic deficit, significantly increasing saccade frequency in both PD and controls under both single-task and dual-task conditions. Attention, rather than visual function, was central to saccade frequency and the gait response to visual cues in PD. In conclusion, this study highlights the impact of visual cues on visual exploration when walking and the important role of attention in PD. Understanding these complex features will help inform intervention development. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  7. Haptic guidance of overt visual attention.

    PubMed

    List, Alexandra; Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru

    2014-11-01

    Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held (never viewed and out of sight) an item of a specific shape in their hands. In two experiments, we demonstrated that the time for the eyes to reach a target-a measure of overt visual attention-was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention.

  8. Spatial working memory interferes with explicit, but not probabilistic cuing of spatial attention.

    PubMed

    Won, Bo-Yeong; Jiang, Yuhong V

    2015-05-01

    Recent empirical and theoretical work has depicted a close relationship between visual attention and visual working memory. For example, rehearsal in spatial working memory depends on spatial attention, whereas adding a secondary spatial working memory task impairs attentional deployment in visual search. These findings have led to the proposal that working memory is attention directed toward internal representations. Here, we show that the close relationship between these 2 constructs is limited to some but not all forms of spatial attention. In 5 experiments, participants held color arrays, dot locations, or a sequence of dots in working memory. During the memory retention interval, they performed a T-among-L visual search task. Crucially, the probable target location was cued either implicitly through location probability learning or explicitly with a central arrow or verbal instruction. Our results showed that whereas imposing a visual working memory load diminished the effectiveness of explicit cuing, it did not interfere with probability cuing. We conclude that spatial working memory shares similar mechanisms with explicit, goal-driven attention but is dissociated from implicitly learned attention. (c) 2015 APA, all rights reserved.

  9. Spatial working memory interferes with explicit, but not probabilistic cuing of spatial attention

    PubMed Central

    Won, Bo-Yeong; Jiang, Yuhong V.

    2014-01-01

    Recent empirical and theoretical work has depicted a close relationship between visual attention and visual working memory. For example, rehearsal in spatial working memory depends on spatial attention, whereas adding a secondary spatial working memory task impairs attentional deployment in visual search. These findings have led to the proposal that working memory is attention directed toward internal representations. Here we show that the close relationship between these two constructs is limited to some but not all forms of spatial attention. In five experiments, participants held color arrays, dot locations, or a sequence of dots in working memory. During the memory retention interval they performed a T-among-L visual search task. Crucially, the probable target location was cued either implicitly through location probability learning, or explicitly with a central arrow or verbal instruction. Our results showed that whereas imposing a visual working memory load diminished the effectiveness of explicit cuing, it did not interfere with probability cuing. We conclude that spatial working memory shares similar mechanisms with explicit, goal-driven attention but is dissociated from implicitly learned attention. PMID:25401460

  10. Collinearity Impairs Local Element Visual Search

    ERIC Educational Resources Information Center

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  11. Contextual Cueing: Implicit Learning and Memory of Visual Context Guides Spatial Attention.

    ERIC Educational Resources Information Center

    Chun, Marvin M.; Jiang, Yuhong

    1998-01-01

    Six experiments involving a total of 112 college students demonstrate that a robust memory for visual context exists to guide spatial attention. Results show how implicit learning and memory of visual context can guide spatial attention toward task-relevant aspects of a scene. (SLD)

  12. Visual Memory for Objects Following Foveal Vision Loss

    ERIC Educational Resources Information Center

    Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B.; Pollmann, Stefan

    2015-01-01

    Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual…

  13. Local Immediate versus Long-Range Delayed Changes in Functional Connectivity Following rTMS on the Visual Attention Network.

    PubMed

    Battelli, Lorella; Grossman, Emily D; Plow, Ela B

    The interhemispheric competition hypothesis attributes the distribution of selective attention to a balance of mutual inhibition between homotopic, interhemispheric connections in parietal cortex (Kinsbourne 1977; Battelli et al., 2009). In support of this hypothesis, repetitive inhibitory TMS over right parietal cortex in healthy individuals rapidly induces interhemispheric imbalance in cortical activity that spreads beyond the site of stimulation (Plow et al., 2014). Behaviorally, the impacts of inhibitory rTMS may be long delayed from the onset of stimulation, as much as 30 minutes (Agosta et al., 2014; Hubl et al., 2008). In this study, we examine the temporal dynamics of inhibitory rTMS on cortical network integrity that supports sustained visual attention. Healthy individuals received 15 min of 1 Hz offline, inhibitory rTMS (or sham) over left parietal cortex, and then immediately engaged in a bilateral visual tracking task while we recorded brain activity with fMRI. We computed functional connectivity (FC) between three nodes of the attention network engaged by visual tracking: the intraparietal sulcus (IPS), frontal eye fields (FEF) and human MT+ (hMT+). FC immediately and significantly decreased between the stimulation site (left IPS) and all other regions, then recovered to normal levels within 30 minutes. rTMS increased FC between left and right FEF at approximately 36 min following stimulation, and between sites in the unstimulated hemisphere approximately 48 min after stimulation. These findings demonstrate large-scale changes in cortical organization following inhibitory rTMS. The immediate impact of rTMS on connectivity to the stimulation site dovetails with the putative role of interhemispheric balance for bilateral visual sustained attention. 
The delayed, compensatory increases in functional connectivity have implications for models of dynamic reorganization in networks supporting spatial and nonspatial selective attention, and compensatory mechanisms within these networks that may be stabilized in chronic stroke. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Visual Perceptual Learning and Models.

    PubMed

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
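    The reweighting account mentioned above can be illustrated with a toy simulation (the channel profiles, noise level, and delta-rule learner are assumptions for illustration, not the reviewed models themselves): a decision unit reads out noisy orientation channels, and practice reweights the channels so that discrimination accuracy improves without any change to the sensory representation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 20
# Two stimulus classes excite overlapping but distinct orientation-channel profiles.
profile_a = np.exp(-0.5 * ((np.arange(n_channels) - 8) / 3.0) ** 2)
profile_b = np.exp(-0.5 * ((np.arange(n_channels) - 12) / 3.0) ** 2)

def trial_batch(n):
    """Generate n noisy channel-response vectors with class labels."""
    labels = rng.integers(0, 2, n)
    clean = np.where(labels[:, None] == 0, profile_a, profile_b)
    return clean + 0.3 * rng.standard_normal((n, n_channels)), labels

def accuracy(w, x, y):
    return np.mean((x @ w > 0).astype(int) == y)

w = np.zeros(n_channels)                   # initial (naive) readout weights
x_test, y_test = trial_batch(1000)
before = accuracy(w, x_test, y_test)
for _ in range(200):                       # practice: delta-rule reweighting
    x, y = trial_batch(50)
    pred = (x @ w > 0).astype(int)
    w += 0.05 * x.T @ (y - pred) / len(y)  # strengthen channels that reduce error
after = accuracy(w, x_test, y_test)
print(before, after)  # discrimination accuracy improves with reweighting
```

The sensory channels themselves never change; only the readout weights do, which is the signature claim of reweighting models of perceptual learning.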

  15. Model-Free Estimation of Tuning Curves and Their Attentional Modulation, Based on Sparse and Noisy Data.

    PubMed

    Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian

    2016-01-01

    Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g., a Gaussian or another bell-shaped curve, to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to reliably determine the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need of fitting any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. 
Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific. Based on these proofs-of-concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches.

  16. Model-Free Estimation of Tuning Curves and Their Attentional Modulation, Based on Sparse and Noisy Data

    PubMed Central

    Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian

    2016-01-01

    Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g., a Gaussian or another bell-shaped curve, to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to reliably determine the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need of fitting any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. 
Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific. Based on these proofs-of-concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches. PMID:26785378
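    In the spirit of the fit-free approach described above (though not the paper's actual estimators), a tuning-curve feature such as preferred direction can be extracted directly from noisy responses via a response-weighted circular mean, with no Gaussian or other parametric fit:

```python
import numpy as np

def preferred_direction(angles_deg, responses):
    """Fit-free preferred direction: circular mean of stimulus directions
    weighted by the neuron's responses."""
    theta = np.deg2rad(angles_deg)
    resultant = np.sum(responses * np.exp(1j * theta))
    return np.rad2deg(np.angle(resultant)) % 360

# Noisy responses of a hypothetical MT-like neuron tuned to 90 degrees.
rng = np.random.default_rng(7)
angles = np.arange(0, 360, 30)
tuning = 5 + 20 * np.exp(-0.5 * (((angles - 90 + 180) % 360 - 180) / 40.0) ** 2)
noisy = tuning + rng.normal(0, 2, angles.size)
print(preferred_direction(angles, noisy))  # close to 90
```

Because the estimate is a weighted average rather than a fitted parameter, it remains well-defined even for irregular response profiles that defy any bell-shaped model.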

  17. Flexible attention allocation to visual and auditory working memory tasks: manipulating reward induces a trade-off.

    PubMed

    Morey, Candice Coker; Cowan, Nelson; Morey, Richard D; Rouder, Jeffery N

    2011-02-01

    Prominent roles for general attention resources are posited in many models of working memory, but the manner in which these can be allocated differs between models or is not sufficiently specified. We varied the payoffs for correct responses in two temporally-overlapping recognition tasks, a visual array comparison task and a tone sequence comparison task. In the critical conditions, an increase in reward for one task corresponded to a decrease in reward for the concurrent task, but memory load remained constant. Our results show patterns of interference consistent with a trade-off between the tasks, suggesting that a shared resource can be flexibly divided, rather than only fully allotted to either of the tasks. Our findings support a role for a domain-general resource in models of working memory, and furthermore suggest that this resource is flexibly divisible.

  18. Age Mediation of Frontoparietal Activation during Visual Feature Search

    PubMed Central

    Madden, David J.; Parks, Emily L.; Davis, Simon W.; Diaz, Michele T.; Potter, Guy G.; Chou, Ying-hui; Chen, Nan-kuei; Cabeza, Roberto

    2014-01-01

    Activation of frontal and parietal brain regions is associated with attentional control during visual search. We used fMRI to characterize age-related differences in frontoparietal activation in a highly efficient feature search task, detection of a shape singleton. On half of the trials, a salient distractor (a color singleton) was present in the display. The hypothesis was that frontoparietal activation mediated the relation between age and attentional capture by the salient distractor. Participants were healthy, community-dwelling individuals, 21 younger adults (19–29 years of age) and 21 older adults (60–87 years of age). Top-down attention, in the form of target predictability, was associated with an improvement in search performance that was comparable for younger and older adults. The increase in search reaction time (RT) associated with the salient distractor (attentional capture), standardized to correct for generalized age-related slowing, was greater for older adults than for younger adults. On trials with a color singleton distractor, search RT increased as a function of increasing activation in frontal regions, for both age groups combined, suggesting increased task difficulty. Mediational analyses disconfirmed the hypothesized model, in which frontal activation mediated the age-related increase in attentional capture, but supported an alternative model in which age was a mediator of the relation between frontal activation and capture. PMID:25102420
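    The mediational analyses contrasted above can be sketched with a standard regression-based estimate of the indirect effect (path a times path b). The data below are synthetic and the variable roles and effect sizes are invented for illustration; they are not the study's results:

```python
import numpy as np

def indirect_effect(x, m, y):
    """Regression-based a*b indirect effect: x -> m (path a),
    then m -> y controlling for x (path b)."""
    a = np.polyfit(x, m, 1)[0]                       # slope of m on x
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0] # coefficient on m, given x
    return a * b

rng = np.random.default_rng(3)
predictor = rng.normal(0, 1, 400)                        # e.g., standardized age
mediator = 0.7 * predictor + rng.normal(0, 0.5, 400)     # path a = 0.7
outcome = 0.6 * mediator + rng.normal(0, 0.5, 400)       # path b = 0.6, no direct path
print(indirect_effect(predictor, mediator, outcome))     # near 0.7 * 0.6 = 0.42
```

Comparing such indirect effects under the two competing causal orderings is the logic by which the study adjudicated between "activation mediates age effects" and "age mediates activation effects".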

  19. Impaired Limbic Cortico-Striatal Structure and Sustained Visual Attention in a Rodent Model of Schizophrenia

    PubMed Central

    Barnes, Samuel A.; Sawiak, Stephen J.; Caprioli, Daniele; Jupp, Bianca; Buonincontri, Guido; Mar, Adam C.; Harte, Michael K.; Fletcher, Paul C.; Robbins, Trevor W.; Neill, Jo C.

    2015-01-01

    Background: N-methyl-d-aspartate receptor (NMDAR) dysfunction is thought to contribute to the pathophysiology of schizophrenia. Accordingly, NMDAR antagonists such as phencyclidine (PCP) are used widely in experimental animals to model cognitive impairment associated with this disorder. However, it is unclear whether PCP disrupts the structural integrity of brain areas relevant to the profile of cognitive impairment in schizophrenia. Methods: Here we used high-resolution magnetic resonance imaging and voxel-based morphometry to investigate structural alterations associated with sub-chronic PCP treatment in rats. Results: Sub-chronic exposure of rats to PCP (5mg/kg twice daily for 7 days) impaired sustained visual attention on a 5-choice serial reaction time task, notably when the attentional load was increased. In contrast, sub-chronic PCP had no significant effect on the attentional filtering of a pre-pulse auditory stimulus in an acoustic startle paradigm. Voxel-based morphometry revealed significantly reduced grey matter density bilaterally in the hippocampus, anterior cingulate cortex, ventral striatum, and amygdala. PCP-treated rats also exhibited reduced cortical thickness in the insular cortex. Conclusions: These findings demonstrate that sub-chronic NMDA receptor antagonism is sufficient to produce highly-localized morphological abnormalities in brain areas implicated in the pathogenesis of schizophrenia. Furthermore, PCP exposure resulted in dissociable impairments in attentional function. PMID:25552430

  20. Modality-specificity of Selective Attention Networks.

    PubMed

    Stewart, Hannah J; Amitay, Sygal

    2015-01-01

    To establish the modality specificity and generality of selective attention networks, 48 young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled "general attention." The third component was labeled "auditory attention," as it contained only measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled "spatial orienting" and "spatial conflict," respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task, all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). These results do not support our a priori hypothesis that attention networks are either modality-specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting that spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific.
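    The component-counting step of an exploratory factor analysis like the one above can be sketched with the eigenvalues of the correlation matrix and the Kaiser (eigenvalue > 1) criterion; the six synthetic test scores and two latent constructs below are illustrative assumptions, not the study's battery:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
latent_a = rng.standard_normal(n)   # e.g., a "spatial orienting" construct
latent_b = rng.standard_normal(n)   # e.g., a "general attention" construct
# Six test scores, three loading on each construct, plus measurement noise.
scores = np.column_stack(
    [latent_a + 0.1 * rng.standard_normal(n) for _ in range(3)]
    + [latent_b + 0.1 * rng.standard_normal(n) for _ in range(3)]
)
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)
n_components = int(np.sum(eigvals > 1.0))  # Kaiser criterion
print(n_components)  # → 2: the two latent constructs are recovered
```

In a real analysis the retained components would then be rotated and interpreted from their loadings, as the study did in labeling its four components.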

  1. Early auditory evoked potential is modulated by selective attention and related to individual differences in visual working memory capacity.

    PubMed

    Giuliano, Ryan J; Karns, Christina M; Neville, Helen J; Hillyard, Steven A

    2014-12-01

    A growing body of research suggests that the predictive power of working memory (WM) capacity for measures of intellectual aptitude is due to the ability to control attention and select relevant information. Crucially, attentional mechanisms implicated in controlling access to WM are assumed to be domain-general, yet reports of enhanced attentional abilities in individuals with larger WM capacities are primarily within the visual domain. Here, we directly test the link between WM capacity and early attentional gating across sensory domains, hypothesizing that measures of visual WM capacity should predict an individual's capacity to allocate auditory selective attention. To address this question, auditory ERPs were recorded in a linguistic dichotic listening task, and individual differences in ERP modulations by attention were correlated with estimates of WM capacity obtained in a separate visual change detection task. Auditory selective attention enhanced ERP amplitudes at an early latency (ca. 70-90 msec), with larger P1 components elicited by linguistic probes embedded in an attended narrative. Moreover, this effect was associated with greater individual estimates of visual WM capacity. These findings support the view that domain-general attentional control mechanisms underlie the wide variation of WM capacity across individuals.

  2. Solving Large Problems with a Small Working Memory

    ERIC Educational Resources Information Center

    Pizlo, Zygmunt; Stefanov, Emil

    2013-01-01

    We describe an important elaboration of our multiscale/multiresolution model for solving the Traveling Salesman Problem (TSP). Our previous model emulated the non-uniform distribution of receptors on the human retina and the shifts of visual attention. This model produced near-optimal solutions of TSP in linear time by performing hierarchical…
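
    The hierarchical idea can be illustrated with a toy two-level heuristic, sketched below in Python. This is not the authors' multiscale pyramid model, only a minimal coarse-to-fine analogue: points are grouped into grid cells (the coarse scale), the cells are ordered greedily, and points are then ordered within each cell (the fine scale). The grid size and demo coordinates are invented.

```python
import math

def tour_length(points, order):
    """Total closed-tour length for a given visiting order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def nearest_neighbor_order(items, key):
    """Greedy nearest-neighbor ordering of items by their 2-D key."""
    remaining = list(items)
    order = [remaining.pop(0)]
    while remaining:
        last = key(order[-1])
        nxt = min(remaining, key=lambda it: math.dist(last, key(it)))
        remaining.remove(nxt)
        order.append(nxt)
    return order

def coarse_to_fine_tour(points, grid=4):
    """Toy two-level TSP heuristic: cluster points into grid cells
    (coarse scale), order the cells greedily, then order the points
    within each cell (fine scale)."""
    cells = {}
    for idx, (x, y) in enumerate(points):
        cells.setdefault((int(x * grid), int(y * grid)), []).append(idx)

    def centroid(cell):
        members = cells[cell]
        return (sum(points[i][0] for i in members) / len(members),
                sum(points[i][1] for i in members) / len(members))

    cell_order = nearest_neighbor_order(list(cells), centroid)
    tour = []
    for cell in cell_order:
        tour.extend(nearest_neighbor_order(cells[cell], lambda i: points[i]))
    return tour

# Tiny demo on fixed points in the unit square
pts = [(0.1, 0.1), (0.15, 0.2), (0.8, 0.1), (0.9, 0.15),
       (0.85, 0.85), (0.8, 0.9), (0.1, 0.9), (0.2, 0.85)]
tour = coarse_to_fine_tour(pts)
print(tour, round(tour_length(pts, tour), 3))
```

    Because each point is touched a bounded number of times per level, such coarse-to-fine schemes can approach linear time, which is the property the abstract highlights.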

  3. Retinotopic patterns of background connectivity between V1 and fronto-parietal cortex are modulated by task demands

    PubMed Central

    Griffis, Joseph C.; Elkhetali, Abdurahman S.; Burge, Wesley K.; Chen, Richard H.; Visscher, Kristina M.

    2015-01-01

    Attention facilitates the processing of task-relevant visual information and suppresses interference from task-irrelevant information. Modulations of neural activity in visual cortex depend on attention, and likely result from signals originating in fronto-parietal and cingulo-opercular regions of cortex. Here, we tested the hypothesis that attentional facilitation of visual processing is accomplished in part by changes in how brain networks involved in attentional control interact with sectors of V1 that represent different retinal eccentricities. We measured the strength of background connectivity between fronto-parietal and cingulo-opercular regions with different eccentricity sectors in V1 using functional MRI data that were collected while participants performed tasks involving attention to either a centrally presented visual stimulus or a simultaneously presented auditory stimulus. We found that when the visual stimulus was attended, background connectivity between V1 and the left frontal eye fields (FEF), left intraparietal sulcus (IPS), and right IPS varied strongly across different eccentricity sectors in V1 so that foveal sectors were more strongly connected than peripheral sectors. This retinotopic gradient was weaker when the visual stimulus was ignored, indicating that it was driven by attentional effects. Greater task-driven differences between foveal and peripheral sectors in background connectivity to these regions were associated with better performance on the visual task and faster response times on correct trials. These findings are consistent with the notion that attention drives the configuration of task-specific functional pathways that enable the prioritized processing of task-relevant visual information, and show that the prioritization of visual information by attentional processes may be encoded in the retinotopic gradient of connectivity between V1 and fronto-parietal regions. PMID:26106320
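
    The analysis logic, correlating a fronto-parietal seed with each V1 eccentricity sector and summarizing the foveal-to-peripheral fall-off as a slope, can be sketched as follows. The time series and mixing weights below are synthetic stand-ins, not the study's fMRI data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_tr = 200  # hypothetical number of fMRI time points

# Hypothetical fronto-parietal seed time series (e.g., a left FEF ROI)
seed = rng.normal(size=n_tr)

# Four hypothetical V1 eccentricity sectors: foveal sectors share more
# variance with the seed than peripheral ones (declining mixing weights)
mix = [0.8, 0.6, 0.4, 0.2]  # foveal -> peripheral
sectors = [w * seed + np.sqrt(1 - w**2) * rng.normal(size=n_tr)
           for w in mix]

# Background connectivity = correlation of each sector with the seed
conn = [np.corrcoef(seed, s)[0, 1] for s in sectors]

# Retinotopic gradient = slope of connectivity across eccentricity rank
gradient = np.polyfit(range(len(conn)), conn, 1)[0]
print([round(c, 2) for c in conn], round(gradient, 3))
```

    A more negative gradient corresponds to the stronger foveal-than-peripheral connectivity pattern the study links to better visual-task performance.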

  4. The Competitive Influences of Perceptual Load and Working Memory Guidance on Selective Attention.

    PubMed

    Tan, Jinfeng; Zhao, Yuanfang; Wang, Lijun; Tian, Xia; Cui, Yan; Yang, Qian; Pan, Weigang; Zhao, Xiaoyue; Chen, Antao

    2015-01-01

    The perceptual load theory in the selective attention literature proposes that interference from task-irrelevant distractors is eliminated when perceptual capacity is fully consumed by task-relevant information. However, the biased competition model suggests that the contents of working memory (WM) can guide attentional selection automatically, even when this guidance is detrimental to visual search. An intriguing but unsolved question is what happens when selective attention is influenced by both perceptual load and WM guidance. To study this issue, behavioral performance and event-related potentials (ERPs) were recorded while participants were presented with a cue to either identify or hold in memory and subsequently had to perform a visual search task under conditions of low or high perceptual load. Behavioral data showed that high perceptual load eliminated the attentional capture by WM. The ERP results revealed a clear WM guidance effect in the P1 component, with invalid trials eliciting a larger P1 than neutral trials, regardless of the level of perceptual load. The interaction between perceptual load and WM guidance was significant for the posterior N1 component: the memory guidance effect on N1 was eliminated by high perceptual load. Standardized Low Resolution Electrical Tomography Analysis (sLORETA) showed that the WM guidance effect and the perceptual load effect on attention can be localized to the occipital area and the parietal lobe, respectively. Merely identifying the cue produced no effect on the P1 or N1 component. These results suggest that in selective attention, information held in WM can capture attention at an early stage of visual processing in the occipital cortex. Interestingly, this initial capture of attention by WM can be modulated by the level of perceptual load, and the parietal lobe mediates target selection at the discrimination stage.

  5. The Competitive Influences of Perceptual Load and Working Memory Guidance on Selective Attention

    PubMed Central

    Tan, Jinfeng; Zhao, Yuanfang; Wang, Lijun; Tian, Xia; Cui, Yan; Yang, Qian; Pan, Weigang; Zhao, Xiaoyue; Chen, Antao

    2015-01-01

    The perceptual load theory in the selective attention literature proposes that interference from task-irrelevant distractors is eliminated when perceptual capacity is fully consumed by task-relevant information. However, the biased competition model suggests that the contents of working memory (WM) can guide attentional selection automatically, even when this guidance is detrimental to visual search. An intriguing but unsolved question is what happens when selective attention is influenced by both perceptual load and WM guidance. To study this issue, behavioral performance and event-related potentials (ERPs) were recorded while participants were presented with a cue to either identify or hold in memory and subsequently had to perform a visual search task under conditions of low or high perceptual load. Behavioral data showed that high perceptual load eliminated the attentional capture by WM. The ERP results revealed a clear WM guidance effect in the P1 component, with invalid trials eliciting a larger P1 than neutral trials, regardless of the level of perceptual load. The interaction between perceptual load and WM guidance was significant for the posterior N1 component: the memory guidance effect on N1 was eliminated by high perceptual load. Standardized Low Resolution Electrical Tomography Analysis (sLORETA) showed that the WM guidance effect and the perceptual load effect on attention can be localized to the occipital area and the parietal lobe, respectively. Merely identifying the cue produced no effect on the P1 or N1 component. These results suggest that in selective attention, information held in WM can capture attention at an early stage of visual processing in the occipital cortex. Interestingly, this initial capture of attention by WM can be modulated by the level of perceptual load, and the parietal lobe mediates target selection at the discrimination stage. PMID:26098079

  6. Pacing Visual Attention: Temporal Structure Effects

    DTIC Science & Technology

    1993-06-01

    [Fragmented DTIC record; recoverable text:] Dissertation, Jun 89 - Jun 93. Title: "Pacing Visual Attention: Temporal Structure Effects" (PE 62202F). From the abstract: "…that persisting temporal relationships may be an important factor in the external (exogenous) control of visual attention, at least to some extent…". Cited reference fragment: "…of perception and motor action: Ideomotor compatibility and interference in divided attention." Journal of Motor Behavior, 2(3), 155-162.

  7. Perception and Attention for Visualization

    ERIC Educational Resources Information Center

    Haroz, Steve

    2013-01-01

    This work examines how a better understanding of visual perception and attention can impact visualization design. In a collection of studies, I explore how different levels of the visual system can measurably affect a variety of visualization metrics. The results show that expert preference, user performance, and even computational performance are…

  8. Not All Attention Orienting is Created Equal: Recognition Memory is Enhanced When Attention Orienting Involves Distractor Suppression

    PubMed Central

    Markant, Julie; Worden, Michael S.; Amso, Dima

    2015-01-01

    Learning through visual exploration often requires orienting of attention to meaningful information in a cluttered world. Previous work has shown that attention modulates visual cortex activity, with enhanced activity for attended targets and suppressed activity for competing inputs, thus enhancing the visual experience. Here we examined the idea that learning may be engaged differentially with variations in the attention orienting mechanisms that drive eye movements during visual search and exploration. We hypothesized that attention orienting mechanisms that engage suppression of a previously attended location will boost memory encoding of the currently attended target objects to a greater extent than those that involve target enhancement alone. To test this hypothesis we capitalized on the classic spatial cueing task and the inhibition of return (IOR) mechanism (Posner, Rafal, & Choate, 1985; Posner, 1980) to demonstrate that object images encoded in the context of concurrent suppression at a previously attended location were encoded more effectively and remembered better than those encoded without concurrent suppression. Furthermore, fMRI analyses revealed that this memory benefit was driven by attention modulation of visual cortex activity, as increased suppression of the previously attended location in visual cortex during target object encoding predicted better subsequent recognition memory performance. These results suggest that not all attention orienting impacts learning and memory equally. PMID:25701278

  9. Impaired visual search in rats reveals cholinergic contributions to feature binding in visuospatial attention.

    PubMed

    Botly, Leigh C P; De Rosa, Eve

    2012-10-01

    The visual search task established the feature integration theory of attention in humans and measures visuospatial attentional contributions to feature binding. We recently demonstrated that the neuromodulator acetylcholine (ACh), from the nucleus basalis magnocellularis (NBM), supports the attentional processes required for feature binding using a rat digging-based task. Additional research has demonstrated cholinergic contributions from the NBM to visuospatial attention in rats. Here, we combined these lines of evidence and employed visual search in rats to examine whether cortical cholinergic input supports visuospatial attention specifically for feature binding. We trained 18 male Long-Evans rats to perform visual search using touch screen-equipped operant chambers. Sessions comprised Feature Search (no feature binding required) and Conjunctive Search (feature binding required) trials using multiple stimulus set sizes. Following acquisition of visual search, 8 rats received bilateral NBM lesions using 192 IgG-saporin to selectively reduce cholinergic afferentation of the neocortex, which we hypothesized would selectively disrupt the visuospatial attentional processes needed for efficient conjunctive visual search. As expected, relative to sham-lesioned rats, ACh-NBM-lesioned rats took significantly longer to locate the target stimulus on Conjunctive Search, but not Feature Search trials, thus demonstrating that cholinergic contributions to visuospatial attention are important for feature binding in rats.
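
    The efficiency contrast at the heart of this task is conventionally quantified as the search slope: the slope of the reaction-time-by-set-size regression, near zero for feature ("pop-out") search and clearly positive for conjunctive search. A minimal sketch of that fit, using invented reaction times rather than the study's data:

```python
import numpy as np

# Hypothetical mean reaction times (ms) at each stimulus set size,
# illustrating the classic feature vs. conjunctive search signature
set_sizes = np.array([4, 8, 16, 32])
feature_rt = np.array([520, 525, 528, 531])       # near-flat: "pop-out"
conjunctive_rt = np.array([560, 680, 910, 1390])  # serial-like search

# Search rate = slope of the RT-by-set-size regression (ms per item)
feature_slope = np.polyfit(set_sizes, feature_rt, 1)[0]
conjunctive_slope = np.polyfit(set_sizes, conjunctive_rt, 1)[0]
print(round(feature_slope, 2), round(conjunctive_slope, 2))
```

    With these invented values the fitted slopes come out near 0.3 and 30 ms per item; a selective lengthening of the conjunctive slope, as in the lesioned rats, is the behavioral marker of a feature-binding deficit.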

  10. Typical visual search performance and atypical gaze behaviors in response to faces in Williams syndrome.

    PubMed

    Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho

    2016-01-01

    Evidence indicates that individuals with Williams syndrome (WS) exhibit atypical attentional characteristics when viewing faces. However, the dynamics of visual attention captured by faces remain unclear, especially when explicit attentional forces are present. To clarify this, we introduced a visual search paradigm and assessed how the relative strength of visual attention captured by a face and explicit attentional control changes as search progresses. Participants (WS and controls) searched for a target (butterfly) within an array of distractors, which sometimes contained an upright face. We analyzed reaction time and location of the first fixation (which reflect the attentional profile at the initial stage) and fixation durations, which represent aspects of attention at later stages of visual search. The strength of visual attention captured by faces and explicit attentional control (toward the butterfly) was characterized by the frequency of first fixations on a face or butterfly and by the duration of face or butterfly fixations. Although reaction time was longer in all groups when faces were present, and visual attention was not dominated by faces in any group during the initial stages of the search, when faces were present, attention to faces dominated in the WS group during the later search stages. Furthermore, for the WS group, reaction time correlated with eye-movement measures at different stages of searching such that longer reaction times were associated with longer face fixations, specifically at the initial stage of searching. Moreover, longer reaction times were associated with longer face fixations at the later stages of searching, while shorter reaction times were associated with longer butterfly fixations. The relative strength of attention captured by faces in people with WS is not observed at the initial stage of searching but becomes dominant as the search progresses. Furthermore, although behavioral responses are associated with some aspects of eye movements, they are not as sensitive as eye-movement measurements themselves at detecting atypical attentional characteristics in people with WS.

  11. Cognitive Control Network Contributions to Memory-Guided Visual Attention.

    PubMed

    Rosen, Maya L; Stern, Chantal E; Michalka, Samantha W; Devaney, Kathryn J; Somers, David C

    2016-05-01

    Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN (posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus) exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  12. "Multisensory brand search: How the meaning of sounds guides consumers' visual attention": Correction to Knoeferle et al. (2016).

    PubMed

    2017-03-01

    Reports an error in "Multisensory brand search: How the meaning of sounds guides consumers' visual attention" by Klemens M. Knoeferle, Pia Knoeferle, Carlos Velasco and Charles Spence (Journal of Experimental Psychology: Applied, 2016[Jun], Vol 22[2], 196-210). In the article, under Experiment 2, Design and Stimuli, the set number of products and visual distractors reported in the second paragraph should be 20 and 13, respectively: "On each trial, the 16 products shown in the display were randomly selected from a set of 20 products belonging to different categories. Out of the set of 20 products, seven were potential targets, whereas the other 13 were used as visual distractors only throughout the experiment (since they were not linked to specific usage or consumption sounds)." Consequently, Appendix A in the supplemental materials has been updated. (The following abstract of the original article appeared in record 2016-28876-002.) Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Top-Down Control of Visual Attention by the Prefrontal Cortex. Functional Specialization and Long-Range Interactions

    PubMed Central

    Paneri, Sofia; Gregoriou, Georgia G.

    2017-01-01

    The ability to select information that is relevant to current behavioral goals is the hallmark of voluntary attention and an essential part of our cognition. Attention tasks are a prime example for studying, at the neuronal level, how task-related information can be selectively processed in the brain while irrelevant information is filtered out. Whereas numerous studies have focused on elucidating the mechanisms of visual attention at the single-neuron and population level in the visual cortices, considerably less work has been devoted to deciphering the distinct contribution of higher-order brain areas, which are known to be critical for the employment of attention. Among these areas, the prefrontal cortex (PFC) has long been considered a source of top-down signals that bias selection in early visual areas in favor of the attended features. Here, we review recent experimental data that support the role of PFC in attention. We examine the existing evidence for functional specialization within PFC and we discuss how long-range interactions between PFC subregions and posterior visual areas may be implemented in the brain and contribute to the attentional modulation of different measures of neural activity in visual cortices. PMID:29033784

  14. Top-Down Control of Visual Attention by the Prefrontal Cortex. Functional Specialization and Long-Range Interactions.

    PubMed

    Paneri, Sofia; Gregoriou, Georgia G

    2017-01-01

    The ability to select information that is relevant to current behavioral goals is the hallmark of voluntary attention and an essential part of our cognition. Attention tasks are a prime example for studying, at the neuronal level, how task-related information can be selectively processed in the brain while irrelevant information is filtered out. Whereas numerous studies have focused on elucidating the mechanisms of visual attention at the single-neuron and population level in the visual cortices, considerably less work has been devoted to deciphering the distinct contribution of higher-order brain areas, which are known to be critical for the employment of attention. Among these areas, the prefrontal cortex (PFC) has long been considered a source of top-down signals that bias selection in early visual areas in favor of the attended features. Here, we review recent experimental data that support the role of PFC in attention. We examine the existing evidence for functional specialization within PFC and we discuss how long-range interactions between PFC subregions and posterior visual areas may be implemented in the brain and contribute to the attentional modulation of different measures of neural activity in visual cortices.

  15. Changes in search rate but not in the dynamics of exogenous attention in action videogame players.

    PubMed

    Hubert-Wallander, Bjorn; Green, C Shawn; Sugarman, Michael; Bavelier, Daphne

    2011-11-01

    Many previous studies have shown that the speed of processing in attentionally demanding tasks seems enhanced following habitual action videogame play. However, using one of the diagnostic tasks for efficiency of attentional processing, a visual search task, Castel and collaborators (Castel, Pratt, & Drummond, Acta Psychologica 119:217-230, 2005) reported no difference in visual search rates, instead proposing that action gaming may change response execution time rather than the efficiency of visual selective attention per se. Here we used two hard visual search tasks, one measuring reaction time and the other accuracy, to test whether visual search rate may be changed by action videogame play. We found greater search rates in the gamer group than in the nongamer controls, consistent with increased efficiency in visual selective attention. We then asked how general the change in attentional throughput noted so far in gamers might be by testing whether exogenous attentional cues would lead to a disproportional enhancement in throughput in gamers as compared to nongamers. Interestingly, exogenous cues were found to enhance throughput equivalently between gamers and nongamers, suggesting that not all mechanisms known to enhance throughput are similarly enhanced in action videogamers.

  16. Visual selective attention and reading efficiency are related in children.

    PubMed

    Casco, C; Tressoldi, P E; Dellantonio, A

    1998-09-01

    We investigated the relationship between visual selective attention and linguistic performance. Subjects were classified into four categories according to their accuracy in a letter cancellation task involving selective attention. The task consisted of searching for a target letter among a set of background letters, and accuracy was measured as a function of set size. We found that children with the lowest performance in the cancellation task present a significantly slower reading rate and a higher number of visual errors in reading than children with the highest performance. Results also show that these groups of searchers present significant differences in a lexical search task, whereas their performance did not differ in lexical decision and syllable control tasks. The relationship between letter search and reading, as well as the finding that poor readers-searchers also perform poorly on lexical search tasks involving selective attention, suggests that the relationship between letter search and reading difficulty may reflect a deficit in a visual selective attention mechanism that is involved in all these tasks. A deficit in visual attention can be linked to the problems that disabled readers present in the function of the magnocellular stream, which culminates in posterior parietal cortex, an area that plays an important role in guiding visual attention.

  17. Behavioral and Brain Measures of Phasic Alerting Effects on Visual Attention.

    PubMed

    Wiegand, Iris; Petersen, Anders; Finke, Kathrin; Bundesen, Claus; Lansner, Jon; Habekost, Thomas

    2017-01-01

    In the present study, we investigated effects of phasic alerting on visual attention in a partial report task, in which half of the displays were preceded by an auditory warning cue. Based on the computational Theory of Visual Attention (TVA), we estimated parameters of spatial and non-spatial aspects of visual attention and measured event-related lateralizations (ERLs) over visual processing areas. We found that the TVA parameter sensory effectiveness a, which is thought to reflect visual processing capacity, significantly increased with phasic alerting. By contrast, the distribution of visual processing resources according to task relevance and spatial position, as quantified in the parameters top-down control α and spatial bias w_index, was not modulated by phasic alerting. On the electrophysiological level, the latencies of ERLs in response to the task displays were reduced following the warning cue. These results suggest that phasic alerting facilitates visual processing in a general, unselective manner and that this effect originates in early stages of visual information processing.
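
    TVA, which underlies the parameter estimates above, models attention as a race in which each possible categorization "x is i" accrues evidence at rate v(x, i) = η(x, i) · β_i · w_x / Σ_z w_z (Bundesen, 1990). A minimal sketch of that rate equation follows; the η, β, and weight values are invented for illustration and are not estimates from this study.

```python
def tva_rates(eta, beta, weights):
    """Core TVA rate equation (Bundesen, 1990):
    v(x, i) = eta(x, i) * beta_i * w_x / sum_z w_z
    eta: dict (object, category) -> sensory evidence
    beta: dict category -> perceptual decision bias
    weights: dict object -> attentional weight w_x"""
    total_w = sum(weights.values())
    return {
        (x, i): eta[(x, i)] * beta[i] * weights[x] / total_w
        for (x, i) in eta
    }

# Hypothetical two-object display, each object categorized as "letter"
eta = {("target", "letter"): 40.0, ("distractor", "letter"): 40.0}
beta = {"letter": 0.9}
weights = {"target": 0.8, "distractor": 0.2}  # top-down weighting

v = tva_rates(eta, beta, weights)
# Summing the rates approximates the processing capacity C
C = sum(v.values())
print(v, round(C, 2))
```

    In this scheme an alerting-driven gain on the η terms would scale C multiplicatively while leaving the relative weight distribution untouched, which is consistent with the unselective facilitation reported above.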

  18. Saccade-synchronized rapid attention shifts in macaque visual cortical area MT.

    PubMed

    Yao, Tao; Treue, Stefan; Krishna, B Suresh

    2018-03-06

    While making saccadic eye movements to scan a visual scene, humans and monkeys are able to keep track of relevant visual stimuli by maintaining spatial attention on them. This ability requires a shift of attentional modulation from the neuronal population representing the relevant stimulus pre-saccadically to the one representing it post-saccadically. For optimal performance, this trans-saccadic attention shift should be rapid and saccade-synchronized. Whether this is so is not known. We trained two rhesus monkeys to make saccades while maintaining covert attention at a fixed spatial location. We show that the trans-saccadic attention shift in visual cortical area MT (medial temporal) is well synchronized to saccades. Attentional modulation crosses over from the pre-saccadic to the post-saccadic neuronal representation by about 50 ms after a saccade. Taking response latency into account, the trans-saccadic attention shift is well timed to maintain spatial attention on relevant stimuli, so that they can be optimally tracked and processed across saccades.

  19. Shifting Attention within Memory Representations Involves Early Visual Areas

    PubMed Central

    Munneke, Jaap; Belopolsky, Artem V.; Theeuwes, Jan

    2012-01-01

    Prior studies have shown that spatial attention modulates early visual cortex retinotopically, resulting in enhanced processing of external perceptual representations. However, it is not clear whether the same visual areas are modulated when attention is focused on, and shifted within a working memory representation. In the current fMRI study participants were asked to memorize an array containing four stimuli. After a delay, participants were presented with a verbal cue instructing them to actively maintain the location of one of the stimuli in working memory. Additionally, on a number of trials a second verbal cue instructed participants to switch attention to the location of another stimulus within the memorized representation. Results of the study showed that changes in the BOLD pattern closely followed the locus of attention within the working memory representation. A decrease in BOLD-activity (V1–V3) was observed at ROIs coding a memory location when participants switched away from this location, whereas an increase was observed when participants switched towards this location. Continuous increased activity was obtained at the memorized location when participants did not switch. This study shows that shifting attention within memory representations activates the earliest parts of visual cortex (including V1) in a retinotopic fashion. We conclude that even in the absence of visual stimulation, early visual areas support shifting of attention within memorized representations, similar to when attention is shifted in the outside world. The relationship between visual working memory and visual mental imagery is discussed in light of the current findings. PMID:22558165

  20. Effects of Binaural Sensory Aids on the Development of Visual Perceptual Abilities in Visually Handicapped Infants. Final Report, April 15, 1982-November 15, 1982.

    ERIC Educational Resources Information Center

    Hart, Verna; Ferrell, Kay

    Twenty-four congenitally visually handicapped infants, aged 6-24 months, participated in a study to determine (1) those stimuli best able to elicit visual attention, (2) the stability of visual acuity over time, and (3) the effects of binaural sensory aids on both visual attention and visual acuity. Ss were dichotomized into visually handicapped…

  1. Orienting attention to visual or verbal/auditory imagery differentially impairs the processing of visual stimuli.

    PubMed

    Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio

    2016-05-15

    When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of a thought's content on sensory attenuation are still unknown. The present study aims to assess whether the representational format of thoughts, such as visual imagery or inner speech, might differentially affect the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three different conditions: executing a task on the visual probe (externally oriented attention), and two conditions involving inward-turned attention, i.e., generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When the two representational formats were compared, the visual imagery condition showed stronger attenuation of sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in an internal thought compared with the external task, with visual imagery showing even more alpha power than the inner speech condition. Our results show, for the first time to our knowledge, that visual attentional processing of external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thoughts. Copyright © 2016 Elsevier Inc. All rights reserved.
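
    The alpha-power index used here can be approximated from a periodogram. The numpy-only sketch below builds two synthetic "EEG" traces with different 10 Hz amplitudes (stand-ins for the inner-speech and imagery conditions; all amplitudes and noise levels are invented) and integrates the 8-12 Hz band:

```python
import numpy as np

def alpha_power(signal, fs, band=(8.0, 12.0)):
    """Bandpower in the alpha range from a raw periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    # Rectangle-rule integration over the alpha band
    return psd[mask].sum() * (freqs[1] - freqs[0])

# Synthetic 2-s traces at 250 Hz: a 10 Hz alpha rhythm plus noise,
# with larger alpha amplitude in the hypothetical "imagery" condition
fs = 250
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(1)
noise = rng.normal(scale=0.5, size=t.size)
inner_speech = 1.0 * np.sin(2 * np.pi * 10 * t) + noise
imagery = 2.0 * np.sin(2 * np.pi * 10 * t) + noise

print(alpha_power(imagery, fs) > alpha_power(inner_speech, fs))
```

    In practice one would use a Welch or multi-taper estimate over artifact-cleaned epochs, but the band-integration step that yields the inhibition index is the same.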

  2. Visual attention in preterm born adults: specifically impaired attentional sub-mechanisms that link with altered intrinsic brain networks in a compensation-like mode.

    PubMed

    Finke, Kathrin; Neitzel, Julia; Bäuml, Josef G; Redel, Petra; Müller, Hermann J; Meng, Chun; Jaekel, Julia; Daamen, Marcel; Scheef, Lukas; Busch, Barbara; Baumann, Nicole; Boecker, Henning; Bartmann, Peter; Habekost, Thomas; Wolke, Dieter; Wohlschläger, Afra; Sorg, Christian

    2015-02-15

    Although pronounced and lasting deficits in selective attention have been observed in preterm born individuals, it is unknown which specific attentional sub-mechanisms are affected and how they relate to brain networks. We used the computationally specified 'Theory of Visual Attention' together with whole- and partial-report paradigms to compare attentional sub-mechanisms of pre- (n=33) and full-term (n=32) born adults. Resting-state fMRI was used to evaluate both between-group differences and inter-individual variance in changed functional connectivity of intrinsic brain networks relevant for visual attention. In preterm born adults, we found specific impairments of visual short-term memory (vSTM) storage capacity while other sub-mechanisms such as processing speed or attentional weighting were unchanged. Furthermore, changed functional connectivity was found in unimodal visual and supramodal attention-related intrinsic networks. Among preterm born adults, the individual pattern of changed connectivity in occipital and parietal cortices was systematically associated with vSTM in such a way that the more distinct the connectivity differences, the better the preterm adults' storage capacity. These findings provide the first evidence for selectively changed attentional sub-mechanisms in preterm born adults and their relation to altered intrinsic brain networks. In particular, data suggest that cortical changes in intrinsic functional connectivity may compensate for adverse developmental consequences of prematurity on visual short-term storage capacity. Copyright © 2014 Elsevier Inc. All rights reserved.
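
    The 'Theory of Visual Attention' (TVA) referenced in this record is computationally explicit: each object's attentional weight is a sum of sensory evidence scaled by category pertinence, and processing rates are weights normalized across the display. A minimal sketch of these two standard TVA equations, with illustrative values for the sensory-evidence (eta), pertinence (pi), and bias (beta) parameters:

    ```python
    # Sketch of Bundesen's TVA rate equation, using the standard notation.
    # All numeric values below are illustrative, not fits to the study's data.

    def attentional_weight(eta_x, pertinence):
        """w_x = sum_j eta(x, j) * pi_j  -- attentional weight of object x."""
        return sum(e * p for e, p in zip(eta_x, pertinence))

    def processing_rate(eta_xi, beta_i, w_x, total_weight):
        """v(x, i) = eta(x, i) * beta_i * w_x / sum_z w_z."""
        return eta_xi * beta_i * w_x / total_weight

    # Two objects, two categories; object 0 matches the pertinent category.
    eta = [[0.9, 0.1],        # sensory evidence eta(x, j)
           [0.2, 0.8]]
    pertinence = [1.0, 0.2]   # pi_j: relevance of each category
    beta = [1.0, 1.0]         # perceptual decision bias

    weights = [attentional_weight(eta[x], pertinence) for x in range(2)]
    total = sum(weights)
    v00 = processing_rate(eta[0][0], beta[0], weights[0], total)
    ```

    Racing these rates against an exposure duration, with winners admitted until a vSTM capacity K is filled, yields the whole- and partial-report predictions the study fits.
    
    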

  3. Attentional load modulates responses of human primary visual cortex to invisible stimuli.

    PubMed

    Bahrami, Bahador; Lavie, Nilli; Rees, Geraint

    2007-03-20

    Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.

  4. Tracking the visual focus of attention for a varying number of wandering people.

    PubMed

    Smith, Kevin; Ba, Sileye O; Odobez, Jean-Marc; Gatica-Perez, Daniel

    2008-07-01

    We define and address the problem of finding the visual focus of attention for a varying number of wandering people (VFOA-W), that is, determining the focus of attention in settings where people's movement is unconstrained. VFOA-W estimation is a new and important problem with implications for behavior understanding and cognitive science, as well as real-world applications. One such application, which we present in this article, monitors the attention passers-by pay to an outdoor advertisement. Our approach to the VFOA-W problem proposes a multi-person tracking solution based on a dynamic Bayesian network that simultaneously infers the (variable) number of people in a scene, their body locations, their head locations, and their head pose. For efficient inference in the resulting large variable-dimensional state-space we propose a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampling scheme, as well as a novel global observation model which determines the number of people in the scene and localizes them. We propose a Gaussian Mixture Model (GMM)- and Hidden Markov Model (HMM)-based VFOA-W model which uses head pose and location information to determine people's focus state. Our models are evaluated for tracking performance and for the ability to recognize people looking at an outdoor advertisement, with results indicating good performance on sequences where a moderate number of people pass in front of an advertisement.
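
    The core of the GMM-based focus decision described here is a likelihood comparison: given a head pose, is it better explained by a "focused on the advertisement" Gaussian or by a "looking elsewhere" Gaussian? A hedged one-dimensional sketch (pan angle only; the component means, variances, and prior are invented for illustration):

    ```python
    import math

    # Toy two-component Gaussian classifier for the focus state.
    # Real systems would use full head-pose (pan/tilt) and location features
    # with parameters learned from data; these values are illustrative.

    def gauss_pdf(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def focus_state(pan_deg, prior_focused=0.5):
        """Return True if P(focused | pan) exceeds P(unfocused | pan)."""
        p_focused = gauss_pdf(pan_deg, mu=0.0, sigma=15.0) * prior_focused        # facing the ad
        p_unfocused = gauss_pdf(pan_deg, mu=60.0, sigma=40.0) * (1 - prior_focused)  # looking away
        return p_focused > p_unfocused

    looking = focus_state(5.0)        # head nearly frontal
    not_looking = focus_state(80.0)   # head turned well away
    ```

    The HMM variant in the paper adds temporal smoothing over these per-frame decisions, so brief pose jitter does not flip the inferred focus state.
    
    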

  5. Prefrontal contributions to visual selective attention.

    PubMed

    Squire, Ryan F; Noudoost, Behrad; Schafer, Robert J; Moore, Tirin

    2013-07-08

    The faculty of attention endows us with the capacity to process important sensory information selectively while disregarding information that is potentially distracting. Much of our understanding of the neural circuitry underlying this fundamental cognitive function comes from neurophysiological studies within the visual modality. Past evidence suggests that a principal function of the prefrontal cortex (PFC) is selective attention and that this function involves the modulation of sensory signals within posterior cortices. In this review, we discuss recent progress in identifying the specific prefrontal circuits controlling visual attention and its neural correlates within the primate visual system. In addition, we examine the persisting challenge of precisely defining how behavior should be affected when attentional function is lost.

  6. More than a filter: Feature-based attention regulates the distribution of visual working memory resources.

    PubMed

    Dube, Blaire; Emrich, Stephen M; Al-Aidroos, Naseem

    2017-10-01

    Across 2 experiments we revisited the filter account of how feature-based attention regulates visual working memory (VWM). Originally drawing from discrete-capacity ("slot") models, the filter account proposes that attention operates like the "bouncer in the brain," preventing distracting information from being encoded so that VWM resources are reserved for relevant information. Given recent challenges to the assumptions of discrete-capacity models, we investigated whether feature-based attention plays a broader role in regulating memory. Both experiments used partial report tasks in which participants memorized the colors of circle and square stimuli, and we provided a feature-based goal by manipulating the likelihood that 1 shape would be probed over the other across a range of probabilities. By decomposing participants' responses using mixture and variable-precision models, we estimated the contributions of guesses, nontarget responses, and imprecise memory representations to their errors. Consistent with the filter account, participants were less likely to guess when the probed memory item matched the feature-based goal. Interestingly, this effect varied with goal strength, even across high probabilities where goal-matching information should always be prioritized, demonstrating strategic control over filter strength. Beyond this effect of attention on which stimuli were encoded, we also observed effects on how they were encoded: Estimates of both memory precision and nontarget errors varied continuously with feature-based attention. The results offer support for an extension to the filter account, where feature-based attention dynamically regulates the distribution of resources within working memory so that the most relevant items are encoded with the greatest precision. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
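
    The mixture-model decomposition this record relies on treats each recall error as coming from one of three sources: a noisy report of the target, a swap to a nontarget item, or a uniform guess. A minimal sketch of the mixture density, with a Gaussian standing in for the von Mises typically used on circular color spaces (all parameter values are illustrative):

    ```python
    import math

    # Three-component mixture density for a recall error:
    # target report (with limited precision) + nontarget swaps + uniform guesses.

    def gauss_pdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    def mixture_density(response, target, nontargets, p_guess, p_nontarget, sd,
                        feature_range=360.0):
        """p(response) under the guess / nontarget / target mixture."""
        p_target = 1.0 - p_guess - p_nontarget
        density = p_guess / feature_range                       # uniform guessing
        if nontargets:                                          # swap errors
            density += p_nontarget * sum(gauss_pdf(response, nt, sd)
                                         for nt in nontargets) / len(nontargets)
        density += p_target * gauss_pdf(response, target, sd)   # target report
        return density

    density_at_target = mixture_density(0.0, target=0.0, nontargets=[90.0],
                                        p_guess=0.1, p_nontarget=0.1, sd=20.0)
    ```

    Fitting maximizes the summed log of this density over trials; the fitted `p_guess`, `p_nontarget`, and `sd` then serve as the guess-rate, swap-rate, and precision estimates discussed in the abstract.
    
    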

  7. Identifying Bottom-Up and Top-Down Components of Attentional Weight by Experimental Analysis and Computational Modeling

    ERIC Educational Resources Information Center

    Nordfang, Maria; Dyrholm, Mads; Bundesen, Claus

    2013-01-01

    The attentional weight of a visual object depends on the contrast of the features of the object to its local surroundings (feature contrast) and the relevance of the features to one's goals (feature relevance). We investigated the dependency in partial report experiments with briefly presented stimuli but unspeeded responses. The task was to…

  8. Auditory Attentional Capture during Serial Recall: Violations at Encoding of an Algorithm-Based Neural Model?

    ERIC Educational Resources Information Center

    Hughes, Robert W.; Vachon, Francois; Jones, Dylan M.

    2005-01-01

    A novel attentional capture effect is reported in which visual-verbal serial recall was disrupted if a single deviation in the interstimulus interval occurred within otherwise regularly presented task-irrelevant spoken items. The degree of disruption was the same whether the temporal deviant was embedded in a sequence made up of a repeating item…

  9. Towards a Cognitive Model of Distraction by Auditory Novelty: The Role of Involuntary Attention Capture and Semantic Processing

    ERIC Educational Resources Information Center

    Parmentier, Fabrice B. R.

    2008-01-01

    Unexpected auditory stimuli are potent distractors, able to break through selective attention and disrupt performance in an unrelated visual task. This study examined the processing fate of novel sounds by examining the extent to which their semantic content is analyzed and whether the outcome of this processing can impact on subsequent behavior.…

  10. Object Selection Costs in Visual Working Memory: A Diffusion Model Analysis of the Focus of Attention

    ERIC Educational Resources Information Center

    Sewell, David K.; Lilburn, Simon D.; Smith, Philip L.

    2016-01-01

    A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can…
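
    The diffusion-model analysis named in this title rests on a simple generative idea: noisy evidence accumulates toward one of two decision boundaries, and the first boundary crossed determines both the response and its time. An illustrative simulation sketch (toy parameters, not fits to the study's data):

    ```python
    import random

    # One diffusion trial: Euler simulation of drift plus Gaussian noise
    # until an absorbing boundary at +boundary or -boundary is reached.

    def simulate_trial(drift, boundary=1.0, dt=0.001, noise_sd=1.0, rng=None):
        """Return (choice, decision_time) for a single trial."""
        rng = rng or random.Random()
        x, t = 0.0, 0.0
        step_sd = noise_sd * dt ** 0.5   # noise scales with sqrt(dt)
        while abs(x) < boundary:
            x += drift * dt + rng.gauss(0.0, step_sd)
            t += dt
        return (1 if x > 0 else 0), t

    rng = random.Random(0)
    trials = [simulate_trial(drift=1.5, rng=rng) for _ in range(200)]
    accuracy = sum(choice for choice, _ in trials) / len(trials)
    ```

    In applications like the one above, the interesting quantities are the fitted drift rate and nondecision time; a cost of orienting the focus of attention would show up as a change in those parameters across conditions.
    
    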

  11. More is still not better: testing the perturbation model of temporal reference memory across different modalities and tasks.

    PubMed

    Ogden, Ruth S; Jones, Luke A

    2009-05-01

    The ability of the perturbation model (Jones & Wearden, 2003) to account for reference memory function in a visual temporal generalization task and auditory and visual reproduction tasks was examined. In all tasks the number of presentations of the standard was manipulated (1, 3, or 5), and its effect on performance was compared. In visual temporal generalization the number of presentations of the standard did not affect the number of times the standard was correctly identified, nor did it affect the overall temporal generalization gradient. In auditory reproduction there was no effect of the number of times the standard was presented on mean reproductions. In visual reproduction mean reproductions were shorter when the standard was only presented once; however, this effect was reduced when a visual cue was provided before the first presentation of the standard. Whilst the results of all experiments are best accounted for by the perturbation model, there appears to be some attentional benefit to multiple presentations of the standard in visual reproduction.

  12. Attention in natural scenes: Affective-motivational factors guide gaze independently of visual salience.

    PubMed

    Schomaker, Judith; Walper, Daniel; Wittmann, Bianca C; Einhäuser, Wolfgang

    2017-04-01

    In addition to low-level stimulus characteristics and current goals, our previous experience with stimuli can also guide attentional deployment. It remains unclear, however, whether such effects act independently or whether they interact in guiding attention. In the current study, we presented natural scenes including everyday objects that differed in affective-motivational impact. In the first free-viewing experiment, we presented visually-matched triads of scenes in which one critical object was replaced that varied mainly in terms of motivational value, but also in terms of valence and arousal, as confirmed by ratings by a large set of observers. Treating motivation as a categorical factor, we found that it affected gaze. A linear-effect model showed that arousal, valence, and motivation predicted fixations above and beyond visual characteristics, like object size, eccentricity, or visual salience. In a second experiment, we experimentally investigated whether the effects of emotion and motivation could be modulated by visual salience. In a medium-salience condition, we presented the same unmodified scenes as in the first experiment. In a high-salience condition, we retained the saturation of the critical object in the scene and decreased the saturation of the background, and in a low-salience condition, we desaturated the critical object while retaining the original saturation of the background. We found that highly salient objects guided gaze, but still found additional additive effects of arousal, valence, and motivation, confirming that higher-level factors can also guide attention, as measured by fixations towards objects in natural scenes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Image quality assessment by preprocessing and full reference model combination

    NASA Astrophysics Data System (ADS)

    Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.

    2009-01-01

    This paper focuses on full-reference image quality assessment and presents different computational strategies aimed to improve the robustness and accuracy of some well-known and widely used state-of-the-art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model could allow a better fit of the psycho-visual data of the LIVE Image Quality assessment Database Release 2. We show that the proposed quality assessment metric better correlates with the experimental data.
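
    The combination strategy described here amounts to weighting a per-pixel quality map (such as an SSIM map) by a visual-attention (saliency) map before pooling it into a single score, so that errors in attended regions count more. A hedged sketch with toy arrays; in practice both maps would come from an SSIM implementation and a saliency model:

    ```python
    # Saliency-weighted pooling of a full-reference quality map.
    # Both maps are small illustrative 2x2 arrays, not real model outputs.

    def attention_weighted_pool(quality_map, saliency_map):
        """Mean of the quality map weighted by normalized saliency."""
        total_saliency = sum(sum(row) for row in saliency_map)
        score = 0.0
        for q_row, s_row in zip(quality_map, saliency_map):
            for q, s in zip(q_row, s_row):
                score += q * s
        return score / total_saliency

    quality  = [[0.9, 0.5],
                [0.8, 0.4]]
    saliency = [[1.0, 3.0],    # attention concentrated on the degraded regions
                [1.0, 3.0]]

    weighted_score = attention_weighted_pool(quality, saliency)
    ```

    Because saliency here is highest where quality is lowest, the weighted score falls below the plain mean of the quality map, mimicking how attention to degraded regions lowers perceived quality.
    
    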

  14. Deficits in vision and visual attention associated with motor performance of very preterm/very low birth weight children.

    PubMed

    Geldof, Christiaan J A; van Hus, Janeline W P; Jeukens-Visser, Martine; Nollet, Frans; Kok, Joke H; Oosterlaan, Jaap; van Wassenaer-Leemhuis, Aleid G

    2016-01-01

    To extend understanding of impaired motor functioning of very preterm (VP)/very low birth weight (VLBW) children by investigating its relationship with visual attention, visual and visual-motor functioning. Motor functioning (Movement Assessment Battery for Children, MABC-2; Manual Dexterity, Aiming & Catching, and Balance component), as well as visual attention (attention network and visual search tests), vision (oculomotor, visual sensory and perceptive functioning), visual-motor integration (Beery Visual Motor Integration), and neurological status (Touwen examination) were comprehensively assessed in a sample of 106 5.5-year-old VP/VLBW children. Stepwise linear regression analyses were conducted to investigate multivariate associations between deficits in visual attention, oculomotor, visual sensory, perceptive and visual-motor integration functioning, abnormal neurological status, neonatal risk factors, and MABC-2 scores. Abnormal MABC-2 Total or component scores occurred in 23-36% of VP/VLBW children. Visual and visual-motor functioning accounted for 9-11% of variance in MABC-2 Total, Manual Dexterity and Balance scores. Visual perceptive deficits only were associated with Aiming & Catching. Abnormal neurological status accounted for an additional 19-30% of variance in MABC-2 Total, Manual Dexterity and Balance scores, and 5% of variance in Aiming & Catching, and neonatal risk factors for 3-6% of variance in MABC-2 Total, Manual Dexterity and Balance scores. Motor functioning is weakly associated with visual and visual-motor integration deficits and moderately associated with abnormal neurological status, indicating that motor performance reflects long term vulnerability following very preterm birth, and that visual deficits are of minor importance in understanding motor functioning of VP/VLBW children. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Functional size of human visual area V1: a neural correlate of top-down attention.

    PubMed

    Verghese, Ashika; Kolbe, Scott C; Anderson, Andrew J; Egan, Gary F; Vidyasagar, Trichur R

    2014-06-01

    Heavy demands are placed on the brain's attentional capacity when selecting a target item in a cluttered visual scene, or when reading. It is widely accepted that such attentional selection is mediated by top-down signals from higher cortical areas to early visual areas such as the primary visual cortex (V1). Further, it has also been reported that there is considerable variation in the surface area of V1. This variation may impact on either the number or specificity of attentional feedback signals and, thereby, the efficiency of attentional mechanisms. In this study, we investigated whether individual differences between humans performing attention-demanding tasks can be related to the functional area of V1. We found that those with a larger representation in V1 of the central 12° of the visual field as measured using BOLD signals from fMRI were able to perform a serial search task at a faster rate. In line with recent suggestions of the vital role of visuo-spatial attention in reading, the speed of reading showed a strong positive correlation with the speed of visual search, although it showed little correlation with the size of V1. The results support the idea that the functional size of the primary visual cortex is an important determinant of the efficiency of selective spatial attention for simple tasks, and that the attentional processing required for complex tasks like reading are to a large extent determined by other brain areas and inter-areal connections. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Visual short-term memory always requires general attention.

    PubMed

    Morey, Candice C; Bieler, Malte

    2013-02-01

    The role of attention in visual memory remains controversial; while some evidence has suggested that memory for binding between features demands no more attention than does memory for the same features, other evidence has indicated cognitive costs or mnemonic benefits for explicitly attending to bindings. We attempted to reconcile these findings by examining how memory for binding, for features, and for features during binding is affected by a concurrent attention-demanding task. We demonstrated that performing a concurrent task impairs memory for as few as two visual objects, regardless of whether each object includes one or more features. We argue that this pattern of results reflects an essential role for domain-general attention in visual memory, regardless of the simplicity of the to-be-remembered stimuli. We then discuss the implications of these findings for theories of visual working memory.

  17. Age-equivalent top-down modulation during cross-modal selective attention.

    PubMed

    Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam

    2014-12-01

    Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.

  18. Brain activity associated with selective attention, divided attention and distraction.

    PubMed

    Salo, Emma; Salmela, Viljami; Salmi, Juha; Numminen, Jussi; Alho, Kimmo

    2017-06-01

    Top-down controlled selective or divided attention to sounds and visual objects, as well as bottom-up triggered attention to auditory and visual distractors, has been widely investigated. However, no study has systematically compared brain activations related to all these types of attention. To this end, we used functional magnetic resonance imaging (fMRI) to measure brain activity in participants performing a tone pitch or a foveal grating orientation discrimination task, or both, distracted by novel sounds not sharing frequencies with the tones or by extrafoveal visual textures. To force focusing of attention to tones or gratings, or both, task difficulty was kept constantly high with an adaptive staircase method. A whole brain analysis of variance (ANOVA) revealed fronto-parietal attention networks for both selective auditory and visual attention. A subsequent conjunction analysis indicated partial overlaps of these networks. However, like some previous studies, the present results also suggest segregation of prefrontal areas involved in the control of auditory and visual attention. The ANOVA also suggested, and another conjunction analysis confirmed, an additional activity enhancement in the left middle frontal gyrus related to divided attention supporting the role of this area in top-down integration of dual task performance. Distractors expectedly disrupted task performance. However, contrary to our expectations, activations specifically related to the distractors were found only in the auditory and visual cortices. This suggests gating of the distractors from further processing perhaps due to strictly focused attention in the current demanding discrimination tasks. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. The visual attention span deficit in Chinese children with reading fluency difficulty.

    PubMed

    Zhao, Jing; Liu, Menglian; Liu, Hanlong; Huang, Chen

    2018-02-01

    With reading development, some children fail to learn to read fluently. However, reading fluency difficulty (RFD) has not been fully investigated. The present study explored the underlying mechanism of RFD from the aspect of visual attention span. Fourteen Chinese children with RFD and fourteen age-matched normal readers participated. The visual 1-back task was adopted to examine visual attention span. Reaction time and accuracy were recorded, and relevant d-prime (d') scores were computed. Results showed that children with RFD exhibited lower accuracy and lower d' values than the controls did in the visual 1-back task, revealing a visual attention span deficit. Further analyses on d' values revealed that the attention distribution seemed to exhibit an inverted U-shaped pattern without lateralization for normal readers, but a W-shaped pattern with a rightward bias for children with RFD, which was discussed based on between-group variation in reading strategies. Results of the correlation analyses showed that visual attention span was associated with reading fluency at the sentence level for normal readers, but was related to reading fluency at the single-character level for children with RFD. The different patterns in correlations between groups revealed that visual attention span might be affected by the variation in reading strategies. The current findings extend previous data from alphabetic languages to Chinese, a logographic language with a particularly deep orthography, and have implications for reading-dysfluency remediation. Copyright © 2017 Elsevier Ltd. All rights reserved.
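
    The d' (d-prime) scores analyzed in this record are a standard signal-detection measure computed from hit and false-alarm rates via the inverse normal CDF. A minimal sketch using the standard library's `statistics.NormalDist` (the example rates are illustrative):

    ```python
    from statistics import NormalDist

    # d' = z(hit rate) - z(false-alarm rate), with z the inverse normal CDF.
    # Extreme rates of exactly 0 or 1 would need a correction (e.g. log-linear)
    # before this formula applies; that step is omitted here.

    def d_prime(hit_rate, fa_rate):
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    sensitivity = d_prime(0.85, 0.15)   # illustrative 1-back performance
    ```

    Higher d' indicates better discrimination of repeated from non-repeated characters in the 1-back task, independent of response bias.
    
    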

  20. Visual spatial attention enhances the amplitude of positive and negative fMRI responses to visual stimulation in an eccentricity-dependent manner.

    PubMed

    Bressler, David W; Fortenbaugh, Francesca C; Robertson, Lynn C; Silver, Michael A

    2013-06-07

    Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. Copyright © 2013 Elsevier Ltd. All rights reserved.
