Stimulus information contaminates summation tests of independent neural representations of features
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2002-01-01
Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with an increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli was chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not the combination across independent features.
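The summation logic described in this abstract can be made concrete with a standard signal-detection sketch (this is not the authors' stimuli or analysis, and all numbers are illustrative): if features are coded in independent channels, their detectabilities d' combine as the root sum of squares, and proportion correct in a two-alternative forced-choice (2AFC) task follows from the normal CDF.

```python
import math

def combined_dprime(dprimes):
    # Ideal summation across independent feature channels: root sum of squares.
    return math.sqrt(sum(d * d for d in dprimes))

def phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pc_2afc(dprime):
    # Proportion correct in a 2AFC task for a given sensitivity d'.
    return phi(dprime / math.sqrt(2.0))

# One feature vs. two equally detectable, independent features:
single = pc_2afc(combined_dprime([1.0]))
double = pc_2afc(combined_dprime([1.0, 1.0]))
```

The abstract's point is that such performance gains can also arise from increased stimulus information, so a summation comparison like the one above is diagnostic of independent feature coding only when stimulus information is equated across conditions.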
Nagai, Takehiro; Matsushima, Toshiki; Koida, Kowa; Tani, Yusuke; Kitazaki, Michiteru; Nakauchi, Shigeki
2015-10-01
Humans can easily recognize material categories of objects, such as glass, stone, and plastic, by sight. However, little is known about the kinds of surface quality features that contribute to such material class recognition. In this paper, we examine the relationship between perceptual surface features and material category discrimination performance for pictures of materials, focusing on temporal aspects, including reaction time and effects of stimulus duration. The stimuli were pictures of objects with an identical shape but made of different materials that could be categorized into seven classes (glass, plastic, metal, stone, wood, leather, and fabric). In a pre-experiment, observers rated the pictures on nine surface features, including visual (e.g., glossiness and transparency) and non-visual features (e.g., heaviness and warmness), on a 7-point scale. In the main experiments, observers judged whether two simultaneously presented pictures were classified as the same or different material category. Reaction times and effects of stimulus duration were measured. The results showed that visual feature ratings were correlated with material discrimination performance for short reaction times or short stimulus durations, while non-visual feature ratings were correlated only with performance for long reaction times or long stimulus durations. These results suggest that the mechanisms underlying visual and non-visual feature processing may differ in terms of processing time, although the cause is unclear. Visual surface features may mainly contribute to material recognition in daily life, while non-visual features may contribute only weakly, if at all. Copyright © 2014 Elsevier Ltd. All rights reserved.
Preattentive binding of auditory and visual stimulus features.
Winkler, István; Czigler, István; Sussman, Elyse; Horváth, János; Balázs, Lászlo
2005-02-01
We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended or task-irrelevant and ignored. In the latter case, the primary task was a continuous demanding within-modality task. The test sequences were composed of two frequently occurring stimuli, which differed from each other in two stimulus features (standard stimuli) and two infrequently occurring stimuli (deviants), which combined one feature from one standard stimulus with the other feature of the other standard stimulus. Deviant stimuli elicited MMN responses of similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities, (2) there exist preattentive processes of feature binding; however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.
Gestalt perception modulates early visual processing.
Herrmann, C S; Bosch, V
2001-04-17
We examined whether early visual processing reflects perceptual properties of a stimulus in addition to physical features. We recorded event-related potentials (ERPs) of 13 subjects in a visual classification task. We used four different stimuli which were all composed of four identical elements. One of the stimuli constituted an illusory Kanizsa square, another was composed of the same number of collinear line segments but the elements did not form a Gestalt. In addition, a target and a control stimulus were used which were arranged differently. These stimuli allow us to differentiate the processing of collinear line elements (stimulus features) and illusory figures (perceptual properties). The visual N170 in response to the illusory figure was significantly larger than that in response to the other collinear stimulus. This is taken to indicate that the visual N170 reflects cognitive processes of Gestalt perception in addition to attentional processes and physical stimulus properties.
Can responses to basic non-numerical visual features explain neural numerosity responses?
Harvey, Ben M; Dumoulin, Serge O
2017-04-01
Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs. Copyright © 2017 Elsevier Inc. All rights reserved.
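The model-comparison step in this abstract can be illustrated in miniature (the actual analysis fits population receptive field models to 7T fMRI time courses; the numbers and variable names below are hypothetical): each candidate model makes one prediction per stimulus configuration, and the better model is the one whose predictions track the observed responses more closely.

```python
def pearson_r(xs, ys):
    # Pearson correlation between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical response amplitudes for one recording site across five
# stimulus configurations, plus two candidate predictors:
observed   = [0.90, 0.70, 0.50, 0.35, 0.20]  # measured fMRI amplitudes
numerosity = [1.0, 2.0, 3.0, 4.0, 5.0]       # numerosity per configuration
density    = [1.0, 0.80, 0.90, 0.30, 0.40]   # a non-numerical feature

r_numerosity = abs(pearson_r(observed, numerosity))
r_density = abs(pearson_r(observed, density))
```

In this toy comparison the numerosity predictor wins; the study's conclusion rests on the analogous result holding across configurations whose non-numerical features vary considerably.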
Expectation and Surprise Determine Neural Population Responses in the Ventral Visual Stream
Egner, Tobias; Monti, Jim M.; Summerfield, Christopher
2014-01-01
Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogeneous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se. PMID:21147999
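A cartoon version of the two-unit account (not the authors' computational model; the unit weights and probability values are arbitrary) shows why summing prediction and prediction-error activity makes the face/house response difference shrink as face expectation rises:

```python
def ffa_response(is_face, p_face):
    # Toy predictive-coding sum: representational units scale with the
    # expected probability of a face; error units scale with the mismatch
    # between the stimulus (face = 1, house = 0) and that expectation.
    expectation = p_face
    surprise = abs((1.0 if is_face else 0.0) - p_face)
    return expectation + surprise

# Face-minus-house response difference at low vs. higher face expectation:
diff_low = ffa_response(True, 0.10) - ffa_response(False, 0.10)
diff_high = ffa_response(True, 0.50) - ffa_response(False, 0.50)
```

In this cartoon the two stimulus categories become indistinguishable once prediction and error contributions balance, whereas a pure feature detector would predict a constant face/house difference at every expectation level.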
Fox, Olivia M.; Harel, Assaf; Bennett, Kevin B.
2017-01-01
The perception of a visual stimulus is dependent not only upon local features, but also on the arrangement of those features. When stimulus features are perceptually well organized (e.g., symmetric or parallel), a global configuration with a high degree of salience emerges from the interactions between these features, often referred to as emergent features. Emergent features can be demonstrated in the Configural Superiority Effect (CSE): presenting a stimulus within an organized context relative to its presentation in a disarranged one results in better performance. Prior neuroimaging work on the perception of emergent features regards the CSE as an “all or none” phenomenon, focusing on the contrast between configural and non-configural stimuli. However, it is still not clear how emergent features are processed between these two endpoints. The current study examined the extent to which behavioral and neuroimaging markers of emergent features are responsive to the degree of configurality in visual displays. Subjects were tasked with reporting the anomalous quadrant in a visual search task while being scanned. Degree of configurality was manipulated by incrementally varying the rotational angle of low-level features within the stimulus arrays. Behaviorally, we observed faster response times with increasing levels of configurality. These behavioral changes were accompanied by increases in response magnitude across multiple visual areas in occipito-temporal cortex, primarily early visual cortex and object-selective cortex. Our findings suggest that the neural correlates of emergent features can be observed even in response to stimuli that are not fully configural, and demonstrate that configural information is already present at early stages of the visual hierarchy. PMID:28167924
Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel
2012-01-01
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479
Mendoza-Halliday, Diego; Martinez-Trujillo, Julio C.
2017-01-01
The primate lateral prefrontal cortex (LPFC) encodes visual stimulus features while they are perceived and while they are maintained in working memory. However, it remains unclear whether perceived and memorized features are encoded by the same or different neurons and population activity patterns. Here we record LPFC neuronal activity while monkeys perceive the motion direction of a stimulus that remains visually available, or memorize the direction if the stimulus disappears. We find neurons with a wide variety of combinations of coding strength for perceived and memorized directions: some neurons encode both to similar degrees while others preferentially or exclusively encode either one. Reading out the combined activity of all neurons, a machine-learning algorithm reliably decodes the motion direction and determines whether it is perceived or memorized. Our results indicate that a functionally diverse population of LPFC neurons provides a substrate for discriminating between perceptual and mnemonic representations of visual features. PMID:28569756
Perceptual grouping enhances visual plasticity.
Mastropasqua, Tommaso; Turatto, Massimo
2013-01-01
Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.
Babiloni, Claudio; Marzano, Nicola; Soricelli, Andrea; Cordone, Susanna; Millán-Calenti, José Carlos; Del Percio, Claudio; Buján, Ana
2016-01-01
This article reviews three experiments on event-related potentials (ERPs) testing the hypothesis that primary visual consciousness (stimulus self-report) is related to enhanced cortical neural synchronization as a function of stimulus features. ERP peak latency and sources were compared between “seen” trials and “not seen” trials, respectively related and unrelated to the primary visual consciousness. Three salient features of visual stimuli were considered (visuospatial, emotional face expression, and written words). Results showed the typical visual ERP components in both “seen” and “not seen” trials. There was no statistical difference in the ERP peak latencies between the “seen” and “not seen” trials, suggesting a similar timing of the cortical neural synchronization regardless of the primary visual consciousness. In contrast, ERP sources showed differences between “seen” and “not seen” trials. For the visuospatial stimuli, the primary consciousness was related to higher activity in dorsal occipital and parietal sources at about 400 ms post-stimulus. For the emotional face expressions, there was greater activity in parietal and frontal sources at about 180 ms post-stimulus. For the written letters, there was higher activity in occipital, parietal and temporal sources at about 230 ms post-stimulus. These results hint that primary visual consciousness is associated with an enhanced cortical neural synchronization having entirely different spatiotemporal characteristics as a function of the features of the visual stimuli and possibly, the relative qualia (i.e., visuospatial, face expression, and words). In this framework, the dorsal visual stream may be synchronized in association with the primary consciousness of visuospatial and emotional face contents. Analogously, both dorsal and ventral visual streams may be synchronized in association with the primary consciousness of linguistic contents.
In this line of reasoning, the ensemble of the cortical neural networks underpinning the single visual features would constitute a sort of multi-dimensional palette of colors, shapes, regions of the visual field, movements, emotional face expressions, and words. The synchronization of one or more of these cortical neural networks, each with its peculiar timing, would produce the primary consciousness of one or more of the visual features of the scene. PMID:27445750
Physical Features of Visual Images Affect Macaque Monkey’s Preference for These Images
Funahashi, Shintaro
2016-01-01
Animals exhibit different degrees of preference toward various visual stimuli. In addition, it has been shown that strongly preferred stimuli can often act as a reward. The aim of the present study was to determine what features determine the strength of the preference for visual stimuli in order to examine neural mechanisms of preference judgment. We used 50 color photographs obtained from the Flickr Material Database (FMD) as original stimuli. Four macaque monkeys performed a simple choice task, in which two stimuli selected randomly from among the 50 stimuli were simultaneously presented on a monitor and monkeys were required to choose either stimulus by eye movements. We considered that a monkey preferred the chosen stimulus if it continued to look at the stimulus for an additional 6 s, and calculated a choice ratio for each stimulus. Each monkey exhibited a different choice ratio for each of the original 50 stimuli. They tended to select clear, colorful and in-focus stimuli. Complexity and clarity were stronger determinants of preference than colorfulness. Images that included greater amounts of spatial frequency components were selected more frequently. These results indicate that particular physical features of the stimulus can affect the strength of a monkey’s preference and that the complexity, clarity and colorfulness of the stimulus are important determinants of this preference. Neurophysiological studies would be needed to examine whether these features of visual stimuli produce more activation in neurons that participate in this preference judgment. PMID:27853424
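The choice-ratio measure described in this abstract reduces to simple counting over pairwise-choice trials; a minimal sketch (the stimulus names and trial data below are hypothetical):

```python
from collections import defaultdict

def choice_ratios(trials):
    # trials: list of (stimulus_a, stimulus_b, chosen) tuples from a
    # two-alternative choice task. Returns times-chosen / times-presented
    # for each stimulus.
    presented = defaultdict(int)
    chosen = defaultdict(int)
    for a, b, pick in trials:
        presented[a] += 1
        presented[b] += 1
        chosen[pick] += 1
    return {s: chosen[s] / presented[s] for s in presented}

trials = [("img01", "img02", "img01"),
          ("img01", "img03", "img01"),
          ("img02", "img03", "img03")]
ratios = choice_ratios(trials)
```

Per-stimulus ratios computed this way can then be regressed against physical image features (e.g., spatial frequency content) to identify determinants of preference, as the study does.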
Rolke, Bettina; Festl, Freya; Seibold, Verena C
2016-11-01
We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogeneous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.
Size matters: large objects capture attention in visual search.
Proulx, Michael J
2010-12-23
Can objects or events ever capture one's attention in a purely stimulus-driven manner? A recent review of the literature set out the criteria required to find stimulus-driven attentional capture independent of goal-directed influences, and concluded that no published study has satisfied those criteria. Here, visual search experiments assessed whether an irrelevantly large object can capture attention. Capture of attention by this static visual feature was found. The results suggest that a large object can indeed capture attention in a stimulus-driven manner and independent of displaywide features of the task that might encourage a goal-directed bias for large items. It is concluded that these results are either consistent with the stimulus-driven criteria published previously or alternatively consistent with a flexible, goal-directed mechanism of saliency detection.
Response-specifying cue for action interferes with perception of feature-sharing stimuli.
Nishimura, Akio; Yokosawa, Kazuhiko
2010-06-01
Perceiving a visual stimulus is more difficult when a to-be-executed action is compatible with that stimulus, which is known as blindness to response-compatible stimuli. The present study explored how the factors constituting the action event (i.e., response-specifying cue, response intention, and response feature) affect the occurrence of this blindness effect. The response-specifying cue varied along the horizontal and vertical dimensions, while the response buttons were arranged diagonally. Participants responded based on one dimension randomly determined in a trial-by-trial manner. The response intention varied along a single dimension, whereas the response location and the response-specifying cue varied within both vertical and horizontal dimensions simultaneously. Moreover, the compatibility between the visual stimulus and the response location and the compatibility between that stimulus and the response-specifying cue were determined separately. The blindness effect emerged exclusively based on the feature correspondence between the response-specifying cue of the action task and the visual target of the perceptual task. The size of this stimulus-stimulus (S-S) blindness effect did not differ significantly across conditions, showing no effect of response intention and response location. This finding emphasizes the effect of stimulus factors, rather than response factors, of the action event as a source of the blindness to response-compatible stimuli.
Attention improves encoding of task-relevant features in the human visual cortex.
Jehee, Janneke F M; Brady, Devin K; Tong, Frank
2011-06-01
When spatial attention is directed toward a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer's task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in visual cortical areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature and not when the contrast of the grating had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks, but color-selective responses were enhanced only when color was task relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information.
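The decoding logic behind "measuring the strength of orientation-selective activity patterns" can be sketched with a deliberately simple nearest-centroid classifier on synthetic "voxel" patterns (the study used multivariate pattern analysis on real fMRI data; everything below, including the labels and numbers, is illustrative):

```python
def centroid(patterns):
    # Mean pattern across trials (element-wise average of voxel vectors).
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def decode(pattern, centroids):
    # Assign a voxel pattern to the class with the nearest centroid (Euclidean).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

# Hypothetical training patterns (trials x voxels) for two orientations:
train = {"45deg":  [[1.0, 0.2, 0.1], [0.9, 0.3, 0.2]],
         "135deg": [[0.1, 0.9, 0.8], [0.2, 1.0, 0.7]]}
cents = {label: centroid(p) for label, p in train.items()}
label = decode([0.95, 0.25, 0.15], cents)
```

The study's key quantity is how reliably such decoding succeeds under different task conditions: attention enhanced decodable orientation information only when orientation was the task-relevant feature.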
Hardman, Kyle; Cowan, Nelson
2014-01-01
Visual working memory stores stimuli from our environment as representations that can be accessed by high-level control processes. This study addresses a longstanding debate in the literature about whether storage limits in visual working memory include a limit to the complexity of discrete items. We examined the issue with a number of change-detection experiments that used complex stimuli which possessed multiple features per stimulus item. We manipulated the number of relevant features of the stimulus objects in order to vary feature load. In all of our experiments, we found that increased feature load led to a reduction in change-detection accuracy. However, we found that feature load alone could not account for the results, but that a consideration of the number of relevant objects was also required. This study supports capacity limits for both feature and object storage in visual working memory. PMID:25089739
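Change-detection accuracy in this paradigm is conventionally summarized with Cowan's K, the estimated number of items held in memory. The abstract does not give the formula, so this is a sketch of the standard measure with made-up hit and false-alarm rates:

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    # Cowan's K for single-probe change detection: K = N * (H - FA),
    # where N is the set size, H the hit rate, FA the false-alarm rate.
    return set_size * (hit_rate - false_alarm_rate)

# Example: set size 6, 75% hits, 15% false alarms:
k = cowans_k(6, 0.75, 0.15)
```

Comparing K (or raw accuracy) across conditions that vary the number of relevant features per object is the kind of manipulation the study uses to separate feature load from object load.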
Setting and changing feature priorities in visual short-term memory.
Kalogeropoulou, Zampeta; Jagadeesh, Akshay V; Ohl, Sven; Rolfs, Martin
2017-04-01
Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM protecting information that would otherwise be forgotten.
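The report measures in this abstract (guess rate and report precision) typically come from fitting a mixture model in which report errors are drawn either from a von Mises distribution around the target (precision) or from a uniform guessing distribution. The sketch below implements the standard mixture density with hypothetical parameters; it follows the common modeling approach for such data, not necessarily the authors' exact fitting procedure:

```python
import math

def vonmises_pdf(x, kappa):
    # von Mises density centered on 0 over (-pi, pi]; the modified Bessel
    # function I0 is computed from its power series.
    def bessel_i0(z, terms=20):
        return sum((z / 2.0) ** (2 * k) / math.factorial(k) ** 2
                   for k in range(terms))
    return math.exp(kappa * math.cos(x)) / (2.0 * math.pi * bessel_i0(kappa))

def mixture_pdf(error, guess_rate, kappa):
    # Report-error density: guesses are uniform on the circle; non-guesses
    # cluster around the target with concentration kappa.
    return (guess_rate / (2.0 * math.pi)
            + (1.0 - guess_rate) * vonmises_pdf(error, kappa))
```

Fitting guess_rate and kappa separately per cue condition is what allows conclusions like "valid retro-cues reduced random guesses" to be drawn from continuous-report data.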
Perceptual grouping across eccentricity.
Tannazzo, Teresa; Kurylo, Daniel D; Bukhari, Farhan
2014-10-01
Across the visual field, progressive differences exist in neural processing as well as in perceptual abilities. Expansion of stimulus scale across eccentricity compensates for some basic visual capacities, but not for higher-order functions. It was hypothesized that, as with many higher-order functions, perceptual grouping ability should decline across eccentricity. To test this prediction, psychophysical measurements of grouping were made across eccentricity. Participants indicated the dominant grouping of dot grids in which grouping was based upon luminance, motion, orientation, or proximity. Across trials, the organization of stimuli was systematically decreased until perceived grouping became ambiguous. For all stimulus features, grouping ability remained relatively stable until 40°, beyond which thresholds were significantly elevated. The pattern of change across eccentricity varied with stimulus feature, in that stimulus scale, dot size, or stimulus size interacted with eccentricity effects. These results demonstrate that perceptual grouping of such stimuli is not reliant upon foveal viewing, and suggest that selection of dominant grouping patterns from ambiguous displays operates similarly across much of the visual field. Copyright © 2014 Elsevier Ltd. All rights reserved.
Huynh, Duong L; Tripathy, Srimant P; Bedell, Harold E; Ögmen, Haluk
2015-01-01
Human memory is content-addressable; i.e., contents of the memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: during stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue-report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be more effective as a cue compared to other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.
The effect of visual salience on memory-based choices.
Pooresmaeili, Arezoo; Bach, Dominik R; Dolan, Raymond J
2014-02-01
Deciding whether a stimulus is the "same" as or "different" from a previously presented one requires integrating incoming sensory information, working memory, and perceptual decision making. Visual selective attention plays a crucial role in selecting the relevant information that informs a subsequent course of action. Previous studies have mainly investigated the role of visual attention during the encoding phase of working memory tasks. In this study, we investigate whether manipulating bottom-up attention by changing stimulus visual salience affects later stages of memory-based decisions. In two experiments, we asked subjects to identify whether a stimulus had either the same or a different feature to that of a memorized sample. We manipulated visual salience of the test stimuli by varying a task-irrelevant feature contrast. Subjects chose a visually salient item more often when they looked for matching features and less often when they looked for a nonmatch. This pattern of results indicates that salient items are more likely to be identified as a match. We interpret the findings in terms of capacity limitations at a comparison stage, where a visually salient item is more likely to exhaust resources, leading it to be prematurely parsed as a match.
Dynamic binding of visual features by neuronal/stimulus synchrony.
Iwabuchi, A
1998-05-01
When people see a visual scene, certain parts of the visual scene are treated as belonging together and we regard them as a perceptual unit, which is called a "figure". People focus on figures, and the remaining parts of the scene are disregarded as "ground". In Gestalt psychology this process is called "figure-ground segregation". According to current perceptual psychology, a figure is formed by binding various visual features in a scene, and developments in neuroscience have revealed that there are many feature-encoding neurons, which respond to such features specifically. It is not known, however, how the brain binds different features of an object into a coherent visual object representation. Recently, the theory of binding by neuronal synchrony, which argues that feature binding is dynamically mediated by neuronal synchrony of feature-encoding neurons, has been proposed. This review article portrays the problem of figure-ground segregation and feature binding, summarizes neurophysiological and psychophysical experiments and theory relevant to feature binding by neuronal/stimulus synchrony, and suggests possible directions for future research on this topic.
Attention improves encoding of task-relevant features in the human visual cortex
Jehee, Janneke F.M.; Brady, Devin K.; Tong, Frank
2011-01-01
When spatial attention is directed towards a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer’s task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature, and not when the grating’s contrast had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks but color-selective responses were enhanced only when color was task-relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location, but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features, and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information. PMID:21632942
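The multivariate pattern analysis described above rests on classifying multi-voxel activity patterns. A toy nearest-centroid decoder on synthetic "voxel" data illustrates the idea; the voxel count, noise level, and correlation-based classifier are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_trials = 50, 100

# Synthetic voxel patterns: each orientation evokes a distinct mean pattern
# plus trial-by-trial noise (purely illustrative, not real fMRI data)
pattern_cw = rng.normal(0, 1, n_voxels)   # "clockwise" grating
pattern_ccw = rng.normal(0, 1, n_voxels)  # "counterclockwise" grating

def make_trials(pattern, n, noise=2.0):
    return pattern + rng.normal(0, noise, (n, n_voxels))

train = np.vstack([make_trials(pattern_cw, n_trials),
                   make_trials(pattern_ccw, n_trials)])
labels = np.array([0] * n_trials + [1] * n_trials)

# Nearest-centroid decoder: label a trial by which class-mean pattern
# it correlates with more strongly
centroids = np.array([train[labels == k].mean(axis=0) for k in (0, 1)])

def decode(trial):
    return int(np.argmax([np.corrcoef(trial, c)[0, 1] for c in centroids]))

test_set = np.vstack([make_trials(pattern_cw, 50), make_trials(pattern_ccw, 50)])
truth = np.array([0] * 50 + [1] * 50)
preds = np.array([decode(tr) for tr in test_set])
accuracy = float(np.mean(preds == truth))
print(f"decoding accuracy: {accuracy:.2f}")
```

Attentional enhancement of orientation-selective responses would show up in this framework as higher decoding accuracy when orientation is the task-relevant feature.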
Hardman, Kyle O; Cowan, Nelson
2015-03-01
Visual working memory stores stimuli from our environment as representations that can be accessed by high-level control processes. This study addresses a longstanding debate in the literature about whether storage limits in visual working memory include a limit to the complexity of discrete items. We examined the issue with a number of change-detection experiments that used complex stimuli that possessed multiple features per stimulus item. We manipulated the number of relevant features of the stimulus objects in order to vary feature load. In all of our experiments, we found that increased feature load led to a reduction in change-detection accuracy. However, we found that feature load alone could not account for the results but that a consideration of the number of relevant objects was also required. This study supports capacity limits for both feature and object storage in visual working memory. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J
2013-12-01
Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual only and combined conditions, time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio-only and visual-only conditions but did not differ from controls on the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.
Botly, Leigh C P; De Rosa, Eve
2012-10-01
The visual search task established the feature integration theory of attention in humans and measures visuospatial attentional contributions to feature binding. We recently demonstrated that the neuromodulator acetylcholine (ACh), from the nucleus basalis magnocellularis (NBM), supports the attentional processes required for feature binding using a rat digging-based task. Additional research has demonstrated cholinergic contributions from the NBM to visuospatial attention in rats. Here, we combined these lines of evidence and employed visual search in rats to examine whether cortical cholinergic input supports visuospatial attention specifically for feature binding. We trained 18 male Long-Evans rats to perform visual search using touch screen-equipped operant chambers. Sessions comprised Feature Search (no feature binding required) and Conjunctive Search (feature binding required) trials using multiple stimulus set sizes. Following acquisition of visual search, 8 rats received bilateral NBM lesions using 192 IgG-saporin to selectively reduce cholinergic afferentation of the neocortex, which we hypothesized would selectively disrupt the visuospatial attentional processes needed for efficient conjunctive visual search. As expected, relative to sham-lesioned rats, ACh-NBM-lesioned rats took significantly longer to locate the target stimulus on Conjunctive Search, but not Feature Search trials, thus demonstrating that cholinergic contributions to visuospatial attention are important for feature binding in rats.
Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S
2017-09-06
The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. 
Congruent audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors.
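The enhancement-versus-suppression classification summarized above is commonly expressed as a modulation index comparing audiovisual to visual-only responses. A sketch on synthetic responses; the index definition is a common convention in the multisensory literature, and the response distributions are invented for illustration rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_neurons = 200

# Synthetic trial-averaged responses (arbitrary units, illustrative only)
visual_only = rng.gamma(shape=2.0, scale=1.0, size=n_neurons)

# Congruent audiovisual condition modeled as multiplicative modulation
# centered on 1, so enhancement and suppression are balanced by construction
audiovisual = visual_only * rng.lognormal(mean=0.0, sigma=0.3, size=n_neurons)

# Modulation index in [-1, 1]: > 0 means enhancement, < 0 means suppression
mi = (audiovisual - visual_only) / (audiovisual + visual_only)
n_enhanced = int(np.sum(mi > 0))
n_suppressed = int(np.sum(mi < 0))
print(f"enhanced: {n_enhanced}, suppressed: {n_suppressed}")
```

An incongruent condition would correspond to shifting the modulation distribution below 1, tipping the population toward suppression.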
A Unifying Motif for Spatial and Directional Surround Suppression.
Liu, Liu D; Miller, Kenneth D; Pack, Christopher C
2018-01-24
In the visual system, the response to a stimulus in a neuron's receptive field can be modulated by stimulus context, and the strength of these contextual influences varies with stimulus intensity. Recent work has shown how a theoretical model, the stabilized supralinear network (SSN), can account for such modulatory influences, using a small set of computational mechanisms. Although the predictions of the SSN have been confirmed in primary visual cortex (V1), its computational principles apply with equal validity to any cortical structure. We have therefore tested the generality of the SSN by examining modulatory influences in the middle temporal area (MT) of the macaque visual cortex, using electrophysiological recordings and pharmacological manipulations. We developed a novel stimulus that can be adjusted parametrically to be larger or smaller in the space of all possible motion directions. We found, as predicted by the SSN, that MT neurons integrate across motion directions for low-contrast stimuli, but that they exhibit suppression by the same stimuli when they are high in contrast. These results are analogous to those found in visual cortex when stimulus size is varied in the space domain. We further tested the mechanisms of inhibition using pharmacological manipulations of inhibitory efficacy. As predicted by the SSN, local manipulation of inhibitory strength altered firing rates, but did not change the strength of surround suppression. These results are consistent with the idea that the SSN can account for modulatory influences along different stimulus dimensions and in different cortical areas. SIGNIFICANCE STATEMENT Visual neurons are selective for specific stimulus features in a region of visual space known as the receptive field, but can be modulated by stimuli outside of the receptive field. The SSN model has been proposed to account for these and other modulatory influences, and tested in V1.
As this model is not specific to any particular stimulus feature or brain region, we wondered whether similar modulatory influences might be observed for other stimulus dimensions and other regions. We tested for specific patterns of modulatory influences in the domain of motion direction, using electrophysiological recordings from MT. Our data confirm the predictions of the SSN in MT, suggesting that the SSN computations might be a generic feature of sensory cortex. Copyright © 2018 the authors.
Feature-selective attention in healthy old age: a selective decline in selective attention?
Quigley, Cliodhna; Müller, Matthias M
2014-02-12
Deficient selection against irrelevant information has been proposed to underlie age-related cognitive decline. We recently reported evidence for maintained early sensory selection when older and younger adults used spatial selective attention to perform a challenging task. Here we explored age-related differences when spatial selection is not possible and feature-selective attention must be deployed. We additionally compared the integrity of feedforward processing by exploiting the well established phenomenon of suppression of visual cortical responses attributable to interstimulus competition. Electroencephalogram was measured while older and younger human adults responded to brief occurrences of coherent motion in an attended stimulus composed of randomly moving, orientation-defined, flickering bars. Attention was directed to horizontal or vertical bars by a pretrial cue, after which two orthogonally oriented, overlapping stimuli or a single stimulus were presented. Horizontal and vertical bars flickered at different frequencies and thereby elicited separable steady-state visual-evoked potentials, which were used to examine the effect of feature-based selection and the competitive influence of a second stimulus on ongoing visual processing. Age differences were found in feature-selective attentional modulation of visual responses: older adults did not show consistent modulation of magnitude or phase. In contrast, the suppressive effect of a second stimulus was robust and comparable in magnitude across age groups, suggesting that bottom-up processing of the current stimuli is essentially unchanged in healthy old age. Thus, it seems that visual processing per se is unchanged, but top-down attentional control is compromised in older adults when space cannot be used to guide selection.
Effects of Temporal Features and Order on the Apparent Duration of a Visual Stimulus
Bruno, Aurelio; Ayhan, Inci; Johnston, Alan
2012-01-01
The apparent duration of a visual stimulus has been shown to be influenced by its speed. For low speeds, apparent duration increases linearly with stimulus speed. This effect has been ascribed to the number of changes that occur within a visual interval. Accordingly, a higher number of changes should produce an increase in apparent duration. In order to test this prediction, we asked subjects to compare the relative duration of a 10-Hz drifting comparison stimulus with a standard stimulus that contained a different number of changes in different conditions. The standard could be static, drifting at 10 Hz, or mixed (a combination of variable duration static and drifting intervals). In this last condition the number of changes was intermediate between the static and the continuously drifting stimulus. For all standard durations, the mixed stimulus looked significantly compressed (∼20% reduction) relative to the drifting stimulus. However, no difference emerged between the static (that contained no changes) and the mixed stimuli (which contained an intermediate number of changes). We also observed that when the standard was displayed first, it appeared compressed relative to when it was displayed second with a magnitude that depended on standard duration. These results are at odds with a model of time perception that simply reflects the number of temporal features within an interval in determining the perceived passing of time. PMID:22461778
Population Response Profiles in Early Visual Cortex Are Biased in Favor of More Valuable Stimuli
Saproo, Sameer
2010-01-01
Voluntary and stimulus-driven shifts of attention can modulate the representation of behaviorally relevant stimuli in early areas of visual cortex. In turn, attended items are processed faster and more accurately, facilitating the selection of appropriate behavioral responses. Information processing is also strongly influenced by past experience and recent studies indicate that the learned value of a stimulus can influence relatively late stages of decision making such as the process of selecting a motor response. However, the learned value of a stimulus can also influence the magnitude of cortical responses in early sensory areas such as V1 and S1. These early effects of stimulus value are presumed to improve the quality of sensory representations; however, the nature of these modulations is not clear. They could reflect nonspecific changes in response amplitude associated with changes in general arousal or they could reflect a bias in population responses so that high-value features are represented more robustly. To examine this issue, subjects performed a two-alternative forced choice paradigm with a variable-interval payoff schedule to dynamically manipulate the relative value of two stimuli defined by their orientation (one was rotated clockwise from vertical, the other counterclockwise). Activation levels in visual cortex were monitored using functional MRI and feature-selective voxel tuning functions while subjects performed the behavioral task. The results suggest that value not only modulates the relative amplitude of responses in early areas of human visual cortex, but also sharpens the response profile across the populations of feature-selective neurons that encode the critical stimulus feature (orientation). Moreover, changes in space- or feature-based attention cannot easily explain the results because representations of both the selected and the unselected stimuli underwent a similar feature-selective modulation. 
This sharpening in the population response profile could theoretically improve the probability of correctly discriminating high-value stimuli from low-value alternatives. PMID:20410360
Katzner, Steffen; Busse, Laura; Treue, Stefan
2009-01-01
Directing visual attention to spatial locations or to non-spatial stimulus features can strongly modulate responses of individual cortical sensory neurons. Effects of attention typically vary in magnitude, not only between visual cortical areas but also between individual neurons from the same area. Here, we investigate whether the size of attentional effects depends on the match between the tuning properties of the recorded neuron and the perceptual task at hand. We recorded extracellular responses from individual direction-selective neurons in the middle temporal area (MT) of rhesus monkeys trained to attend either to the color or the motion signal of a moving stimulus. We found that effects of spatial and feature-based attention in MT, which are typically observed in tasks allocating attention to motion, were very similar even when attention was directed to the color of the stimulus. We conclude that attentional modulation can occur in extrastriate cortex, even under conditions without a match between the tuning properties of the recorded neuron and the perceptual task at hand. Our data are consistent with theories of object-based attention describing a transfer of attention from relevant to irrelevant features, within the attended object and across the visual field. These results argue for a unified attentional system that modulates responses to a stimulus across cortical areas, even if a given area is specialized for processing task-irrelevant aspects of that stimulus.
Audio-visual synchrony and feature-selective attention co-amplify early visual processing.
Keitel, Christian; Müller, Matthias M
2016-05-01
Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
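Quantifying steady-state responses "in the spectral domain" amounts to reading out amplitude at each tagging frequency from the Fourier spectrum of the recorded signal. A minimal sketch using the study's 3.14 and 3.63 Hz rates; the sampling rate, trial duration, noise level, and attentional gain value are assumptions for illustration.

```python
import numpy as np

fs, dur = 256.0, 10.0            # sampling rate (Hz) and duration (s), assumed
t = np.arange(0, dur, 1 / fs)
f1, f2 = 3.14, 3.63              # tagging frequencies from the study

# Simulated EEG: two steady-state responses plus noise; attending stimulus 1
# is modeled as a simple gain increase (assumed value)
rng = np.random.default_rng(2)
gain_attended, gain_unattended = 1.5, 1.0
eeg = (gain_attended * np.sin(2 * np.pi * f1 * t)
       + gain_unattended * np.sin(2 * np.pi * f2 * t)
       + rng.normal(0, 1.0, t.size))

# Spectral quantification: amplitude at the bin nearest each tagged frequency
spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    return spectrum[np.argmin(np.abs(freqs - f))]

print(f"amp @ {f1} Hz: {amp_at(f1):.2f}, amp @ {f2} Hz: {amp_at(f2):.2f}")
```

With 10 s epochs the frequency resolution is 0.1 Hz, so the tagged frequencies fall slightly off-bin and some spectral leakage is expected; the attended stimulus still shows the larger amplitude.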
You prime what you code: The fAIM model of priming of pop-out
Meeter, Martijn
2017-01-01
Our visual brain makes use of recent experience to interact with the visual world and efficiently select relevant information. This is exemplified by speeded search when target and distractor features repeat across trials versus when they switch, a phenomenon referred to as intertrial priming. Here, we present fAIM, a computational model that demonstrates how priming can be explained by a simple feature-weighting mechanism integrated into an established model of bottom-up vision. In fAIM, such modulations in feature gains are widespread and not just restricted to one or a few features. Consequently, priming effects result from the overall tuning of visual features to the task at hand. Such tuning allows the model to reproduce priming for different types of stimuli, including for typical stimulus dimensions such as ‘color’ and for less obvious dimensions such as ‘spikiness’ of shapes. Moreover, the model explains some puzzling findings from the literature: it shows how priming can be found for target-distractor stimulus relations rather than for their absolute stimulus values per se, without an explicit representation of relations. Similarly, it simulates effects that have been taken to reflect a modulation of priming by an observer's goals, without any representation of goals in the model. We conclude that priming is best considered as a consequence of a general adaptation of the brain to visual input, and not as a peculiarity of visual search. PMID:29166386
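The core feature-weighting idea can be sketched in a few lines: gains on feature channels are nudged toward recent target features and away from distractor features, so a repeated target becomes more salient while a target-distractor swap incurs a cost. This is a schematic illustration of the mechanism, not the fAIM implementation; the channel count, learning rate, and gain bounds are invented.

```python
import numpy as np

n_features = 8          # coarse feature channels (e.g., colors), illustrative
gains = np.ones(n_features)
LEARN = 0.2             # gain update rate (assumed value)

def trial(target_f, distractor_f):
    """Return target salience under current gains, then nudge gains toward
    the target channel and away from the distractor channel."""
    global gains
    salience = gains[target_f] / (gains[target_f] + gains[distractor_f])
    gains[target_f] += LEARN * (1.5 - gains[target_f])          # boost target
    gains[distractor_f] += LEARN * (0.5 - gains[distractor_f])  # suppress distractor
    return salience

# Repetition trials: same target/distractor features, so salience grows
rep = [trial(0, 1) for _ in range(5)]
# Switch trial: the features swap roles, so salience drops (priming cost)
switch = trial(1, 0)
print(rep, switch)
```

Higher target salience maps onto faster search in the model, reproducing the repetition benefit and switch cost of intertrial priming.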
Herrmann, C S; Mecklinger, A
2000-12-01
We examined evoked and induced responses in event-related fields and gamma activity in the magnetoencephalogram (MEG) during a visual classification task. The objective was to investigate the effects of target classification and of different levels of discrimination between stimulus features. We performed two experiments that differed only in the subjects' task; the stimuli were identical. In Experiment 1, subjects responded with a button press to rare Kanizsa squares (targets) among Kanizsa triangles and non-Kanizsa figures (standards). This task requires the processing of both stimulus features (colinearity and number of inducer disks). In Experiment 2, the four stimuli of Experiment 1 were used as standards and the occurrence of an additional stimulus without any feature overlap with the Kanizsa stimuli (a rare and highly salient red fixation cross) had to be detected. Discrimination of colinearity and number of inducer disks was not necessarily required for task performance. We applied a wavelet-based time-frequency analysis to the data and calculated topographical maps of the 40 Hz activity. The early evoked gamma activity (100-200 ms) in Experiment 1 was higher for targets as compared to standards. In Experiment 2, no significant differences were found in the gamma responses to the Kanizsa figures and non-Kanizsa figures. This pattern of results suggests that early evoked gamma activity in response to visual stimuli is affected by the targetness of a stimulus and the need to discriminate between the features of a stimulus.
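Wavelet-based time-frequency analysis of this kind typically convolves each epoch with complex Morlet wavelets and takes the squared magnitude as power. A numpy sketch recovering a simulated 40 Hz burst in the 100-200 ms window; the sampling rate, cycle count, and noise level are illustrative assumptions, not the paper's analysis parameters.

```python
import numpy as np

fs = 500.0                        # sampling rate (Hz), assumed
t = np.arange(-0.2, 0.8, 1 / fs)  # one epoch, stimulus onset at t = 0

# Simulated MEG epoch: a 40 Hz burst in the early evoked window (100-200 ms)
rng = np.random.default_rng(3)
signal = rng.normal(0, 0.5, t.size)
burst = (t > 0.1) & (t < 0.2)
signal[burst] += np.sin(2 * np.pi * 40 * t[burst])

def morlet_power(x, f, fs, n_cycles=7):
    """Power over time at frequency f via complex Morlet wavelet convolution."""
    sigma_t = n_cycles / (2 * np.pi * f)
    wt = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2 * sigma_t**2))
    wavelet /= np.sum(np.abs(wavelet))  # normalize the filter
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

power_40 = morlet_power(signal, 40.0, fs)
in_burst = power_40[burst].mean()
baseline = power_40[t < 0].mean()
print(f"40 Hz power, burst vs baseline: {in_burst:.3f} vs {baseline:.3f}")
```

Averaging such power traces across trials before versus after subtracting the evoked response is what separates evoked from induced gamma activity.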
Linear and Non-Linear Visual Feature Learning in Rat and Humans
Bossens, Christophe; Op de Beeck, Hans P.
2016-01-01
The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-01-01
Dreaming is generally thought to be generated by spontaneous brain activity during sleep with patterns common to waking experience. This view is supported by a recent study demonstrating that dreamed objects can be predicted from brain activity during sleep using statistical decoders trained with stimulus-induced brain activity. However, it remains unclear whether and how visual image features associated with dreamed objects are represented in the brain. In this study, we used a deep neural network (DNN) model for object recognition as a proxy for hierarchical visual feature representation, and DNN features for dreamed objects were analyzed with brain decoding of fMRI data collected during dreaming. The decoders were first trained with stimulus-induced brain activity labeled with the feature values of the stimulus image from multiple DNN layers. The decoders were then used to decode DNN features from the dream fMRI data, and the decoded features were compared with the averaged features of each object category calculated from a large-scale image database. We found that the feature values decoded from the dream fMRI data positively correlated with those associated with dreamed object categories at mid- to high-level DNN layers. Using the decoded features, the dreamed object category could be identified at above-chance levels by matching them to the averaged features for candidate categories. The results suggest that dreaming recruits hierarchical visual feature representations associated with objects, which may support phenomenal aspects of dream experience.
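The identification step, matching decoded feature vectors to per-category average features, can be sketched with a simple correlation-based matcher on synthetic features. The category count, feature dimensionality, and noise level are assumptions for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(4)
n_categories, n_features = 10, 100

# Per-category average DNN features (stand-ins for averages computed
# from a large-scale image database)
category_means = rng.normal(0, 1, (n_categories, n_features))

def identify(decoded, means):
    """Pick the category whose average feature vector correlates most
    strongly with the decoded feature vector."""
    r = [np.corrcoef(decoded, m)[0, 1] for m in means]
    return int(np.argmax(r))

# Simulated decoded features for a dream of one category: the true category
# average plus decoding noise (noise level is an assumption)
true_cat = 3
decoded = category_means[true_cat] + rng.normal(0, 1.0, n_features)
print("identified category:", identify(decoded, category_means))
```

Above-chance identification in this scheme only requires that the decoded features correlate more with the dreamed category's average than with the alternatives, which matches the matching analysis the abstract describes.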
Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.
Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint
2017-09-13
GABA is the primary inhibitory neurotransmitter in the human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in the human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. 
However, the widespread influence of GABA raises the question of whether interindividual variability in GABA reflects an overall variability in visual inhibition and has a general influence on visual perception or whether the GABA levels of different cortical regions have selective influence on perception of different visual features. Here we report a region- and feature-dependent influence of GABA level on human visual perception. Our findings suggest that GABA level of a cortical region selectively influences perception of visual features that are topographically mapped in this region through intraregional lateral connections. Copyright © 2017 Song, Sandberg et al.
Unconscious analyses of visual scenes based on feature conjunctions.
Tachibana, Ryosuke; Noguchi, Yasuki
2015-06-01
To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved.
Attention Determines Contextual Enhancement versus Suppression in Human Primary Visual Cortex.
Flevaris, Anastasia V; Murray, Scott O
2015-09-02
Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus the exact same contextual stimulus arrangement can have completely opposite effects on neural responses: in some cases leading to orientation-tuned suppression and in other cases leading to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depend on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in the primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and underlying neural mechanisms are debated. Here we show that the response in V1 to a stimulus in the same context can be either suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well-established properties of visual cortical responses: response normalization and feature-based enhancement. Copyright © 2015 the authors 0270-6474/15/3512273-08$15.00/0.
Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury
NASA Astrophysics Data System (ADS)
Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.
2008-02-01
Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases like AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects were evaluated on spatial contrast sensitivity and a task of stimulus detection and aiming. The modeled occluding scotomas, of different sizes, were superimposed on the stimuli presented on the computer display, and were stabilized on the retina using a mono Purkinje Eye-Tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple conjunctive visual search display of: size (in visual angle), contrast, and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as R.T. measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect was dependent on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection time was slowed more in the complex background search situation than in the simple background. Detection speed was dependent on scotoma size and size of stimulus. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple background search situation than in the complex background. Both stimulus aiming R.T. and accuracy (precision targeting) were impaired as a function of scotoma size and size of stimulus. The data can be explained by models distinguishing between saliency-based, parallel and serial search processes guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.
NASA Astrophysics Data System (ADS)
Hramov, Alexander; Musatov, Vyacheslav Yu.; Runnova, Anastasija E.; Efremova, Tatiana Yu.; Koronovskii, Alexey A.; Pisarchik, Alexander N.
2018-04-01
In the paper we propose an approach based on artificial neural networks for recognition of different human brain states associated with distinct visual stimulus. Based on the developed numerical technique and the analysis of obtained experimental multichannel EEG data, we optimize the spatiotemporal representation of multichannel EEG to provide close to 97% accuracy in recognition of the EEG brain states during visual perception. Different interpretations of an ambiguous image produce different oscillatory patterns in the human EEG with similar features for every interpretation. Since these features are inherent to all subjects, a single artificial network can classify with high quality the associated brain states of other subjects.
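As a minimal sketch of the classification idea (not the authors' network, features, or data), a single logistic unit trained on toy two-class "EEG" feature vectors shows how separable oscillatory patterns can be classified; every number and parameter below is invented for illustration:

```python
import math
import random

random.seed(0)

# Toy stand-in for two brain states: each trial is a 4-dimensional feature
# vector (e.g. band power per channel); class 0 clusters near 0.2, class 1 near 0.8.
def make_trials(center, label, n=20):
    return [([random.gauss(center, 0.05) for _ in range(4)], label) for _ in range(n)]

trials = make_trials(0.2, 0) + make_trials(0.8, 1)

# A single logistic unit: the smallest possible "artificial neural network"
w = [0.0] * 4
b = 0.0

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Plain gradient-descent training on the cross-entropy loss
for _ in range(200):
    for x, y in trials:
        err = predict(x) - y
        for i in range(4):
            w[i] -= 0.5 * err * x[i]
        b -= 0.5 * err

accuracy = sum((predict(x) > 0.5) == bool(y) for x, y in trials) / len(trials)
print(accuracy)
```

Because the two toy clusters are well separated, even this single unit classifies them essentially perfectly; the study's point is that real EEG features during visual perception are similarly consistent across subjects.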
Cortical dynamics of feature binding and reset: control of visual persistence.
Francis, G; Grossberg, S; Mingolla, E
1994-04-01
An analysis of the reset of visual cortical circuits responsible for the binding or segmentation of visual features into coherent visual forms yields a model that explains properties of visual persistence. The reset mechanisms prevent massive smearing of visual percepts in response to rapidly moving images. The model simulates relationships among psychophysical data showing inverse relations of persistence to flash luminance and duration, greater persistence of illusory contours than real contours, a U-shaped temporal function for persistence of illusory contours, a reduction of persistence due to adaptation with a stimulus of like orientation, and an increase of persistence with spatial separation of a masking stimulus. The model suggests that a combination of habituative, opponent, and endstopping mechanisms prevents smearing and limits persistence. Earlier work with the model has analyzed data about boundary formation, texture segregation, shape-from-shading, and figure-ground separation. Thus, several types of data support each model mechanism, and new predictions are made.
Task set induces dynamic reallocation of resources in visual short-term memory.
Sheremata, Summer L; Shomstein, Sarah
2017-08-01
Successful interaction with the environment requires the ability to flexibly allocate resources to different locations in the visual field. Recent evidence suggests that visual short-term memory (VSTM) resources are distributed asymmetrically across the visual field based upon task demands. Here, we propose that context, rather than the stimulus itself, determines the asymmetrical distribution of VSTM resources. To test whether context modulates the reallocation of resources to the right visual field, task set, defined by memory load, was manipulated to influence visual short-term memory performance. Performance was measured for single-feature objects embedded within predominantly single- or two-feature memory blocks. Therefore, context was varied to determine whether task set directly predicts changes in visual field biases. In accord with the dynamic reallocation of resources hypothesis, task set, rather than aspects of the physical stimulus, drove improvements in performance in the right visual field. Our results show, for the first time, that preparation for upcoming memory demands directly determines how resources are allocated across the visual field.
Feature-based attention: it is all bottom-up priming.
Theeuwes, Jan
2013-10-19
Feature-based attention (FBA) enhances the representation of image characteristics throughout the visual field, a mechanism that is particularly useful when searching for a specific stimulus feature. Even though most theories of visual search implicitly or explicitly assume that FBA is under top-down control, we argue that the role of top-down processing in FBA may be limited. Our review of the literature indicates that all behavioural and neuro-imaging studies investigating FBA suffer from the shortcoming that they cannot rule out an effect of priming. The mere attending to a feature enhances the mandatory processing of that feature across the visual field, an effect that is likely to occur in an automatic, bottom-up way. Studies that have investigated the feasibility of FBA by means of cueing paradigms suggest that the role of top-down processing in FBA is limited (e.g. prepare for red). Instead, the actual processing of the stimulus is needed to cause the mandatory tuning of responses throughout the visual field. We conclude that it is likely that all FBA effects reported previously are the result of bottom-up priming.
Decoding conjunctions of direction-of-motion and binocular disparity from human visual cortex.
Seymour, Kiley J; Clifford, Colin W G
2012-05-01
Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus, since a classifier could not rely on independent information about motion and binocular disparity to distinguish these conjunctions. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.
Gao, Dashan; Vasconcelos, Nuno
2009-01-01
A decision-theoretic formulation of visual saliency, first proposed for top-down processing (object recognition) (Gao & Vasconcelos, 2005a), is extended to the problem of bottom-up saliency. Under this formulation, optimality is defined in the minimum-probability-of-error sense, under a constraint of computational parsimony. The saliency of the visual features at a given location of the visual field is defined as the power of those features to discriminate between the stimulus at the location and a null hypothesis. For bottom-up saliency, this is the set of visual features that surround the location under consideration. Discrimination is defined in an information-theoretic sense, and the optimal saliency detector is derived for a class of stimuli that complies with known statistical properties of natural images. It is shown that, under the assumption that saliency is driven by linear filtering, the optimal detector consists of what is usually referred to as the standard architecture of V1: a cascade of linear filtering, divisive normalization, rectification, and spatial pooling. The optimal detector is also shown to replicate the fundamental properties of the psychophysics of saliency: stimulus pop-out, saliency asymmetries for stimulus presence versus absence, disregard of feature conjunctions, and Weber's law. Finally, it is shown that the optimal saliency architecture can be applied to the solution of generic inference problems. In particular, for the class of stimuli studied, it performs the three fundamental operations of statistical inference: assessment of probabilities, implementation of Bayes decision rule, and feature selection.
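The "standard architecture of V1" named here (linear filtering, divisive normalization, rectification, spatial pooling) can be sketched on a toy 1-D input. The kernel, normalization constant, and pooling width below are arbitrary illustrative choices, not the paper's derived detector:

```python
# Toy 1-D "image": uniform background with one odd element (the pop-out target)
signal = [1.0] * 4 + [5.0] + [1.0] * 4

def linear_filter(x, kernel=(0.25, 0.5, 0.25)):
    # Local smoothing filter as a crude stand-in for a V1 receptive field;
    # edges are handled by clamping indices
    half = len(kernel) // 2
    out = []
    for i in range(len(x)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - half, 0), len(x) - 1)
            acc += w * x[j]
        out.append(acc)
    return out

def divisive_normalization(x, sigma=1.0):
    # Each response is divided by pooled activity (here, the global mean)
    pooled = sum(x) / len(x)
    return [v / (sigma + pooled) for v in x]

def rectify(x):
    # Half-wave rectification
    return [max(0.0, v) for v in x]

def spatial_pool(x, width=3):
    # Sliding-window average over neighbouring responses
    return [sum(x[i:i + width]) / width for i in range(len(x) - width + 1)]

responses = spatial_pool(rectify(divisive_normalization(linear_filter(signal))))
salient = responses.index(max(responses))  # window centred on the oddball wins
print(salient)
```

The oddball element survives the cascade as the peak pooled response, a crude analogue of the stimulus pop-out property the abstract mentions.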
Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.
Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein
2012-10-15
Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
Visual attention to features by associative learning.
Gozli, Davood G; Moskowitz, Joshua B; Pratt, Jay
2014-11-01
Expecting a particular stimulus can facilitate processing of that stimulus over others, but what is the fate of other stimuli that are known to co-occur with the expected stimulus? This study examined the impact of learned association on feature-based attention. The findings show that the effectiveness of an uninformative color transient in orienting attention can be changed by learned associations between colors and the expected target shape. In an initial acquisition phase, participants learned two distinct sequences of stimulus-response-outcome, where stimuli were defined by shape ('S' vs. 'H'), responses were localized key-presses (left vs. right), and outcomes were colors (red vs. green). Next, in a test phase, while expecting a target shape (80% probable), participants showed reliable attentional orienting to the color transient associated with the target shape, and showed no attentional orienting with the color associated with the alternative target shape. This bias seemed to be driven by the learned association between shapes and colors, and was not modulated by the response. In addition, the bias seemed to depend on observing target-color conjunctions, since encountering the two features disjunctively (without spatiotemporal overlap) did not replicate the findings. We conclude that associative learning, likely mediated by mechanisms underlying visual object representation, can extend the impact of goal-driven attention to features associated with a target stimulus. Copyright © 2014 Elsevier B.V. All rights reserved.
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. 
In contrast, an FSP was present when either only the visual feature or both auditory and visual features were targets, but not when only the auditory feature was a target, indicating that the conjunction target determination was evaluated serially and hierarchically, with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
Andersen, Søren K; Müller, Matthias M; Hillyard, Steven A
2015-07-08
Experiments that study feature-based attention have often examined situations in which selection is based on a single feature (e.g., the color red). However, in more complex situations relevant stimuli may not be set apart from other stimuli by a single defining property but by a specific combination of features. Here, we examined sustained attentional selection of stimuli defined by conjunctions of color and orientation. Human observers attended to one out of four concurrently presented superimposed fields of randomly moving horizontal or vertical bars of red or blue color to detect brief intervals of coherent motion. Selective stimulus processing in early visual cortex was assessed by recordings of steady-state visual evoked potentials (SSVEPs) elicited by each of the flickering fields of stimuli. We directly contrasted attentional selection of single features and feature conjunctions and found that SSVEP amplitudes in conditions in which selection was based on a single feature only (color or orientation) exactly predicted the magnitude of attentional enhancement of SSVEPs when attending to a conjunction of both features. Furthermore, enhanced SSVEP amplitudes elicited by attended stimuli were accompanied by equivalent reductions of SSVEP amplitudes elicited by unattended stimuli in all cases. We conclude that attentional selection of a feature-conjunction stimulus is accomplished by the parallel and independent facilitation of its constituent feature dimensions in early visual cortex. The ability to perceive the world is limited by the brain's processing capacity. Attention affords adaptive behavior by selectively prioritizing processing of relevant stimuli based on their features (location, color, orientation, etc.). 
We found that attentional mechanisms for selection of different features belonging to the same object operate independently and in parallel: concurrent attentional selection of two stimulus features is simply the sum of attending to each of those features separately. This result is key to understanding attentional selection in complex (natural) scenes, where relevant stimuli are likely to be defined by a combination of stimulus features. Copyright © 2015 the authors 0270-6474/15/359912-08$15.00/0.
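The summation result reported here, that attending a conjunction produces the sum of the two single-feature effects, can be written out with hypothetical effect sizes (the numbers below are invented for illustration, not values from the study):

```python
# Hypothetical per-feature SSVEP amplitude changes relative to baseline
color_gain = 0.12    # stimulus matches the attended color
orient_gain = 0.08   # stimulus matches the attended orientation
color_cost, orient_cost = -0.05, -0.04  # reductions for non-matching features

def predicted_change(matches_color, matches_orientation):
    # Independent, parallel facilitation: the conjunction effect is simply
    # the sum of the two per-feature effects
    c = color_gain if matches_color else color_cost
    o = orient_gain if matches_orientation else orient_cost
    return c + o

# Four superimposed fields while attending, say, "red horizontal":
# the fully matching field gets both boosts, partial matches get one boost
# and one cost, and the fully non-matching field gets both costs.
for stim in [(True, True), (True, False), (False, True), (False, False)]:
    print(stim, round(predicted_change(*stim), 2))
```

The key empirical point is that no extra conjunction-specific term is needed: the attended conjunction stimulus is predicted exactly by adding the single-feature effects.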
Feature-Based Visual Short-Term Memory Is Widely Distributed and Hierarchically Organized.
Dotson, Nicholas M; Hoffman, Steven J; Goodell, Baldwin; Gray, Charles M
2018-06-15
Feature-based visual short-term memory is known to engage both sensory and association cortices. However, the extent of the participating circuit and the neural mechanisms underlying memory maintenance are still a matter of vigorous debate. To address these questions, we recorded neuronal activity from 42 cortical areas in monkeys performing a feature-based visual short-term memory task and an interleaved fixation task. We find that task-dependent differences in firing rates are widely distributed throughout the cortex, while stimulus-specific changes in firing rates are more restricted and hierarchically organized. We also show that microsaccades during the memory delay encode the stimuli held in memory and that units modulated by microsaccades are more likely to exhibit stimulus specificity, suggesting that eye movements contribute to visual short-term memory processes. These results support a framework in which most cortical areas, within a modality, contribute to mnemonic representations at timescales that increase along the cortical hierarchy. Copyright © 2018 Elsevier Inc. All rights reserved.
Choe, Kyoung Whan; Blake, Randolph
2014-01-01
Primary visual cortex (V1) forms the initial cortical representation of objects and events in our visual environment, and it distributes information about that representation to higher cortical areas within the visual hierarchy. Decades of work have established tight linkages between neural activity occurring in V1 and features comprising the retinal image, but it remains debatable how that activity relates to perceptual decisions. An actively debated question is the extent to which V1 responses determine, on a trial-by-trial basis, perceptual choices made by observers. By inspecting the population activity of V1 from human observers engaged in a difficult visual discrimination task, we tested one essential prediction of the deterministic view: choice-related activity, if it exists in V1, and stimulus-related activity should occur in the same neural ensemble of neurons at the same time. Our findings do not support this prediction: while cortical activity signifying the variability in choice behavior was indeed found in V1, that activity was dissociated from activity representing stimulus differences relevant to the task, being advanced in time and carried by a different neural ensemble. The spatiotemporal dynamics of population responses suggest that short-term priors, perhaps formed in higher cortical areas involved in perceptual inference, act to modulate V1 activity prior to stimulus onset without modifying subsequent activity that actually represents stimulus features within V1. PMID:24523561
Reduced Perceptual Exclusivity during Object and Grating Rivalry in Autism
Freyberg, J.; Robertson, C.E.; Baron-Cohen, S.
2015-01-01
Background: The dynamics of binocular rivalry may be a behavioural footprint of excitatory and inhibitory neural transmission in visual cortex. Given the presence of atypical visual features in Autism Spectrum Conditions (ASC), and evidence in support of the idea of an imbalance in excitatory/inhibitory neural transmission in ASC, we hypothesized that binocular rivalry might prove a simple behavioural marker of such a transmission imbalance in the autistic brain. In support of this hypothesis, we previously reported a slower rate of rivalry in ASC, driven by reduced perceptual exclusivity. Methods: We tested whether atypical dynamics of binocular rivalry in ASC are specific to certain stimulus features. 53 participants (26 with ASC, matched for age, sex and IQ) participated in binocular rivalry experiments in which the dynamics of rivalry were measured at two levels of stimulus complexity, low (grayscale gratings) and high (coloured objects). Results: Individuals with ASC experienced a slower rate of rivalry, driven by longer transitional states between dominant percepts. These exaggerated transitional states were present at both low and high levels of stimulus complexity, suggesting that atypical rivalry dynamics in autism are robust with respect to stimulus choice. Interactions between stimulus properties and rivalry dynamics in autism indicate that achromatic grating stimuli produce stronger group differences. Conclusion: These results confirm the finding of atypical dynamics of binocular rivalry in ASC. These dynamics were present for stimuli of both low and high levels of visual complexity, suggesting an imbalance in competitive interactions throughout the visual system of individuals with ASC. PMID:26382002
NASA Astrophysics Data System (ADS)
Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo
2009-04-01
In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although it has been known that the right posterior parietal cortex (PPC) has a role in certain visual search tasks, there is little knowledge about the temporal aspect of this area. Three visual search tasks with different levels of difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of the PPC involved in visual search, we applied various stimulus onset asynchronies (SOAs) and measured the reaction time of the visual search. The magnetic stimulation was applied to the right PPC or the left PPC with a figure-eight coil. The results show that the reaction times of the hard feature task are longer than those of the easy feature task. At SOA = 150 ms, compared with the no-TMS condition, there was a significant increase in target-present reaction time when TMS pulses were applied. We consider that the right PPC is involved in the visual search at about SOA = 150 ms after visual stimulus presentation, and that magnetic stimulation to the right PPC disturbed the processing of the visual search. In contrast, magnetic stimulation to the left PPC had no effect on the processing of the visual search.
Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka
2016-08-04
Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution.
Effects of feature-based attention on the motion aftereffect at remote locations.
Boynton, Geoffrey M; Ciaramitaro, Vivian M; Arman, A Cyrus
2006-09-01
Previous studies have shown that attention to a particular stimulus feature, such as direction of motion or color, enhances neuronal responses to unattended stimuli sharing that feature. We studied this effect psychophysically by measuring the strength of the motion aftereffect (MAE) induced by an unattended stimulus when attention was directed to one of two overlapping fields of moving dots in a different spatial location. When attention was directed to the same direction of motion as the unattended stimulus, the unattended stimulus induced a stronger MAE than when attention was directed to the opposite direction. Also, when the unattended location contained uncorrelated motion or no stimulus at all, an MAE was induced in the direction opposite to the attended direction of motion. The strength of the MAE was similar regardless of whether subjects attended to the speed or luminance of the attended dots. These results provide further support for a global feature-based mechanism of attention, and show that the effect spreads across all features of an attended object and to all locations of visual space.
Semantically Induced Distortions of Visual Awareness in a Patient with Balint's Syndrome
ERIC Educational Resources Information Center
Soto, David; Humphreys, Glyn W.
2009-01-01
We present data indicating that visual awareness of a basic perceptual feature (colour) can be influenced by the relation between the feature and the semantic properties of the stimulus. We examined semantic interference from the meaning of a colour word ("RED") on simple colour (ink related) detection responses in a patient with simultanagnosia…
A method for real-time visual stimulus selection in the study of cortical object perception.
Leeds, Daniel D; Tarr, Michael J
2016-06-01
The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm³ brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) Real-time estimation of cortical responses to stimuli is reasonably consistent; 3) Search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond.
Copyright © 2016 Elsevier Inc. All rights reserved.
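The closed-loop stimulus search described above can be caricatured as a greedy hill-climb through a parameterized feature space, keeping a candidate stimulus only when the measured response improves. This sketch is purely illustrative: the toy response function, step size, and dimensionality are hypothetical stand-ins, not the authors' actual search protocol.

```python
import random

def closed_loop_search(response_fn, dim=2, steps=40, step_size=0.2, seed=0):
    """Greedy closed-loop stimulus search (illustrative sketch): perturb
    the current point in stimulus-feature space and keep the candidate
    only if the measured response improves."""
    rng = random.Random(seed)
    current = [0.0] * dim                 # start at the origin of feature space
    best_resp = response_fn(current)
    for _ in range(steps):
        candidate = [c + rng.uniform(-step_size, step_size) for c in current]
        resp = response_fn(candidate)     # stand-in for a measured BOLD response
        if resp > best_resp:              # move only when the response improves
            current, best_resp = candidate, resp
    return current, best_resp

# Toy "neural response": peaked at (1, 1) in a 2-D feature space.
def toy_response(x):
    return -((x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2)

point, resp = closed_loop_search(toy_response)
```

Because the update only accepts improvements, the returned response is never worse than that of the starting stimulus; a real scanner loop would additionally average repeated presentations to cope with BOLD noise.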
Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.
Nummenmaa, Lauri; Calvo, Manuel G
2015-04-01
Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. Robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
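Random-effects meta-analysis of correlation-based effect sizes, as used above, is commonly computed via the Fisher z transform and DerSimonian-Laird pooling. The following is a minimal sketch of that standard procedure; the study correlations and sample sizes shown are hypothetical, not the paper's data.

```python
import math

def fisher_z(r):
    """Fisher r-to-z transform; var(z) ~= 1/(n - 3)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def random_effects(rs, ns):
    """DerSimonian-Laird random-effects pooling of correlations
    (illustrative; the paper's exact pipeline may differ)."""
    zs = [fisher_z(r) for r in rs]
    vs = [1.0 / (n - 3) for n in ns]          # within-study variances
    ws = [1.0 / v for v in vs]                # fixed-effect weights
    z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))
    df = len(rs) - 1
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    ws_re = [1.0 / (v + tau2) for v in vs]    # random-effects weights
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return math.tanh(z_re)                    # back-transform to r

# Hypothetical correlations and sample sizes for three studies.
pooled = random_effects([0.30, 0.45, 0.38], [120, 80, 150])
```

The pooled estimate is a weighted mean in z-space, so it always lies between the smallest and largest study correlations.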
Chiang, Hsueh-Sheng; Eroh, Justin; Spence, Jeffrey S; Motes, Michael A; Maguire, Mandy J; Krawczyk, Daniel C; Brier, Matthew R; Hart, John; Kraut, Michael A
2016-08-01
How the brain combines the neural representations of features that comprise an object in order to activate a coherent object memory is poorly understood, especially when the features are presented in different modalities (visual vs. auditory) and domains (verbal vs. nonverbal). We examined this question using three versions of a modified Semantic Object Retrieval Test, where object memory was probed by a feature presented as a written word, a spoken word, or a picture, followed by a second feature always presented as a visual word. Participants indicated whether each feature pair elicited retrieval of the memory of a particular object. Sixteen subjects completed each of the three versions (N=48 in total) while their EEG was recorded simultaneously. We analyzed EEG data in four separate frequency bands (delta: 1-4 Hz; theta: 4-7 Hz; alpha: 8-12 Hz; beta: 13-19 Hz) using a multivariate data-driven approach. We found that alpha power time-locked to response was modulated by both cross-modality (visual vs. auditory) and cross-domain (verbal vs. nonverbal) probing of semantic object memory. In addition, retrieval trials showed greater changes in all frequency bands compared to non-retrieval trials across all stimulus types in both response-locked and stimulus-locked analyses, suggesting dissociable neural subcomponents involved in binding object features to retrieve a memory. We conclude that these findings support both modality/domain-dependent and modality/domain-independent mechanisms during semantic object memory retrieval. Copyright © 2016 Elsevier B.V. All rights reserved.
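Band-limited power of the kind analyzed above (delta through beta) can be illustrated with a naive discrete Fourier transform. Real EEG pipelines use filtering, wavelets, or multitaper estimates, so this is only a toy sketch using a synthetic 10 Hz (alpha-band) signal.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Summed squared DFT magnitude within [f_lo, f_hi] Hz
    (naive DFT; fine for short illustrative signals)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n                    # frequency of DFT bin k
        if f_lo <= f <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            im = sum(s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            power += (re * re + im * im) / n
    return power

fs = 128                                            # sampling rate, Hz
t = [i / fs for i in range(fs)]                     # 1 s of data
sig = [math.sin(2 * math.pi * 10 * x) for x in t]   # pure 10 Hz "alpha"
alpha = band_power(sig, fs, 8, 12)
theta = band_power(sig, fs, 4, 7)
```

For this synthetic signal essentially all power falls in the alpha band, which is the kind of band-wise contrast the analysis above quantifies across conditions.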
Feature-based attentional modulations in the absence of direct visual stimulation.
Serences, John T; Boynton, Geoffrey M
2007-07-19
When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field, even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.
The Emergence of Visual Awareness: Temporal Dynamics in Relation to Task and Mask Type
Kiefer, Markus; Kammer, Thomas
2017-01-01
One aspect of consciousness phenomena, the temporal emergence of visual awareness, has been subject of a controversial debate. How can visual awareness, that is the experiential quality of visual stimuli, be characterized best? Is there a sharp discontinuous or dichotomous transition between unaware and fully aware states, or does awareness emerge gradually encompassing intermediate states? Previous studies yielded conflicting results and supported both dichotomous and gradual views. It is quite conceivable that these conflicting results are more than noise, but reflect the dynamic nature of the temporal emergence of visual awareness. Using a psychophysical approach, the present research tested whether the emergence of visual awareness is context-dependent with a temporal two-alternative forced choice task. During backward masking of word targets, it was assessed whether the relative temporal sequence of stimulus thresholds is modulated by the task (stimulus presence, letter case, lexical decision, and semantic category) and by mask type. Four masks with different similarity to the target features were created. Psychophysical functions were then fitted to the accuracy data in the different task conditions as a function of the stimulus-mask SOA in order to determine the inflection point (conscious threshold of each feature) and slope of the psychophysical function (transition from unaware to aware within each feature). Depending on feature-mask similarity, thresholds in the different tasks were either highly dispersed, suggesting a graded transition from unawareness to awareness, or less differentiated, indicating that clusters of features probed by the tasks contribute to the percept almost simultaneously. The latter observation, although not compatible with the notion of a sharp all-or-none transition between unaware and aware states, suggests a less gradual or more discontinuous emergence of awareness.
Analyses of slopes of the fitted psychophysical functions also indicated that the emergence of awareness of single features is variable and might be influenced by the continuity of the feature dimensions. The present work thus suggests that the emergence of awareness is neither purely gradual nor dichotomous, but highly dynamic depending on the task and mask type. PMID:28316583
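Fitting psychophysical functions to accuracy-versus-SOA data, as described above, amounts to estimating an inflection point (threshold) and a slope. Below is a minimal sketch assuming a logistic function for a two-alternative task and a simple least-squares grid search; the actual study's fitting procedure is not specified here, and the data points are synthetic.

```python
import math

def logistic(soa, thresh, slope, lapse=0.0):
    """Psychometric function: accuracy from 0.5 (2AFC guessing) to 1."""
    p = 1.0 / (1.0 + math.exp(-slope * (soa - thresh)))
    return 0.5 + (0.5 - lapse) * p

def fit_threshold(soas, accs):
    """Least-squares grid search for threshold and slope (a crude
    stand-in for maximum-likelihood psychometric fitting)."""
    best = (None, None, float("inf"))
    for thresh in [t / 2.0 for t in range(0, 201)]:        # 0..100 ms
        for slope in [s / 20.0 for s in range(1, 41)]:     # 0.05..2.0
            err = sum((logistic(x, thresh, slope) - y) ** 2
                      for x, y in zip(soas, accs))
            if err < best[2]:
                best = (thresh, slope, err)
    return best[0], best[1]

soas = [10, 20, 30, 40, 50, 60, 70, 80]                    # ms (synthetic)
accs = [0.50, 0.52, 0.58, 0.72, 0.85, 0.93, 0.97, 0.99]    # synthetic data
thresh, slope = fit_threshold(soas, accs)
```

The fitted threshold is the SOA at which accuracy crosses 75% (midway between chance and ceiling); comparing such thresholds and slopes across tasks is exactly the dispersion analysis the abstract describes.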
Preserving information in neural transmission.
Sincich, Lawrence C; Horton, Jonathan C; Sharpee, Tatyana O
2009-05-13
Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can either achieve a more efficient stimulus representation, or extract some biologically important stimulus parameter, or succeed at both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for >85% of the total information available in the spike trains and the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter the temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than the retina.
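The information comparisons above reduce to mutual-information calculations between stimulus and spike sequences. The following toy sketch uses discrete binary sequences; real spike-train analyses use time-binned response words and bias corrections, which are omitted here, and the sequences are hypothetical.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Mutual information (bits) between two paired discrete sequences,
    estimated from the joint empirical distribution (no bias correction)."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * math.log2((p_ab * n * n) / (px[a] * py[b]))
    return mi

# Toy binary stimulus and spike sequences: the "relay output" tracks the
# stimulus perfectly while the "input" is noisier (hypothetical values).
stim = [0, 1, 0, 1, 0, 1, 0, 1]
inp  = [0, 1, 0, 0, 0, 1, 1, 1]
outp = [0, 1, 0, 1, 0, 1, 0, 1]
mi_in  = mutual_information(list(zip(stim, inp)))
mi_out = mutual_information(list(zip(stim, outp)))
```

In this toy case the output carries more stimulus information than the input, mirroring (in caricature) the finding that geniculate output spikes can carry more information per spike than the retinal input.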
Oscillatory encoding of visual stimulus familiarity.
Kissinger, Samuel T; Pak, Alexandr; Tang, Yu; Masmanidis, Sotiris C; Chubykin, Alexander A
2018-06-18
Familiarity of the environment changes the way we perceive and encode incoming information. However, the neural substrates underlying this phenomenon are poorly understood. Here we describe a new form of experience-dependent low frequency oscillations in the primary visual cortex (V1) of awake adult male mice. The oscillations emerged in visually evoked potentials (VEPs) and single-unit activity following repeated visual stimulation. The oscillations were sensitive to the spatial frequency content of a visual stimulus and required the muscarinic acetylcholine receptors (mAChRs) for their induction and expression. Finally, ongoing visually evoked theta (4-6 Hz) oscillations boost the VEP amplitude of incoming visual stimuli if the stimuli are presented at the high excitability phase of the oscillations. Our results demonstrate that an oscillatory code can be used to encode familiarity and serves as a gate for oncoming sensory inputs. Significance Statement. Previous experience can influence the processing of incoming sensory information by the brain and alter perception. However, the mechanistic understanding of how this process takes place is lacking. We have discovered that persistent low frequency oscillations in the primary visual cortex encode information about familiarity and the spatial frequency of the stimulus. These familiarity evoked oscillations influence neuronal responses to the oncoming stimuli in a way that depends on the oscillation phase. Our work demonstrates a new mechanism of visual stimulus feature detection and learning. Copyright © 2018 the authors.
Experimental test of contemporary mathematical models of visual letter recognition.
Townsend, J T; Ashby, F G
1982-12-01
In a letter-confusion experiment using brief stimulus durations, payoffs were manipulated across the four stimulus letters, which were composed of line segments of equal length. The observers were required to report the features they perceived as well as to give a letter response. The early feature-sampling process is separated from the later letter-decision process in the substantive feature models, and predictions are thus obtained for the frequencies of feature report as well as letter report. Four substantive visual feature-processing models are developed and tested against one another and against three models of a more descriptive nature. The substantive models predict the decisional letter-report phase much better than they do the feature-sampling phase, but the best overall 4 × 4 letter confusion matrix fits are obtained with one of the descriptive models, the similarity choice model. The present and other recent results suggest that the assumption that features are sampled in a stochastically independent manner may not be generally valid. The traditional high-threshold conceptualization of feature sampling is also falsified by the frequent reporting by observers of features not contained in the stimulus letter.
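The similarity choice model mentioned above predicts a confusion matrix from pairwise stimulus similarities and response biases: the probability of responding j to stimulus i is the similarity-weighted bias for j, normalized over all responses. A sketch with hypothetical parameter values (not fitted to any data):

```python
def choice_probabilities(eta, beta):
    """Similarity choice model: P(respond j | stimulus i) =
    eta[i][j] * beta[j] / sum_k eta[i][k] * beta[k],
    where eta is a symmetric similarity matrix with eta[i][i] == 1."""
    n = len(beta)
    probs = []
    for i in range(n):
        denom = sum(eta[i][k] * beta[k] for k in range(n))
        probs.append([eta[i][j] * beta[j] / denom for j in range(n)])
    return probs

# Hypothetical similarities and response biases for four letters.
eta = [[1.0, 0.3, 0.1, 0.1],
       [0.3, 1.0, 0.2, 0.1],
       [0.1, 0.2, 1.0, 0.4],
       [0.1, 0.1, 0.4, 1.0]]
beta = [0.25, 0.25, 0.25, 0.25]        # unbiased responding
p = choice_probabilities(eta, beta)
```

Each row of the predicted 4 × 4 matrix sums to one, and with unbiased responding the correct response is always the most probable, as expected when self-similarity dominates.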
Desantis, Andrea; Haggard, Patrick
2016-01-01
To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063
Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam
2013-01-01
Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period degraded memory accuracy in both the visual and auditory domain. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. PMID:23791629
Huang, Liqiang
2015-05-01
Basic visual features (e.g., color, orientation) are assumed to be processed in the same general way across different visual tasks. Here, a significant deviation from this assumption was predicted on the basis of the analysis of stimulus spatial structure, as characterized by the Boolean-map notion. If a task requires memorizing the orientations of a set of bars, then the map consisting of those bars can be readily used to hold the overall structure in memory and will thus be especially useful. If the task requires visual search for a target, then the map, which contains only an overall structure, will be of little use. Supporting these predictions, the present study demonstrated that in comparison to stimulus colors, bar orientations were processed more efficiently in change-detection tasks but less efficiently in visual search tasks (Cohen's d = 4.24). In addition to offering support for the role of the Boolean map in conscious access, the present work also casts doubt on the generality of visual feature processing. © The Author(s) 2015.
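The effect size reported above (Cohen's d) is a standardized mean difference between two conditions. A minimal sketch with hypothetical scores; the pooled-standard-deviation estimator shown is one common variant, and may differ from the paper's exact computation.

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard
    deviation (illustrative; estimator variants exist)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Hypothetical efficiency scores for two task conditions.
d = cohens_d([9.0, 10.0, 11.0, 10.5], [5.0, 6.0, 5.5, 6.5])
```

Values of d above roughly 0.8 are conventionally called large, which puts the d = 4.24 reported above in the very large range.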
Bordier, Cecile; Puja, Francesco; Macaluso, Emiliano
2013-01-01
The investigation of brain activity using naturalistic, ecologically-valid stimuli is becoming an important challenge for neuroscience research. Several approaches have been proposed, primarily relying on data-driven methods (e.g. independent component analysis, ICA). However, data-driven methods often require some post-hoc interpretation of the imaging results to draw inferences about the underlying sensory, motor or cognitive functions. Here, we propose using a biologically-plausible computational model to extract (multi-)sensory stimulus statistics that can be used for standard hypothesis-driven analyses (general linear model, GLM). We ran two separate fMRI experiments, which both involved subjects watching an episode of a TV-series. In Exp 1, we manipulated the presentation by switching on-and-off color, motion and/or sound at variable intervals, whereas in Exp 2, the video was played in the original version, with all the consequent continuous changes of the different sensory features intact. Both for vision and audition, we extracted stimulus statistics corresponding to spatial and temporal discontinuities of low-level features, as well as a combined measure related to the overall stimulus saliency. Results showed that activity in occipital visual cortex and the superior temporal auditory cortex co-varied with changes of low-level features. Visual saliency was found to further boost activity in extra-striate visual cortex plus posterior parietal cortex, while auditory saliency was found to enhance activity in the superior temporal cortex. Data-driven ICA analyses of the same datasets also identified “sensory” networks comprising visual and auditory areas, but without providing specific information about the possible underlying processes, e.g., these processes could relate to modality, stimulus features and/or saliency. 
We conclude that the combination of computational modeling and GLM enables the tracking of the impact of bottom–up signals on brain activity during viewing of complex and dynamic multisensory stimuli, beyond the capability of purely data-driven approaches. PMID:23202431
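The hypothesis-driven GLM analysis described above ultimately regresses each voxel's time course on model-derived stimulus statistics. A single-regressor sketch with a hypothetical saliency regressor and synthetic voxel data (real GLMs add hemodynamic convolution and nuisance regressors, omitted here):

```python
def glm_beta(y, x):
    """Single-regressor GLM (with intercept): ordinary least-squares
    estimate of the regressor weight beta."""
    n = len(y)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

# Hypothetical saliency regressor and a voxel that tracks it plus noise.
saliency = [0.1, 0.9, 0.3, 0.8, 0.2, 0.7, 0.4, 0.6]
voxel = [2.0 * s + 0.05 * ((i % 3) - 1) for i, s in enumerate(saliency)]
beta = glm_beta(voxel, saliency)
```

A voxel whose activity co-varies with the saliency regressor yields a beta near the true gain (2.0 here), which is the quantity tested for significance across subjects in such analyses.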
Deconstructing continuous flash suppression
Yang, Eunice; Blake, Randolph
2012-01-01
In this paper, we asked to what extent the depth of interocular suppression engendered by continuous flash suppression (CFS) varies depending on spatiotemporal properties of the suppressed stimulus and CFS suppressor. An answer to this question could have implications for interpreting the results in which CFS influences the processing of different categories of stimuli to different extents. In a series of experiments, we measured the selectivity and depth of suppression (i.e., elevation in contrast detection thresholds) as a function of the visual features of the stimulus being suppressed and the stimulus evoking suppression, namely, the popular “Mondrian” CFS stimulus (N. Tsuchiya & C. Koch, 2005). First, we found that CFS differentially suppresses the spatial components of the suppressed stimulus: Observers' sensitivity for stimuli of relatively low spatial frequency or cardinally oriented features was more strongly impaired in comparison to high spatial frequency or obliquely oriented stimuli. Second, we discovered that this feature-selective bias primarily arises from the spatiotemporal structure of the CFS stimulus, particularly within information residing in the low spatial frequency range and within the smooth rather than abrupt luminance changes over time. These results imply that this CFS stimulus operates by selectively attenuating certain classes of low-level signals while leaving others to be potentially encoded during suppression. These findings underscore the importance of considering the contribution of low-level features in stimulus-driven effects that are reported under CFS. PMID:22408039
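Suppression depth in studies like the one above is the elevation of contrast detection thresholds under CFS relative to baseline, often expressed in log units. A minimal sketch with hypothetical threshold values (not the paper's data):

```python
import math

def threshold_elevation_db(t_suppressed, t_baseline):
    """Suppression depth as contrast-threshold elevation in decibels:
    20 * log10(suppressed / baseline)."""
    return 20.0 * math.log10(t_suppressed / t_baseline)

# Hypothetical contrast thresholds for low- vs. high-SF probes under CFS.
low_sf = threshold_elevation_db(0.20, 0.02)    # 10x elevation
high_sf = threshold_elevation_db(0.06, 0.02)   # 3x elevation
```

On this scale a tenfold threshold elevation is 20 dB, so the deeper suppression of low-spatial-frequency probes reported above corresponds to a larger dB value than for high-frequency probes.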
In search of the emotional face: anger versus happiness superiority in visual search.
Savage, Ruth A; Lipp, Ottmar V; Craig, Belinda M; Becker, Stefanie I; Horstmann, Gernot
2013-08-01
Previous research has provided inconsistent results regarding visual search for emotional faces, yielding evidence for either anger superiority (i.e., more efficient search for angry faces) or happiness superiority effects (i.e., more efficient search for happy faces), suggesting that these results do not reflect on emotional expression, but on emotion (un-)related low-level perceptual features. The present study investigated possible factors mediating anger/happiness superiority effects; specifically search strategy (fixed vs. variable target search; Experiment 1), stimulus choice (Nimstim database vs. Ekman & Friesen database; Experiments 1 and 2), and emotional intensity (Experiment 3 and 3a). Angry faces were found faster than happy faces regardless of search strategy using faces from the Nimstim database (Experiment 1). By contrast, a happiness superiority effect was evident in Experiment 2 when using faces from the Ekman and Friesen database. Experiment 3 employed angry, happy, and exuberant expressions (Nimstim database) and yielded anger and happiness superiority effects, respectively, highlighting the importance of the choice of stimulus materials. Ratings of the stimulus materials collected in Experiment 3a indicate that differences in perceived emotional intensity, pleasantness, or arousal do not account for differences in search efficiency. Across three studies, the current investigation indicates that prior reports of anger or happiness superiority effects in visual search are likely to reflect on low-level visual features associated with the stimulus materials used, rather than on emotion. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Stronger Neural Modulation by Visual Motion Intensity in Autism Spectrum Disorders
Peiker, Ina; Schneider, Till R.; Milne, Elizabeth; Schöttle, Daniel; Vogeley, Kai; Münchau, Alexander; Schunke, Odette; Siegel, Markus; Engel, Andreas K.; David, Nicole
2015-01-01
Theories of autism spectrum disorders (ASD) have focused on altered perceptual integration of sensory features as a possible core deficit. Yet, there is little understanding of the neuronal processing of elementary sensory features in ASD. For typically developed individuals, we previously established a direct link between frequency-specific neural activity and the intensity of a specific sensory feature: Gamma-band activity in the visual cortex increased approximately linearly with the strength of visual motion. Using magnetoencephalography (MEG), we investigated whether, in individuals with ASD, neural activity reflects the coherence, and thus intensity, of visual motion in a similar fashion. Thirteen adult participants with ASD and 14 control participants performed a motion direction discrimination task with increasing levels of motion coherence. A polynomial regression analysis revealed that gamma-band power increased significantly more steeply with motion coherence in ASD than in controls, suggesting excessive visual activation with increasing stimulus intensity originating from motion-responsive visual areas V3, V6 and hMT/V5. Enhanced neural responses with increasing stimulus intensity suggest an enhanced response gain in ASD. Response gain is controlled by excitatory-inhibitory interactions, which also drive high-frequency oscillations in the gamma-band. Thus, our data suggest that a disturbed excitatory-inhibitory balance underlies enhanced neural responses to coherent motion in ASD. PMID:26147342
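The core comparison in the study above — fitting gamma-band power as a function of motion coherence per group and comparing the slopes — can be sketched as follows. This is a minimal illustration of the analysis idea only; the power values are fabricated, and the actual MEG pipeline (preprocessing, source localization, statistics) is far more involved.

```python
import numpy as np

# Motion coherence levels (fraction of coherently moving dots).
coherence = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

# Fabricated gamma-band power values (arbitrary units) for illustration.
power_asd = np.array([1.0, 1.6, 2.3, 2.9, 3.6])  # hypothetical ASD group
power_ctl = np.array([1.0, 1.3, 1.7, 2.0, 2.4])  # hypothetical control group

# First-order (linear) fit; polyfit returns [slope, intercept].
slope_asd = np.polyfit(coherence, power_asd, 1)[0]
slope_ctl = np.polyfit(coherence, power_ctl, 1)[0]

# A steeper slope indicates stronger response gain with stimulus intensity.
print(slope_asd > slope_ctl)  # → True
```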
Featural and temporal attention selectively enhance task-appropriate representations in human V1
Warren, Scott; Yacoub, Essa; Ghose, Geoffrey
2015-01-01
Our perceptions are often shaped by focusing our attention toward specific features or periods of time irrespective of location. We explore the physiological bases of these non-spatial forms of attention by imaging brain activity while subjects perform a challenging change detection task. The task employs a continuously varying visual stimulus that, for any moment in time, selectively activates functionally distinct subpopulations of primary visual cortex (V1) neurons. When subjects were cued to the timing and nature of the change, the mapping of orientation preference across V1 systematically shifted toward the cued stimulus just prior to its appearance. A simple linear model can explain this shift: attentional changes are selectively targeted toward neural subpopulations representing the attended feature at the times the feature is anticipated. Our results suggest that featural attention is mediated by a linear change in the responses of task-appropriate neurons across cortex during appropriate periods of time. PMID:25501983
Neural correlates of context-dependent feature conjunction learning in visual search tasks.
Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U
2016-06-01
Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.
Samaha, Jason; Postle, Bradley R
2017-11-29
Adaptive behaviour depends on the ability to introspect accurately about one's own performance. Whether this metacognitive ability is supported by the same mechanisms across different tasks is unclear. We investigated the relationship between metacognition of visual perception and metacognition of visual short-term memory (VSTM). Experiments 1 and 2 required subjects to estimate the perceived or remembered orientation of a grating stimulus and rate their confidence. We observed strong positive correlations between individual differences in metacognitive accuracy on the two tasks. This relationship was not accounted for by individual differences in task performance or average confidence, and was present across two different metrics of metacognition and in both experiments. A model-based analysis of data from a third experiment showed that a cross-domain correlation only emerged when both tasks shared the same task-relevant stimulus feature. That is, metacognition for perception and VSTM were correlated when both tasks required orientation judgements, but not when the perceptual task was switched to require contrast judgements. In contrast with previous results comparing perception and long-term memory, which have largely provided evidence for domain-specific metacognitive processes, the current findings suggest that metacognition of visual perception and VSTM is supported by a domain-general metacognitive architecture, but only when both domains share the same task-relevant stimulus feature. © 2017 The Author(s).
Lu, Kun-Han; Hung, Shao-Chin; Wen, Haiguang; Marussich, Lauren; Liu, Zhongming
2016-01-01
Complex, sustained, dynamic, and naturalistic visual stimulation can evoke distributed brain activities that are highly reproducible within and across individuals. However, the precise origins of such reproducible responses remain incompletely understood. Here, we employed concurrent functional magnetic resonance imaging (fMRI) and eye tracking to investigate the experimental and behavioral factors that influence fMRI activity and its intra- and inter-subject reproducibility during repeated movie stimuli. We found that widely distributed and highly reproducible fMRI responses were attributed primarily to the high-level natural content in the movie. In the absence of such natural content, low-level visual features alone in a spatiotemporally scrambled control stimulus evoked a significantly reduced degree and extent of reproducible responses, which were mostly confined to the primary visual cortex (V1). We also found that varying gaze behavior affected the cortical response at the peripheral part of V1 and in the oculomotor network, with minor effects on response reproducibility over the extrastriate visual areas. Lastly, scene transitions in the movie stimulus due to film editing partly accounted for reproducible fMRI responses across widespread cortical areas, especially along the ventral visual pathway. Therefore, the naturalistic nature of a movie stimulus is necessary for driving highly reliable visual activations. In a movie-stimulation paradigm, scene transitions and individuals' gaze behavior should be treated as potential confounding factors in order to properly interpret cortical activity that supports natural vision. PMID:27564573
Color-Change Detection Activity in the Primate Superior Colliculus.
Herman, James P; Krauzlis, Richard J
2017-01-01
The primate superior colliculus (SC) is a midbrain structure that participates in the control of spatial attention. Previous studies examining the role of the SC in attention have mostly used luminance-based visual features (e.g., motion, contrast) as the stimuli and saccadic eye movements as the behavioral response, both of which are known to modulate the activity of SC neurons. To explore the limits of the SC's involvement in the control of spatial attention, we recorded SC neuronal activity during a task using color, a visual feature dimension not traditionally associated with the SC, and required monkeys to detect threshold-level changes in the saturation of a cued stimulus by releasing a joystick during maintained fixation. Using this color-based spatial attention task, we found substantial cue-related modulation in all categories of visually responsive neurons in the intermediate layers of the SC. Notably, near-threshold changes in color saturation, both increases and decreases, evoked phasic bursts of activity with magnitudes as large as those evoked by stimulus onset. This change-detection activity had two distinctive features: activity for hits was larger than for misses, and the timing of change-detection activity accounted for 67% of joystick release latency, even though it preceded the release by at least 200 ms. We conclude that during attention tasks, SC activity denotes the behavioral relevance of the stimulus regardless of feature dimension and that phasic event-related SC activity is suitable to guide the selection of manual responses as well as saccadic eye movements.
Visual search in Dementia with Lewy Bodies and Alzheimer's disease.
Landy, Kelly M; Salmon, David P; Filoteo, J Vincent; Heindel, William C; Galasko, Douglas; Hamilton, Joanne M
2015-12-01
Visual search is an aspect of visual cognition that may be more impaired in Dementia with Lewy Bodies (DLB) than Alzheimer's disease (AD). To assess this possibility, the present study compared patients with DLB (n = 17), AD (n = 30), or Parkinson's disease with dementia (PDD; n = 10) to non-demented patients with PD (n = 18) and normal control (NC) participants (n = 13) on single-feature and feature-conjunction visual search tasks. In the single-feature task participants had to determine if a target stimulus (i.e., a black dot) was present among 3, 6, or 12 distractor stimuli (i.e., white dots) that differed in one salient feature. In the feature-conjunction task participants had to determine if a target stimulus (i.e., a black circle) was present among 3, 6, or 12 distractor stimuli (i.e., white dots and black squares) that shared either of the target's salient features. Results showed that target detection time in the single-feature task was not influenced by the number of distractors (i.e., "pop-out" effect) for any of the groups. In contrast, target detection time increased as the number of distractors increased in the feature-conjunction task for all groups, but more so for patients with AD or DLB than for any of the other groups. These results suggest that the single-feature search "pop-out" effect is preserved in DLB and AD patients, whereas ability to perform the feature-conjunction search is impaired. This pattern of preserved single-feature search with impaired feature-conjunction search is consistent with a deficit in feature binding that may be mediated by abnormalities in networks involving the dorsal occipito-parietal cortex. Copyright © 2015 Elsevier Ltd. All rights reserved.
Timing in audiovisual speech perception: A mini review and new psychophysical data.
Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory
2016-02-01
Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
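The classification procedure described above rests on a reverse-correlation logic: random transparency masks determine which video frames are visible on each trial, and contrasting the average mask for trials yielding the visually influenced percept against the rest reveals which frames carried perceptually relevant visual speech information. The toy sketch below illustrates only this averaging logic; the trial counts, mask statistics, response rule, and "cue window" are all fabricated.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_frames = 2000, 30
# True = frame visible on that trial (random binary masks, p = 0.5).
masks = rng.random((n_trials, n_frames)) > 0.5

# Hypothetical response rule: frames 10-14 carry the critical visual cue;
# if most of them are visible, the observer reports the influenced percept.
influenced = masks[:, 10:15].sum(axis=1) >= 3

# Classification image: difference in per-frame visibility between
# influenced and non-influenced trials. Peaks mark diagnostic frames.
classification_image = (masks[influenced].mean(axis=0)
                        - masks[~influenced].mean(axis=0))

# The recovered peak falls inside the (fabricated) cue window.
print(int(classification_image.argmax()) in range(10, 15))
```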
Feature Integration in the Mapping of Multi-Attribute Visual Stimuli to Responses
Ishizaki, Takuya; Morita, Hiromi; Morita, Masahiko
2015-01-01
In the human visual system, different attributes of an object, such as shape and color, are separately processed in different modules and then integrated to elicit a specific response. In this process, different attributes are thought to be temporarily "bound" together by focusing attention on the object; however, how such binding contributes to stimulus-response mapping remains unclear. Here we report that learning and performance of stimulus-response tasks were more difficult when three attributes of the stimulus determined the correct response than when two attributes did. We also found that spatially separated presentations of attributes considerably complicated the task, although they did not markedly affect target detection. These results are consistent with a paired-attribute model in which bound feature pairs, rather than object representations, are associated with responses by learning. This suggests that attention does not bind three or more attributes into a unitary object representation, and long-term learning is required for their integration. PMID:25762010
Liu, B; Meng, X; Wu, G; Huang, Y
2012-05-17
In this article, we aimed to study whether feature precedence exists in the cognitive processing of multifeature visual information in the human brain. Our experiment focused on two important visual features: color and shape. To avoid semantic constraints between them and their resulting impact, a pure color and a simple geometric shape were chosen as the color feature and shape feature of the visual stimulus, respectively. We adopted an "old/new" paradigm to study the cognitive processing of the color feature, the shape feature, and their combination. The experiment consisted of three tasks: a Color task, a Shape task, and a Color-Shape task. The results showed that a feature-based pattern is activated in the human brain when processing multifeature visual information without semantic association between features. Furthermore, the shape feature was processed earlier than the color feature, and the cognitive processing of the color feature was more difficult than that of the shape feature. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
Jackson, Jade; Rich, Anina N; Williams, Mark A; Woolgar, Alexandra
2017-02-01
Human cognition is characterized by astounding flexibility, enabling us to select appropriate information according to the objectives of our current task. A circuit of frontal and parietal brain regions, often referred to as the frontoparietal attention network or multiple-demand (MD) regions, are believed to play a fundamental role in this flexibility. There is evidence that these regions dynamically adjust their responses to selectively process information that is currently relevant for behavior, as proposed by the "adaptive coding hypothesis" [Duncan, J. An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience, 2, 820-829, 2001]. Could this provide a neural mechanism for feature-selective attention, the process by which we preferentially process one feature of a stimulus over another? We used multivariate pattern analysis of fMRI data during a perceptually challenging categorization task to investigate whether the representation of visual object features in the MD regions flexibly adjusts according to task relevance. Participants were trained to categorize visually similar novel objects along two orthogonal stimulus dimensions (length/orientation) and performed short alternating blocks in which only one of these dimensions was relevant. We found that multivoxel patterns of activation in the MD regions encoded the task-relevant distinctions more strongly than the task-irrelevant distinctions: The MD regions discriminated between stimuli of different lengths when length was relevant and between the same objects according to orientation when orientation was relevant. The data suggest a flexible neural system that adjusts its representation of visual objects to preferentially encode stimulus features that are currently relevant for behavior, providing a neural mechanism for feature-selective attention.
Exploration of complex visual feature spaces for object perception
Leeds, Daniel D.; Pyles, John A.; Tarr, Michael J.
2014-01-01
The mid- and high-level visual properties supporting object perception in the ventral visual pathway are poorly understood. In the absence of well-specified theory, many groups have adopted a data-driven approach in which they progressively interrogate neural units to establish each unit's selectivity. Such methods are challenging in that they require search through a wide space of feature models and stimuli using a limited number of samples. To more rapidly identify higher-level features underlying human cortical object perception, we implemented a novel functional magnetic resonance imaging method in which visual stimuli are selected in real-time based on BOLD responses to recently shown stimuli. This work was inspired by earlier primate physiology work, in which neural selectivity for mid-level features in IT was characterized using a simple parametric approach (Hung et al., 2012). To extend such work to human neuroimaging, we used natural and synthetic object stimuli embedded in feature spaces constructed on the basis of the complex visual properties of the objects themselves. During fMRI scanning, we employed a real-time search method to control continuous stimulus selection within each image space. This search was designed to maximize neural responses across a pre-determined 1 cm3 brain region within ventral cortex. To assess the value of this method for understanding object encoding, we examined both the behavior of the method itself and the complex visual properties the method identified as reliably activating selected brain regions. We observed: (1) Regions selective for both holistic and component object features and for a variety of surface properties; (2) Object stimulus pairs near one another in feature space that produce responses at the opposite extremes of the measured activity range. Together, these results suggest that real-time fMRI methods may yield more widely informative measures of selectivity within the broad classes of visual features associated with cortical object representation. PMID:25309408
On the use of continuous flash suppression for the study of visual processing outside of awareness
Yang, Eunice; Brascamp, Jan; Kang, Min-Suk; Blake, Randolph
2014-01-01
The interocular suppression technique termed continuous flash suppression (CFS) has become an immensely popular tool for investigating visual processing outside of awareness. The emerging picture from studies using CFS is that extensive processing of a visual stimulus, including its semantic and affective content, occurs despite suppression from awareness of that stimulus by CFS. However, the current implementation of CFS in many studies examining processing outside of awareness has several drawbacks that may be improved upon for future studies using CFS. In this paper, we address some of those shortcomings, particularly ones that affect the assessment of unawareness during CFS, and ones to do with the use of “visible” conditions that are often included as a comparison to a CFS condition. We also discuss potential biases in stimulus processing as a result of spatial attention and feature-selective suppression. We suggest practical guidelines that minimize the effects of those limitations in using CFS to study visual processing outside of awareness. PMID:25071685
Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias J.; Petro, Nathan M.; Bradley, Margaret M.; Keil, Andreas
2015-01-01
Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene’s physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. PMID:25640949
Kopp, Bruno; Wessel, Karl
2010-05-01
In the present study, event-related potentials (ERPs) were recorded to investigate cognitive processes related to the partial transmission of information from stimulus recognition to response preparation. Participants classified two-dimensional visual stimuli with dimensions size and form. One feature combination was designated as the go-target, whereas the other three feature combinations served as no-go distractors. Size discriminability was manipulated across three experimental conditions. N2c and P3a amplitudes were enhanced in response to those distractors that shared the feature from the faster dimension with the target. Moreover, N2c and P3a amplitudes showed a crossover effect: Size distractors evoked more pronounced ERPs under high size discriminability, but form distractors elicited enhanced ERPs under low size discriminability. These results suggest that partial perceptual-motor transmission of information is accompanied by acts of cognitive control and by shifts of attention between the sources of conflicting information. Selection negativity findings imply adaptive allocation of visual feature-based attention across the two stimulus dimensions.
Lim, Seung-Lark; O'Doherty, John P.; Rangel, Antonio
2013-01-01
We often have to make choices among multiattribute stimuli (e.g., a food that differs on its taste and health). Behavioral data suggest that choices are made by computing the value of the different attributes and then integrating them into an overall stimulus value signal. However, it is not known whether this theory describes the way the brain computes the stimulus value signals, or how the underlying computations might be implemented. We investigated these questions using a human fMRI task in which individuals had to evaluate T-shirts that varied in their visual esthetic (e.g., color) and semantic (e.g., meaning of logo printed in T-shirt) components. We found that activity in the fusiform gyrus, an area associated with the processing of visual features, correlated with the value of the visual esthetic attributes, but not with the value of the semantic attributes. In contrast, activity in posterior superior temporal gyrus, an area associated with the processing of semantic meaning, exhibited the opposite pattern. Furthermore, both areas exhibited functional connectivity with an area of ventromedial prefrontal cortex that reflects the computation of overall stimulus values at the time of decision. The results provide supporting evidence for the hypothesis that some attribute values are computed in cortical areas specialized in the processing of such features, and that those attribute-specific values are then passed to the vmPFC to be integrated into an overall stimulus value signal to guide the decision. PMID:23678116
Distortions of Subjective Time Perception Within and Across Senses
van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan
2008-01-01
Background: The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings: We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information, and visual events were never perceived as shorter than their actual durations. Conclusions/Significance: These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual, or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248
Dorsal hippocampus is necessary for visual categorization in rats.
Kim, Jangjin; Castro, Leyre; Wasserman, Edward A; Freeman, John H
2018-02-23
The hippocampus may play a role in categorization because of the need to differentiate stimulus categories (pattern separation) and to recognize category membership of stimuli from partial information (pattern completion). We hypothesized that the hippocampus would be more crucial for categorization of low-density (few relevant features) stimuli, due to the higher demand on pattern separation and pattern completion, than for categorization of high-density (many relevant features) stimuli. Using a touchscreen apparatus, rats were trained to categorize multiple abstract stimuli into two different categories. Each stimulus was a pentagonal configuration of five visual features; some of the visual features were relevant for defining the category whereas others were irrelevant. Two groups of rats were trained with either a high (dense, n = 8) or low (sparse, n = 8) number of category-relevant features. Upon reaching criterion discrimination (≥75% correct on 2 consecutive days), bilateral cannulas were implanted in the dorsal hippocampus. The rats were then given either vehicle or muscimol infusions into the hippocampus just prior to various testing sessions. They were tested with: the previously trained stimuli (trained), novel stimuli involving new irrelevant features (novel), stimuli involving relocated features (relocation), and a single relevant feature (singleton). In training, the dense group reached criterion faster than the sparse group, indicating that the sparse task was more difficult than the dense task. In testing, accuracy of both groups was equally high for trained and novel stimuli. However, both groups showed impaired accuracy in the relocation and singleton conditions, with a greater deficit in the sparse group. The testing data indicate that rats encode both the relevant features and the spatial locations of the features.
Hippocampal inactivation impaired visual categorization regardless of the density of the category-relevant features for the trained, novel, relocation, and singleton stimuli. Hippocampus-mediated pattern completion and pattern separation mechanisms may be necessary for visual categorization involving overlapping irrelevant features. © 2018 Wiley Periodicals, Inc.
Flexible cue combination in the guidance of attention in visual search
Brand, John; Oriet, Chris; Johnson, Aaron P.; Wolfe, Jeremy M.
2014-01-01
Hodsoll and Humphreys (2001) have assessed the relative contributions of stimulus-driven and user-driven knowledge on linearly and nonlinearly separable search. However, the target feature used to determine linear separability in their task (i.e., target size) was required to locate the target. In the present work, we investigated the contributions of stimulus-driven and user-driven knowledge when a linearly or nonlinearly separable feature is available but not required for target identification. We asked observers to complete a series of standard color × orientation conjunction searches in which target size was either linearly or nonlinearly separable from the size of the distractors. When guidance by color × orientation and by size information are both available, observers rely on whichever information results in the best search efficiency. This is the case irrespective of whether we provide target foreknowledge by blocking stimulus conditions, suggesting that feature information is used in both a stimulus-driven and user-driven fashion. PMID:25463553
Lahnakoski, Juha M.; Salmi, Juha; Jääskeläinen, Iiro P.; Lampinen, Jouko; Glerean, Enrico; Tikka, Pia; Sams, Mikko
2012-01-01
Understanding how the brain processes stimuli in a rich natural environment is a fundamental goal of neuroscience. Here, we showed a feature film to 10 healthy volunteers during functional magnetic resonance imaging (fMRI) of hemodynamic brain activity. We then annotated auditory and visual features of the motion picture to inform analysis of the hemodynamic data. The annotations were fitted to both voxel-wise data and brain network time courses extracted by independent component analysis (ICA). Auditory annotations correlated with two independent components (ICs) disclosing two functional networks, one responding to a variety of auditory stimulation and another responding preferentially to speech, with parts of the network also responding to non-verbal communication. Visual feature annotations correlated with four ICs delineating visual areas according to their sensitivity to different visual stimulus features. In comparison, a separate voxel-wise general linear model based analysis disclosed brain areas preferentially responding to sound energy, speech, music, visual contrast edges, body motion and hand motion, which largely overlapped the results revealed by ICA. Differences between the results of IC- and voxel-based analyses demonstrate that thorough analysis of voxel time courses is important for understanding the activity of specific sub-areas of the functional networks, while ICA is a valuable tool for revealing novel information about functional connectivity that need not be explained by the predefined model. Our results encourage the use of naturalistic stimuli and tasks in cognitive neuroimaging to study how the brain processes stimuli in rich natural environments. PMID:22496909
An integrated reweighting theory of perceptual learning
Dosher, Barbara Anne; Jeter, Pamela; Liu, Jiajuan; Lu, Zhong-Lin
2013-01-01
Improvements in performance on visual tasks due to practice are often specific to a retinal position or stimulus feature. Many researchers suggest that specific perceptual learning alters selective retinotopic representations in early visual analysis. However, transfer is almost always practically advantageous, and it does occur. If perceptual learning alters location-specific representations, how does it transfer to new locations? An integrated reweighting theory explains transfer over retinal locations by incorporating higher level location-independent representations into a multilevel learning system. Location transfer is mediated through location-independent representations, whereas stimulus feature transfer is determined by stimulus similarity at both location-specific and location-independent levels. Transfer to new locations/positions differs fundamentally from transfer to new stimuli. After substantial initial training on an orientation discrimination task, switches to a new location or position are compared with switches to new orientations in the same position, or switches of both. Position switches led to the highest degree of transfer, whereas orientation switches led to the highest levels of specificity. A computational model of integrated reweighting is developed and tested that incorporates the details of the stimuli and the experiment. Transfer to an identical orientation task in a new position is mediated via more broadly tuned location-invariant representations, whereas changing orientation in the same position invokes interference or independent learning of the new orientations at both levels, reflecting stimulus dissimilarity. Consistent with single-cell recording studies, perceptual learning alters the weighting of both early and midlevel representations of the visual system. PMID:23898204
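The multilevel reweighting account lends itself to a small simulation. The sketch below is not the authors' computational model; it assumes, purely for illustration, a delta-rule learner that reads out one location-specific channel per retinal position plus a noisier, more broadly tuned location-invariant channel, so that training at one position transfers partially, via the invariant channel, to a new position (the channel noise levels, learning rate, and trial counts are all invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_trial(label, location, n_locations=2):
    # Location-specific channels: the orientation signal (label = +/-1)
    # appears only in the channel for the stimulated location.
    specific = rng.normal(0.0, 0.3, size=n_locations)
    specific[location] += label
    # Location-invariant channel: carries the signal at any location,
    # but with broader tuning (more noise).
    invariant = label + rng.normal(0.0, 0.6)
    return np.append(specific, invariant)

def train(location, n_trials=3000, lr=0.02, n_locations=2):
    """Delta-rule reweighting of the evidence channels at one trained location."""
    w = np.zeros(n_locations + 1)
    for _ in range(n_trials):
        label = rng.choice([-1.0, 1.0])
        x = make_trial(label, location, n_locations)
        w += lr * (label - np.tanh(w @ x)) * x  # delta rule on the readout
    return w

def accuracy(w, location, n_trials=4000):
    hits = 0
    for _ in range(n_trials):
        label = rng.choice([-1.0, 1.0])
        x = make_trial(label, location)
        hits += np.sign(w @ x) == label
    return hits / n_trials

w = train(location=0)
acc_trained = accuracy(w, location=0)   # trained position
acc_transfer = accuracy(w, location=1)  # new position: only the invariant channel helps
```

In this toy setup, accuracy at the new position sits above chance but below the trained position, qualitatively matching the pattern of partial location transfer the theory describes.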
ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke
2013-01-01
Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110
Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei
2015-02-01
Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a modulatory role in the audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved.
Neocortical Rebound Depolarization Enhances Visual Perception
Funayama, Kenta; Ban, Hiroshi; Chan, Allen W.; Matsuki, Norio; Murphy, Timothy H.; Ikegaya, Yuji
2015-01-01
Animals are constantly exposed to the time-varying visual world. Because visual perception is modulated by immediately prior visual experience, visual cortical neurons may register recent visual history into a specific form of offline activity and link it to later visual input. To examine how preceding visual inputs interact with upcoming information at the single neuron level, we designed a simple stimulation protocol in which a brief, orientated flashing stimulus was subsequently coupled to visual stimuli with identical or different features. Using in vivo whole-cell patch-clamp recording and functional two-photon calcium imaging from the primary visual cortex (V1) of awake mice, we discovered that a flash of sinusoidal grating per se induces an early, transient activation as well as a long-delayed reactivation in V1 neurons. This late response, which started hundreds of milliseconds after the flash and persisted for approximately 2 s, was also observed in human V1 electroencephalogram. When another drifting grating stimulus arrived during the late response, the V1 neurons exhibited a sublinear, but apparently increased response, especially to the same grating orientation. In behavioral tests of mice and humans, the flashing stimulation enhanced the detection power of the identically orientated visual stimulation only when the second stimulation was presented during the time window of the late response. Therefore, V1 late responses likely provide a neural basis for admixing temporally separated stimuli and extracting identical features in time-varying visual environments. PMID:26274866
Rules infants look by: Testing the assumption of transitivity in visual salience.
Kibbe, Melissa M; Kaldy, Zsuzsa; Blaser, Erik
2018-01-01
What drives infants' attention in complex visual scenes? Early models of infant attention suggested that the degree to which different visual features were detectable determines their attentional priority. Here, we tested this by asking whether two targets (defined by different features, but each equally salient when evaluated independently) would drive attention equally when pitted head-to-head. In Experiment 1, we presented 6-month-old infants with an array of Gabor patches in which a target region varied either in color or spatial frequency from the background. Using a forced-choice preferential-looking method, we measured how readily infants fixated the target as its featural difference from the background was parametrically increased. Then, in Experiment 2, we used these psychometric preference functions to choose values for color and spatial frequency targets that were equally salient (preferred), and pitted them against each other within the same display. We reasoned that, if salience is transitive, then the stimuli should be iso-salient and infants should therefore show no systematic preference for either stimulus. On the contrary, we found that infants consistently preferred the color-defined stimulus. This suggests that computing visual salience in more complex scenes needs to include factors above and beyond local salience values.
Gomez-Ramirez, Manuel; Trzcinski, Natalie K.; Mihalas, Stefan; Niebur, Ernst
2014-01-01
Studies in vision show that attention enhances the firing rates of cells when it is directed towards their preferred stimulus feature. However, it is unknown whether other sensory systems employ this mechanism to mediate feature selection within their modalities. Moreover, whether feature-based attention modulates the correlated activity of a population is unclear. Indeed, temporal correlation codes such as spike-synchrony and spike-count correlations (rsc) are believed to play a role in stimulus selection by increasing the signal and reducing the noise in a population, respectively. Here, we investigate (1) whether feature-based attention biases the correlated activity between neurons when attention is directed towards their common preferred feature, (2) the interplay between spike-synchrony and rsc during feature selection, and (3) whether feature attention effects are common across the visual and tactile systems. Single-unit recordings were made in secondary somatosensory cortex of three non-human primates while animals engaged in tactile feature (orientation and frequency) and visual discrimination tasks. We found that both firing rate and spike-synchrony between neurons with similar feature selectivity were enhanced when attention was directed towards their preferred feature. However, attention effects on spike-synchrony were twice as large as those on firing rate, and had a tighter relationship with behavioral performance. Further, we observed increased rsc when attention was directed towards the visual modality (i.e., away from touch). These data suggest that similar feature selection mechanisms are employed in vision and touch, and that temporal correlation codes such as spike-synchrony play a role in mediating feature selection. 
We posit that feature-based selection operates by implementing multiple mechanisms that reduce the overall noise levels in the neural population and synchronize activity across subpopulations that encode the relevant features of sensory stimuli. PMID:25423284
Visual Working Memory Is Independent of the Cortical Spacing Between Memoranda.
Harrison, William J; Bays, Paul M
2018-03-21
The sensory recruitment hypothesis states that visual short-term memory is maintained in the same visual cortical areas that initially encode a stimulus' features. Although it is well established that the distance between features in visual cortex determines their visibility, a limitation known as crowding, it is unknown whether short-term memory is similarly constrained by the cortical spacing of memory items. Here, we investigated whether the cortical spacing between sequentially presented memoranda affects the fidelity of memory in humans (of both sexes). In a first experiment, we varied cortical spacing by taking advantage of the log-scaling of visual cortex with eccentricity, presenting memoranda in peripheral vision sequentially along either the radial or tangential visual axis with respect to the fovea. In a second experiment, we presented memoranda sequentially either within or beyond the critical spacing of visual crowding, a distance within which visual features cannot be perceptually distinguished due to their nearby cortical representations. In both experiments and across multiple measures, we found strong evidence that the ability to maintain visual features in memory is unaffected by cortical spacing. These results indicate that the neural architecture underpinning working memory has properties inconsistent with the known behavior of sensory neurons in visual cortex. Instead, the dissociation between perceptual and memory representations supports a role of higher cortical areas such as posterior parietal or prefrontal regions or may involve an as yet unspecified mechanism in visual cortex in which stimulus features are bound to their temporal order. SIGNIFICANCE STATEMENT Although much is known about the resolution with which we can remember visual objects, the cortical representation of items held in short-term memory remains contentious. 
A popular hypothesis suggests that memory of visual features is maintained via the recruitment of the same neural architecture in sensory cortex that encodes stimuli. We investigated this claim by manipulating the spacing in visual cortex between sequentially presented memoranda such that some items shared cortical representations more than others while preventing perceptual interference between stimuli. We found clear evidence that short-term memory is independent of the intracortical spacing of memoranda, revealing a dissociation between perceptual and memory representations. Our data indicate that working memory relies on different neural mechanisms from sensory perception. Copyright © 2018 Harrison and Bays.
Güçlü, Umut; van Gerven, Marcel A J
2015-07-08
Converging evidence suggests that the primate ventral visual pathway encodes increasingly complex stimulus features in downstream areas. We quantitatively show that there indeed exists an explicit gradient for feature complexity in the ventral pathway of the human brain. This was achieved by mapping thousands of stimulus features of increasing complexity across the cortical sheet using a deep neural network. Our approach also revealed a fine-grained functional specialization of downstream areas of the ventral stream. Furthermore, it allowed decoding of representations from human brain activity at an unsurpassed degree of accuracy, confirming the quality of the developed approach. Stimulus features that successfully explained neural responses indicate that population receptive fields were explicitly tuned for object categorization. This provides strong support for the hypothesis that object categorization is a guiding principle in the functional organization of the primate ventral stream. Copyright © 2015 the authors.
Design of novel non-contact multimedia controller for disability by using visual stimulus.
Pan, Jeng-Shyang; Lo, Chi-Chun; Tsai, Shang-Ho; Lin, Bor-Shyh
2015-12-01
This study proposes the design of a novel non-contact multimedia controller. Multimedia controllers are widely used by patients and nursing assistants in hospitals, but conventional controllers require manual operation or other physical movements, which disabled patients may be unable to perform on their own, leaving them dependent on others. Unlike other multimedia controllers, the proposed system controls multimedia via visual stimuli, without manual operation: disabled patients operate the system simply by focusing on the control icons of a visual stimulus device, implemented here with a commercial tablet. Moreover, a wearable, wireless electroencephalogram (EEG) acquisition device was designed and implemented to monitor the user's EEG signals conveniently in daily life. Finally, the system was validated experimentally: it effectively measures and extracts the EEG feature related to the visual stimuli, and it achieves a good information transfer rate. The proposed non-contact multimedia controller thus provides a sound prototype for a novel multimedia control scheme. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
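The abstract does not state how the EEG feature evoked by gazing at a flickering control icon is extracted. A common technique in such visual-stimulus controllers is steady-state visual evoked potential (SSVEP) detection, sketched here as an assumed illustration rather than the paper's method (the sampling rate, flicker frequencies, and power-based scoring rule are all invented for the example):

```python
import numpy as np

def detect_ssvep(eeg, fs, candidate_freqs, harmonics=2):
    """Score each candidate flicker frequency by summed spectral power
    at the frequency and its harmonics; return the winning frequency."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = []
    for f in candidate_freqs:
        # Nearest FFT bin for each harmonic of this candidate frequency.
        scores.append(sum(spectrum[np.argmin(np.abs(freqs - h * f))]
                          for h in range(1, harmonics + 1)))
    return candidate_freqs[int(np.argmax(scores))]

# Synthetic check: a 10 Hz flicker response buried in noise.
fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(0, 4, 1.0 / fs)              # 4 s of signal
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 10 * t) + rng.normal(0.0, 1.0, t.size)
choice = detect_ssvep(eeg, fs, [8.0, 10.0, 12.0])  # -> 10.0
```

Each control icon would flicker at its own frequency, and the icon whose frequency wins this comparison is taken as the user's selection.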
Trade-off between curvature tuning and position invariance in visual area V4
Sharpee, Tatyana O.; Kouh, Minjoon; Reynolds, John H.
2013-01-01
Humans can rapidly recognize a multitude of objects despite differences in their appearance. The neural mechanisms that endow high-level sensory neurons with both selectivity to complex stimulus features and “tolerance” or invariance to identity-preserving transformations, such as spatial translation, remain poorly understood. Previous studies have demonstrated that both tolerance and selectivity to conjunctions of features are increased at successive stages of the ventral visual stream that mediates visual recognition. Within a given area, such as visual area V4 or the inferotemporal cortex, tolerance has been found to be inversely related to the sparseness of neural responses, which in turn was positively correlated with conjunction selectivity. However, the direct relationship between tolerance and conjunction selectivity has been difficult to establish, with different studies reporting either an inverse or no significant relationship. To resolve this, we measured V4 responses to natural scenes, and using recently developed statistical techniques, we estimated both the relevant stimulus features and the range of translation invariance for each neuron. Focusing the analysis on tuning to curvature, a tractable example of conjunction selectivity, we found that neurons that were tuned to more curved contours had smaller ranges of position invariance and produced sparser responses to natural stimuli. These trade-offs provide empirical support for recent theories of how the visual system estimates 3D shapes from shading and texture flows, as well as the tiling hypothesis of the visual space for different curvature values. PMID:23798444
Briand, K A; Klein, R M
1987-05-01
In the present study we investigated whether the visually allocated "beam" studied by Posner and others is the same visual attentional resource that performs the role of feature integration in Treisman's model. Subjects were cued to attend to a certain spatial location by a visual cue, and performance at expected and unexpected stimulus locations was compared. Subjects searched for a target letter (R) with distractor letters that either could give rise to illusory conjunctions (PQ) or could not (PB). Results from three separate experiments showed that orienting attention in response to central cues (endogenous orienting) showed similar effects for both conjunction and feature search. However, when attention was oriented with peripheral visual cues (exogenous orienting), conjunction search showed larger effects of attention than did feature search. It is suggested that the attentional systems that are oriented in response to central and peripheral cues may not be the same and that only the latter performs a role in feature integration. Possibilities for future research are discussed.
Infants' Selective Attention to Reliable Visual Cues in the Presence of Salient Distractors
ERIC Educational Resources Information Center
Tummeltshammer, Kristen Swan; Mareschal, Denis; Kirkham, Natasha Z.
2014-01-01
With many features competing for attention in their visual environment, infants must learn to deploy attention toward informative cues while ignoring distractions. Three eye tracking experiments were conducted to investigate whether 6- and 8-month-olds (total N = 102) would shift attention away from a distractor stimulus to learn a cue-reward…
Place avoidance learning and memory in a jumping spider.
Peckmezian, Tina; Taylor, Phillip W
2017-03-01
Using a conditioned passive place avoidance paradigm, we investigated the relative importance of three experimental parameters on learning and memory in a salticid, Servaea incana. Spiders encountered an aversive electric shock stimulus paired with one side of a two-sided arena. Our three parameters were the ecological relevance of the visual stimulus, the time interval between trials and the time interval before test. We paired electric shock with either a black or white visual stimulus, as prior studies in our laboratory have demonstrated that S. incana prefer dark 'safe' regions to light ones. We additionally evaluated the influence of two temporal features (time interval between trials and time interval before test) on learning and memory. Spiders exposed to the shock stimulus learned to associate shock with the visual background cue, but the extent to which they did so was dependent on which visual stimulus was present and the time interval between trials. Spiders trained with a long interval between trials (24 h) maintained performance throughout training, whereas spiders trained with a short interval (10 min) maintained performance only when the safe side was black. When the safe side was white, performance worsened steadily over time. There was no difference between spiders tested after a short (10 min) or long (24 h) interval before test. These results suggest that the ecological relevance of the stimuli used and the duration of the interval between trials can influence learning and memory in jumping spiders.
Ip, Ifan Betina; Bridge, Holly; Parker, Andrew J.
2014-01-01
An important advance in the study of visual attention has been the identification of a non-spatial component of attention that enhances the response to similar features or objects across the visual field. Here we test whether this non-spatial component can co-select individual features that are perceptually bound into a coherent object. We combined human psychophysics and functional magnetic resonance imaging (fMRI) to demonstrate the ability to co-select individual features from perceptually coherent objects. Our study used binocular disparity and visual motion to define disparity structure-from-motion (dSFM) stimuli. Although the spatial attention system induced strong modulations of the fMRI response in visual regions, the non-spatial system’s ability to co-select features of the dSFM stimulus was less pronounced and variable across subjects. Our results demonstrate that feature and global feature attention effects are variable across participants, suggesting that the feature attention system may be limited in its ability to automatically select features within the attended object. Careful comparison of the task design suggests that even minor differences in the perceptual task may be critical in revealing the presence of global feature attention. PMID:24936974
NASA Technical Reports Server (NTRS)
Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.
2000-01-01
Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
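The decision rule summarized above (each display element elicits a noisy representation; the element with the maximum value determines the response) is easy to simulate. Below is a minimal illustrative sketch for a single feature dimension, assuming unit-variance Gaussian noise on each element; it is not the authors' implementation:

```python
import numpy as np

def search_accuracy(d_prime, n_distractors, n_trials=20000, seed=0):
    """Monte Carlo estimate of accuracy for a max-rule SDT search model.

    Each display element elicits a noisy internal response (unit-variance
    Gaussian); the target's mean is shifted by d_prime.  The observer
    reports the element with the maximum response, so a trial is correct
    when the target's response exceeds every distractor's response.
    """
    rng = np.random.default_rng(seed)
    target = rng.normal(d_prime, 1.0, n_trials)
    distractors = rng.normal(0.0, 1.0, (n_trials, n_distractors))
    correct = target > distractors.max(axis=1)
    return correct.mean()
```

For conjunction or disjunction displays, each element's representations across feature dimensions would first be combined into a single decision variable (for example, summed) before the max rule is applied; the sketch shows only the single-dimension case.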
Visual attention mitigates information loss in small- and large-scale neural codes
Sprague, Thomas C; Saproo, Sameer; Serences, John T
2015-01-01
The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502
Feature-Selective Attentional Modulations in Human Frontoparietal Cortex.
Ester, Edward F; Sutterer, David W; Serences, John T; Awh, Edward
2016-08-03
Control over visual selection has long been framed in terms of a dichotomy between "source" and "site," where top-down feedback signals originating in frontoparietal cortical areas modulate or bias sensory processing in posterior visual areas. This distinction is motivated in part by observations that frontoparietal cortical areas encode task-level variables (e.g., what stimulus is currently relevant or what motor outputs are appropriate), while posterior sensory areas encode continuous or analog feature representations. Here, we present evidence that challenges this distinction. We used fMRI, a roving searchlight analysis, and an inverted encoding model to examine representations of an elementary feature property (orientation) across the entire human cortical sheet while participants attended either the orientation or luminance of a peripheral grating. Orientation-selective representations were present in a multitude of visual, parietal, and prefrontal cortical areas, including portions of the medial occipital cortex, the lateral parietal cortex, and the superior precentral sulcus (thought to contain the human homolog of the macaque frontal eye fields). Additionally, representations in many-but not all-of these regions were stronger when participants were instructed to attend orientation relative to luminance. Collectively, these findings challenge models that posit a strict segregation between sources and sites of attentional control on the basis of representational properties by demonstrating that simple feature values are encoded by cortical regions throughout the visual processing hierarchy, and that representations in many of these areas are modulated by attention. Influential models of visual attention posit a distinction between top-down control and bottom-up sensory processing networks. 
These models are motivated in part by demonstrations showing that frontoparietal cortical areas associated with top-down control represent abstract or categorical stimulus information, while visual areas encode parametric feature information. Here, we show that multivariate activity in human visual, parietal, and frontal cortical areas encodes representations of a simple feature property (orientation). Moreover, representations in several (though not all) of these areas were modulated by feature-based attention in a similar fashion. These results provide an important challenge to models that posit dissociable top-down control and sensory processing networks on the basis of representational properties. Copyright © 2016 the authors.
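An inverted encoding model of the general kind described here can be sketched in a few lines: model each voxel as a weighted sum of idealized orientation channels, estimate the weights by least squares on training trials, then invert the weights to reconstruct channel response profiles for held-out trials. The basis functions, channel count, and exponent below are illustrative choices, not the study's exact parameters:

```python
import numpy as np

def make_channels(orientations_deg, n_channels=9):
    """Idealized orientation channels: half-rectified raised cosines,
    evenly spaced over the 0-180 deg orientation cycle."""
    ori = np.asarray(orientations_deg, dtype=float)
    centers = np.arange(n_channels) * (180.0 / n_channels)
    delta = np.deg2rad(ori[:, None] - centers[None, :]) * 2.0
    return np.maximum(np.cos(delta), 0.0) ** 5       # trials x channels

def iem_fit_invert(train_bold, train_ori, test_bold, n_channels=9):
    """Inverted encoding model sketch: estimate a channels-to-voxels
    weight matrix by least squares on training data, then invert those
    weights to reconstruct channel responses for test trials."""
    c_train = make_channels(train_ori, n_channels)              # trials x ch
    weights, *_ = np.linalg.lstsq(c_train, train_bold, rcond=None)  # ch x vox
    c_test, *_ = np.linalg.lstsq(weights.T, test_bold.T, rcond=None)
    return c_test.T                                             # trials x ch
```

With noiseless simulated data the reconstruction is exact; with realistic measurement noise the recovered profile broadens but should still peak near the presented orientation, which is the quantity compared across attention conditions.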
Feature singletons attract spatial attention independently of feature priming
Yashar, Amit; White, Alex L.; Fang, Wanghaoming; Carrasco, Marisa
2017-01-01
People perform better in visual search when the target feature repeats across trials (intertrial feature priming [IFP]). Here, we investigated whether repetition of a feature singleton's color modulates stimulus-driven shifts of spatial attention by presenting a probe stimulus immediately after each singleton display. The task alternated every two trials between a probe discrimination task and a singleton search task. We measured both stimulus-driven spatial attention (via the distance between the probe and singleton) and IFP (via repetition of the singleton's color). Color repetition facilitated search performance (IFP effect) when the set size was small. When the probe appeared at the singleton's location, performance was better than at the opposite location (stimulus-driven attention effect). The magnitude of this attention effect increased with the singleton's set size (which increases its saliency) but did not depend on whether the singleton's color repeated across trials, even when the previous singleton had been attended as a search target. Thus, our findings show that repetition of a salient singleton's color affects performance when the singleton is task relevant and voluntarily attended (as in search trials). However, color repetition does not affect performance when the singleton becomes irrelevant to the current task, even though the singleton does capture attention (as in probe trials). Therefore, color repetition per se does not make a singleton more salient for stimulus-driven attention. Rather, we suggest that IFP requires voluntary selection of color singletons in each consecutive trial. PMID:28800369
Brashier, Nadia M.
2015-01-01
The human brain encodes experience in an integrative fashion by binding together the various features of an event (i.e., stimuli and responses) into memory “event files.” A subsequent reoccurrence of an event feature can then cue the retrieval of the memory file to “prime” cognition and action. Intriguingly, recent behavioral studies indicate that, in addition to linking concrete stimulus and response features, event coding may also incorporate more abstract, “internal” event features such as attentional control states. In the present study, we used fMRI in healthy human volunteers to determine the neural mechanisms supporting this type of holistic event binding. Specifically, we combined fMRI with a task protocol that dissociated the expression of event feature-binding effects pertaining to concrete stimulus and response features, stimulus categories, and attentional control demands. Using multivariate neural pattern classification, we show that the hippocampus and putamen integrate event attributes across all of these levels in conjunction with other regions representing concrete-feature-selective (primarily visual cortex), category-selective (posterior frontal cortex), and control demand-selective (insula, caudate, anterior cingulate, and parietal cortex) event information. Together, these results suggest that the hippocampus and putamen are involved in binding together holistic event memories that link physical stimulus and response characteristics with internal representations of stimulus categories and attentional control states. These bindings then presumably afford shortcuts to adaptive information processing and response selection in the face of recurring events. SIGNIFICANCE STATEMENT Memory binds together the different features of our experience, such as an observed stimulus and concurrent motor responses, into so-called event files. 
Recent behavioral studies suggest that the observer's internal attentional state might also become integrated into the event memory. Here, we used fMRI to determine the brain areas responsible for binding together event information pertaining to concrete stimulus and response features, stimulus categories, and internal attentional control states. We found that neural signals in the hippocampus and putamen contained information about all of these event attributes and could predict behavioral priming effects stemming from these features. Therefore, medial temporal lobe and dorsal striatum structures appear to be involved in binding internal control states to event memories. PMID:26538657
Feature-selective attention enhances color signals in early visual areas of the human brain.
Müller, M M; Andersen, S; Trujillo, N J; Valdés-Sosa, P; Malinowski, P; Hillyard, S A
2006-09-19
We used an electrophysiological measure of selective stimulus processing (the steady-state visual evoked potential, SSVEP) to investigate feature-specific attention to color cues. Subjects viewed a display consisting of spatially intermingled red and blue dots that continually shifted their positions at random. The red and blue dots flickered at different frequencies and thereby elicited distinguishable SSVEP signals in the visual cortex. Paying attention selectively to either the red or blue dot population produced an enhanced amplitude of its frequency-tagged SSVEP, which was localized by source modeling to early levels of the visual cortex. A control experiment showed that this selection was based on color rather than flicker frequency cues. This signal amplification of attended color items provides an empirical basis for the rapid identification of feature conjunctions during visual search, as proposed by "guided search" models.
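The frequency-tagging logic used here, in which each stimulus flickers at its own rate and therefore elicits an SSVEP at a distinct, known frequency, can be illustrated with a simple spectral readout. This is a single-channel sketch, not the authors' analysis pipeline; the window choice is an assumption:

```python
import numpy as np

def tagged_amplitudes(eeg, fs, tag_freqs):
    """Amplitude estimates for one EEG channel at each tagging frequency.

    Because each flickering stimulus drives an SSVEP at its own flicker
    rate, attention effects can be read out per stimulus from the
    spectral peak at the corresponding tagged frequency.
    """
    n = len(eeg)
    # Hann window to reduce leakage; 2/n scaling gives per-component amplitude
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return [spectrum[np.argmin(np.abs(freqs - f))] for f in tag_freqs]
```

In an attention experiment of this kind, the amplitude at the attended stimulus's tag frequency is compared against the amplitude at the unattended stimulus's tag frequency.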
Töllner, Thomas; Müller, Hermann J; Zehetleitner, Michael
2012-07-01
Visual search for feature singletons is slowed when a task-irrelevant, but more salient distracter singleton is concurrently presented. While there is a consensus that this distracter interference effect can be influenced by internal system settings, it remains controversial at what stage of processing this influence starts to affect visual coding. Advocates of the "stimulus-driven" view maintain that the initial sweep of visual processing is entirely driven by physical stimulus attributes and that top-down settings can bias visual processing only after selection of the most salient item. By contrast, opponents argue that top-down expectancies can alter the initial selection priority, so that focal attention is "not automatically" shifted to the location exhibiting the highest feature contrast. To precisely trace the allocation of focal attention, we analyzed the Posterior-Contralateral-Negativity (PCN) in a task in which the likelihood (expectancy) with which a distracter occurred was systematically varied. Our results show that both high (vs. low) distracter expectancy and experiencing a distracter on the previous trial speed up the timing of the target-elicited PCN. Importantly, there was no distracter-elicited PCN, indicating that participants did not shift attention to the distracter before selecting the target. This pattern unambiguously demonstrates that preattentive vision is top-down modifiable.
Hummingbirds control hovering flight by stabilizing visual motion.
Goller, Benjamin; Altshuler, Douglas L
2014-12-23
Relatively little is known about how sensory information is used for controlling flight in birds. A powerful method is to immerse an animal in a dynamic virtual reality environment to examine behavioral responses. Here, we investigated the role of vision during free-flight hovering in hummingbirds to determine how optic flow – image movement across the retina – is used to control body position. We filmed hummingbirds hovering in front of a projection screen with the prediction that projecting moving patterns would disrupt hovering stability but stationary patterns would allow the hummingbird to stabilize position. When hovering in the presence of moving gratings and spirals, hummingbirds lost positional stability and responded to the specific orientation of the moving visual stimulus. There was no loss of stability with stationary versions of the same stimulus patterns. When exposed to a single stimulus many times or to a weakened stimulus that combined a moving spiral with a stationary checkerboard, the response to looming motion declined. However, even minimal visual motion was sufficient to cause a loss of positional stability despite prominent stationary features. Collectively, these experiments demonstrate that hummingbirds control hovering position by stabilizing motions in their visual field. The high sensitivity and persistence of this disruptive response is surprising, given that the hummingbird brain is highly specialized for sensory processing and spatial mapping, providing other potential mechanisms for controlling position.
Visual awareness suppression by pre-stimulus brain stimulation; a neural effect.
Jacobs, Christianne; Goebel, Rainer; Sack, Alexander T
2012-01-02
Transcranial magnetic stimulation (TMS) has established the functional relevance of early visual cortex (EVC) for visual awareness with great temporal specificity non-invasively in conscious human volunteers. Many studies have found a suppressive effect when TMS was applied over EVC 80-100 ms after the onset of the visual stimulus (post-stimulus TMS time window). Yet, few studies found task performance to also suffer when TMS was applied even before visual stimulus presentation (pre-stimulus TMS time window). This pre-stimulus TMS effect, however, remains controversially debated and its origin had mainly been ascribed to TMS-induced eye-blinking artifacts. Here, we applied chronometric TMS over EVC during the execution of a visual discrimination task, covering an exhaustive range of visual stimulus-locked TMS time windows ranging from -80 ms pre-stimulus to 300 ms post-stimulus onset. Electrooculographic (EOG) recordings, sham TMS stimulation, and vertex TMS stimulation controlled for different types of non-neural TMS effects. Our findings clearly reveal TMS-induced masking effects for both pre- and post-stimulus time windows, and for both objective visual discrimination performance and subjective visibility. Importantly, all effects proved to be still present after post hoc removal of eye blink trials, suggesting a neural origin for the pre-stimulus TMS suppression effect on visual awareness. We speculate based on our data that TMS exerts its pre-stimulus effect via generation of a neural state which interacts with subsequent visual input. Copyright © 2011 Elsevier Inc. All rights reserved.
Andersen, S K; Müller, M M
2010-08-03
A central question in the field of attention is whether visual processing is a strictly limited resource, which must be allocated by selective attention. If this were the case, attentional enhancement of one stimulus should invariably lead to suppression of unattended distracter stimuli. Here we examine voluntary cued shifts of feature-selective attention to either one of two superimposed red or blue random dot kinematograms (RDKs) to test whether such a reciprocal relationship between enhancement of an attended and suppression of an unattended stimulus can be observed. The steady-state visual evoked potential (SSVEP), an oscillatory brain response elicited by the flickering RDKs, was measured in human EEG. Supporting limited resources, we observed both an enhancement of the attended and a suppression of the unattended RDK, but this observed reciprocity did not occur concurrently: enhancement of the attended RDK started at 220 ms after cue onset and preceded suppression of the unattended RDK by about 130 ms. Furthermore, we found that behavior was significantly correlated with the SSVEP time course of a measure of selectivity (attended minus unattended) but not with a measure of total activity (attended plus unattended). The significant deviations from a temporally synchronized reciprocity between enhancement and suppression suggest that the enhancement of the attended stimulus may cause the suppression of the unattended stimulus in the present experiment.
Toward a hybrid brain-computer interface based on repetitive visual stimuli with missing events.
Wu, Yingying; Li, Man; Wang, Jing
2016-07-26
Steady-state visually evoked potentials (SSVEPs) can be elicited by repetitive stimuli and extracted in the frequency domain with satisfactory performance. However, the temporal information of such stimuli is often ignored. In this study, we utilized repetitive visual stimuli with missing events to present a novel hybrid BCI paradigm based on SSVEP and omitted stimulus potential (OSP). Four discs flickering from black to white with missing flickers served as visual stimulators to simultaneously elicit subjects' SSVEPs and OSPs. Key parameters in the new paradigm, including flicker frequency, optimal electrodes, missing flicker duration and intervals of missing events, were qualitatively discussed with offline data. Two omitted flicker patterns, missing black disc and missing white disc, were proposed and compared. Averaging times were optimized with Information Transfer Rate (ITR) in online experiments, where SSVEPs and OSPs were identified using Canonical Correlation Analysis in the frequency domain and Support Vector Machine (SVM)-Bayes fusion in the time domain, respectively. The online accuracy and ITR (mean ± standard deviation) over nine healthy subjects were 79.29 ± 18.14 % and 19.45 ± 11.99 bits/min with the missing black disc pattern, and 86.82 ± 12.91 % and 24.06 ± 10.95 bits/min with the missing white disc pattern, respectively. The proposed BCI paradigm demonstrated, for the first time, that SSVEPs and OSPs can be simultaneously elicited by a single visual stimulus pattern and recognized in real time with satisfactory performance. Besides frequency features such as the SSVEP elicited by repetitive stimuli, we found a new feature (OSP) in the time domain to design a novel hybrid BCI paradigm by adding missing events to repetitive stimuli.
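The frequency-domain identification step, Canonical Correlation Analysis between the EEG segment and sine/cosine reference signals at each candidate frequency, can be sketched as below. The sampling rate, candidate frequencies, harmonic count, and window length are placeholder values, not the study's settings:

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return min(1.0, s[0])

def detect_ssvep(eeg, fs, candidate_freqs, n_harmonics=2):
    """SSVEP classification sketch: build sine/cosine references (plus
    harmonics) for each candidate frequency and pick the frequency with
    the highest canonical correlation to the multichannel EEG segment
    (samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in candidate_freqs:
        ref = np.column_stack(
            [fn(2 * np.pi * f * (h + 1) * t)
             for h in range(n_harmonics) for fn in (np.sin, np.cos)])
        scores.append(max_canonical_corr(eeg, ref))
    return candidate_freqs[int(np.argmax(scores))]
```

The OSP branch of the hybrid paradigm (SVM-Bayes fusion on time-domain omission responses) is a separate supervised classifier and is not reproduced here.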
Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C
2009-01-01
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
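For two Gaussian cues, the optimal combination invoked here reduces to the textbook precision-weighted average. The sketch below illustrates only that intuition; the paper's actual model operates on high-dimensional word representations and is far richer:

```python
def fuse_gaussian_cues(mu_a, var_a, mu_v, var_v):
    """Precision-weighted fusion of an auditory and a visual cue, each
    modeled as a Gaussian likelihood over the same stimulus variable.

    The fused estimate weights each cue by its reliability (inverse
    variance), and the fused variance is always at most the smaller of
    the two input variances, i.e. combining cues never hurts.
    """
    w_a, w_v = 1.0 / var_a, 1.0 / var_v
    mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    var = 1.0 / (w_a + w_v)
    return mu, var
```

As the auditory cue degrades (its variance grows), the fused estimate is pulled toward the visual cue, which is the mechanism behind visual capture of degraded speech.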
Electrophysiological evidence for biased competition in V1 for fear expressions.
West, Greg L; Anderson, Adam A K; Ferber, Susanne; Pratt, Jay
2011-11-01
When multiple stimuli are concurrently displayed in the visual field, they must compete for neural representation at the processing expense of their contemporaries. This biased competition is thought to begin as early as primary visual cortex, and can be driven by salient low-level stimulus features. Stimuli important for an organism's survival, such as facial expressions signaling environmental threat, might be similarly prioritized at this early stage of visual processing. In the present study, we used ERP recordings from striate cortex to examine whether fear expressions can bias the competition for neural representation at the earliest stage of retinotopic visuo-cortical processing when in direct competition with concurrently presented visual information of neutral valence. We found that within 50 msec after stimulus onset, information processing in primary visual cortex is biased in favor of perceptual representations of fear at the expense of competing visual information (Experiment 1). Additional experiments confirmed that the facial display's emotional content rather than low-level features is responsible for this prioritization in V1 (Experiment 2), and that this competition is reliant on a face's upright canonical orientation (Experiment 3). These results suggest that complex stimuli important for an organism's survival can indeed be prioritized at the earliest stage of cortical processing at the expense of competing information, with competition possibly beginning before encoding in V1.
Willmore, Ben D.B.; Bulstrode, Harry; Tolhurst, David J.
2012-01-01
Neuronal populations in the primary visual cortex (V1) of mammals exhibit contrast normalization. Neurons that respond strongly to simple visual stimuli – such as sinusoidal gratings – respond less well to the same stimuli when they are presented as part of a more complex stimulus which also excites other, neighboring neurons. This phenomenon is generally attributed to generalized patterns of inhibitory connections between nearby V1 neurons. The Bienenstock, Cooper and Munro (BCM) rule is a neural network learning rule that, when trained on natural images, produces model neurons which, individually, have many tuning properties in common with real V1 neurons. However, when viewed as a population, a BCM network is very different from V1 – each member of the BCM population tends to respond to the same dominant features of visual input, producing an incomplete, highly redundant code for visual information. Here, we demonstrate that, by adding contrast normalization into the BCM rule, we arrive at a neurally-plausible Hebbian learning rule that can learn an efficient sparse, overcomplete representation that is a better model for stimulus selectivity in V1. This suggests that one role of contrast normalization in V1 is to guide the neonatal development of receptive fields, so that neurons respond to different features of visual input. PMID:22230381
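For reference, the unmodified single-unit BCM rule can be sketched as follows. The paper's contribution, adding divisive contrast normalization across a population of such units so that they learn complementary features, is not reproduced here; the learning rate, threshold time constant, and initialization are illustrative choices for a stable demo:

```python
import numpy as np

def bcm_train(X, eta=0.01, tau=100.0):
    """Single-unit BCM learning with a sliding modification threshold.

    Response y = w . x; weights grow when y exceeds the threshold theta
    and shrink otherwise (Delta w = eta * y * (y - theta) * x), while
    theta tracks a running average of y**2.  The sliding threshold is
    what makes the rule selective rather than simply Hebbian.
    """
    w = np.full(X.shape[1], 0.1)  # small positive init for a stable demo
    theta = 1.0
    for x in X:
        y = float(w @ x)
        w += eta * y * (y - theta) * x
        theta += (y ** 2 - theta) / tau
    return w, theta
```

In a population version, each unit's response would additionally be divided by the pooled activity of its neighbors before the weight update, which is the contrast-normalization modification the abstract describes.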
Hamker, Fred H
2008-07-15
Feature inheritance provides evidence that properties of an invisible target stimulus can be attached to a following mask. We apply a systems-level model of attention and decision making to explore the influence of memory and feedback connections in feature inheritance. We find that the presence of feedback loops alone is sufficient to account for feature inheritance. Although our simulations do not cover all experimental variations and focus only on the general principle, our result is of particular interest because the model was designed for a completely different purpose than explaining feature inheritance. We suggest that feedback is an important property in visual perception and provide a description of its mechanism and its role in perception.
Enhanced attentional gain as a mechanism for generalized perceptual learning in human visual cortex.
Byers, Anna; Serences, John T
2014-09-01
Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars. Copyright © 2014 the American Physiological Society.
Seeing Objects as Faces Enhances Object Detection.
Takahashi, Kohske; Watanabe, Katsumi
2015-10-01
The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus itself unchanged. Participants were asked to detect a face target or a triangle target. While the target itself was identical across the two tasks, detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness.
Sarabi, Mitra Taghizadeh; Aoki, Ryuta; Tsumura, Kaho; Keerativittayayut, Ruedeerat; Jimura, Koji; Nakahara, Kiyoshi
2018-01-01
The neural mechanisms underlying visual perceptual learning (VPL) have typically been studied by examining changes in task-related brain activation after training. However, the relationship between post-task "offline" processes and VPL remains unclear. The present study examined this question by obtaining resting-state functional magnetic resonance imaging (fMRI) scans of human brains before and after a task-fMRI session involving visual perceptual training. During the task-fMRI session, participants performed a motion coherence discrimination task in which they judged the direction of moving dots with a coherence level that varied between trials (20, 40, and 80%). We found that stimulus-induced activation increased with motion coherence in the middle temporal cortex (MT+), a feature-specific region representing visual motion. On the other hand, stimulus-induced activation decreased with motion coherence in the dorsal anterior cingulate cortex (dACC) and bilateral insula, regions involved in decision making under perceptual ambiguity. Moreover, by comparing pre-task and post-task rest periods, we revealed that resting-state functional connectivity (rs-FC) with the MT+ was significantly increased after training in widespread cortical regions including the bilateral sensorimotor and temporal cortices. In contrast, rs-FC with the MT+ was significantly decreased in subcortical regions including the thalamus and putamen. Importantly, the training-induced change in rs-FC was observed only with the MT+, but not with the dACC or insula. Thus, our findings suggest that perceptual training induces plastic changes in offline functional connectivity specifically in brain regions representing the trained visual feature, emphasising the distinct roles of feature-representation regions and decision-related regions in VPL.
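The coherence manipulation in the motion discrimination task above can be illustrated with a minimal random-dot sketch (all parameter values here are hypothetical, not taken from the study): a fixed fraction of dots moves in the signal direction while the remaining dots move in random directions.

```python
import numpy as np

def dot_directions(n_dots, coherence, signal_dir, rng):
    """Assign a motion direction (radians) to each dot: a `coherence`
    fraction moves in `signal_dir`, the rest move in random directions."""
    n_signal = int(round(coherence * n_dots))
    dirs = rng.uniform(0, 2 * np.pi, size=n_dots)  # noise dots
    dirs[:n_signal] = signal_dir                   # coherent dots
    return dirs

rng = np.random.default_rng(0)
for coh in (0.2, 0.4, 0.8):  # coherence levels used in the study
    d = dot_directions(100, coh, 0.0, rng)
    # fraction of dots moving in the signal direction tracks the coherence level
    print(coh, np.mean(d == 0.0))
```

In an actual display each dot's position would then be updated per frame along its assigned direction; only the direction assignment is shown here.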
Visual attention mitigates information loss in small- and large-scale neural codes.
Sprague, Thomas C; Saproo, Sameer; Serences, John T
2015-04-01
The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals be processed in a manner that protects information about relevant stimuli from degradation. Such selective processing, or selective attention, is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations in large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding of how neuron-level modulations collectively impact stimulus encoding. Copyright © 2015 Elsevier Ltd. All rights reserved.
Threat as a feature in visual semantic object memory.
Calley, Clifford S; Motes, Michael A; Chiang, H-Sheng; Buhl, Virginia; Spence, Jeffrey S; Abdi, Hervé; Anand, Raksha; Maguire, Mandy; Estevez, Leonardo; Briggs, Richard; Freeman, Thomas; Kraut, Michael A; Hart, John
2013-08-01
Threatening stimuli have been found to modulate visual processes related to perception and attention. The present functional magnetic resonance imaging (fMRI) study investigated whether threat modulates visual object recognition of man-made and naturally occurring categories of stimuli. Compared with nonthreatening pictures, threatening pictures of real items elicited larger fMRI BOLD signal changes in medial visual cortices extending inferiorly into the temporo-occipital (TO) "what" pathways. Signal changes in this region were greater for threatening than for nonthreatening items from both the naturally occurring and man-made superordinate stimulus categories, demonstrating a featural component to these visual processing areas. Two additional loci of signal changes within more lateral inferior TO areas (bilateral BA18 and 19 as well as the right ventral temporal lobe) were detected for a category-feature interaction, with stronger responses to man-made (category) threatening (feature) stimuli than to natural threats. The findings are discussed in terms of efficient or rapid visual recognition of groups of items that confer a survival advantage. Copyright © 2012 Wiley Periodicals, Inc.
Preparatory attention in visual cortex.
Battistoni, Elisa; Stein, Timo; Peelen, Marius V
2017-05-01
Top-down attention is the mechanism that allows us to selectively process goal-relevant aspects of a scene while ignoring irrelevant aspects. A large body of research has characterized the effects of attention on neural activity evoked by a visual stimulus. However, attention also includes a preparatory phase before stimulus onset in which the attended dimension is internally represented. Here, we review neurophysiological, functional magnetic resonance imaging, magnetoencephalography, electroencephalography, and transcranial magnetic stimulation (TMS) studies investigating the neural basis of preparatory attention, both when attention is directed to a location in space and when it is directed to nonspatial stimulus attributes (content-based attention) ranging from low-level features to object categories. Results show that both spatial and content-based attention lead to increased baseline activity in neural populations that selectively code for the attended attribute. TMS studies provide evidence that this preparatory activity is causally related to subsequent attentional selection and behavioral performance. Attention thus acts by preactivating selective neurons in the visual cortex before stimulus onset. This appears to be a general mechanism that can operate on multiple levels of representation. We discuss the functional relevance of this mechanism, its limitations, and its relation to working memory, imagery, and expectation. We conclude by outlining open questions and future directions. © 2017 New York Academy of Sciences.
Puffe, Lydia; Dittrich, Kerstin; Klauer, Karl Christoph
2017-01-01
In a joint go/no-go Simon task, each of two participants responds to one of two non-spatial stimulus features by means of a spatially lateralized response. Stimulus position varies horizontally, and responses are faster and more accurate when response side and stimulus position match (compatible trial) than when they mismatch (incompatible trial), defining the social Simon effect or joint spatial compatibility effect. This effect was originally explained in terms of action/task co-representation, assuming that the co-actor's action is automatically co-represented. Recent research by Dolk, Hommel, Prinz, and Liepelt (2013) challenged this account by demonstrating joint spatial compatibility effects in a task setting in which non-social objects like a Japanese waving cat were present, but no real co-actor. They postulated that every sufficiently salient object induces joint spatial compatibility effects. However, what makes an object sufficiently salient has so far not been well defined. To scrutinize this open question, the current study manipulated auditory and/or visual attention-attracting cues of a Japanese waving cat within an auditory (Experiment 1) and a visual joint go/no-go Simon task (Experiment 2). Results revealed that joint spatial compatibility effects occurred only in the auditory Simon task, and only when the cat provided auditory cues, while no joint spatial compatibility effects were found in the visual Simon task. This demonstrates that a sufficiently salient object alone does not produce joint spatial compatibility effects; rather, they arise from a complex interaction between features of the object and the stimulus material of the joint go/no-go Simon task.
Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.
Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko
2017-08-15
During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyri (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. Concurrent visual stimulation modulated activity in bilateral MTG (speech), the lateral aspect of the right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and that other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.
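The leave-one-participant-out scheme described above can be sketched with an L1-penalized logistic regression as a simple frequentist stand-in for the Bayesian sparsity-promoting priors (synthetic data; all names and parameters are illustrative only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_participants, trials, voxels = 16, 20, 50
X = rng.normal(size=(n_participants * trials, voxels))      # voxel patterns
y = rng.integers(0, 2, size=n_participants * trials)        # audiovisual vs auditory
groups = np.repeat(np.arange(n_participants), trials)       # participant labels

# L1 penalty as a simple stand-in for sparsity-promoting priors
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(len(scores))  # one accuracy per held-out participant
```

With random data the accuracies hover near chance; the point is the cross-validation structure, in which each participant serves once as the held-out test set.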
Observers' cognitive states modulate how visual inputs relate to gaze control.
Kardan, Omid; Henderson, John M; Yourganov, Grigori; Berman, Marc G
2016-09-01
Previous research has shown that eye movements change depending on both the visual features of our environment and the viewer's top-down knowledge. An important open question is the degree to which the visual goals of the viewer modulate how the visual features of scenes guide eye movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks on 135 scenes: search, memorization, and aesthetic judgment, while their eye movements were tracked. Canonical correlation analyses showed that eye movements were reliably more related to low-level visual features at fixations during the visual search task than during the aesthetic judgment and scene memorization tasks. Different visual features also had different relevance to eye movements between tasks. This task-driven modulation of the relationship between visual features and eye movements was also demonstrated with classification analyses, in which classifiers were trained to predict the viewing task from eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independent of the temporal and spatial properties of eye movements. When classifying across participants, edge density and saliency at fixations were as important as eye movements in the successful prediction of task, with entropy and hue also being significant, but with smaller effect sizes. When classifying within participants, brightness and saturation were also significant contributors. Canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye movements. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Brooks, Joseph L.; Gilaie-Dotan, Sharon; Rees, Geraint; Bentin, Shlomo; Driver, Jon
2012-01-01
Visual perception depends not only on local stimulus features but also on their relationship to the surrounding stimulus context, as evident in both local and contextual influences on figure-ground segmentation. Intermediate visual areas may play a role in such contextual influences, as we tested here by examining LG, a rare case of developmental visual agnosia. LG has no evident abnormality of brain structure and functional neuroimaging showed relatively normal V1 function, but his intermediate visual areas (V2/V3) function abnormally. We found that contextual influences on figure-ground organization were selectively disrupted in LG, while local sources of figure-ground influences were preserved. Effects of object knowledge and familiarity on figure-ground organization were also significantly diminished. Our results suggest that the mechanisms mediating contextual and familiarity influences on figure-ground organization are dissociable from those mediating local influences on figure-ground assignment. The disruption of contextual processing in intermediate visual areas may play a role in the substantial object recognition difficulties experienced by LG. PMID:22947116
Visual Presentation Effects on Identification of Multiple Environmental Sounds
Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio
2016-01-01
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. 
Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478
Ince, Robin A. A.; Jaworska, Katarzyna; Gross, Joachim; Panzeri, Stefano; van Rijsbergen, Nicola J.; Rousselet, Guillaume A.; Schyns, Philippe G.
2016-01-01
A key to understanding visual cognition is to determine “where”, “when”, and “how” brain responses reflect the processing of the specific visual features that modulate categorization behavior—the “what”. The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features. PMID:27550865
Age-related slowing of response selection and production in a visual choice reaction time task
Woods, David L.; Wyma, John M.; Yund, E. William; Herron, Timothy J.; Reed, Bruce
2015-01-01
Aging is associated with delayed processing in choice reaction time (CRT) tasks, but the processing stages most impacted by aging have not been clearly identified. Here, we analyzed CRT latencies in a computerized serial visual feature-conjunction task. Participants responded to a target letter (probability 40%) by pressing one mouse button, and responded to distractor letters differing either in color, shape, or both features from the target (probabilities 20% each) by pressing the other mouse button. Stimuli were presented randomly to the left and right visual fields and stimulus onset asynchronies (SOAs) were adaptively reduced following correct responses using a staircase procedure. In Experiment 1, we tested 1466 participants who ranged in age from 18 to 65 years. CRT latencies increased significantly with age (r = 0.47, 2.80 ms/year). Central processing time (CPT), isolated by subtracting simple reaction times (SRT) (obtained in a companion experiment performed on the same day) from CRT latencies, accounted for more than 80% of age-related CRT slowing, with most of the remaining increase in latency due to slowed motor responses. Participants were faster and more accurate when the stimulus location was spatially compatible with the mouse button used for responding, and this effect increased slightly with age. Participants took longer to respond to distractors with target color or shape than to distractors with no target features. However, the additional time needed to discriminate the more target-like distractors did not increase with age. In Experiment 2, we replicated the findings of Experiment 1 in a second population of 178 participants (ages 18–82 years). CRT latencies did not differ significantly in the two experiments, and similar effects of age, distractor similarity, and stimulus-response spatial compatibility were found. 
The results suggest that the age-related slowing in visual CRT latencies is largely due to delays in response selection and production. PMID:25954175
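The adaptive SOA staircase described in Experiment 1 can be sketched as follows (the step sizes, bounds, and up/down weighting are hypothetical; the abstract does not specify them):

```python
def update_soa(soa, correct, step=30, floor=30, ceiling=3000):
    """Adaptive staircase (hypothetical parameters): shorten the SOA after
    a correct response, lengthen it by a larger step after an error."""
    soa += -step if correct else 3 * step   # weighted up/down rule
    return max(floor, min(ceiling, soa))    # clamp to plausible bounds (ms)

soa = 600
for correct in [True, True, True, False, True]:
    soa = update_soa(soa, correct)
print(soa)  # -> 570
```

The asymmetric steps make the staircase converge on an SOA at which the participant maintains a high but imperfect accuracy, keeping the task difficulty roughly constant across participants.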
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound paired with a visual stimulus might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration at late latencies (300-340 ms), with fronto-central topography, was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain integrates a visual signal with auditory stimuli of different frequencies.
Internal state of monkey primary visual cortex (V1) predicts figure-ground perception.
Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A F
2003-04-15
When stimulus information enters the visual cortex, it is rapidly processed for identification. However, sometimes the processing of the stimulus is inadequate and the subject fails to notice the stimulus. Human psychophysical studies show that this occurs during states of inattention or absent-mindedness. At a neurophysiological level, it remains unclear what these states are. To study the role of cortical state in perception, we analyzed neural activity in the monkey primary visual cortex before the appearance of a stimulus. We show that, before the appearance of a reported stimulus, neural activity was stronger and more correlated than for a not-reported stimulus. This indicates that the strength of neural activity and the functional connectivity between neurons in the primary visual cortex participate in the perceptual processing of stimulus information. Thus, to detect a stimulus, the visual cortex needs to be in an appropriate state.
Inter-area correlations in the ventral visual pathway reflect feature integration
Freeman, Jeremy; Donner, Tobias H.; Heeger, David J.
2011-01-01
During object perception, the brain integrates simple features into representations of complex objects. A perceptual phenomenon known as visual crowding selectively interferes with this process. Here, we use crowding to characterize a neural correlate of feature integration. Cortical activity was measured with functional magnetic resonance imaging, simultaneously in multiple areas of the ventral visual pathway (V1–V4 and the visual word form area, VWFA, which responds preferentially to familiar letters), while human subjects viewed crowded and uncrowded letters. Temporal correlations between cortical areas were lower for crowded letters than for uncrowded letters, especially between V1 and VWFA. These differences in correlation were retinotopically specific, and persisted when attention was diverted from the letters. But correlation differences were not evident when we substituted the letters with grating patches that were not crowded under our stimulus conditions. We conclude that inter-area correlations reflect feature integration and are disrupted by crowding. We propose that crowding may perturb the transformations between neural representations along the ventral pathway that underlie the integration of features into objects. PMID:21521832
Emotional Picture and Word Processing: An fMRI Study on Effects of Stimulus Complexity
Schlochtermeier, Lorna H.; Kuchinke, Lars; Pehrs, Corinna; Urton, Karolina; Kappelhoff, Hermann; Jacobs, Arthur M.
2013-01-01
Neuroscientific investigations regarding aspects of emotional experiences usually focus on one stimulus modality (e.g., pictorial or verbal). Similarities and differences in the processing between the different modalities have rarely been studied directly. The comparison of verbal and pictorial emotional stimuli often reveals a processing advantage of emotional pictures in terms of larger or more pronounced emotion effects evoked by pictorial stimuli. In this study, we examined whether this picture advantage refers to general processing differences or whether it might partly be attributed to differences in visual complexity between pictures and words. We first developed a new stimulus database comprising valence and arousal ratings for more than 200 concrete objects representable in different modalities including different levels of complexity: words, phrases, pictograms, and photographs. Using fMRI we then studied the neural correlates of the processing of these emotional stimuli in a valence judgment task, in which the stimulus material was controlled for differences in emotional arousal. No superiority for the pictorial stimuli was found in terms of emotional information processing with differences between modalities being revealed mainly in perceptual processing regions. While visual complexity might partly account for previously found differences in emotional stimulus processing, the main existing processing differences are probably due to enhanced processing in modality specific perceptual regions. We would suggest that both pictures and words elicit emotional responses with no general superiority for either stimulus modality, while emotional responses to pictures are modulated by perceptual stimulus features, such as picture complexity. PMID:23409009
Neural Signatures of Stimulus Features in Visual Working Memory—A Spatiotemporal Approach
Jackson, Margaret C.; Klein, Christoph; Mohr, Harald; Shapiro, Kimron L.; Linden, David E. J.
2010-01-01
We examined the neural signatures of stimulus features in visual working memory (WM) by integrating functional magnetic resonance imaging (fMRI) and event-related potential data recorded during mental manipulation of colors, rotation angles, and color–angle conjunctions. The N200, negative slow wave, and P3b were modulated by the information content of WM, and an fMRI-constrained source model revealed a progression in neural activity from posterior visual areas to higher order areas in the ventral and dorsal processing streams. Color processing was associated with activity in inferior frontal gyrus during encoding and retrieval, whereas angle processing involved right parietal regions during the delay interval. WM for color–angle conjunctions did not involve any additional neural processes. The finding that different patterns of brain activity underlie WM for color and spatial information is consistent with ideas that the ventral/dorsal “what/where” segregation of perceptual processing influences WM organization. The absence of characteristic signatures of conjunction-related brain activity, which was generally intermediate between the 2 single conditions, suggests that conjunction judgments are based on the coordinated activity of these 2 streams. PMID:19429863
Yeh, Su-Ling; Liao, Hsin-I
2010-10-01
The contingent orienting hypothesis (Folk, Remington, & Johnston, 1992) states that attentional capture is contingent on top-down control settings induced by task demands. Past studies supporting this hypothesis have identified three kinds of top-down control settings: for target-specific features, for the strategy to search for a singleton, and for visual features in the target display as a whole. Previously, we have found stimulus-driven capture by onset that was not contingent on the first two kinds of settings (Yeh & Liao, 2008). The current study aims to test the third kind: the displaywide contingent orienting hypothesis (Gibson & Kelsey, 1998). Specifically, we ask whether an onset stimulus can still capture attention in the spatial cueing paradigm when attentional control settings for the displaywide onset of the target are excluded by making all letters in the target display emerge from placeholders. Results show that a preceding uninformative onset cue still captured attention to its location in a stimulus-driven fashion, whereas a color cue captured attention only when it was contingent on the setting for displaywide color. These results raise doubts as to the generality of the displaywide contingent orienting hypothesis and help delineate the boundary conditions on this hypothesis. Copyright © 2010 Elsevier B.V. All rights reserved.
Saccadic eye movements do not disrupt the deployment of feature-based attention.
Kalogeropoulou, Zampeta; Rolfs, Martin
2017-07-01
The tight link of saccades to covert spatial attention has been firmly established, yet their relation to other forms of visual selection remains poorly understood. Here we studied the temporal dynamics of feature-based attention (FBA) during fixation and across saccades. Participants reported the orientation (on a continuous scale) of one of two sets of spatially interspersed Gabors (black or white). We tested performance at different intervals between the onset of a colored cue (black or white, indicating which stimulus was the most probable target; red: neutral condition) and the stimulus. FBA built up after cue onset: Benefits (errors for valid vs. neutral cues), costs (invalid vs. neutral), and the overall cueing effect (valid vs. invalid) increased with the cue-stimulus interval. Critically, we also tested visual performance at different intervals after a saccade, when FBA had been fully deployed before saccade initiation. Cueing effects were evident immediately after the saccade and were predicted most accurately and most precisely by fully deployed FBA, indicating that FBA was continuous throughout saccades. Finally, a decomposition of orientation reports into target reports and random guesses confirmed continuity of report precision and guess rates across the saccade. We discuss the role of FBA in perceptual continuity across saccades.
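The decomposition of orientation reports into target reports and random guesses can be sketched with a standard mixture-model fit: a von Mises component for target-based reports plus a uniform component for guesses (synthetic data; this is a generic illustration of the approach, not the authors' exact model):

```python
import numpy as np
from scipy import optimize, stats

# Simulate report errors (radians): a mixture of precise target reports
# (von Mises around zero error) and uniform random guesses
rng = np.random.default_rng(0)
kappa_true, guess_true, n = 8.0, 0.3, 2000
is_guess = rng.random(n) < guess_true
err = np.where(is_guess,
               rng.uniform(-np.pi, np.pi, n),
               stats.vonmises.rvs(kappa_true, size=n, random_state=rng))

def neg_log_lik(params):
    kappa, g = params
    like = (1 - g) * stats.vonmises.pdf(err, kappa) + g / (2 * np.pi)
    return -np.sum(np.log(like))

res = optimize.minimize(neg_log_lik, x0=[5.0, 0.5],
                        bounds=[(0.1, 100.0), (0.0, 1.0)])
kappa_hat, g_hat = res.x
print(round(g_hat, 2))  # recovered guess rate, close to the simulated 0.3
```

Fitting such a model separately before and after the saccade lets one test, as in the study, whether report precision (kappa) and guess rate change across the eye movement.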
Seki, Yoshimasa; Okanoya, Kazuo
2008-02-01
Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.
Dube, William V.; Wilkinson, Krista M.
2014-01-01
This paper examines the phenomenon of “stimulus overselectivity” or “overselective attention” as it may impact AAC training and use in individuals with intellectual disabilities. Stimulus overselectivity is defined as an atypical limitation in the number of stimuli or stimulus features within an image that are attended to and subsequently learned. Within AAC, the term “stimulus” could refer to symbols or line drawings on speech generating devices, drawings or pictures on low-technology systems, and/or the elements within visual scene displays. In this context, overselective attention may result in unusual or uneven error patterns such as confusion between two symbols that share a single feature or difficulties with transitioning between different types of hardware. We review some of the ways that overselective attention has been studied behaviorally. We then examine how eye tracking technology allows a glimpse into some of the behavioral characteristics of overselective attention. We describe an intervention approach, differential observing responses, that may reduce or eliminate overselectivity, and we consider this type of intervention as it relates to issues of relevance for AAC. PMID:24773053
Horváth, János; Sussman, Elyse; Winkler, István; Schröger, Erich
2011-01-01
Rare irregular sounds (deviants) embedded into a regular sound sequence have large potential to draw attention to themselves (distraction). It has been previously shown that distraction, as manifested by behavioral response delay, and the P3a and reorienting negativity (RON) event-related potentials, could be reduced when the forthcoming deviant was signaled by visual cues preceding the sounds. In the present study, we investigated the type of information used in the prevention of distraction by manipulating the information content of the visual cues preceding the sounds. Cues could signal the specific variant of the forthcoming deviant, or they could just signal that the next tone was a deviant. We found that stimulus-specific cue information was used in reducing distraction. The results also suggest that early P3a and RON index processes related to the specific deviating stimulus feature, whereas late P3a reflects a general distraction-related process. PMID:21310210
Chirp-modulated visual evoked potential as a generalization of steady state visual evoked potential
NASA Astrophysics Data System (ADS)
Tu, Tao; Xin, Yi; Gao, Xiaorong; Gao, Shangkai
2012-02-01
Visual evoked potentials (VEPs) are of great concern in cognitive and clinical neuroscience as well as in the recent research field of brain-computer interfaces (BCIs). In this study, a chirp-modulated stimulation was employed as a novel type of visual stimulus. In our empirical study, the chirp-stimulus visual evoked potential (Chirp-VEP) preserved the frequency features of the chirp stimulus, analogous to the steady-state visual evoked potential (SSVEP), and therefore it can be regarded as a generalization of the SSVEP. Specifically, we first investigated the characteristics of the Chirp-VEP in the time-frequency domain and in the fractional domain via the fractional Fourier transform. We also proposed a group delay technique to derive the apparent latency from the Chirp-VEP. Results on EEG data showed that our approach outperformed the traditional SSVEP-based method in efficiency and ease of apparent latency estimation. For the six recruited subjects, the average apparent latencies ranged from 100 to 130 ms. Finally, we implemented a BCI system with six targets to validate the feasibility of the Chirp-VEP as a potential candidate in the field of BCIs.
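The group-delay idea behind the Chirp-VEP latency estimate can be illustrated with a toy simulation. This is a sketch under stated assumptions (an 8-20 Hz linear sweep, a simulated "response" that is simply a delayed copy of the stimulus), not the authors' analysis pipeline; here the lag of the peak cross-correlation recovers the imposed latency:

```python
import numpy as np
from scipy.signal import chirp

fs = 1000.0                      # sampling rate in Hz (assumption)
t = np.arange(0, 5, 1 / fs)      # 5 s stimulus
# Linear chirp sweeping 8 -> 20 Hz, a plausible flicker range (assumption)
stim = chirp(t, f0=8.0, f1=20.0, t1=t[-1], method='linear')

# Simulate an evoked response as the stimulus delayed by a fixed latency
latency_s = 0.12                 # hypothetical 120 ms apparent latency
delay_samples = int(latency_s * fs)
response = np.roll(stim, delay_samples)
response[:delay_samples] = 0.0   # zero the wrapped-around samples

# Group-delay-style estimate: lag at which cross-correlation peaks
lags = np.arange(-len(t) + 1, len(t))
xcorr = np.correlate(response, stim, mode='full')
est_latency = lags[np.argmax(xcorr)] / fs
print(f"estimated latency: {est_latency * 1000:.0f} ms")
```

Because the chirp's autocorrelation is sharply peaked, the delay estimate is unambiguous, which is one reason a swept-frequency stimulus eases latency estimation relative to a single-frequency flicker.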
The impact of task demand on visual word recognition.
Yang, J; Zevin, J
2014-07-11
The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Value-driven attentional capture in the auditory domain.
Anderson, Brian A
2016-01-01
It is now well established that the visual attention system is shaped by reward learning. When visual features are associated with a reward outcome, they acquire high priority and can automatically capture visual attention. To date, evidence for value-driven attentional capture has been limited entirely to the visual system. In the present study, I demonstrate that previously reward-associated sounds also capture attention, interfering with the performance of a visual task. This finding suggests that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value-driven attention can bias cross-modal stimulus competition.
The Comparison of Visual Working Memory Representations with Perceptual Inputs
Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew
2008-01-01
The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755
Visual feature extraction from voxel-weighted averaging of stimulus images in 2 fMRI studies.
Hart, Corey B; Rose, William J
2013-11-01
Multiple studies have provided evidence for distributed object representation in the brain, with several recent experiments leveraging basis function estimates for partial image reconstruction from fMRI data. Using a novel combination of statistical decomposition, generalized linear models, and stimulus averaging on previously examined image sets, together with Bayesian regression of fMRI activity recorded during presentation of those image sets, we identify a subset of relevant voxels that appear to code for covarying object features. Using a technique we term "voxel-weighted averaging," we isolate the image filters that these voxels appear to implement. The results, though preliminary, appear to have significant implications for hierarchical and deep-learning-type approaches to understanding neural coding and representation.
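The "voxel-weighted averaging" step can be sketched as weighting each stimulus image by a voxel's estimated response and averaging, which approximates the image filter the voxel implements. Everything below (array sizes, random data, the normalization choice) is a hypothetical illustration, not the authors' data or exact method:

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, h, w = 50, 8, 8
images = rng.random((n_images, h, w))   # hypothetical stimulus images
betas = rng.normal(size=n_images)       # one voxel's estimated responses (e.g., GLM betas)

# Voxel-weighted average: each image weighted by the voxel's response,
# summed over images, normalized by total absolute weight
filt = np.tensordot(betas, images, axes=1) / np.abs(betas).sum()

assert filt.shape == (h, w)             # one filter per voxel, in image space
```

In practice the betas would come from regressing recorded fMRI activity on the presented images, and the resulting filter would be inspected for interpretable structure.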
Rosenblatt, Steven David; Crane, Benjamin Thomas
2015-01-01
A moving visual field can induce the feeling of self-motion, or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear whether such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing a set of visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory-motion controls were altered versions of the experimental image in which the illusory motion effect was removed. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and for arrows (p = 0.02). 
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
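The self-motion measure described above is a point of subjective equality (PSE): the inertial stimulus velocity at which both response directions are equally likely. A minimal sketch of estimating such a PSE by fitting a logistic psychometric function; the function form and all data values are assumptions for illustration, not the study's analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Psychometric function: probability of one response direction."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

# Hypothetical data: inertial velocity (cm/s) vs. proportion "rightward" reports
velocity = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])
p_right = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.95])

params, _ = curve_fit(logistic, velocity, p_right, p0=[0.0, 1.0])
pse, slope = params
print(f"PSE: {pse:.2f} cm/s")
```

A shift of the fitted PSE between visual-stimulus and control conditions would index the bias that the visual stimulus exerts on self-motion perception.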
Visual speech discrimination and identification of natural and synthetic consonant stimuli
Files, Benjamin T.; Tjan, Bosco S.; Jiang, Jintao; Bernstein, Lynne E.
2015-01-01
From phonetic features to connected discourse, every level of psycholinguistic structure including prosody can be perceived through viewing the talking face. Yet a longstanding notion in the literature is that visual speech perceptual categories comprise groups of phonemes (referred to as visemes), such as /p, b, m/ and /f, v/, whose internal structure is not informative to the visual speech perceiver. This conclusion has not to our knowledge been evaluated using a psychophysical discrimination paradigm. We hypothesized that perceivers can discriminate the phonemes within typical viseme groups, and that discrimination measured with d-prime (d’) and response latency is related to visual stimulus dissimilarities between consonant segments. In Experiment 1, participants performed speeded discrimination for pairs of consonant-vowel spoken nonsense syllables that were predicted to be same, near, or far in their perceptual distances, and that were presented as natural or synthesized video. Near pairs were within-viseme consonants. Natural within-viseme stimulus pairs were discriminated significantly above chance (except for /k/-/h/). Sensitivity (d’) increased and response times decreased with distance. Discrimination and identification were superior with natural stimuli, which comprised more phonetic information. We suggest that the notion of the viseme as a unitary perceptual category is incorrect. Experiment 2 probed the perceptual basis for visual speech discrimination by inverting the stimuli. Overall reductions in d’ with inverted stimuli but a persistent pattern of larger d’ for far than for near stimulus pairs are interpreted as evidence that visual speech is represented by both its motion and configural attributes. 
The methods and results of this investigation open up avenues for understanding the neural and perceptual bases of visual and audiovisual speech perception and for the development of practical applications such as visual lipreading/speechreading and speech synthesis. PMID:26217249
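The sensitivity measure used above, d-prime (d'), is computed from hit and false-alarm rates in the same/different discrimination task. A minimal sketch with hypothetical trial counts; the log-linear correction for extreme rates is one common convention, not necessarily the one the authors used:

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction keeps the rates away from 0 and 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(h) - z(f)

# Hypothetical counts for a "near" (within-viseme) consonant pair
print(round(dprime(30, 10, 12, 28), 2))
```

Above-chance d' for within-viseme pairs, as reported here, is what argues against the viseme as a unitary perceptual category.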
Baumgaertner, Annette; Hartwigsen, Gesa; Roman Siebner, Hartwig
2013-06-01
Verbal stimuli often induce right-hemispheric activation in patients with aphasia after left-hemispheric stroke. This right-hemispheric activation is commonly attributed to functional reorganization within the language system. Yet previous evidence suggests that functional activation in right-hemispheric homologues of classic left-hemispheric language areas may partly be due to processing nonlinguistic perceptual features of verbal stimuli. We used functional MRI (fMRI) to clarify the role of the right hemisphere in the perception of nonlinguistic word features in healthy individuals. Participants made perceptual, semantic, or phonological decisions on the same set of auditorily and visually presented word stimuli. Perceptual decisions required judgements about stimulus-inherent changes in font size (visual modality) or fundamental frequency contour (auditory modality). The semantic judgement required subjects to decide whether a stimulus is natural or man-made; the phonologic decision required a decision on whether a stimulus contains two or three syllables. Compared to phonologic or semantic decision, nonlinguistic perceptual decisions resulted in a stronger right-hemispheric activation. Specifically, the right inferior frontal gyrus (IFG), an area previously suggested to support language recovery after left-hemispheric stroke, displayed modality-independent activation during perceptual processing of word stimuli. Our findings indicate that activation of the right hemisphere during language tasks may, in some instances, be driven by a "nonlinguistic perceptual processing" mode that focuses on nonlinguistic word features. This raises the possibility that stronger activation of right inferior frontal areas during language tasks in aphasic patients with left-hemispheric stroke may at least partially reflect increased attentional focus on nonlinguistic perceptual aspects of language. Copyright © 2012 Wiley Periodicals, Inc.
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher-frequency sound paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies. PMID:26384256
Nakamura, S; Shimojo, S
1998-10-01
The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli when stimulus sizes were the same. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.
Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence
2017-09-25
At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Identifying a "default" visual search mode with operant conditioning.
Kawahara, Jun-ichiro
2010-09-01
The presence of a singleton in a task-irrelevant domain can impair visual search. This impairment, known as attentional capture, depends on the attentional set of participants. When narrowly searching for a specific feature (the feature search mode), only matching stimuli capture attention. When searching broadly (the singleton detection mode), any oddball captures attention. The present study examined which strategy represents the "default" mode using an operant conditioning approach in which participants were trained, in the absence of explicit instructions, to search for a target in an ambiguous context in which either of the two modes was available. The results revealed that participants behaviorally adopted singleton detection as the default mode but reported using the feature search mode. Conscious strategies did not eliminate capture. These results challenge the view that a conscious set always modulates capture, suggesting that the visual system tends to rely on stimulus salience to deploy attention.
The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.
Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano
2017-12-01
Recent findings have shown that sounds improve visual detection in low-vision individuals when the auditory and visual stimuli are presented simultaneously and from the same spatial position. The present study investigated the temporal aspects of this previously reported audiovisual enhancement effect. Low-vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously with or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind, the visual stimulus (i.e., SOAs 0, ± 250, ± 400 ms). The visual detection enhancement was reduced in magnitude and limited to the synchronous condition and to the condition in which the sound was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study suggests that audiovisual interaction in low-vision individuals is highly modulated by top-down mechanisms.
Ince, Robin A A; Jaworska, Katarzyna; Gross, Joachim; Panzeri, Stefano; van Rijsbergen, Nicola J; Rousselet, Guillaume A; Schyns, Philippe G
2016-08-22
A key to understanding visual cognition is to determine "where", "when", and "how" brain responses reflect the processing of the specific visual features that modulate categorization behavior-the "what". The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features. © The Author 2016. Published by Oxford University Press.
Romei, Vincenzo; Thut, Gregor; Mok, Robert M; Schyns, Philippe G; Driver, Jon
2012-03-01
Although oscillatory activity in the alpha band was traditionally associated with lack of alertness, more recent work has linked it to specific cognitive functions, including visual attention. The emerging method of rhythmic transcranial magnetic stimulation (TMS) allows causal interventional tests for the online impact on performance of TMS administered in short bursts at a particular frequency. TMS bursts at 10 Hz have recently been shown to have an impact on spatial visual attention, but any role in featural attention remains unclear. Here we used rhythmic TMS at 10 Hz to assess the impact on attending to global or local components of a hierarchical Navon-like stimulus (D. Navon (1977) Forest before trees: The precedence of global features in visual perception. Cognit. Psychol., 9, 353), in a paradigm recently used with TMS at other frequencies (V. Romei, J. Driver, P.G. Schyns & G. Thut. (2011) Rhythmic TMS over parietal cortex links distinct brain frequencies to global versus local visual processing. Curr. Biol., 2, 334-337). In separate groups, left or right posterior parietal sites were stimulated at 10 Hz just before presentation of the hierarchical stimulus. Participants had to identify either the local or global component in separate blocks. Right parietal 10 Hz stimulation (vs. sham) significantly impaired global processing without affecting local processing, while left parietal 10 Hz stimulation vs. sham impaired local processing with a minor trend to enhance global processing. These 10 Hz outcomes differed significantly from stimulation at other frequencies (i.e. 5 or 20 Hz) over the same site in other recent work with the same paradigm. These dissociations confirm differential roles of the two hemispheres in local vs. global processing, and reveal a frequency-specific role for stimulation in the alpha band for regulating feature-based visual attention. © 2012 The Authors. 
European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Differences in gamma frequencies across visual cortex restrict their possible use in computation.
Ray, Supratim; Maunsell, John H R
2010-09-09
Neuronal oscillations in the gamma band (30-80 Hz) have been suggested to play a central role in feature binding or establishing channels for neural communication. For these functions, the gamma rhythm frequency must be consistent across neural assemblies encoding the features of a stimulus. Here we test the dependence of gamma frequency on stimulus contrast in V1 cortex of awake behaving macaques and show that gamma frequency increases monotonically with contrast. Changes in stimulus contrast over time lead to a reliable gamma frequency modulation on a fast timescale. Further, large stimuli whose contrast varies across space generate gamma rhythms at significantly different frequencies in simultaneously recorded neuronal assemblies separated by as little as 400 microm, making the gamma rhythm a poor candidate for binding or communication, at least in V1. Instead, our results suggest that the gamma rhythm arises from local interactions between excitation and inhibition. 2010 Elsevier Inc. All rights reserved.
Spatial resolution in visual memory.
Ben-Shalom, Asaf; Ganel, Tzvi
2015-04-01
Representations in visual short-term memory are considered to contain relatively elaborated information on object structure. Conversely, representations in earlier stages of the visual hierarchy are thought to be dominated by a sensory-based, feed-forward buildup of information. In four experiments, we compared the spatial resolution of different object properties between two points in time along the processing hierarchy in visual short-term memory. Subjects were asked either to estimate the distance between objects or to estimate the size of one of the objects' features under two experimental conditions, of either a short or a long delay period between the presentation of the target stimulus and the probe. When different objects were referred to, similar spatial resolution was found for the two delay periods, suggesting that initial processing stages are sensitive to object-based properties. Conversely, superior resolution was found for the short, as compared with the long, delay when features were referred to. These findings suggest that initial representations in visual memory are hybrid in that they allow fine-grained resolution for object features alongside normal visual sensitivity to the segregation between objects. The findings are also discussed in reference to the distinction made in earlier studies between visual short-term memory and iconic memory.
Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan
2016-01-15
Post-perceptual cues can enhance visual short term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered as well as the mechanisms by which post-perceptual selective attention enhances short term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300ms or 850ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e. sensory stimulus characteristics were available for up to 850ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short term memory encoding. While N700 could reflect visual post processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.
Beyond the search surface: visual search and attentional engagement.
Duncan, J; Humphreys, G
1992-05-01
Treisman (1991) described a series of visual search studies testing feature integration theory against an alternative (Duncan & Humphreys, 1989) in which feature and conjunction search are basically similar. Here the latter account is noted to have 2 distinct levels: (a) a summary of search findings in terms of stimulus similarities, and (b) a theory of how visual attention is brought to bear on relevant objects. Working at the 1st level, Treisman found that even when similarities were calibrated and controlled, conjunction search was much harder than feature search. The theory, however, can only really be tested at the 2nd level, because the 1st is an approximation. An account of the findings is developed at the 2nd level, based on the 2 processes of input-template matching and spreading suppression. New data show that, when both of these factors are controlled, feature and conjunction search are equally difficult. Possibilities for unification of the alternative views are considered.
Associative cueing of attention through implicit feature-location binding.
Girardi, Giovanna; Nico, Daniele
2017-09-01
In order to assess associative learning between two task-irrelevant features in cueing spatial attention, we devised a task in which participants had to make an identity comparison between two sequential visual stimuli. Unbeknownst to them, the location of the second stimulus could be predicted by the colour of the first or by a concurrent sound. Although unnecessary for the identity-matching judgment, the predictive features thus provided an arbitrary association favouring the spatial anticipation of the second stimulus. A significant advantage was found, with faster responses at predicted compared to non-predicted locations. The results clearly demonstrated an associative cueing of attention via a second-order arbitrary feature/location association, but with a substantial discrepancy depending on the sensory modality of the predictive feature. With colour as the predictive feature, significant advantages emerged only after the completion of three blocks of trials. In contrast, sound affected responses from the first block of trials, and significant advantages were manifest from the beginning of the second. The possible mechanisms underlying the associative cueing of attention in both conditions are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
Processing of pitch and location in human auditory cortex during visual and auditory tasks.
Häkkinen, Suvi; Ovaska, Noora; Rinne, Teemu
2015-01-01
The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL) irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. 
We also found that activations in PT were strongly modulated by task requirements and that areas in the inferior parietal lobule (IPL) showed task-dependent activation modulations, but no systematic activations to pitch or location. Based on these results, we argue that activations during pitch and location tasks cannot be explained by enhanced stimulus-specific processing alone, but rather that activations in human AC depend in a complex manner on the requirements of the task at hand. PMID:26594185
Roldan, Stephanie M
2017-01-01
One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation. PMID:28588538
de Graaf, Tom A; Cornelsen, Sonja; Jacobs, Christianne; Sack, Alexander T
2011-12-01
Transcranial magnetic stimulation (TMS) can be used to mask visual stimuli, disrupting visual task performance or preventing visual awareness. While TMS masking studies generally fix stimulation intensity, we hypothesized that varying the intensity of TMS pulses in a masking paradigm might inform several ongoing debates concerning TMS disruption of vision as measured subjectively versus objectively, and pre-stimulus (forward) versus post-stimulus (backward) TMS masking. We here show that both pre-stimulus TMS pulses and post-stimulus TMS pulses could strongly mask visual stimuli. We found no dissociations between TMS effects on the subjective and objective measures of vision for any masking window or intensity, ruling out the option that TMS intensity levels determine whether dissociations between subjective and objective vision are obtained. For the post-stimulus time window particularly, we suggest that these data provide new constraints for (e.g. recurrent) models of vision and visual awareness. Finally, our data are in line with the idea that pre-stimulus masking operates differently from conventional post-stimulus masking. Copyright © 2011 Elsevier Inc. All rights reserved.
Electrophysiological evidence for parallel and serial processing during visual search.
Luck, S J; Hillyard, S A
1990-12-01
Event-related potentials were recorded from young adults during a visual search task in order to evaluate parallel and serial models of visual processing in the context of Treisman's feature integration theory. Parallel and serial search strategies were produced by the use of feature-present and feature-absent targets, respectively. In the feature-absent condition, the slopes of the functions relating reaction time and latency of the P3 component to set size were essentially identical, indicating that the longer reaction times observed for larger set sizes can be accounted for solely by changes in stimulus identification and classification time, rather than changes in post-perceptual processing stages. In addition, the amplitude of the P3 wave on target-present trials in this condition increased with set size and was greater when the preceding trial contained a target, whereas P3 activity was minimal on target-absent trials. These effects are consistent with the serial self-terminating search model and appear to contradict parallel processing accounts of attention-demanding visual search performance, at least for a subset of search paradigms. Differences in ERP scalp distributions further suggested that different physiological processes are utilized for the detection of feature presence and absence.
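The serial self-terminating model tested here makes a concrete quantitative prediction: a target-absent search must examine every item, while a target-present search stops, on average, halfway through the display. A minimal sketch of that prediction (the 400 ms baseline and 40 ms/item scan rate are illustrative assumptions, not values from the study):

```python
def expected_comparisons(set_size, target_present):
    """Mean number of items examined before a serial self-terminating
    search stops: (N + 1) / 2 when the target is present (it is equally
    likely at any position), N when it is absent (every item is checked)."""
    return (set_size + 1) / 2 if target_present else float(set_size)

def predicted_rt(set_size, target_present, t_base=400.0, t_per_item=40.0):
    """Predicted reaction time (ms): a fixed baseline plus per-item scan time.
    The baseline and per-item rate here are hypothetical parameters."""
    return t_base + t_per_item * expected_comparisons(set_size, target_present)

# Target-absent RT grows twice as fast with set size as target-present RT,
# the classic 2:1 slope signature of serial self-terminating search.
slope_present = (predicted_rt(12, True) - predicted_rt(4, True)) / 8
slope_absent = (predicted_rt(12, False) - predicted_rt(4, False)) / 8
```

Under this model the set-size slope for target-absent trials is exactly twice the target-present slope, which is the behavioral signature the ERP latency analysis above builds on.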
Effect of feature-selective attention on neuronal responses in macaque area MT
Chen, X.; Hoffmann, K.-P.; Albright, T. D.; Thiele, A.
2012-01-01
Attention influences visual processing in striate and extrastriate cortex, which has been extensively studied for spatial-, object-, and feature-based attention. Most studies exploring neural signatures of feature-based attention have trained animals to attend to an object identified by a certain feature and ignore objects/displays identified by a different feature. Little is known about the effects of feature-selective attention, where subjects attend to one stimulus feature domain (e.g., color) of an object while features from different domains (e.g., direction of motion) of the same object are ignored. To study this type of feature-selective attention in area MT in the middle temporal sulcus, we trained macaque monkeys to either attend to and report the direction of motion of a moving sine wave grating (a feature for which MT neurons display strong selectivity) or attend to and report its color (a feature for which MT neurons have very limited selectivity). We hypothesized that neurons would upregulate their firing rate during attend-direction conditions compared with attend-color conditions. We found that feature-selective attention significantly affected 22% of MT neurons. Contrary to our hypothesis, these neurons did not necessarily increase firing rate when animals attended to direction of motion but fell into one of two classes. In one class, attention to color increased the gain of stimulus-induced responses compared with attend-direction conditions. The other class displayed the opposite effects. Feature-selective activity modulations occurred earlier in neurons modulated by attention to color compared with neurons modulated by attention to motion direction. Thus feature-selective attention influences neuronal processing in macaque area MT but often exhibited a mismatch between the preferred stimulus dimension (direction of motion) and the preferred attention dimension (attention to color). PMID:22170961
Reavis, Eric A; Frank, Sebastian M; Tse, Peter U
2018-04-12
Visual search is often slow and difficult for complex stimuli such as feature conjunctions. Search efficiency, however, can improve with training. Search for stimuli that can be identified by the spatial configuration of two elements (e.g., the relative position of two colored shapes) improves dramatically within a few hundred trials of practice. Several recent imaging studies have identified neural correlates of this learning, but it remains unclear what stimulus properties participants learn to use to search efficiently. Influential models, such as reverse hierarchy theory, propose two major possibilities: learning to use information contained in low-level image statistics (e.g., single features at particular retinotopic locations) or in high-level characteristics (e.g., feature conjunctions) of the task-relevant stimuli. In a series of experiments, we tested these two hypotheses, which make different predictions about the effect of various stimulus manipulations after training. We find relatively small effects of manipulating low-level properties of the stimuli (e.g., changing their retinotopic location) and some conjunctive properties (e.g., color-position), whereas the effects of manipulating other conjunctive properties (e.g., color-shape) are larger. Overall, the findings suggest conjunction learning involving such stimuli might be an emergent phenomenon that reflects multiple different learning processes, each of which capitalizes on different types of information contained in the stimuli. We also show that both targets and distractors are learned, and that reversing learned target and distractor identities impairs performance. This suggests that participants do not merely learn to discriminate target and distractor stimuli, they also learn stimulus identity mappings that contribute to performance improvements.
Fournier, Lisa Renee; Wiediger, Matthew D; McMeans, Ryan; Mattson, Paul S; Kirkwood, Joy; Herzog, Theibot
2010-07-01
Holding an action plan in memory for later execution can delay execution of another action if the actions share a similar (compatible) feature. This compatibility interference (CI) occurs for actions that share the same response modality (e.g., manual response). We investigated whether CI can generalize to actions that utilize different response modalities (manual and vocal). In three experiments, participants planned and withheld a sequence of key-presses with the left or right hand based on the visual identity of the first stimulus, and then immediately executed a speeded, vocal response ('left' or 'right') to a second visual stimulus. The vocal response was based on discriminating stimulus color (Experiment 1), reading a written word (Experiment 2), or reporting the antonym of a written word (Experiment 3). Results showed that CI occurred when the manual response hand (e.g., left) was compatible with the identity of the vocal response (e.g., 'left') in Experiments 1 and 3, but not in Experiment 2. This suggests that partial overlap of semantic codes is sufficient to obtain CI unless the intervening action can be accessed automatically (Experiment 2). These findings are consistent with the code occupation hypothesis and the general framework of the theory of event coding (Behav Brain Sci 24:849-878, 2001a; Behav Brain Sci 24:910-937, 2001b).
Rutishauser, Ueli; Kotowicz, Andreas; Laurent, Gilles
2013-01-01
Brain activity often consists of interactions between internal—or on-going—and external—or sensory—activity streams, resulting in complex, distributed patterns of neural activity. Investigation of such interactions could benefit from closed-loop experimental protocols in which one stream can be controlled depending on the state of the other. We describe here methods to present rapid and precisely timed visual stimuli to awake animals, conditional on features of the animal’s on-going brain state; those features are the presence, power and phase of oscillations in local field potentials (LFP). The system can process up to 64 channels in real time. We quantified its performance using simulations, synthetic data and animal experiments (chronic recordings in the dorsal cortex of awake turtles). The delay from detection of an oscillation to the onset of a visual stimulus on an LCD screen was 47.5 ms and visual-stimulus onset could be locked to the phase of ongoing oscillations at any frequency ≤40 Hz. Our software’s architecture is flexible, allowing on-the-fly modifications by experimenters and the addition of new closed-loop control and analysis components through plugins. The source code of our system “StimOMatic” is available freely as open-source. PMID:23473800
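The abstract describes detecting an ongoing oscillation's power and phase and timing a visual stimulus so that, after the system's display latency, onset lands at a target phase. A simplified offline sketch of that logic (the sine/cosine projection, thresholds, and all parameter names are assumptions for illustration; StimOMatic's actual real-time plugin implementation is not shown in the abstract):

```python
import math

def estimate_phase(samples, fs, freq):
    """Estimate amplitude and phase of an oscillation at `freq` Hz by
    projecting the window onto a cosine/sine pair -- a lightweight
    stand-in for the band-pass/Hilbert machinery a real system would use."""
    n = len(samples)
    c = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(samples))
    s = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(samples))
    amp = 2 * math.hypot(c, s) / n      # oscillation amplitude
    phase = math.atan2(-s, c)           # phase at the start of the window
    return amp, phase

def should_trigger(samples, fs, freq, amp_thresh, target_phase, latency_s, tol=0.3):
    """Decide whether to present the stimulus now so that, after the
    system's latency, onset lands at `target_phase` (radians)."""
    amp, phase = estimate_phase(samples, fs, freq)
    if amp < amp_thresh:
        return False                    # no oscillation of sufficient power
    # Extrapolate phase from window start to the predicted stimulus onset.
    onset = (phase + 2 * math.pi * freq * (len(samples) / fs + latency_s)) % (2 * math.pi)
    diff = abs(onset - target_phase % (2 * math.pi))
    return min(diff, 2 * math.pi - diff) < tol
```

In a real closed loop the latency term would be the measured detection-to-display delay (47.5 ms in the study), so that the extrapolated phase compensates for it.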
Temporal kinetics of prefrontal modulation of the extrastriate cortex during visual attention.
Yago, Elena; Duarte, Audrey; Wong, Ting; Barceló, Francisco; Knight, Robert T
2004-12-01
Single-unit, event-related potential (ERP), and neuroimaging studies have implicated the prefrontal cortex (PFC) in top-down control of attention and working memory. We conducted an experiment in patients with unilateral PFC damage (n = 8) to assess the temporal kinetics of PFC-extrastriate interactions during visual attention. Subjects alternated attention between the left and the right hemifields in successive runs while they detected target stimuli embedded in streams of repetitive task-irrelevant stimuli (standards). The design enabled us to examine tonic (spatial selection) and phasic (feature selection) PFC-extrastriate interactions. PFC damage impaired performance in the visual field contralateral to lesions, as manifested by both larger reaction times and error rates. Assessment of the extrastriate P1 ERP revealed that the PFC exerts a tonic (spatial selection) excitatory input to the ipsilateral extrastriate cortex as early as 100 msec post stimulus delivery. The PFC exerts a second phasic (feature selection) excitatory extrastriate modulation from 180 to 300 msec, as evidenced by reductions in selection negativity after damage. Finally, reductions of the N2 ERP to target stimuli supports the notion that the PFC exerts a third phasic (target selection) signal necessary for successful template matching during postselection analysis of target features. The results provide electrophysiological evidence of three distinct tonic and phasic PFC inputs to the extrastriate cortex in the initial few hundred milliseconds of stimulus processing. Damage to this network appears to underlie the pervasive deficits in attention observed in patients with prefrontal lesions.
Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.
Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K
2017-08-30
A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. 
The perceptual echo is a long-lasting reverberation between a rapidly changing visual input and evoked neural activity, apparent in cross-correlations between occipital EEG and stimulus sequences, peaking in the alpha (∼10 Hz) range. We indeed found that perceptual echo is enhanced by repeatedly presenting the same visual sequence, indicating that the human visual system can rapidly and automatically learn regularities embedded within fast-changing dynamic sequences. These results point to a previously undiscovered regularity learning mechanism, operating at a rate defined by the alpha frequency. Copyright © 2017 the authors.
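The echo analysis rests on cross-correlating the random luminance sequence with the occipital signal. A self-contained sketch on synthetic data (the sampling rate, the damped 10 Hz kernel, and the noise-free EEG stand-in are assumptions for illustration, not the study's recordings or pipeline):

```python
import math, random

def cross_correlate(stimulus, eeg, max_lag):
    """Mean product of the (mean-centered) stimulus with the EEG at lags 0..max_lag."""
    n = len(stimulus)
    mu = sum(stimulus) / n
    return [sum((stimulus[i] - mu) * eeg[i + lag] for i in range(n - lag)) / (n - lag)
            for lag in range(max_lag + 1)]

# Synthetic data: a white-noise luminance sequence convolved with a damped
# 10 Hz "echo" kernel standing in for the occipital response.
random.seed(1)
fs = 100.0                                  # samples per second (illustrative)
kernel = [math.exp(-k / 50.0) * math.cos(2 * math.pi * 10.0 * k / fs)
          for k in range(100)]
stim = [random.uniform(-1.0, 1.0) for _ in range(2000)]
eeg = [sum(kernel[k] * stim[t - k] for k in range(min(t + 1, len(kernel))))
       for t in range(len(stim))]

# The cross-correlogram inherits the kernel's ~100 ms (alpha) periodicity:
# positive near lag 0 and lag 10 samples (one cycle), negative near lag 5.
xcorr = cross_correlate(stim, eeg, max_lag=20)
```

Because the stimulus is (approximately) white, the cross-correlogram recovers the impulse-response kernel, which is why a reverberating alpha-band response shows up as a periodic echo in this analysis.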
Semantically induced distortions of visual awareness in a patient with Balint's syndrome.
Soto, David; Humphreys, Glyn W
2009-02-01
We present data indicating that visual awareness for a basic perceptual feature (colour) can be influenced by the relation between the feature and the semantic properties of the stimulus. We examined semantic interference from the meaning of a colour word ("RED") on simple colour (ink related) detection responses in a patient with simultanagnosia due to bilateral parietal lesions. We found that colour detection was influenced by the congruency between the meaning of the word and the relevant ink colour, with impaired performance when the word and the colour mismatched (on incongruent trials). This result held even when remote associations between meaning and colour were used (i.e. the word "PEA" influenced detection of the ink colour red). The results are consistent with a late locus of conscious visual experience that is derived at post-semantic levels. The implications for the understanding of the role of parietal cortex in object binding and visual awareness are discussed.
Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model
Wagatsuma, Nobuhiko; Potjans, Tobias C.; Diesmann, Markus; Sakai, Ko; Fukai, Tomoki
2013-01-01
Directing attention to the spatial location or the distinguishing feature of a visual object modulates neuronal responses in the visual cortex and the stimulus discriminability of subjects. However, the spatial and feature-based modes of attention differently influence visual processing by changing the tuning properties of neurons. Intriguingly, neurons' tuning curves are modulated similarly across different visual areas under both these modes of attention. Here, we explored the mechanism underlying the effects of these two modes of visual attention on the orientation selectivity of visual cortical neurons. To do this, we developed a layered microcircuit model. This model describes multiple orientation-specific microcircuits sharing their receptive fields and consisting of layers 2/3, 4, 5, and 6. These microcircuits represent a functional grouping of cortical neurons and mutually interact via lateral inhibition and excitatory connections between groups with similar selectivity. The individual microcircuits receive bottom-up visual stimuli and top-down attention in different layers. A crucial assumption of the model is that feature-based attention activates orientation-specific microcircuits for the relevant feature selectively, whereas spatial attention activates all microcircuits homogeneously, irrespective of their orientation selectivity. Consequently, our model simultaneously accounts for the multiplicative scaling of neuronal responses in spatial attention and the additive modulations of orientation tuning curves in feature-based attention, which have been observed widely in various visual cortical areas. Simulations of the model predict contrasting differences between excitatory and inhibitory neurons in the two modes of attentional modulations. Furthermore, the model replicates the modulation of the psychophysical discriminability of visual stimuli in the presence of external noise. 
Our layered model with a biologically suggested laminar structure describes the basic circuit mechanism underlying the attention-mode specific modulations of neuronal responses and visual perception. PMID:24324628
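The two attentional modes the model accounts for can be summarized as a multiplicative versus an additive transformation of an orientation tuning curve. A minimal sketch with illustrative parameters (the Gaussian tuning shape and all numeric values are assumptions, not the microcircuit model itself):

```python
import math

def tuning(theta_deg, pref_deg, base=5.0, gain=30.0, width=20.0):
    """Gaussian orientation tuning curve in spikes/s (parameters illustrative)."""
    d = (theta_deg - pref_deg + 90.0) % 180.0 - 90.0   # circular orientation difference
    return base + gain * math.exp(-d * d / (2.0 * width * width))

def spatial_attention(rate, g=1.3):
    """Spatial attention: multiplicative scaling of the whole response."""
    return g * rate

def feature_attention(rate, delta=4.0):
    """Feature-based attention: additive offset to the tuning curve."""
    return rate + delta
```

Multiplicative scaling boosts the peak of the tuning curve more than the flanks in absolute terms, whereas the additive mode lifts the whole curve uniformly; this is the contrast between the two attention modes that the model reproduces.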
Krishna, B. Suresh; Treue, Stefan
2016-01-01
Paying attention to a sensory feature improves its perception and impairs that of others. Recent work has shown that a Normalization Model of Attention (NMoA) can account for a wide range of physiological findings and the influence of different attentional manipulations on visual performance. A key prediction of the NMoA is that attention to a visual feature like an orientation or a motion direction will increase the response of neurons preferring the attended feature (response gain) rather than increase the sensory input strength of the attended stimulus (input gain). This effect of feature-based attention on neuronal responses should translate to similar patterns of improvement in behavioral performance, with psychometric functions showing response gain rather than input gain when attention is directed to the task-relevant feature. In contrast, we report here that when human subjects are cued to attend to one of two motion directions in a transparent motion display, attentional effects manifest as a combination of input and response gain. Further, the impact on input gain is greater when attention is directed towards a narrow range of motion directions than when it is directed towards a broad range. These results are captured by an extended NMoA, which either includes a stimulus-independent attentional contribution to normalization or utilizes direction-tuned normalization. The proposed extensions are consistent with the feature-similarity gain model of attention and the attentional modulation in extrastriate area MT, where neuronal responses are enhanced and suppressed by attention to preferred and non-preferred motion directions respectively. PMID:27977679
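The input-gain versus response-gain distinction tested here is commonly formalized with a Naka-Rushton contrast-response function. A sketch of that standard parameterization (this is the generic formalism, not the extended NMoA proposed in the paper; all parameter values are illustrative):

```python
def contrast_response(c, rmax=1.0, c50=0.2, n=2.0, input_gain=1.0, response_gain=1.0):
    """Naka-Rushton contrast-response function with two separate gain terms:
    input gain scales the effective contrast (shifting the curve leftward
    along the contrast axis), while response gain scales the output
    (raising the asymptote). Parameter values are illustrative."""
    ce = input_gain * c
    return response_gain * rmax * ce ** n / (ce ** n + c50 ** n)
```

Doubling the input gain is equivalent to doubling the stimulus contrast, whereas response gain multiplies performance even at saturating contrast; the study's finding is that feature-based attention produced a mixture of both signatures.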
Perception of biological motion from size-invariant body representations.
Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E
2015-01-01
The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
Visual Prediction Error Spreads Across Object Features in Human Visual Cortex
Summerfield, Christopher; Egner, Tobias
2016-01-01
Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might “spread” from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision. SIGNIFICANCE STATEMENT We address a key question in predictive visual cognition: how does the brain combine multiple concurrent expectations for different features of a single object such as its color and motion trajectory? 
By combining a behavioral protocol that independently varies expectation of (and attention to) multiple object features with computational modeling and fMRI, we demonstrate that behavior and fMRI activity patterns in visual cortex are best accounted for by a model in which prediction error in one object feature spreads to other object features. These results demonstrate how predictive vision forms object-level expectations out of multiple independent features. PMID:27810936
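The three rival hypotheses can be restated as combination rules for per-feature prediction error, using negative log probability as the surprise measure. An illustrative formalization (the combination rules and probabilities below are simplified assumptions, not the authors' actual computational model):

```python
import math

def surprise(p):
    """Prediction error as the negative log probability of the observed feature."""
    return -math.log(p)

def per_feature_errors(p_color, p_motion, model):
    """Prediction error assigned to each feature of one object under the
    three rival hypotheses: independent processing, surprise spreading,
    or segmentation into separate objects."""
    s_color, s_motion = surprise(p_color), surprise(p_motion)
    if model == "independent":
        return s_color, s_motion            # each feature keeps its own error
    if model == "spread":
        mixed = (s_color + s_motion) / 2    # surprise mixes across the object
        return mixed, mixed
    if model == "segment":
        # The surprising feature is attributed to a different object,
        # leaving the expected feature with no residual error.
        return (s_color, 0.0) if s_color >= s_motion else (0.0, s_motion)
    raise ValueError(model)

# An unexpected color (p = 0.25) paired with an expected motion direction (p = 0.75):
ind = per_feature_errors(0.25, 0.75, "independent")
spr = per_feature_errors(0.25, 0.75, "spread")
seg = per_feature_errors(0.25, 0.75, "segment")
```

Under the "spread" rule, which the data favored, the expected motion feature inherits part of the color surprise, so the whole object becomes unexpected.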
The influence of spontaneous activity on stimulus processing in primary visual cortex.
Schölvinck, M L; Friston, K J; Rees, G
2012-02-01
Spontaneous activity in the resting human brain has been studied extensively; however, how such activity affects the local processing of a sensory stimulus remains largely unknown. Here, we examined the impact of spontaneous activity in primary visual cortex on neuronal and behavioural responses to a simple visual stimulus, using functional MRI. Stimulus-evoked responses remained essentially unchanged by spontaneous fluctuations, combining with them in a largely linear fashion (i.e., with little evidence for an interaction). However, interactions between spontaneous fluctuations and stimulus-evoked responses were evident behaviourally; high levels of spontaneous activity tended to be associated with increased stimulus detection at perceptual threshold. Our results extend those found in studies of spontaneous fluctuations in motor cortex and higher order visual areas, and suggest a fundamental role for spontaneous activity in stimulus processing. Copyright © 2011. Published by Elsevier Inc.
Purely temporal figure-ground segregation.
Kandil, F I; Fahle, M
2001-05-01
Visual figure-ground segregation is achieved by exploiting differences in features such as luminance, colour, motion or presentation time between a figure and its surround. Here we determine the shortest delay times required for figure-ground segregation based on purely temporal features. Previous studies usually employed stimulus onset asynchronies between figure and ground, which may contain artefacts based on apparent motion cues or on luminance differences. Our stimuli systematically avoid these artefacts by constantly showing 20 x 20 'colons' that flip by 90 degrees around their midpoints at constant time intervals. Colons constituting the background flip in-phase whereas those constituting the target flip with a phase delay. We tested the impact of frequency modulation and phase reduction on target detection. Younger subjects performed well above chance even at temporal delays as short as 13 ms, whilst older subjects required up to three times longer delays in some conditions. Figure-ground segregation can rely on purely temporal delays down to around 10 ms even in the absence of luminance and motion artefacts, indicating a temporal precision of cortical information processing almost an order of magnitude lower than the one required for some models of feature binding in the visual cortex [e.g. Singer, W. (1999), Curr. Opin. Neurobiol., 9, 189-194]. Hence, in our experiment, observers are unable to use temporal stimulus features with the precision required for these models.
Starr, Ariel; DeWind, Nicholas K; Brannon, Elizabeth M
2017-11-01
Numerical acuity, frequently measured by a Weber fraction derived from nonsymbolic numerical comparison judgments, has been shown to be predictive of mathematical ability. However, recent findings suggest that stimulus controls in these tasks are often insufficiently implemented, and the proposal has been made that alternative visual features or inhibitory control capacities may actually explain this relation. Here, we use a novel mathematical algorithm to parse the relative influence of numerosity from other visual features in nonsymbolic numerical discrimination and to examine the strength of the relations between each of these variables, including inhibitory control, and mathematical ability. We examined these questions developmentally by testing 4-year-old children, 6-year-old children, and adults with a nonsymbolic numerical comparison task, a symbolic math assessment, and a test of inhibitory control. We found that the influence of non-numerical features decreased significantly over development but that numerosity was a primary determinant of decision making at all ages. In addition, numerical acuity was a stronger predictor of math achievement than either non-numerical bias or inhibitory control in children. These results suggest that the ability to selectively attend to number contributes to the maturation of the number sense and that numerical acuity, independent of inhibitory control, contributes to math achievement in early childhood. Copyright © 2017 Elsevier B.V. All rights reserved.
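The Weber fraction referenced above is typically estimated by fitting a psychophysical model to binary comparison judgments. As a rough sketch only (the abstract does not specify the authors' algorithm; the ratio-dependent model form and all function names below are assumptions), a maximum-likelihood fit of the standard numerosity-comparison model might look like:

```python
import math
import random

def p_correct(n1, n2, w):
    """Standard model of numerosity comparison: probability of a correct
    judgment for numerosities n1 vs n2, given Weber fraction w."""
    z = abs(n1 - n2) / (w * math.sqrt(n1 ** 2 + n2 ** 2))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fit_weber(trials, grid=None):
    """Maximum-likelihood Weber fraction via a simple grid search.
    trials: list of (n1, n2, correct) tuples."""
    grid = grid or [w / 1000.0 for w in range(50, 1001, 5)]
    def loglik(w):
        ll = 0.0
        for n1, n2, c in trials:
            p = min(max(p_correct(n1, n2, w), 1e-9), 1 - 1e-9)
            ll += math.log(p if c else 1.0 - p)
        return ll
    return max(grid, key=loglik)

# Simulate an observer with true w = 0.25 and recover it.
random.seed(0)
pairs = [(8, 10), (8, 12), (8, 16), (10, 20)]
trials = [(a, b, random.random() < p_correct(a, b, 0.25))
          for a, b in pairs for _ in range(300)]
w_hat = fit_weber(trials)
```

A smaller fitted Weber fraction corresponds to sharper numerical acuity; easier ratios (e.g., 8 vs 16) yield higher predicted accuracy than harder ones (8 vs 10) under this model.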
Trial-by-trial adjustments in control triggered by incidentally encoded semantic cues.
Blais, Chris; Harris, Michael B; Sinanian, Michael H; Bunge, Silvia A
2015-01-01
Cognitive control mechanisms provide the flexibility to rapidly adapt to contextual demands. These contexts can be defined by top-down goals, but also by bottom-up perceptual factors, such as the location at which a visual stimulus appears. There are now several experiments reporting contextual control effects. Such experiments establish that contexts defined by low-level perceptual cues such as the location of a visual stimulus can lead to context-specific control, suggesting a relatively early focus for cognitive control. The current set of experiments involved a word-word interference task designed to assess whether a high-level cue, the semantic category to which a word belongs, can also facilitate contextual control. Indeed, participants exhibit a larger Flanker effect to items pertaining to a semantic category in which 75% of stimuli are incongruent than in response to items pertaining to a category in which 25% of stimuli are incongruent. Thus, both low-level and high-level stimulus features can affect the bottom-up engagement of cognitive control. The implications for current models of cognitive control are discussed.
Sztarker, Julieta; Tomsic, Daniel
2008-06-01
When confronted with predators, animals are forced to take crucial decisions such as the timing and manner of escape. In the case of the crab Chasmagnathus, cumulative evidence suggests that the escape response to a visual danger stimulus (VDS) can be accounted for by the response of a group of lobula giant (LG) neurons. To further investigate this hypothesis, we examined the relationship between behavioral and neuronal activities within a variety of experimental conditions that affected the level of escape. The intensity of the escape response to VDS was influenced by seasonal variations, changes in stimulus features, and whether the crab perceived stimuli monocularly or binocularly. These experimental conditions consistently affected the response of LG neurons in a way that closely matched the effects observed at the behavioral level. In other words, the intensity of the stimulus-elicited spike activity of LG neurons faithfully reflected the intensity of the escape response. These results support the idea that the LG neurons from the lobula of crabs are deeply involved in the decision to escape from a VDS.
Variability and Correlations in Primary Visual Cortical Neurons Driven by Fixational Eye Movements
McFarland, James M.; Cumming, Bruce G.
2016-01-01
The ability to distinguish between elements of a sensory neuron's activity that are stimulus independent versus driven by the stimulus is critical for addressing many questions in systems neuroscience. This is typically accomplished by measuring neural responses to repeated presentations of identical stimuli and identifying the trial-variable components of the response as noise. In awake primates, however, small “fixational” eye movements (FEMs) introduce uncontrolled trial-to-trial differences in the visual stimulus itself, potentially confounding this distinction. Here, we describe novel analytical methods that directly quantify the stimulus-driven and stimulus-independent components of visual neuron responses in the presence of FEMs. We apply this approach, combined with precise model-based eye tracking, to recordings from primary visual cortex (V1), finding that standard approaches that ignore FEMs typically miss more than half of the stimulus-driven neural response variance, creating substantial biases in measures of response reliability. We show that these effects are likely not isolated to the particular experimental conditions used here, such as the choice of visual stimulus or spike measurement time window, and thus will be a more general problem for V1 recordings in awake primates. We also demonstrate that measurements of the stimulus-driven and stimulus-independent correlations among pairs of V1 neurons can be greatly biased by FEMs. These results thus illustrate the potentially dramatic impact of FEMs on measures of signal and noise in visual neuron activity and also demonstrate a novel approach for controlling for these eye-movement-induced effects. SIGNIFICANCE STATEMENT Distinguishing between the signal and noise in a sensory neuron's activity is typically accomplished by measuring neural responses to repeated presentations of an identical stimulus. 
For recordings from the visual cortex of awake animals, small “fixational” eye movements (FEMs) inevitably introduce trial-to-trial variability in the visual stimulus, potentially confounding such measures. Here, we show that FEMs often have a dramatic impact on several important measures of response variability for neurons in primary visual cortex. We also present an analytical approach for quantifying signal and noise in visual neuron activity in the presence of FEMs. These results thus highlight the importance of controlling for FEMs in studies of visual neuron function, and demonstrate novel methods for doing so. PMID:27277801
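The repeat-based signal/noise decomposition that this abstract argues is biased by fixational eye movements can be sketched as follows. This is the conventional estimator for repeated identical stimuli, not the authors' FEM-corrected method, and the function name is illustrative:

```python
import numpy as np

def signal_noise_decomposition(responses):
    """responses: (n_repeats, n_stimuli) array of responses (e.g. spike
    counts) to repeated presentations of each stimulus.
    Returns (signal_var, noise_var): noise variance is the mean
    within-stimulus variance across repeats; signal variance is the
    variance of the trial-averaged response, bias-corrected for the
    noise that survives averaging over r repeats."""
    r, s = responses.shape
    noise_var = responses.var(axis=0, ddof=1).mean()
    mean_resp = responses.mean(axis=0)
    signal_var = mean_resp.var(ddof=1) - noise_var / r
    return max(signal_var, 0.0), noise_var

# Simulated check: known signal variance (4) plus unit-variance noise.
rng = np.random.default_rng(0)
true_signal = rng.normal(0.0, 2.0, size=200)
resp = true_signal + rng.normal(0.0, 1.0, size=(20, 200))
sv, nv = signal_noise_decomposition(resp)
```

When the stimulus is not actually identical across repeats (as with uncorrected FEMs), part of the stimulus-driven variance is misattributed to the within-stimulus "noise" term, which is exactly the bias the abstract describes.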
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Tapia, Evelina; Beck, Diane M
2014-01-01
A number of influential theories posit that visual awareness relies not only on the initial, stimulus-driven (i.e., feedforward) sweep of activation but also on recurrent feedback activity within and between brain regions. These theories of awareness draw heavily on data from masking paradigms in which visibility of one stimulus is reduced due to the presence of another stimulus. More recently, transcranial magnetic stimulation (TMS) has been used to study the temporal dynamics of visual awareness. TMS over occipital cortex affects performance on visual tasks at distinct time points and in a manner that is comparable to visual masking. We draw parallels between these two methods and examine evidence for the neural mechanisms by which visual masking and TMS suppress stimulus visibility. Specifically, both methods have been proposed to affect feedforward as well as feedback signals when applied at distinct time windows relative to stimulus onset and as a result modify visual awareness. Most recent empirical evidence, moreover, suggests that while visual masking and TMS impact stimulus visibility comparably, the processes these methods affect may not be as similar as previously thought. In addition to reviewing both masking and TMS studies that examine feedforward and feedback processes in vision, we raise questions to guide future studies and further probe the necessary conditions for visual awareness.
Greene, Michelle R; Baldassano, Christopher; Fei-Fei, Li; Beck, Diane M; Baker, Chris I
2018-01-01
Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information. PMID:29513219
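The variance partitioning described above can be sketched with ordinary least squares: the unique contribution of a feature model is the drop in explained variance (R²) when that model is removed from the full predictor set. A minimal illustration (not the authors' code; all names are hypothetical):

```python
import numpy as np

def r_squared(X, y):
    """In-sample R-squared of a linear fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def unique_contributions(models, y):
    """models: dict mapping model name -> (n_samples, k) predictor matrix.
    Unique variance of each model = R2(all models) - R2(all minus it)."""
    names = list(models)
    full = r_squared(np.hstack([models[n] for n in names]), y)
    return {n: full - r_squared(
                np.hstack([models[m] for m in names if m != n]), y)
            for n in names}

# Simulated check: y is driven by model A only, so A's unique share is
# large and B's is near zero.
rng = np.random.default_rng(0)
models = {"A": rng.normal(size=(500, 3)), "B": rng.normal(size=(500, 3))}
y = models["A"].sum(axis=1) + 0.5 * rng.normal(size=500)
uc = unique_contributions(models, y)
```

This captures the logic of the study's a-priori decorrelated models: when predictor sets are (nearly) uncorrelated, the unique shares are interpretable as each model's own contribution.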
Reward associations impact both iconic and visual working memory.
Infanti, Elisa; Hickey, Clayton; Turatto, Massimo
2015-02-01
Reward plays a fundamental role in human behavior. A growing number of studies have shown that stimuli associated with reward become salient and attract attention. The aim of the present study was to extend these results into the investigation of iconic memory and visual working memory. In two experiments we asked participants to perform a visual-search task where different colors of the target stimuli were paired with high or low reward. We then tested whether the pre-established feature-reward association affected performance on a subsequent visual memory task, in which no reward was provided. In this test phase participants viewed arrays of 8 objects, one of which had a unique color that could match the color associated with reward during the previous visual-search task. A probe appeared at varying intervals after stimulus offset to identify the to-be-reported item. Our results suggest that reward biases the encoding of visual information such that items characterized by a reward-associated feature interfere with mnemonic representations of other items in the test display. These results extend current knowledge regarding the influence of reward on early cognitive processes, suggesting that feature-reward associations automatically interact with the encoding and storage of visual information, both in iconic memory and visual working memory. Copyright © 2014 Elsevier Ltd. All rights reserved.
Comparison on driving fatigue related hemodynamics activated by auditory and visual stimulus
NASA Astrophysics Data System (ADS)
Deng, Zishan; Gao, Yuan; Li, Ting
2018-02-01
As one of the main causes of traffic accidents, driving fatigue deserves researchers' attention, and its detection and monitoring during long-term driving call for new techniques. Since functional near-infrared spectroscopy (fNIRS) can detect cerebral hemodynamic responses, it is a promising candidate for fatigue-level detection. Here, we performed three different kinds of experiments with a driver and recorded his cerebral hemodynamic responses during long hours of driving, using our fNIRS-based device. Each experiment lasted 7 hours, and one of three specific tests, probing the driver's response to sounds, traffic lights, and direction signs respectively, was administered every hour. The results showed that, in the first few hours, visual stimuli induced fatigue more readily than auditory stimuli, and visual stimuli from traffic-light scenes induced fatigue more readily than those from direction signs. We also found that fatigue-related hemodynamic responses increased fastest for auditory stimuli, next fastest for traffic-light scenes, and slowest for direction-sign scenes. Our study compared auditory, visual color, and visual character stimuli in terms of their propensity to cause driving fatigue, which is meaningful for driving safety management.
Neuroimaging Evidence for 2 Types of Plasticity in Association with Visual Perceptual Learning.
Shibata, Kazuhisa; Sasaki, Yuka; Kawato, Mitsuo; Watanabe, Takeo
2016-09-01
Visual perceptual learning (VPL) is long-term performance improvement as a result of perceptual experience. It is unclear whether VPL is associated with refinement in representations of the trained feature (feature-based plasticity), improvement in processing of the trained task (task-based plasticity), or both. Here, we provide empirical evidence that VPL of motion detection is associated with both types of plasticity which occur predominantly in different brain areas. Before and after training on a motion detection task, subjects' neural responses to the trained motion stimuli were measured using functional magnetic resonance imaging. In V3A, significant response changes after training were observed specifically to the trained motion stimulus but independently of whether subjects performed the trained task. This suggests that the response changes in V3A represent feature-based plasticity in VPL of motion detection. In V1 and the intraparietal sulcus, significant response changes were found only when subjects performed the trained task on the trained motion stimulus. This suggests that the response changes in these areas reflect task-based plasticity. These results collectively suggest that VPL of motion detection is associated with the 2 types of plasticity, which occur in different areas and therefore have separate mechanisms at least to some degree. © The Author 2016. Published by Oxford University Press.
Snyder, Adam C.; Foxe, John J.
2010-01-01
Retinotopically specific increases in alpha-band (~10 Hz) oscillatory power have been strongly implicated in the suppression of processing for irrelevant parts of the visual field during the deployment of visuospatial attention. Here, we asked whether this alpha suppression mechanism also plays a role in the nonspatial anticipatory biasing of feature-based attention. Visual word cues informed subjects what the task-relevant feature of an upcoming visual stimulus (S2) was, while high-density electroencephalographic recordings were acquired. We examined anticipatory oscillatory activity in the Cue-to-S2 interval (~2 s). Subjects were cued on a trial-by-trial basis to attend to either the color or direction of motion of an upcoming dot field array, and to respond when they detected that a subset of the dots differed from the majority along the target feature dimension. We used the features of color and motion, expressly because they have well known, spatially separated cortical processing areas, to distinguish shifts in alpha power over areas processing each feature. Alpha power from dorsal regions increased when motion was the irrelevant feature (i.e., color was cued), and alpha power from ventral regions increased when color was irrelevant. Thus, alpha-suppression mechanisms appear to operate during feature-based selection in much the same manner as has been shown for space-based attention. PMID:20237273
Vollrath-Smith, Fiori R.; Shin, Rick
2011-01-01
Rationale: Noncontingent administration of amphetamine into the ventral striatum or systemic nicotine increases responses rewarded by inconsequential visual stimuli. When these drugs are contingently administered, rats learn to self-administer them. We recently found that rats self-administer the GABAB receptor agonist baclofen into the median (MR) or dorsal (DR) raphe nuclei. Objectives: We examined whether noncontingent administration of baclofen into the MR or DR increases rats’ investigatory behavior rewarded by a flash of light. Results: Contingent presentations of a flash of light slightly increased lever presses. Whereas noncontingent administration of baclofen into the MR or DR did not reliably increase lever presses in the absence of visual stimulus reward, the same manipulation markedly increased lever presses rewarded by the visual stimulus. Heightened locomotor activity induced by intraperitoneal injections of amphetamine (3 mg/kg) did not coincide with increased lever pressing for the visual stimulus. These results indicate that the observed enhancement of visual stimulus seeking is distinct from an enhancement of general locomotor activity. Visual stimulus seeking decreased when baclofen was co-administered with the GABAB receptor antagonist, SCH 50911, confirming the involvement of local GABAB receptors. Visual stimulus seeking also abated when baclofen administration was preceded by intraperitoneal injections of the dopamine antagonist, SCH 23390 (0.025 mg/kg), suggesting that enhanced visual stimulus seeking depends on intact dopamine signals. Conclusions: Baclofen administration into the MR or DR increased investigatory behavior induced by visual stimuli. Stimulation of GABAB receptors in the MR and DR appears to disinhibit the motivational process involving stimulus–approach responses. PMID:21904820
The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.
Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli
2009-11-18
We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.
Rapid innate defensive responses of mice to looming visual stimuli.
Yilmaz, Melis; Meister, Markus
2013-10-21
Much of brain science is concerned with understanding the neural circuits that underlie specific behaviors. While the mouse has become a favorite experimental subject, the behaviors of this species are still poorly explored. For example, the mouse retina, like that of other mammals, contains ∼20 different circuits that compute distinct features of the visual scene [1, 2]. By comparison, only a handful of innate visual behaviors are known in this species--the pupil reflex [3], phototaxis [4], the optomotor response [5], and the cliff response [6]--two of which are simple reflexes that require little visual processing. We explored the behavior of mice under a visual display that simulates an approaching object, which causes defensive reactions in some other species [7, 8]. We show that mice respond to this stimulus either by initiating escape within a second or by freezing for an extended period. The probability of these defensive behaviors is strongly dependent on the parameters of the visual stimulus. Directed experiments identify candidate retinal circuits underlying the behavior and lead the way into detailed study of these neural pathways. This response is a new addition to the repertoire of innate defensive behaviors in the mouse that allows the detection and avoidance of aerial predators. Copyright © 2013 Elsevier Ltd. All rights reserved.
Cooke, Sam F.; Bear, Mark F.
2014-01-01
Donald Hebb chose visual learning in primary visual cortex (V1) of the rodent to exemplify his theories of how the brain stores information through long-lasting homosynaptic plasticity. Here, we revisit V1 to consider roles for bidirectional ‘Hebbian’ plasticity in the modification of vision through experience. First, we discuss the consequences of monocular deprivation (MD) in the mouse, which have been studied by many laboratories over many years, and the evidence that synaptic depression of excitatory input from the thalamus is a primary contributor to the loss of visual cortical responsiveness to stimuli viewed through the deprived eye. Second, we describe a less studied, but no less interesting form of plasticity in the visual cortex known as stimulus-selective response potentiation (SRP). SRP results in increases in the response of V1 to a visual stimulus through repeated viewing and bears all the hallmarks of perceptual learning. We describe evidence implicating an important role for potentiation of thalamo-cortical synapses in SRP. In addition, we present new data indicating that there are some features of this form of plasticity that cannot be fully accounted for by such feed-forward Hebbian plasticity, suggesting contributions from intra-cortical circuit components. PMID:24298166
Effects of age, gender, and stimulus presentation period on visual short-term memory.
Kunimi, Mitsunobu
2016-01-01
This study focused on age-related changes in visual short-term memory using visual stimuli that did not allow verbal encoding. Experiment 1 examined the effects of age and the length of the stimulus presentation period on visual short-term memory function. Experiment 2 examined the effects of age, gender, and the length of the stimulus presentation period on visual short-term memory function. The worst memory performance and the largest performance difference between the age groups were observed in the shortest stimulus presentation period conditions. The performance difference between the age groups became smaller as the stimulus presentation period became longer; however, it did not completely disappear. Although gender did not have a significant effect on d' regardless of the presentation period in the young group, a significant gender-based difference was observed for stimulus presentation periods of 500 ms and 1,000 ms in the older group. This study indicates that the decline in visual short-term memory observed in the older group is due to the interaction of several factors.
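The sensitivity index d' used in the abstract above is computed from hit and false-alarm rates. A minimal sketch (using a log-linear correction for extreme rates, which the abstract does not specify; the function name is illustrative):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (add 0.5 to each count, 1 to each total)
    avoids infinite z-scores when a rate is exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

For example, a chance-level observer (equal hit and false-alarm rates) yields d' = 0, while higher hit rates combined with lower false-alarm rates yield larger d', the pattern compared across age and presentation-period conditions in this study.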
ERIC Educational Resources Information Center
Elias, Lorin J.; Robinson, Brent; Saucier, Deborah M.
2005-01-01
Neurologically normal individuals exhibit strong leftward response biases during free-viewing perceptual judgments of brightness, quantity, and size. When participants view two mirror-reversed objects and they are forced to choose which object appears darker, more numerous, or larger, the stimulus with the relevant feature on the left side is…
Visual and auditory accessory stimulus offset and the Simon effect.
Nishimura, Akio; Yokosawa, Kazuhiko
2010-10-01
We investigated the effect on the right and left responses of the disappearance of a task-irrelevant stimulus located on the right or left side. Participants pressed a right or left response key on the basis of the color of a centrally located visual target. Visual (Experiment 1) or auditory (Experiment 2) task-irrelevant accessory stimuli appeared or disappeared at locations to the right or left of the central target. In Experiment 1, responses were faster when onset or offset of the visual accessory stimulus was spatially congruent with the response. In Experiment 2, responses were again faster when onset of the auditory accessory stimulus and the response were on the same side. However, responses were slightly slower when offset of the auditory accessory stimulus and the response were on the same side than when they were on opposite sides. These findings indicate that transient change information is crucial for a visual Simon effect, whereas sustained stimulation from an ongoing stimulus also contributes to an auditory Simon effect.
Neural oscillatory deficits in schizophrenia predict behavioral and neurocognitive impairments
Martínez, Antígona; Gaspar, Pablo A.; Hillyard, Steven A.; Bickel, Stephan; Lakatos, Peter; Dias, Elisa C.; Javitt, Daniel C.
2015-01-01
Paying attention to visual stimuli is typically accompanied by event-related desynchronizations (ERD) of ongoing alpha (7–14 Hz) activity in visual cortex. The present study used time-frequency based analyses to investigate the role of impaired alpha ERD in visual processing deficits in schizophrenia (Sz). Subjects viewed sinusoidal gratings of high (HSF) and low (LSF) spatial frequency (SF) designed to test functioning of the parvo- vs. magnocellular pathways, respectively. Patients with Sz and healthy controls paid attention selectively to either the LSF or HSF gratings which were presented in random order. Event-related brain potentials (ERPs) were recorded to all stimuli. As in our previous study, it was found that Sz patients were selectively impaired at detecting LSF target stimuli and that ERP amplitudes to LSF stimuli were diminished, both for the early sensory-evoked components and for the attend minus unattend difference component (the Selection Negativity), which is generally regarded as a specific index of feature-selective attention. In the time-frequency domain, the differential ERP deficits to LSF stimuli were echoed in a virtually absent theta-band phase locked response to both unattended and attended LSF stimuli (along with relatively intact theta-band activity for HSF stimuli). In contrast to the theta-band evoked responses which were tightly stimulus locked, stimulus-induced desynchronizations of ongoing alpha activity were not tightly stimulus locked and were apparent only in induced power analyses. Sz patients were significantly impaired in the attention-related modulation of ongoing alpha activity for both HSF and LSF stimuli. These deficits correlated with patients’ behavioral deficits in visual information processing as well as with visually based neurocognitive deficits. These findings suggest an additional, pathway-independent, mechanism by which deficits in early visual processing contribute to overall cognitive impairment in Sz. 
PMID:26190988
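The evoked vs. induced power distinction drawn in the abstract above (phase-locked theta surviving trial averaging; non-phase-locked alpha visible only in single-trial power) can be made concrete with a minimal NumPy sketch. The sampling rate, trial count, and 10 Hz signal are hypothetical stand-ins, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                      # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)   # 1-s epochs
n_trials = 60

# Simulated trials: a 10 Hz oscillation with a random phase on every
# trial (i.e. not stimulus-locked) plus white noise.
trials = np.array([
    np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
    + 0.5 * rng.standard_normal(t.size)
    for _ in range(n_trials)
])

def power_at(x, freq):
    """Power of the Fourier component of x closest to freq."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / x.size
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

# Evoked power: average across trials first, then transform.
# Non-phase-locked activity cancels in the average.
evoked = power_at(trials.mean(axis=0), 10)

# Induced power: transform each trial, then average the power.
# Non-phase-locked activity survives.
induced = np.mean([power_at(tr, 10) for tr in trials])

print(evoked < induced)  # True: the 10 Hz activity shows up only in induced power
```

With a phase-locked signal instead (fixed phase across trials), the two measures would roughly agree, which is why the study's theta-band responses appeared in evoked analyses while the alpha desynchronizations did not.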
Interaction Between Spatial and Feature Attention in Posterior Parietal Cortex
Ibos, Guilhem; Freedman, David J.
2016-01-01
Lateral intraparietal (LIP) neurons encode a vast array of sensory and cognitive variables. Recently, we proposed that the flexibility of feature representations in LIP reflects the bottom-up integration of sensory signals, modulated by feature-based attention (FBA), from upstream feature-selective cortical neurons. Moreover, LIP activity is also strongly modulated by the position of space-based attention (SBA). However, the mechanisms by which SBA and FBA interact to facilitate the representation of task-relevant spatial and non-spatial features in LIP remain unclear. We recorded from LIP neurons during performance of a task that required monkeys to detect specific conjunctions of color, motion direction, and stimulus position. Here we show that FBA and SBA potentiate each other's effect in a manner consistent with attention gating the flow of visual information along the cortical visual pathway. Our results suggest that linear bottom-up integrative mechanisms allow LIP neurons to emphasize task-relevant spatial and non-spatial features. PMID:27499082
Neural theory for the perception of causal actions.
Fleischer, Falk; Christensen, Andrea; Caggiano, Vittorio; Thier, Peter; Giese, Martin A
2012-07-01
The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.
Tapia, Evelina; Beck, Diane M.
2014-01-01
A number of influential theories posit that visual awareness relies not only on the initial, stimulus-driven (i.e., feedforward) sweep of activation but also on recurrent feedback activity within and between brain regions. These theories of awareness draw heavily on data from masking paradigms in which visibility of one stimulus is reduced due to the presence of another stimulus. More recently transcranial magnetic stimulation (TMS) has been used to study the temporal dynamics of visual awareness. TMS over occipital cortex affects performance on visual tasks at distinct time points and in a manner that is comparable to visual masking. We draw parallels between these two methods and examine evidence for the neural mechanisms by which visual masking and TMS suppress stimulus visibility. Specifically, both methods have been proposed to affect feedforward as well as feedback signals when applied at distinct time windows relative to stimulus onset and as a result modify visual awareness. Most recent empirical evidence, moreover, suggests that while visual masking and TMS impact stimulus visibility comparably, the processes these methods affect may not be as similar as previously thought. In addition to reviewing both masking and TMS studies that examine feedforward and feedback processes in vision, we raise questions to guide future studies and further probe the necessary conditions for visual awareness. PMID:25374548
Xiao, Jianbo
2015-01-01
Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. 
The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70–100 ms, revealing a dynamic process of image segmentation. PMID:26658869
Stimulus specificity of a steady-state visual-evoked potential-based brain-computer interface.
Ng, Kian B; Bradley, Andrew P; Cunnington, Ross
2012-06-01
The mechanisms of neural excitation and inhibition in response to a visual stimulus are well studied. It has been established that changing stimulus specificity, such as luminance contrast or spatial frequency, can alter neuronal activity and thus modulate the visual-evoked response. In this paper, we study the effect that stimulus specificity has on the classification performance of a steady-state visual-evoked potential-based brain-computer interface (SSVEP-BCI). For example, we investigate how closely two visual stimuli can be placed before they compete for neural representation in the cortex and thus influence BCI classification accuracy. We characterize stimulus specificity using the four stimulus parameters commonly encountered in SSVEP-BCI design: temporal frequency, spatial size, number of simultaneously displayed stimuli, and their spatial proximity. By varying these quantities and measuring the SSVEP-BCI classification accuracy, we are able to determine the parameters that provide optimal performance. Our results show that superior SSVEP-BCI accuracy is attained when stimuli are placed spatially more than 5° apart, with a size subtending at least 2° of visual angle, and with a tagging frequency between the high alpha and beta bands. These findings may assist in choosing stimulus parameters for optimal SSVEP-BCI design.
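The frequency-tagging principle underlying such SSVEP-BCIs (each stimulus flickers at its own rate, and attending to one boosts spectral power at its rate) can be sketched as follows. The simulated signal model, tagging frequencies, and attention gain below are illustrative assumptions, not the authors' recording or classification pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 500                     # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)  # 2-s trial
f1, f2 = 12.0, 15.0          # hypothetical tagging frequencies of the two stimuli

def simulate_trial(attended_freq):
    """EEG-like signal: both tagged responses are present; attention boosts one."""
    s = 0.3 * np.sin(2 * np.pi * f1 * t) + 0.3 * np.sin(2 * np.pi * f2 * t)
    s += 0.7 * np.sin(2 * np.pi * attended_freq * t)  # attentional gain
    return s + rng.standard_normal(t.size)            # additive noise

def classify(signal):
    """Decide which stimulus was attended: the tag with more spectral power."""
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    p1 = power[np.argmin(np.abs(freqs - f1))]
    p2 = power[np.argmin(np.abs(freqs - f2))]
    return f1 if p1 > p2 else f2

trials = [simulate_trial(f) for f in (f1, f2, f1, f2)]
print([classify(s) for s in trials])  # recovers the attended frequency per trial
```

The paper's parameters (stimulus separation, size, tagging band) matter precisely because they change how separable `p1` and `p2` are in such a comparison.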
Stekelenburg, Jeroen J; Keetels, Mirjam
2016-05-01
The Colavita effect refers to the phenomenon that, when confronted with an audiovisual stimulus, observers more often report having perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch combinations (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than for synesthetically incongruent combinations (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone). Participants had to identify stimulus type (visual, auditory, or audiovisual). The study replicated the Colavita effect: participants reported the visual component of the audiovisual stimuli more often than the auditory component. Synesthetic congruency, however, had no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms, with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window and in the premotor cortex, inferior parietal lobule, and posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.
Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.
Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J
2007-06-01
The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.
Oculomotor guidance and capture by irrelevant faces.
Devue, Christel; Belopolsky, Artem V; Theeuwes, Jan
2012-01-01
Even though it is generally agreed that face stimuli constitute a special class of stimuli, which are treated preferentially by our visual system, it remains unclear whether faces can capture attention in a stimulus-driven manner. Moreover, there is a long-standing debate regarding the mechanism underlying the preferential bias of selecting faces. Some claim that faces constitute a set of special low-level features to which our visual system is tuned; others claim that the visual system is capable of extracting the meaning of faces very rapidly, driving attentional selection. Those debates continue because many studies contain methodological peculiarities and manipulations that prevent a definitive conclusion. Here, we present a new visual search task in which observers had to make a saccade to a uniquely colored circle while completely irrelevant objects were also present in the visual field. The results indicate that faces capture and guide the eyes more than other animated objects and that our visual system is not only tuned to the low-level features that make up a face but also to its meaning.
Unique sudden onsets capture attention even when observers are in feature-search mode.
Spalek, Thomas M; Yanko, Matthew R; Poiese, Paola; Lagroix, Hayley E P
2012-01-01
Two sources of attentional capture have been proposed: stimulus-driven (exogenous) and goal-oriented (endogenous). A resolution between these modes of capture has not been straightforward. Even such a clearly exogenous event as the sudden onset of a stimulus can be said to capture attention endogenously if observers operate in singleton-detection mode rather than feature-search mode. In four experiments we show that a unique sudden onset captures attention even when observers are in feature-search mode. The displays were rapid serial visual presentation (RSVP) streams of differently coloured letters with the target letter defined by a specific colour. Distractors were four #s, one of the target colour, surrounding one of the non-target letters. Capture was substantially reduced when the onset of the distractor array was not unique because it was preceded by other sets of four grey # arrays in the RSVP stream. This provides unambiguous evidence that attention can be captured both exogenously and endogenously within a single task.
Basu, Anamitra; Mandal, Manas K
2004-07-01
The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a JAVA-based computer program with recognition accuracy as the dependent measure. Unilaterally, rather than bilaterally, presented stimuli were significantly better recognized. Words were significantly better recognized than faces in the right visual-field; the difference was nonsignificant in the left visual-field. Emotional content elicited left visual-field and neutral content elicited right visual-field advantages. Copyright Taylor and Francis Inc.
Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.
2013-01-01
The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939
ERIC Educational Resources Information Center
Devauchelle, Anne-Dominique; Oppenheim, Catherine; Rizzi, Luigi; Dehaene, Stanislas; Pallier, Christophe
2009-01-01
Priming effects have been well documented in behavioral psycholinguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a…
ERIC Educational Resources Information Center
Guy, Maggie W.; Reynolds, Greg D.; Zhang, Dantong
2013-01-01
Event-related potentials (ERPs) were utilized in an investigation of 21 six-month-olds' attention to and processing of global and local properties of hierarchical patterns. Overall, infants demonstrated an advantage for processing the overall configuration (i.e., global properties) of local features of hierarchical patterns; however,…
Kasai, Tetsuko; Moriya, Hiroki; Hirano, Shingo
2011-07-05
It has been proposed that the most fundamental units of attentional selection are "objects" that are grouped according to Gestalt factors such as similarity or connectedness. Previous studies using event-related potentials (ERPs) have shown that object-based attention is associated with modulations of the visual-evoked N1 component, which reflects an early cortical mechanism that is shared with spatial attention. However, these studies only examined the case of perceptually continuous objects. The present study examined the case of separate objects that are grouped according to feature similarity (color, shape) by indexing lateralized potentials at posterior sites in a sustained-attention task that involved bilateral stimulus arrays. A behavioral object effect was found only for task-relevant shape similarity. Electrophysiological results indicated that attention was guided to the task-irrelevant side of the visual field due to achromatic-color similarity in N1 (155-205 ms post-stimulus) and early N2 (210-260 ms) and due to shape similarity in early N2 and late N2 (280-400 ms) latency ranges. These results are discussed in terms of selection mechanisms and object/group representations. Copyright © 2011 Elsevier B.V. All rights reserved.
Neural Correlates of Individual Differences in Infant Visual Attention and Recognition Memory
Reynolds, Greg D.; Guy, Maggie W.; Zhang, Dantong
2010-01-01
Past studies have identified individual differences in infant visual attention based upon peak look duration during initial exposure to a stimulus. Colombo and colleagues (e.g., Colombo & Mitchell, 1990) found that infants that demonstrate brief visual fixations (i.e., short lookers) during familiarization are more likely to demonstrate evidence of recognition memory during subsequent stimulus exposure than infants that demonstrate long visual fixations (i.e., long lookers). The current study utilized event-related potentials to examine possible neural mechanisms associated with individual differences in visual attention and recognition memory for 6- and 7.5-month-old infants. Short- and long-looking infants viewed images of familiar and novel objects during ERP testing. There was a stimulus type by looker type interaction at temporal and frontal electrodes on the late slow wave (LSW). Short lookers demonstrated a LSW that was significantly greater in amplitude in response to novel stimulus presentations. No significant differences in LSW amplitude were found based on stimulus type for long lookers. These results indicate deeper processing and recognition memory of the familiar stimulus for short lookers. PMID:21666833
Porcu, Emanuele; Keitel, Christian; Müller, Matthias M
2013-11-27
We investigated effects of inter-modal attention on concurrent visual and tactile stimulus processing by means of stimulus-driven oscillatory brain responses, so-called steady-state evoked potentials (SSEPs). To this end, we frequency-tagged a visual (7.5 Hz) and a tactile stimulus (20 Hz), and participants were cued, on a trial-by-trial basis, to attend to either vision or touch to perform a detection task in the cued modality. SSEPs driven by the stimulation comprised stimulus frequency-following (i.e. fundamental frequency) as well as frequency-doubling (i.e. second harmonic) responses. We observed that inter-modal attention to vision increased amplitude and phase synchrony of the fundamental frequency component of the visual SSEP, while the second harmonic component showed an increase in phase synchrony only. In contrast, inter-modal attention to touch increased SSEP amplitude of the second harmonic but not of the fundamental frequency, while leaving phase synchrony unaffected in both responses. Our results show that inter-modal attention generally influences concurrent stimulus processing in vision and touch, thus extending earlier audio-visual findings to a visuo-tactile stimulus situation. The pattern of results, however, suggests differences in the neural implementation of inter-modal attentional influences on visual vs. tactile stimulus processing. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
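The phase-synchrony measure used in such frequency-tagging studies is commonly computed as inter-trial phase coherence (ITPC): the magnitude of the mean unit phasor of single-trial phases at the tagged frequency (0 for random phases, 1 for perfect locking). A minimal sketch under hypothetical signal parameters, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 200                     # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)  # 1-s epochs
freq = 20.0                  # e.g. the tactile tagging frequency
n_trials = 40

def phase_at(x, f):
    """Phase of the Fourier component of x closest to f."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return np.angle(spec[np.argmin(np.abs(freqs - f))])

def itpc(trials, f):
    """Inter-trial phase coherence: |mean unit phasor| across trials."""
    phases = np.array([phase_at(tr, f) for tr in trials])
    return np.abs(np.mean(np.exp(1j * phases)))

# Stimulus-locked trials: the 20 Hz response has the same phase on every trial.
locked = [np.sin(2 * np.pi * freq * t) + rng.standard_normal(t.size)
          for _ in range(n_trials)]
# Non-locked trials: random phase on every trial.
jittered = [np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi))
            + rng.standard_normal(t.size)
            for _ in range(n_trials)]

print(itpc(locked, freq) > itpc(jittered, freq))  # True
```

Because ITPC discards amplitude, it can dissociate from power, which is how attention can increase phase synchrony of a harmonic without increasing its amplitude, as reported above.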
On the rules of integration of crowded orientation signals
Põder, Endel
2012-01-01
Crowding is related to an integration of feature signals over an inappropriately large area in the visual periphery. The rules of this integration are still not well understood. This study attempts to understand how the orientation signals from the target and flankers are combined. A target Gabor, together with 2, 4, or 6 flanking Gabors, was briefly presented in a peripheral location (4° eccentricity). The observer's task was to identify the orientation of the target (eight-alternative forced-choice). Performance was found to be nonmonotonically dependent on the target–flanker orientation difference (a drop at intermediate differences). For small target–flanker differences, a strong assimilation bias was observed. An effect of the number of flankers was found for heterogeneous flankers only. It appears that different rules of integration are used, dependent on some salient aspects (target pop-out, homogeneity–heterogeneity) of the stimulus pattern. The strategy of combining simple rules may be explained by the goal of the visual system to encode potentially important aspects of a stimulus with limited processing resources and using statistical regularities of the natural visual environment. PMID:23145295
Wasserman, Edward A.; Anderson, Patricia A.
1974-01-01
The learning by hungry pigeons of a discrimination between two successively presented compound visual stimuli was investigated using a two-key autoshaping procedure. Common and distinctive stimulus elements were simultaneously presented on separate keys and either followed by food delivery, S+, or not, S−. The subjects acquired both between-trial and within-trial discriminations. On S+ trials, pigeons pecked the distinctive stimulus more than the common stimulus; before responding ceased on S− trials, they pecked the common stimulus more than the distinctive one. Mastery of the within-display discrimination during S+ trials preceded mastery of the between-trials discrimination. These findings extend the Jenkins-Sainsbury analysis of discriminations based upon a single distinguishing feature to discriminations in which common and distinctive elements are associated with both the positive and negative discriminative stimuli. The similarity of these findings to other effects found in autoshaping—approach to signals that forecast reinforcement and withdrawal from signals that forecast nonreinforcement—is also discussed. PMID:16811812
Covic, Amra; Keitel, Christian; Porcu, Emanuele; Schröger, Erich; Müller, Matthias M
2017-11-01
The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. Gabor patches further "pulsed" (i.e. showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed tracking the neural processing of simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when respective stimulus positions were attended vs. unattended. Strikingly, distinct patterns of effects on pulse- and flicker-driven SSRs indicated that spatial attention and audio-visual synchrony facilitated early visual processing in parallel and via different cortical processes. We found attention effects to resemble the classical top-down gain effect, facilitating both flicker- and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e. pulse-driven SSRs), possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.
Kovalenko, Lyudmyla Y; Chaumon, Maximilien; Busch, Niko A
2012-07-01
Semantic processing of verbal and visual stimuli has been investigated in semantic violation or semantic priming paradigms in which a stimulus is either related or unrelated to a previously established semantic context. A hallmark of semantic priming is the N400 event-related potential (ERP): a deflection of the ERP that is more negative for semantically unrelated target stimuli. The majority of studies investigating the N400 and semantic integration have used verbal material (words or sentences), and standardized stimulus sets with norms for semantic relatedness have been published for verbal but not for visual material. However, semantic processing of visual objects (as opposed to words) is an important issue in research on visual cognition. In this study, we present a set of 800 pairs of semantically related and unrelated visual objects. The images were rated for semantic relatedness by a sample of 132 participants. Furthermore, we analyzed low-level image properties and matched the two semantic categories according to these features. An ERP study confirmed the suitability of this image set for evoking a robust N400 effect of semantic integration. Additionally, using a general linear modeling approach of single-trial data, we also demonstrate that low-level visual image properties and semantic relatedness are in fact only minimally overlapping. The image set is available for download from the authors' website. We expect that the image set will facilitate studies investigating mechanisms of semantic and contextual processing of visual stimuli.
Threat captures attention but does not affect learning of contextual regularities.
Yamaguchi, Motonori; Harwood, Sarah L
2017-04-01
Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.
Contextual modulation and stimulus selectivity in extrastriate cortex.
Krause, Matthew R; Pack, Christopher C
2014-11-01
Contextual modulation is observed throughout the visual system, using techniques ranging from single-neuron recordings to behavioral experiments. Its role in generating feature selectivity within the retina and primary visual cortex has been extensively described in the literature. Here, we describe how similar computations can also elaborate feature selectivity in the extrastriate areas of both the dorsal and ventral streams of the primate visual system. We discuss recent work that makes use of normalization models to test specific roles for contextual modulation in visual cortex function. We suggest that contextual modulation renders neuronal populations more selective for naturalistic stimuli. Specifically, we discuss contextual modulation's role in processing optic flow in areas MT and MST and for representing naturally occurring curvature and contours in areas V4 and IT. We also describe how the circuitry that supports contextual modulation is robust to variations in overall input levels. Finally, we describe how this theory relates to other hypothesized roles for contextual modulation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Ptak, Radek; Lazeyras, François; Di Pietro, Marie; Schnider, Armin; Simon, Stéphane R
2014-07-01
Patients with visual object agnosia fail to recognize the identity of visually presented objects despite preserved semantic knowledge. Object agnosia may result from damage to visual cortex lying close to or overlapping with the lateral occipital complex (LOC), a brain region that exhibits selectivity to the shape of visually presented objects. Despite this anatomical overlap, the relationship between shape processing in the LOC and shape representations in object agnosia is unknown. We studied a patient with object agnosia following isolated damage to the left occipito-temporal cortex overlapping with the LOC. The patient showed intact processing of object structure, yet often made identification errors that were mainly based on the global visual similarity between objects. Using functional Magnetic Resonance Imaging (fMRI) we found that the damaged as well as the contralateral, structurally intact right LOC failed to show any object-selective fMRI activity, though the latter retained selectivity for faces. Thus, unilateral damage to the left LOC led to a bilateral breakdown of neural responses to a specific stimulus class (objects and artefacts) while preserving the response to a different stimulus class (faces). These findings indicate that representations of structure necessary for the identification of objects crucially rely on bilateral, distributed coding of shape features. Copyright © 2014 Elsevier Ltd. All rights reserved.
Hemispheric differences in visual search of simple line arrays.
Polich, J; DeFrancesco, D P; Garon, J F; Cohen, W
1990-01-01
The effects of perceptual organization on hemispheric visual-information processing were assessed with stimulus arrays composed of short lines arranged in columns. A visual-search task was employed in which subjects judged whether all the lines were vertical (same) or whether a single horizontal line was present (different). Stimulus-display organization was manipulated in two experiments by variation of line density, linear organization, and array size. In general, left-visual-field/right-hemisphere presentations demonstrated more rapid and accurate responses when the display was perceived as a whole. Right-visual-field/left-hemisphere superiorities were observed when the display organization coerced assessment of individual array elements because the physical qualities of the stimulus did not effect a gestalt whole. Response times increased somewhat with increases in array size, although these effects interacted with other stimulus variables. Error rates tended to follow the reaction-time patterns. The results suggest that laterality differences in visual search are governed by stimulus properties which contribute to, or inhibit, the perception of a display as a gestalt. The implications of these findings for theoretical interpretations of hemispheric specialization are discussed.
Vaidya, Avinash R; Fellows, Lesley K
2015-09-16
Adaptively interacting with our environment requires extracting information that will allow us to successfully predict reward. This can be a challenge, particularly when there are many candidate cues, and when rewards are probabilistic. Recent work has demonstrated that visual attention is allocated to stimulus features that have been associated with reward on previous trials. The ventromedial frontal lobe (VMF) has been implicated in learning in dynamic environments of this kind, but the mechanism by which this region influences this process is not clear. Here, we hypothesized that the VMF plays a critical role in guiding attention to reward-predictive stimulus features based on feedback. We tested the effects of VMF damage in human subjects on a visual search task in which subjects were primed to attend to task-irrelevant colors associated with different levels of reward, incidental to the search task. Consistent with previous work, we found that distractors had a greater influence on reaction time when they appeared in colors associated with high reward in the previous trial compared with colors associated with low reward in healthy control subjects and patients with prefrontal damage sparing the VMF. However, this reward modulation of attentional priming was absent in patients with VMF damage. Thus, an intact VMF is necessary for directing attention based on experience with cue-reward associations. We suggest that this region plays a role in selecting reward-predictive cues to facilitate future learning. There has been a swell of interest recently in the ventromedial frontal cortex (VMF), a brain region critical to associative learning. However, the underlying mechanism by which this region guides learning is not well understood. Here, we tested the effects of damage to this region in humans on a task in which rewards were linked incidentally to visual features, resulting in trial-by-trial attentional priming. 
Controls and subjects with prefrontal damage sparing the VMF showed normal reward priming, but VMF-damaged patients did not. This work sheds light on a potential mechanism through which this region influences behavior. We suggest that the VMF is necessary for directing attention to reward-predictive visual features based on feedback, facilitating future learning and decision-making. Copyright © 2015 the authors 0270-6474/15/3512813-11$15.00/0.
The impact of interference on short-term memory for visual orientation.
Rademaker, Rosanne L; Bloem, Ilona M; De Weerd, Peter; Sack, Alexander T
2015-12-01
Visual short-term memory serves as an efficient buffer for maintaining information that is no longer directly accessible. How robust are visual memories against interference? Memory for simple visual features has proven vulnerable to distractors containing conflicting information along the relevant stimulus dimension, leading to the idea that interacting feature-specific channels at an early stage of visual processing support memory for simple visual features. Here we showed that memory for a single randomly oriented grating was susceptible to interference from a to-be-ignored distractor grating presented midway through a 3-s delay period. Memory for the initially presented orientation became noisier when it differed from the distractor orientation, and response distributions were shifted toward the distractor orientation (by ∼3°). Interestingly, when the distractor was rendered task-relevant by making it a second memory target, memory for both retained orientations showed reduced reliability as a function of increased orientation differences between them. However, the degree to which responses to the first grating shifted toward the orientation of the task-relevant second grating was much reduced. Finally, using a dichoptic display, we demonstrated that these systematic biases caused by a consciously perceived distractor disappeared once the distractor was presented outside of participants' awareness. Together, our results show that visual short-term memory for orientation can be systematically biased by interfering information that is consciously perceived. (c) 2015 APA, all rights reserved.
Defever, Emmy; Reynvoet, Bert; Gebuis, Titia
2013-10-01
Researchers investigating numerosity processing manipulate the visual stimulus properties (e.g., surface). This is done to control for the confound between numerosity and its visual properties and should allow the examination of pure number processes. Nevertheless, several studies have shown that, despite different visual controls, visual cues continued to exert an influence on numerosity judgments. This study, therefore, investigated whether the impact of the visual stimulus manipulations on numerosity judgments is dependent on the task at hand (comparison task vs. same-different task) and whether this impact changes throughout development. In addition, we examined whether the influence of visual stimulus manipulations on numerosity judgments plays a role in the relation between performance on numerosity tasks and mathematics achievement. Our findings confirmed that the visual stimulus manipulations affect numerosity judgments; more important, we found that these influences changed with increasing age and differed between the comparison and the same-different tasks. Consequently, direct comparisons between numerosity studies using different tasks and age groups are difficult. No meaningful relationship between the performance on the comparison and same-different tasks and mathematics achievement was found in typically developing children, nor did we find consistent differences between children with and without mathematical learning disability (MLD). Copyright © 2013 Elsevier Inc. All rights reserved.
Harrison, Charlotte; Jackson, Jade; Oh, Seung-Mock; Zeringyte, Vaida
2016-01-01
Multivariate pattern analysis of functional magnetic resonance imaging (fMRI) data is widely used, yet the spatial scales and origin of neurovascular signals underlying such analyses remain unclear. We compared decoding performance for stimulus orientation and eye of origin from fMRI measurements in human visual cortex with predictions based on the columnar organization of each feature and estimated the spatial scales of patterns driving decoding. Both orientation and eye of origin could be decoded significantly above chance in early visual areas (V1–V3). Contrary to predictions based on a columnar origin of response biases, decoding performance for eye of origin in V2 and V3 was not significantly lower than that in V1, nor did decoding performance for orientation and eye of origin differ significantly. Instead, response biases for both features showed large-scale organization, evident as a radial bias for orientation, and a nasotemporal bias for eye preference. To determine whether these patterns could drive classification, we quantified the effect on classification performance of binning voxels according to visual field position. Consistent with large-scale biases driving classification, binning by polar angle yielded significantly better decoding performance for orientation than random binning in V1–V3. Similarly, binning by hemifield significantly improved decoding performance for eye of origin. Patterns of orientation and eye preference bias in V2 and V3 showed a substantial degree of spatial correlation with the corresponding patterns in V1, suggesting that response biases in these areas originate in V1. Together, these findings indicate that multivariate classification results need not reflect the underlying columnar organization of neuronal response selectivities in early visual areas. NEW & NOTEWORTHY Large-scale response biases can account for decoding of orientation and eye of origin in human early visual areas V1–V3. 
For eye of origin this pattern is a nasotemporal bias; for orientation it is a radial bias. Differences in decoding performance across areas and stimulus features are not well predicted by differences in columnar-scale organization of each feature. Large-scale biases in extrastriate areas are spatially correlated with those in V1, suggesting biases originate in primary visual cortex. PMID:27903637
Evidence for unlimited capacity processing of simple features in visual cortex
White, Alex L.; Runeson, Erik; Palmer, John; Ernst, Zachary R.; Boynton, Geoffrey M.
2017-01-01
Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level–dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity. PMID:28654964
Tanaka, Tomohiro; Nishida, Satoshi
2015-01-01
The neuronal processes that underlie visual searches can be divided into two stages: target discrimination and saccade preparation/generation. This division predicts that the duration of the former, prediscrimination stage varies with search difficulty across different stimulus conditions, whereas the duration of the latter, postdiscrimination stage is stimulus invariant. However, recent studies have suggested that the length of the postdiscrimination interval changes with different stimulus conditions. To address whether and how the visual stimulus affects determination of the postdiscrimination interval, we recorded single-neuron activity in the lateral intraparietal area (LIP) when monkeys (Macaca fuscata) performed a color-singleton search involving four stimulus conditions that differed regarding luminance (Bright vs. Dim) and target-distractor color similarity (Easy vs. Difficult). We specifically focused on comparing activities between the Bright-Difficult and Dim-Easy conditions, in which the visual stimuli were considerably different, but the mean reaction times were indistinguishable. This allowed us to examine the neuronal activity when the difference in the degree of search speed between different stimulus conditions was minimal. We found that not only prediscrimination but also postdiscrimination intervals varied across stimulus conditions: the postdiscrimination interval was longer in the Dim-Easy condition than in the Bright-Difficult condition. Further analysis revealed that the postdiscrimination interval might vary with stimulus luminance. A computer simulation using an accumulation-to-threshold model suggested that the luminance-related difference in visual response strength at discrimination time could be the cause of different postdiscrimination intervals. PMID:25995344
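The accumulation-to-threshold idea invoked above can be sketched as a simple diffusion-style simulation, in which a stronger visual response (a larger drift rate) reaches threshold sooner, shortening the interval. The function and all parameter values are hypothetical illustrations, not the authors' model.

```python
import numpy as np

def time_to_threshold(drift, threshold=1.0, noise=0.1, dt=0.001,
                      rng=None, max_t=2.0):
    """Simulate one accumulation-to-threshold trial; return crossing time (s).

    drift is the mean evidence accumulated per second; a stronger visual
    response (e.g. a brighter stimulus) maps onto a larger drift.
    """
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while x < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

rng = np.random.default_rng(1)
bright = np.mean([time_to_threshold(4.0, rng=rng) for _ in range(200)])
dim = np.mean([time_to_threshold(2.0, rng=rng) for _ in range(200)])
# Stronger (brighter) responses cross threshold earlier on average.
```

Under these assumptions, the mean crossing time for the high-drift ("Bright") condition falls well below that for the low-drift ("Dim") condition, mirroring the luminance-related difference in postdiscrimination intervals.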
Goal-Directed and Habit-Like Modulations of Stimulus Processing during Reinforcement Learning.
Luque, David; Beesley, Tom; Morris, Richard W; Jack, Bradley N; Griffiths, Oren; Whitford, Thomas J; Le Pelley, Mike E
2017-03-15
Recent research has shown that perceptual processing of stimuli previously associated with high-value rewards is automatically prioritized even when rewards are no longer available. It has been hypothesized that such reward-related modulation of stimulus salience is conceptually similar to an "attentional habit." Recording event-related potentials in humans during a reinforcement learning task, we show strong evidence in favor of this hypothesis. Resistance to outcome devaluation (the defining feature of a habit) was shown by the stimulus-locked P1 component, reflecting activity in the extrastriate visual cortex. Analysis at longer latencies revealed a positive component (corresponding to the P3b, from 550-700 ms) sensitive to outcome devaluation. Therefore, distinct spatiotemporal patterns of brain activity were observed corresponding to habitual and goal-directed processes. These results demonstrate that reinforcement learning engages both attentional habits and goal-directed processes in parallel. Consequences for brain and computational models of reinforcement learning are discussed. SIGNIFICANCE STATEMENT The human attentional network adapts to detect stimuli that predict important rewards. A recent hypothesis suggests that the visual cortex automatically prioritizes reward-related stimuli, driven by cached representations of reward value; that is, stimulus-response habits. Alternatively, the neural system may track the current value of the predicted outcome. Our results demonstrate for the first time that visual cortex activity is increased for reward-related stimuli even when the rewarding event is temporarily devalued. In contrast, longer-latency brain activity was specifically sensitive to transient changes in reward value. Therefore, we show that both habit-like attention and goal-directed processes occur in the same learning episode at different latencies. This result has important consequences for computational models of reinforcement learning. 
Copyright © 2017 the authors 0270-6474/17/373009-09$15.00/0.
The role of prestimulus activity in visual extinction
Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl
2013-01-01
Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398
Eccentricity effects in vision and attention.
Staugaard, Camilla Funch; Petersen, Anders; Vangkilde, Signe
2016-11-01
Stimulus eccentricity affects visual processing in multiple ways. Performance on a visual task is often better when target stimuli are presented near or at the fovea compared to the retinal periphery. For instance, reaction times and error rates are often reported to increase with increasing eccentricity. Such findings have been interpreted as purely visual, reflecting neurophysiological differences in central and peripheral vision, as well as attentional, reflecting a central bias in the allocation of attentional resources. Other findings indicate that in some cases, information from the periphery is preferentially processed. Specifically, it has been suggested that visual processing speed increases with increasing stimulus eccentricity, and that this positive correlation is reduced, but not eliminated, when the amount of cortex activated by a stimulus is kept constant by magnifying peripheral stimuli (Carrasco et al., 2003). In this study, we investigated effects of eccentricity on visual attentional capacity with and without magnification, using computational modeling based on Bundesen's (1990) theory of visual attention. Our results suggest a general decrease in attentional capacity with increasing stimulus eccentricity, irrespective of magnification. We discuss these results in relation to the physiology of the visual system, the use of different paradigms for investigating visual perception across the visual field, and the use of different stimulus materials (e.g. Gabor patches vs. letters). Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
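In TVA-style models of the kind used here, the probability of encoding a stimulus into visual short-term memory within a given exposure is often expressed as an exponential race: P = 1 - exp(-v(τ - t0)) for exposure duration τ above a perceptual threshold t0, with processing rate v. A minimal sketch; the rate and threshold values, and the assumption that eccentricity lowers v, are illustrative rather than the authors' fitted parameters.

```python
import math

def p_encoded(exposure_ms, rate_per_s, t0_ms=20.0):
    """Probability a stimulus is encoded into VSTM within the exposure,
    given processing rate v (items/s) and perceptual threshold t0 (ms),
    as in exponential-race formulations of TVA."""
    if exposure_ms <= t0_ms:
        return 0.0
    return 1.0 - math.exp(-rate_per_s * (exposure_ms - t0_ms) / 1000.0)

# A lower processing rate for an eccentric stimulus lowers report
# probability at every exposure duration (rates here are made up).
central = p_encoded(100, rate_per_s=40.0)
eccentric = p_encoded(100, rate_per_s=20.0)
```

Fitting such curves to report accuracy at several exposure durations is how eccentricity-dependent capacity estimates of the kind reported above are typically obtained.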
Spatial updating in human parietal cortex
NASA Technical Reports Server (NTRS)
Merriam, Elisha P.; Genovese, Christopher R.; Colby, Carol L.
2003-01-01
Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.
Moving Stimuli Facilitate Synchronization But Not Temporal Perception
Silva, Susana; Castro, São Luís
2016-01-01
Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap. PMID:27909419
Comparing visual representations across human fMRI and computational vision
Leeds, Daniel D.; Seibert, Darren A.; Pyles, John A.; Tarr, Michael J.
2013-01-01
Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from “interest points,” was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation. PMID:24273227
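Representational dissimilarity analysis of the kind described here compares the pairwise dissimilarity structure of neural response patterns with that of a model's feature responses. A minimal sketch on synthetic data; the array shapes, noise level, and the assumption that neural patterns share features with one model are purely illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix in condensed (vector) form:
    1 - Pearson correlation between response patterns for each object pair.
    patterns: (n_objects, n_features) array."""
    return pdist(patterns, metric="correlation")

rng = np.random.default_rng(0)
n_obj, n_vox, n_model = 60, 100, 128

# Hypothetical data: neural patterns partly driven by one model's features.
model_feats = rng.normal(size=(n_obj, n_model))
neural = model_feats[:, :n_vox] + 0.5 * rng.normal(size=(n_obj, n_vox))
unrelated = rng.normal(size=(n_obj, n_model))

# Rank-correlate the neural RDM with each candidate model's RDM.
rho_model, _ = spearmanr(rdm(neural), rdm(model_feats))
rho_null, _ = spearmanr(rdm(neural), rdm(unrelated))
```

In a searchlight analysis this comparison is repeated within each local sphere of voxels, mapping where in cortex a given computer vision model's representational geometry matches the neural one.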
Phonological Processing in Human Auditory Cortical Fields
Woods, David L.; Herron, Timothy J.; Cate, Anthony D.; Kang, Xiaojian; Yund, E. W.
2011-01-01
We used population-based cortical-surface analysis of functional magnetic resonance imaging (fMRI) data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) Medial belt ACFs preferred AMNBs, and lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs, with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention, suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features. PMID:21541252
Face imagery is based on featural representations.
Lobmaier, Janek S; Mast, Fred W
2008-01-01
The effect of imagery on featural and configural face processing was investigated using blurred and scrambled faces. Blurring reduces featural information; scrambling a face into its constituent parts destroys configural information. Twenty-four participants learned ten faces together with the sound of a name. In subsequent matching-to-sample tasks, participants had to decide whether an auditorily presented name belonged to a visually presented scrambled or blurred face in two experimental conditions. In the imagery condition, the name was presented prior to the visual stimulus and participants were required to imagine the corresponding face as clearly and vividly as possible. In the perception condition, name and test face were presented simultaneously, so no facilitation via mental imagery was possible. Analyses of the hit rates showed that in the imagery condition scrambled faces were recognized significantly better than blurred faces, whereas there was no such effect in the perception condition. The results suggest that mental imagery activates featural representations more than configural representations.
Dynamic reweighting of three modalities for sensor fusion.
Hwang, Sungjae; Agada, Peter; Kiemel, Tim; Jeka, John J
2014-01-01
We simultaneously perturbed visual, vestibular and proprioceptive modalities to understand how sensory feedback is re-weighted so that overall feedback remains suited to stabilizing upright stance. Ten healthy young subjects received an 80 Hz vibratory stimulus to their bilateral Achilles tendons (stimulus turns on-off at 0.28 Hz), a ± 1 mA binaural monopolar galvanic vestibular stimulus at 0.36 Hz, and a visual stimulus at 0.2 Hz during standing. The visual stimulus was presented at different amplitudes (0.2, 0.8 deg rotation about ankle axis) to measure: the change in gain (weighting) to vision, an intramodal effect; and a change in gain to vibration and galvanic vestibular stimulation, both intermodal effects. The results showed a clear intramodal visual effect, indicating a de-emphasis on vision when the amplitude of visual stimulus increased. At the same time, an intermodal visual-proprioceptive reweighting effect was observed with the addition of vibration, which is thought to change proprioceptive inputs at the ankles, forcing the nervous system to rely more on vision and vestibular modalities. Similar intermodal effects for visual-vestibular reweighting were observed, suggesting that vestibular information is not a "fixed" reference, but is dynamically adjusted in the sensor fusion process. This is the first time, to our knowledge, that the interplay between the three primary modalities for postural control has been clearly delineated, illustrating a central process that fuses these modalities for accurate estimates of self-motion.
Nishimura, Akio; Yokosawa, Kazuhiko
2012-01-01
Tlauka and McKenna (2000) reported a reversal of the traditional stimulus-response compatibility (SRC) effect (faster responding to a stimulus presented on the same side than to one on the opposite side) when the stimulus appearing on one side of a display is a member of a superordinate unit that is largely on the opposite side. We investigated the effects of a visual cue that explicitly shows a superordinate unit, and of assignment of multiple stimuli within each superordinate unit to one response, on the SRC effect based on superordinate unit position. Three experiments revealed that stimulus-response assignment is critical, while the visual cue plays a minor role, in eliciting the SRC effect based on the superordinate unit position. Findings suggest bidirectional interaction between perception and action and simultaneous spatial stimulus coding according to multiple frames of reference, with contribution of each coding to the SRC effect flexibly varying with task situations.
Kasties, Nils; Starosta, Sarah; Güntürkün, Onur; Stüttgen, Maik C.
2016-01-01
Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects’ strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells’ firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters which differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations, e.g., those requiring value-based choices. PMID:27762287
Subliminal perception of complex visual stimuli.
Ionescu, Mihai Radu
2016-01-01
Rationale: Unconscious perception of various sensory modalities is an active subject of research, though its function and effect on behavior are uncertain. Objective: The present study tried to assess whether unconscious visual perception could occur with more complex visual stimuli than previously utilized. Methods and Results: Videos containing slideshows of indifferent complex images with interspersed frames of interest of various durations were presented to 24 healthy volunteers. The perception of the stimulus was evaluated with a forced-choice questionnaire, while awareness was quantified by self-assessment with a modified awareness scale annexed to each question with 4 categories of awareness. At a stimulus duration of 16.66 ms, conscious awareness was not possible and answers regarding the stimulus were random. At 50 ms, nonrandom answers were coupled with no self-reported awareness, suggesting unconscious perception of the stimulus. At longer stimulus durations, significantly correct answers were coupled with a degree of conscious awareness. Discussion: At 50 ms, unconscious perception is possible even with complex visual stimuli. Further studies are recommended, focusing on stimulus durations between 16.66 and 50 ms.
Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading
O’Sullivan, Aisling E.; Crosse, Michael J.; Di Liberto, Giovanni M.; Lalor, Edmund C.
2017-01-01
Speech is a multisensory percept, comprising an auditory and visual component. While the content and processing pathways of audio speech have been well characterized, the visual component is less well understood. In this work, we expand current methodologies using system identification to introduce a framework that facilitates the study of visual speech in its natural, continuous form. Specifically, we use models based on the unheard acoustic envelope (E), the motion signal (M) and categorical visual speech features (V) to predict EEG activity during silent lipreading. Our results show that each of these models performs similarly at predicting EEG in visual regions and that respective combinations of the individual models (EV, MV, EM and EMV) provide an improved prediction of the neural activity over their constituent models. In comparing these different combinations, we find that the model incorporating all three types of features (EMV) outperforms the individual models, as well as both the EV and MV models, while it performs similarly to the EM model. Importantly, EM does not outperform EV and MV, which, considering the higher dimensionality of the V model, suggests that more data is needed to clarify this finding. Nevertheless, the performance of EMV, and comparisons of the subject performances for the three individual models, provides further evidence to suggest that visual regions are involved in both low-level processing of stimulus dynamics and categorical speech perception. This framework may prove useful for investigating modality-specific processing of visual speech under naturalistic conditions. PMID:28123363
Dong, Guangheng; Yang, Lizhu; Shen, Yue
2009-08-21
The present study investigated the course of visual search toward a target in a fixed location, using an emotional flanker task. Event-related potentials (ERPs) were recorded while participants performed the task. Emotional facial expressions were used as emotion-eliciting triggers. The course of visual search was analyzed through the emotional effects arising from these stimuli. Flanker stimuli showed effects at about 150-250 ms following stimulus onset, while target stimuli showed effects at about 300-400 ms. The visual search sequence in an emotional flanker task moved from a whole overview to a specific target, even though the target always appeared at a known location. The processing sequence was "parallel" in this task. The results supported the feature integration theory of visual search.
Non-provocative diagnostics of photosensitivity using visual evoked potentials.
Vermeulen, Joost; Kalitzin, Stiliyan; Parra, Jaime; Dekker, Erwin; Vossepoel, Albert; da Silva, Fernando Lopes
2008-04-01
Photosensitive epilepsy (PSE) is the most common form of reflex epilepsy. Usually, to find out whether a patient is sensitive, he/she is stimulated visually with, e.g., a stroboscopic light stimulus at variable frequency and intensity until a photoparoxysmal response (PPR) occurs. The research described in this work aims to determine whether photosensitivity can be detected without provoking a PPR. Twenty-two subjects, 15 with known photosensitivity, were stimulated with visual stimuli that did not provoke a PPR. Using an "evoked response representation", 18 features were analytically derived from EEG signals. Single- and multi-feature classification paradigms were applied to extract those features that best separate subjects with PSE from controls. Two variables in the "evoked response representation", a frequency term and a goodness-of-fit term to a particular template, appeared to be best suited to make a prediction about the photosensitivity of a subject. Evoked responses appear to carry information about potential PSE. This result can be useful for screening patients for photosensitivity, and it may also help to assess in a quantitative way the effectiveness of medical therapy.
Dynamic interactions between visual working memory and saccade target selection
Schneegans, Sebastian; Spencer, John P.; Schöner, Gregor; Hwang, Seongmin; Hollingworth, Andrew
2014-01-01
Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task-irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. PMID:25228628
Stimulus competition mediates the joint effects of spatial and feature-based attention
White, Alex L.; Rolfs, Martin; Carrasco, Marisa
2015-01-01
Distinct attentional mechanisms enhance the sensory processing of visual stimuli that appear at task-relevant locations and have task-relevant features. We used a combination of psychophysics and computational modeling to investigate how these two types of attention—spatial and feature based—interact to modulate sensitivity when combined in one task. Observers monitored overlapping groups of dots for a target change in color saturation, which they had to localize as being in the upper or lower visual hemifield. Pre-cues indicated the target's most likely location (left/right), color (red/green), or both location and color. We measured sensitivity (d′) for every combination of the location cue and the color cue, each of which could be valid, neutral, or invalid. When three competing saturation changes occurred simultaneously with the target change, there was a clear interaction: The spatial cueing effect was strongest for the cued color, and the color cueing effect was strongest at the cued location. In a second experiment, only the target dot group changed saturation, such that stimulus competition was low. The resulting cueing effects were statistically independent and additive: The color cueing effect was equally strong at attended and unattended locations. We account for these data with a computational model in which spatial and feature-based attention independently modulate the gain of sensory responses, consistent with measurements of cortical activity. Multiple responses then compete via divisive normalization. Sufficient competition creates interactions between the two cueing effects, although the attentional systems are themselves independent. This model helps reconcile seemingly disparate behavioral and physiological findings. PMID:26473316
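Sensitivity in each cue condition can be quantified with standard signal detection theory. The sketch below uses a generic yes/no d′ formula with made-up hit and false-alarm rates, purely to illustrate the kind of per-cell computation involved; the paper's two-alternative localization task would call for the appropriate forced-choice correction:

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Sensitivity d' under equal-variance Gaussian SDT:
    z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative (invented) rates for valid/neutral/invalid location cues
# at the cued color; one such d' is computed per cue combination.
rates = {"valid": (0.85, 0.10), "neutral": (0.75, 0.15), "invalid": (0.60, 0.20)}
dprimes = {cue: dprime(h, f) for cue, (h, f) in rates.items()}
```

Comparing how the spatial cueing effect (valid minus invalid d′) changes across color-cue conditions is what reveals the interaction under stimulus competition.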
Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond
2015-01-01
Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.
The levels of perceptual processing and the neural correlates of increasing subjective visibility.
Binder, Marek; Gociewicz, Krzysztof; Windey, Bert; Koculak, Marcin; Finc, Karolina; Nikadon, Jan; Derda, Monika; Cleeremans, Axel
2017-10-01
According to the levels-of-processing hypothesis, transitions from unconscious to conscious perception may depend on stimulus processing level, with more gradual changes for low-level stimuli and more dichotomous changes for high-level stimuli. In an event-related fMRI study we explored this hypothesis using a visual backward masking procedure. Task requirements manipulated level of processing. Participants reported the magnitude of the target digit in the high-level task, its color in the low-level task, and rated subjective visibility of stimuli using the Perceptual Awareness Scale. Intermediate stimulus visibility was reported more frequently in the low-level task, confirming prior behavioral results. Visible targets recruited insulo-fronto-parietal regions in both tasks. Task effects were observed in visual areas, with higher activity in the low-level task across all visibility levels. Thus, the influence of level of processing on conscious perception may be mediated by attentional modulation of activity in regions representing features of consciously experienced stimuli. Copyright © 2017 Elsevier Inc. All rights reserved.
Retrospective Attention Gates Discrete Conscious Access to Past Sensory Stimuli.
Thibault, Louis; van den Berg, Ronald; Cavanagh, Patrick; Sergent, Claire
2016-01-01
Cueing attention after the disappearance of visual stimuli biases which items will be remembered best. This observation has historically been attributed to the influence of attention on memory as opposed to subjective visual experience. We recently challenged this view by showing that cueing attention after the stimulus can improve the perception of a single Gabor patch at threshold levels of contrast. Here, we test whether this retro-perception actually increases the frequency of consciously perceiving the stimulus, or simply allows for a more precise recall of its features. We used retro-cues in an orientation-matching task and performed mixture-model analysis to independently estimate the proportion of guesses and the precision of non-guess responses. We find that the improvements in performance conferred by retrospective attention are overwhelmingly determined by a reduction in the proportion of guesses, providing strong evidence that attracting attention to the target's location after its disappearance increases the likelihood of perceiving it consciously.
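The mixture-model analysis described above is commonly implemented as a uniform-plus-von-Mises mixture over response errors: guesses contribute a flat component, non-guess responses a peaked one. The sketch below fits this mixture by brute-force grid search on simulated data; all parameter values are illustrative, and published analyses typically use maximum-likelihood or EM fitting rather than a grid:

```python
import numpy as np
from scipy.stats import vonmises

def mixture_loglik(errors, guess_rate, kappa):
    """Log-likelihood of response errors (radians in [-pi, pi]) under a
    mixture of uniform guesses (probability `guess_rate`) and von Mises
    responses centred on the target (concentration `kappa`)."""
    p = guess_rate / (2 * np.pi) + (1 - guess_rate) * vonmises.pdf(errors, kappa)
    return np.log(p).sum()

# Simulated observer: 30% guesses, memory precision kappa = 8.
rng = np.random.default_rng(1)
n, true_g, true_kappa = 500, 0.3, 8.0
is_guess = rng.random(n) < true_g
errors = np.where(is_guess,
                  rng.uniform(-np.pi, np.pi, n),
                  rng.vonmises(0.0, true_kappa, n))

# Grid search over guess rate and precision; keep the best-scoring pair.
grid = ((mixture_loglik(errors, g, k), g, k)
        for g in np.linspace(0.01, 0.99, 50)
        for k in np.linspace(1.0, 20.0, 40))
_, g_hat, k_hat = max(grid)
```

A retro-cue benefit driven by "a reduction in the proportion of guesses" corresponds to a lower fitted `g_hat` in cued than uncued conditions, with `k_hat` (precision) unchanged.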
The spread of attention across features of a surface
Ernst, Zachary Raymond; Jazayeri, Mehrdad
2013-01-01
Contrasting theories of visual attention have emphasized selection by spatial location, individual features, and whole objects. We used functional magnetic resonance imaging to ask whether and how attention to one feature of an object spreads to other features of the same object. Subjects viewed two spatially superimposed surfaces of random dots that were segregated by distinct color-motion conjunctions. The color and direction of motion of each surface changed smoothly and in a cyclical fashion. Subjects were required to track one feature (e.g., color) of one of the two surfaces and detect brief moments when the attended feature diverged from its smooth trajectory. To tease apart the effect of attention to individual features on the hemodynamic response, we used a frequency-tagging scheme. In this scheme, the stimulus features (color and direction of motion) are modulated periodically at distinct frequencies so that the contribution of each feature to the hemodynamics can be inferred from the harmonic response at the corresponding frequency. We found that attention to one feature (e.g., color) of one surface increased the response modulation not only to the attended feature but also to the other feature (e.g., motion) of the same surface. This attentional modulation was evident in multiple visual areas and was present as early as V1. The spread of attention to the behaviorally irrelevant features of a surface suggests that attention may automatically select all features of a single object. Thus object-based attention may be supported by an enhancement of feature-specific sensory signals in the visual cortex. PMID:23883860
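The frequency-tagging logic can be illustrated with a toy signal: each feature is modulated at its own frequency, and each feature's contribution to the response is read off at the corresponding spectral bin. All frequencies and amplitudes below are invented for illustration, not taken from the study:

```python
import numpy as np

fs, dur = 100.0, 60.0  # sampling rate (Hz) and duration (s); illustrative
t = np.arange(0, dur, 1 / fs)
f_color, f_motion = 0.5, 0.8  # assumed tagging frequencies for the two features

# Simulated response in which attention boosts the color-tagged modulation.
rng = np.random.default_rng(2)
signal = (1.5 * np.sin(2 * np.pi * f_color * t)
          + 0.7 * np.sin(2 * np.pi * f_motion * t)
          + rng.standard_normal(t.size))

# Amplitude spectrum; each tagged feature projects onto its own bin.
spec = np.abs(np.fft.rfft(signal)) / (t.size / 2)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp_color = spec[np.argmin(np.abs(freqs - f_color))]
amp_motion = spec[np.argmin(np.abs(freqs - f_motion))]
```

Comparing these harmonic amplitudes across attention conditions is how the attended and unattended features' modulations are separated despite sharing the same voxels.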
Fries, Pascal; Womelsdorf, Thilo; Oostenveld, Robert; Desimone, Robert
2008-04-30
Selective attention lends relevant sensory input priority access to higher-level brain areas and ultimately to behavior. Recent studies have suggested that those neurons in visual areas that are activated by an attended stimulus engage in enhanced gamma-band (30-70 Hz) synchronization compared with neurons activated by a distracter. Such precise synchronization could enhance the postsynaptic impact of cells carrying behaviorally relevant information. Previous studies have used the local field potential (LFP) power spectrum or spike-LFP coherence (SFC) to indirectly estimate spike synchronization. Here, we directly demonstrate zero-phase gamma-band coherence among spike trains of V4 neurons. This synchronization was particularly evident during visual stimulation and enhanced by selective attention, thus confirming the pattern inferred from LFP power and SFC. We therefore investigated the time course of LFP gamma-band power and found rapid dynamics consistent with interactions of top-down spatial and feature attention with bottom-up saliency. In addition to the modulation of synchronization during visual stimulation, selective attention significantly changed the prestimulus pattern of synchronization. Attention inside the receptive field of the recorded neuronal population enhanced gamma-band synchronization and strongly reduced alpha-band (9-11 Hz) synchronization in the prestimulus period. These results lend further support for a functional role of rhythmic neuronal synchronization in attentional stimulus selection.
Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.
Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina
2018-05-14
The cerebral cortex modulates early sensory processing via feedback connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance or even enable understanding of what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: together with similar findings in the auditory modality, they imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition. Copyright © 2018. Published by Elsevier Inc.
The feature-weighted receptive field: an interpretable encoding model for complex feature spaces.
St-Yves, Ghislain; Naselaris, Thomas
2017-06-20
We introduce the feature-weighted receptive field (fwRF), an encoding model designed to balance expressiveness, interpretability and scalability. The fwRF is organized around the notion of a feature map: a transformation of visual stimuli into visual features that preserves the topology of visual space (but not necessarily the native resolution of the stimulus). The key assumption of the fwRF model is that activity in each voxel encodes variation in a spatially localized region across multiple feature maps. This region is fixed for all feature maps; however, the contribution of each feature map to voxel activity is weighted. Thus, the model has two separable sets of parameters: "where" parameters that characterize the location and extent of pooling over visual features, and "what" parameters that characterize tuning to visual features. The "where" parameters are analogous to classical receptive fields, while "what" parameters are analogous to classical tuning functions. By treating these as separable parameters, the fwRF model complexity is independent of the resolution of the underlying feature maps. This makes it possible to estimate models with thousands of high-resolution feature maps from relatively small amounts of data. Once a fwRF model has been estimated from data, spatial pooling and feature tuning can be read off directly with no (or very little) additional post-processing or in-silico experimentation. We describe an optimization algorithm for estimating fwRF models from data acquired during standard visual neuroimaging experiments. We then demonstrate the model's application to two distinct sets of features: Gabor wavelets and features supplied by a deep convolutional neural network. We show that when Gabor feature maps are used, the fwRF model recovers receptive fields and spatial frequency tuning functions consistent with known organizational principles of the visual cortex.
We also show that a fwRF model can be used to regress entire deep convolutional networks against brain activity. The ability to use whole networks in a single encoding model yields state-of-the-art prediction accuracy. Our results suggest a wide variety of uses for the feature-weighted receptive field model, from retinotopic mapping with natural scenes, to regressing the activities of whole deep neural networks onto measured brain activity. Copyright © 2017. Published by Elsevier Inc.
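The fwRF's separation into "where" and "what" parameters can be sketched as a single shared Gaussian pooling field applied to every feature map, followed by a weighted sum across maps. All shapes and parameter values below are illustrative, not the authors' implementation:

```python
import numpy as np

def gaussian_pool(x, y, sigma, size):
    """2-D Gaussian pooling field, normalized to sum to 1.
    (x, y, sigma) are the "where" parameters: location and extent."""
    gy, gx = np.mgrid[0:size, 0:size]
    g = np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def fwrf_predict(feature_maps, x, y, sigma, weights):
    """Predicted voxel activity: one shared pooling field applied to every
    feature map, then a weighted sum across maps ("what" parameters)."""
    size = feature_maps.shape[-1]
    pool = gaussian_pool(x, y, sigma, size)
    pooled = (feature_maps * pool).sum(axis=(-2, -1))  # one scalar per map
    return pooled @ weights

# Toy example: 3 constant feature maps of size 8x8; pooling any constant
# map returns its value, so the prediction is just the weight sum here.
fm = np.ones((3, 8, 8))
w = np.array([1.0, 2.0, 3.0])
pred = fwrf_predict(fm, x=4, y=4, sigma=2.0, weights=w)
```

Because the pooling field is shared across maps, the parameter count grows with the number of maps, not their resolution, which is what makes whole-network feature spaces tractable.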
Kavcic, Voyko; Triplett, Regina L.; Das, Anasuya; Martin, Tim; Huxlin, Krystel R.
2015-01-01
Partial cortical blindness is a visual deficit caused by unilateral damage to the primary visual cortex, a condition previously considered beyond hopes of rehabilitation. However, recent data demonstrate that patients may recover both simple and global motion discrimination following intensive training in their blind field. The present experiments characterized motion-induced neural activity of cortically blind (CB) subjects prior to the onset of visual rehabilitation. This was done to provide information about visual processing capabilities available to mediate training-induced visual improvements. Visual Evoked Potentials (VEPs) were recorded from two experimental groups consisting of 9 CB subjects and 9 age-matched, visually-intact controls. VEPs were collected following lateralized stimulus presentation to each of the 4 visual field quadrants. VEP waveforms were examined for both stimulus-onset (SO) and motion-onset (MO) related components in postero-lateral electrodes. While stimulus presentation to intact regions of the visual field elicited normal SO-P1, SO-N1, SO-P2 and MO-N2 amplitudes and latencies in contralateral brain regions of CB subjects, these components were not observed contralateral to stimulus presentation in blind quadrants of the visual field. In damaged brain hemispheres, SO-VEPs were only recorded following stimulus presentation to intact visual field quadrants, via inter-hemispheric transfer. MO-VEPs were only recorded from damaged left brain hemispheres, possibly reflecting a native left/right asymmetry in inter-hemispheric connections. The present findings suggest that damaged brain hemispheres contain areas capable of responding to visual stimulation. However, in the absence of training or rehabilitation, these areas only generate detectable VEPs in response to stimulation of the intact hemifield of vision. PMID:25575450
The Effect of Visual Threat on Spatial Attention to Touch
ERIC Educational Resources Information Center
Poliakoff, Ellen; Miles, Eleanor; Li, Xinying; Blanchette, Isabelle
2007-01-01
Viewing a threatening stimulus can bias visual attention toward that location. Such effects have typically been investigated only in the visual modality, despite the fact that many threatening stimuli are most dangerous when close to or in contact with the body. Recent multisensory research indicates that a neutral visual stimulus, such as a light…
ERIC Educational Resources Information Center
Locke, Linda; Locke, Terry
2011-01-01
How might primary students utilise the stimulus of a painting in a collaborative composition drawing on a non-conventional sound palette of their own making? This practitioner research features 17 recorder players from a Year 6 class (10-11-year-olds) who attend a West Auckland primary school in New Zealand. These children were invited to…
Werner, Sebastian; Noppeney, Uta
2010-08-01
Merging information from multiple senses provides a more reliable percept of our environment. Yet, little is known about where and how various sensory features are combined within the cortical hierarchy. Combining functional magnetic resonance imaging and psychophysics, we investigated the neural mechanisms underlying integration of audiovisual object features. Subjects categorized or passively perceived audiovisual object stimuli with the informativeness (i.e., degradation) of the auditory and visual modalities being manipulated factorially. Controlling for low-level integration processes, we show higher level audiovisual integration selectively in the superior temporal sulci (STS) bilaterally. The multisensory interactions were primarily subadditive and even suppressive for intact stimuli but turned into additive effects for degraded stimuli. Consistent with the inverse effectiveness principle, auditory and visual informativeness determine the profile of audiovisual integration in STS similarly to the influence of physical stimulus intensity in the superior colliculus. Importantly, when holding stimulus degradation constant, subjects' audiovisual behavioral benefit predicts their multisensory integration profile in STS: only subjects that benefit from multisensory integration exhibit superadditive interactions, while those that do not benefit show suppressive interactions. In conclusion, superadditive and subadditive integration profiles in STS are functionally relevant and related to behavioral indices of multisensory integration with superadditive interactions mediating successful audiovisual object categorization.
Ludwig, Karin; Sterzer, Philipp; Kathmann, Norbert; Hesselmann, Guido
2016-10-01
As a functional organization principle in cortical visual information processing, the influential 'two visual systems' hypothesis proposes a division of labor between a dorsal "vision-for-action" and a ventral "vision-for-perception" stream. A core assumption of this model is that the two visual streams are differentially involved in visual awareness: ventral stream processing is closely linked to awareness while dorsal stream processing is not. In this functional magnetic resonance imaging (fMRI) study with human observers, we directly probed the stimulus-related information encoded in fMRI response patterns in both visual streams as a function of stimulus visibility. We parametrically modulated the visibility of face and tool stimuli by varying the contrasts of the masks in a continuous flash suppression (CFS) paradigm. We found that visibility - operationalized by objective and subjective measures - decreased proportionally with increasing log CFS mask contrast. Neuronally, this relationship was closely matched by ventral visual areas, showing a linear decrease of stimulus-related information with increasing mask contrast. Stimulus-related information in dorsal areas also showed a dependency on mask contrast, but the decrease rather followed a step function instead of a linear function. Together, our results suggest that both the ventral and the dorsal visual stream are linked to visual awareness, but neural activity in ventral areas more closely reflects graded differences in awareness compared to dorsal areas. Copyright © 2016 Elsevier Ltd. All rights reserved.
Paltoglou, Aspasia E; Sumner, Christian J; Hall, Deborah A
2011-01-01
Feature-specific enhancement refers to the process by which selectively attending to a particular stimulus feature specifically increases the response in the same region of the brain that codes that stimulus property. Whereas there are many demonstrations of this mechanism in the visual system, the evidence is less clear in the auditory system. The present functional magnetic resonance imaging (fMRI) study examined this process for two complex sound features, namely frequency modulation (FM) and spatial motion. The experimental design enabled us to investigate whether selectively attending to FM and spatial motion enhanced activity in those auditory cortical areas that were sensitive to the two features. To control for attentional effort, the difficulty of the target-detection tasks was matched as closely as possible within listeners. Locations of FM-related and motion-related activation were broadly compatible with previous research. The results also confirmed a general enhancement across the auditory cortex when either feature was being attended to, as compared with passive listening. The feature-specific effects of selective attention revealed the novel finding of enhancement for the nonspatial (FM) feature, but not for the spatial (motion) feature. However, attention to spatial features also recruited several areas outside the auditory cortex. Further analyses led us to conclude that feature-specific effects of selective attention are not statistically robust, and appear to be sensitive to the choice of fMRI experimental design and localizer contrast. PMID:21447093
Visual Masking During Pursuit Eye Movements
ERIC Educational Resources Information Center
White, Charles W.
1976-01-01
Visual masking occurs when one stimulus interferes with the perception of another stimulus. Investigates which matters more for visual masking--that the target and masking stimuli are flashed on the same part of the retina, or that the target and mask appear in the same place. (Author/RK)
Contextual modulation revealed by optical imaging exhibits figural asymmetry in macaque V1 and V2.
Zarella, Mark D; Ts'o, Daniel Y
2017-01-01
Neurons in early visual cortical areas are influenced by stimuli presented well beyond the confines of their classical receptive fields, endowing them with the ability to encode fine-scale features while also having access to the global context of the visual scene. This property can potentially define a role for the early visual cortex to contribute to a number of important visual functions, such as surface segmentation and figure-ground segregation. It is unknown how extraclassical response properties conform to the functional architecture of the visual cortex, given the high degree of functional specialization in areas V1 and V2. We examined the spatial relationships of contextual activations in macaque V1 and V2 with intrinsic signal optical imaging. Using figure-ground stimulus configurations defined by orientation or motion, we found that extraclassical modulation is restricted to the cortical representations of the figural component of the stimulus. These modulations were positive in sign, suggesting a relative enhancement in neuronal activity that may reflect an excitatory influence. Orientation and motion cues produced similar patterns of activation that traversed the functional subdivisions of V2. The asymmetrical nature of the enhancement demonstrated the capacity for visual cortical areas as early as V1 to contribute to figure-ground segregation, and the results suggest that this information can be extracted from the population activity constrained only by retinotopy, and not the underlying functional organization.
Visual training improves perceptual grouping based on basic stimulus features.
Kurylo, Daniel D; Waxman, Richard; Kidron, Rachel; Silverstein, Steven M
2017-10-01
Training on visual tasks improves performance on basic and higher order visual capacities. Such improvement has been linked to changes in connectivity among mediating neurons. We investigated whether training effects occur for perceptual grouping. It was hypothesized that repeated engagement of integration mechanisms would enhance grouping processes. Thirty-six participants underwent 15 sessions of training on a visual discrimination task that required perceptual grouping. Participants viewed 20 × 20 arrays of dots or Gabor patches and indicated whether the array appeared grouped as vertical or horizontal lines. Across trials stimuli became progressively disorganized, contingent upon successful discrimination. Four visual dimensions were examined, in which grouping was based on similarity in luminance, color, orientation, and motion. Psychophysical thresholds of grouping were assessed before and after training. Results indicate that performance in all four dimensions improved with training. Training on a control condition, which paralleled the discrimination task but without a grouping component, produced no improvement. In addition, training on only the luminance and orientation dimensions improved performance for those conditions as well as for grouping by color, on which training had not occurred. However, improvement from partial training did not generalize to motion. Results demonstrate that a training protocol emphasizing stimulus integration enhanced perceptual grouping. Results suggest that neural mechanisms mediating grouping by common luminance and/or orientation contribute to those mediating grouping by color but do not share resources for grouping by common motion. Results are consistent with theories of perceptual learning emphasizing plasticity in early visual processing regions.
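The adaptive procedure described above (stimuli becoming progressively disorganized contingent on successful discrimination) is a staircase method. A minimal sketch, assuming a simple 1-up/1-down rule and a simulated observer; all names and parameters are hypothetical, not the study's protocol:

```python
import random

# Sketch of a 1-up/1-down adaptive staircase over a "disorganization"
# level in [0, 1]: correct answers make the stimulus harder (more
# disorganized), errors make it easier. Illustrative only.

def run_staircase(p_correct_at, n_trials=200, start=0.0, step=0.05, rng=None):
    """p_correct_at(level) -> probability of a correct response at that
    level. Returns the list of visited levels; averaging the late levels
    approximates the level where up- and down-steps balance."""
    rng = rng or random.Random(0)
    level, track = start, []
    for _ in range(n_trials):
        track.append(level)
        correct = rng.random() < p_correct_at(level)
        level += step if correct else -step   # harder after a success
        level = min(max(level, 0.0), 1.0)     # clamp to the valid range
    return track

# Simulated observer whose accuracy falls with disorganization:
levels = run_staircase(lambda d: 1.0 - 0.6 * d)
threshold_estimate = sum(levels[-50:]) / 50
```

A 1-up/1-down rule converges near the 50%-correct point; the study's actual convergence target and step rule are not specified in the abstract.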
Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.
Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D
2013-10-01
Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range. Copyright © 2013 Elsevier Inc. All rights reserved.
A unified selection signal for attention and reward in primary visual cortex.
Stănişor, Liviu; van der Togt, Chris; Pennartz, Cyriel M A; Roelfsema, Pieter R
2013-05-28
Stimuli associated with high rewards evoke stronger neuronal activity than stimuli associated with lower rewards in many brain regions. It is not well understood how these reward effects influence activity in sensory cortices that represent low-level stimulus features. Here, we investigated the effects of reward information in the primary visual cortex (area V1) of monkeys. We found that the reward value of a stimulus relative to the value of other stimuli is a good predictor of V1 activity. Relative value biases the competition between stimuli, just as has been shown for selective attention. The neuronal latency of this reward value effect in V1 was similar to the latency of attentional influences. Moreover, V1 neurons with a strong value effect also exhibited a strong attention effect, which implies that relative value and top-down attention engage overlapping, if not identical, neuronal selection mechanisms. Our findings demonstrate that the effects of reward value reach down to the earliest sensory processing levels of the cerebral cortex and imply that theories about the effects of reward coding and top-down attention on visual representations should be unified.
Do People Take Stimulus Correlations into Account in Visual Search?
2016-03-10
Bhardwaj, Manisha; van den Berg, Ronald; Ma, Wei Ji
…visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often…contribute to bridging the gap between artificial and natural visual search tasks. …Visual target detection in displays consisting of multiple…
Attention in the processing of complex visual displays: detecting features and their combinations.
Farell, B
1984-02-01
The distinction between operations in visual processing that are parallel and preattentive and those that are serial and attentional receives both theoretical and empirical support. According to Treisman's feature-integration theory, independent features are available preattentively, but attention is required to veridically combine features into objects. Certain evidence supporting this theory is consistent with a different interpretation, which was tested in four experiments. The first experiment compared the detection of features and feature combinations while eliminating a factor that confounded earlier comparisons. The resulting priority of access to combinatorial information suggests that features and nonlocal combinations of features are not connected solely by a bottom-up hierarchical convergence. Causes of the disparity between the results of Experiment 1 and the results of previous research were investigated in three subsequent experiments. The results showed that of the two confounded factors, it was the difference in the mapping of alternatives onto responses, not the differing attentional demands of features and objects, that underlaid the results of the previous research. The present results are thus counterexamples to the feature-integration theory. Aspects of this theory are shown to be subsumed by more general principles, which are discussed in terms of attentional processes in the detection of features, objects, and stimulus alternatives.
ERIC Educational Resources Information Center
Vause, Tricia; Martin, Garry L.; Yu, C.T.; Marion, Carole; Sakko, Gina
2005-01-01
The relationship between language, performance on the Assessment of Basic Learning Abilities (ABLA) test, and stimulus equivalence was examined. Five participants with minimal verbal repertoires were studied; 3 who passed up to ABLA Level 4, a visual quasi-identity discrimination and 2 who passed ABLA Level 6, an auditory-visual nonidentity…
Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.
Williams, Jason A
2012-06-01
The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.
Impaired distractor inhibition on a selective attention task in unmedicated, depressed subjects.
MacQueen, G M; Tipper, S P; Young, L T; Joffe, R T; Levitt, A J
2000-05-01
Impaired distractor inhibition may contribute to the selective attention deficits observed in depressed patients, but studies to date have not tested the distractor inhibition theory against the possibility that transient memory review processes may account for the observed deficits. A negative priming paradigm can dissociate inhibition from such a potentially confounding process called object review. The negative priming task also isolates features of the distractor such as colour and location for independent examination. A computerized negative priming task was used in which colour, identification and location features of a stimulus and distractor were systematically manipulated across successive prime and probe trials. Thirty-two unmedicated subjects with DSM-IV diagnoses of non-psychotic unipolar depression were compared with 32 age-, sex- and IQ-matched controls. Depressed subjects had reduced levels of negative priming for conditions where the colour feature of the stimulus was repeated across prime and probe trials but not when identity or location was the repeated feature. When both the colour and location features were repeated across trials, facilitation in response was apparent. The pattern of results supports studies that found reduced distractor inhibition in depressed subjects, and suggests that object review is intact in these subjects. Greater impairment in negative priming for colour versus location suggests that subjects may have greater impairment in the visual stream associated with processing colour features.
Effect of eye position during human visual-vestibular integration of heading perception.
Crane, Benjamin T
2017-09-01
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. 
The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.
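The Bayesian ideal prediction referred to above weights each cue by its reliability (inverse variance). A minimal sketch with illustrative values, not the study's data:

```python
# Sketch of reliability-weighted (Bayesian ideal) cue combination for
# heading. Each cue contributes in proportion to its inverse variance.
# All values below are illustrative assumptions.

def combine_headings(visual_deg, visual_sigma, inertial_deg, inertial_sigma):
    """Return the reliability-weighted heading estimate and its sigma."""
    w_v = 1.0 / visual_sigma ** 2
    w_i = 1.0 / inertial_sigma ** 2
    estimate = (w_v * visual_deg + w_i * inertial_deg) / (w_v + w_i)
    sigma = (1.0 / (w_v + w_i)) ** 0.5
    return estimate, sigma

# At low visual coherence (noisy visual cue), the combined estimate
# sits closer to the inertial heading:
est, _ = combine_headings(visual_deg=10.0, visual_sigma=8.0,
                          inertial_deg=-5.0, inertial_sigma=2.0)
```

With the noisy visual cue above, `est` lands within a degree of the inertial heading, mirroring the abstract's finding that the PSE tracks the more reliable cue.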
Pictures, images, and recollective experience.
Dewhurst, S A; Conway, M A
1994-09-01
Five experiments investigated the influence of picture processing on recollective experience in recognition memory. Subjects studied items that differed in visual or imaginal detail, such as pictures versus words and high-imageability versus low-imageability words, and performed orienting tasks that directed processing either toward a stimulus as a word or toward a stimulus as a picture or image. Standard effects of imageability (e.g., the picture superiority effect and memory advantages following imagery) were obtained only in recognition judgments that featured recollective experience and were eliminated or reversed when recognition was not accompanied by recollective experience. It is proposed that conscious recollective experience in recognition memory is cued by attributes of retrieved memories such as sensory-perceptual attributes and records of cognitive operations performed at encoding.
Stimulus Dependence of Correlated Variability across Cortical Areas
Cohen, Marlene R.
2016-01-01
The way that correlated trial-to-trial variability between pairs of neurons in the same brain area (termed spike count or noise correlation, rSC) depends on stimulus or task conditions can constrain models of cortical circuits and of the computations performed by networks of neurons (Cohen and Kohn, 2011). In visual cortex, rSC tends not to depend on stimulus properties (Kohn and Smith, 2005; Huang and Lisberger, 2009) but does depend on cognitive factors like visual attention (Cohen and Maunsell, 2009; Mitchell et al., 2009). However, neurons across visual areas respond to any visual stimulus or contribute to any perceptual decision, and the way that information from multiple areas is combined to guide perception is unknown. To gain insight into these issues, we recorded simultaneously from neurons in two areas of visual cortex (primary visual cortex, V1, and the middle temporal area, MT) while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. Correlations across, but not within, areas depend on stimulus direction and the presence of a second stimulus, and attention has opposite effects on correlations within and across areas. This observed pattern of cross-area correlations is predicted by a normalization model where MT units sum V1 inputs that are passed through a divisive nonlinearity. Together, our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. SIGNIFICANCE STATEMENT Correlations in the responses of pairs of neurons within the same cortical area have been a subject of growing interest in systems neuroscience. However, correlated variability between different cortical areas is likely just as important. 
We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. The observed pattern of cross-area correlations was predicted by a simple normalization model. Our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. PMID:27413163
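The normalization model invoked above (MT units summing V1 inputs passed through a divisive nonlinearity) can be sketched as follows; the weights and semisaturation constant are illustrative assumptions, not fitted values from the study:

```python
# Minimal sketch of divisive normalization: an MT unit's drive is a
# weighted sum of V1 inputs, divided by the pooled population activity
# plus a semisaturation constant. Illustrative parameters only.

def mt_response(v1_rates, weights, sigma=1.0):
    """Divisively normalized sum of V1 inputs for one MT unit."""
    drive = sum(w * r for w, r in zip(weights, v1_rates))
    pooled = sum(v1_rates)          # normalization pool over the population
    return drive / (sigma + pooled)

# Adding a second stimulus enlarges the pool and suppresses the response,
# yielding the sublinear combination characteristic of normalization:
one_stim = mt_response([10.0, 0.0], weights=[1.0, 0.2])
two_stim = mt_response([10.0, 10.0], weights=[1.0, 0.2])
```

Here `two_stim` is smaller than `one_stim` even though total input doubled, the qualitative signature the abstract's model uses to predict cross-area correlations.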
GABAergic neurons in ferret visual cortex participate in functionally specific networks
Wilson, Daniel E.; Smith, Gordon B.; Jacob, Amanda; Walker, Theo; Dimidschstein, Jordane; Fishell, Gord J.; Fitzpatrick, David
2017-01-01
Functional circuits in the visual cortex require the coordinated activity of excitatory and inhibitory neurons. Molecular genetic approaches in the mouse have led to the ‘local nonspecific pooling principle’ of inhibitory connectivity, in which inhibitory neurons are untuned for stimulus features due to the random pooling of local inputs. However, it remains unclear whether this principle generalizes to species with a columnar organization of feature selectivity such as carnivores, primates, and humans. Here we use virally-mediated GABAergic-specific GCaMP6f expression to demonstrate that inhibitory neurons in ferret visual cortex respond robustly and selectively to oriented stimuli. We find that the tuning of inhibitory neurons is inconsistent with the local non-specific pooling of excitatory inputs, and that inhibitory neurons exhibit orientation-specific noise correlations with local and distant excitatory neurons. These findings challenge the generality of the non-specific pooling principle for inhibitory neurons, suggesting different rules for functional excitatory-inhibitory interactions in non-murine species. PMID:28279352
Object form discontinuity facilitates displacement discrimination across saccades.
Demeyer, Maarten; De Graef, Peter; Wagemans, Johan; Verfaillie, Karl
2010-06-01
Stimulus displacements coinciding with a saccadic eye movement are poorly detected by human observers. In recent years, converging evidence has shown that this phenomenon does not result from poor transsaccadic retention of presaccadic stimulus position information, but from the visual system's efforts to spatially align presaccadic and postsaccadic perception on the basis of visual landmarks. It is known that this process can be disrupted, and transsaccadic displacement detection performance can be improved, by briefly blanking the stimulus display during and immediately after the saccade. In the present study, we investigated whether this improvement could also follow from a discontinuity in the task-irrelevant form of the displaced stimulus. We observed this to be the case: Subjects more accurately identified the direction of intrasaccadic displacements when the displaced stimulus simultaneously changed form, compared to conditions without a form change. However, larger improvements were still observed under blanking conditions. In a second experiment, we show that facilitation induced by form changes and blanks can combine. We conclude that a strong assumption of visual stability underlies the suppression of transsaccadic change detection performance, the rejection of which generalizes from stimulus form to stimulus position.
Rosselli, Federica B.; Alemi, Alireza; Ansuini, Alessio; Zoccolan, Davide
2015-01-01
In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning. PMID:25814936
Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers
2013-09-01
…experimental conditions were chosen to simulate testing cognitively impaired observers. …developed a new stimulus for visual nystagmus to test visual motion processing in the presence of incoherent motion noise. The drifting equiluminant…
Tilt and Translation Motion Perception during Pitch Tilt with Visual Surround Translation
NASA Technical Reports Server (NTRS)
O'Sullivan, Brita M.; Harm, Deborah L.; Reschke, Millard F.; Wood, Scott J.
2006-01-01
The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Previous studies suggest that multisensory integration is critical for discriminating linear accelerations arising from tilt and translation head motion. Visual input is especially important at low frequencies where canal input is declining. The NASA Tilt Translation Device (TTD) was designed to recreate postflight orientation disturbances by exposing subjects to matching tilt self motion with conflicting visual surround translation. Previous studies have demonstrated that brief exposures to pitch tilt with fore-aft visual surround translation produced changes in compensatory vertical eye movement responses, postural equilibrium, and motion sickness symptoms. Adaptation appeared greatest with visual scene motion leading (versus lagging) the tilt motion, and the adaptation time constant appeared to be approximately 30 min. The purpose of this study was to compare motion perception when the visual surround translation was in-phase versus out-of-phase with pitch tilt. The in-phase stimulus presented visual surround motion one would experience if the linear acceleration was due to fore-aft self translation within a stationary surround, while the out-of-phase stimulus had the visual scene motion leading the tilt by 90 deg as previously used. The tilt stimuli in these conditions were asymmetrical, ranging from an upright orientation to 10 deg pitch back. Another objective of the study was to compare motion perception with the in-phase stimulus when the tilts were asymmetrical relative to upright (0 to 10 deg back) versus symmetrical (10 deg forward to 10 deg back). Twelve subjects (6M, 6F, 22-55 yrs) were tested during 3 sessions separated by at least one week.
During each of the three sessions (out-of-phase asymmetrical, in-phase asymmetrical, in-phase symmetrical), subjects were exposed to visual surround translation synchronized with pitch tilt at 0.1 Hz for a total of 30 min. Tilt and translation motion perception was obtained from verbal reports and a joystick mounted on a linear stage. Horizontal vergence and vertical eye movements were obtained with a binocular video system. Responses were also obtained during darkness before and following 15 min and 30 min of visual surround translation. Each of the three stimulus conditions involving visual surround translation elicited a significantly reduced sense of perceived tilt and strong linear vection (perceived translation) compared to pre-exposure tilt stimuli in darkness. This increase in perceived translation with reduction in tilt perception was also present in darkness following 15 and 30 min exposures, provided the tilt stimuli were not interrupted. Although not significant, there was a trend for the in-phase asymmetrical stimulus to elicit a stronger sense of both translation and tilt than the out-of-phase asymmetrical stimulus. Surprisingly, the in-phase asymmetrical stimulus also tended to elicit a stronger sense of peak-to-peak translation than the in-phase symmetrical stimulus, even though the range of linear acceleration during the symmetrical stimulus was twice that of the asymmetrical stimulus. These results are consistent with the hypothesis that the central nervous system resolves the ambiguity of inertial motion sensory cues by integrating inputs from visual, vestibular, and somatosensory systems.
Sharpening of Hierarchical Visual Feature Representations of Blurred Images.
Abdelhack, Mohamed; Kamitani, Yukiyasu
2018-01-01
The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.
Lundqvist, Daniel; Bruce, Neil; Öhman, Arne
2015-01-01
In this article, we examine how emotional and perceptual stimulus factors influence visual search efficiency. In an initial task, we run a visual search task using a large number of target/distractor emotion combinations. In two subsequent tasks, we then assess measures of perceptual (rated and computational distances) and emotional (rated valence, arousal, and potency) stimulus properties. In a series of regression analyses, we then explore the degree to which target salience (the size of target/distractor dissimilarities) on these emotional and perceptual measures predicts the outcome on search efficiency measures (response times and accuracy) from the visual search task. The results show that both emotional and perceptual stimulus salience contribute to visual search efficiency, and that among the emotional measures, salience on arousal was more influential than salience on valence. The importance of the arousal factor may be a contributing factor to the contradictory history of results within this field.
Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong
2013-10-11
Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following the VSC were faster and more accurate than those following the VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial versus temporal orienting of attention were different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan
2017-06-01
The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.
Fluctuations of visual awareness: Combining motion-induced blindness with binocular rivalry
Jaworska, Katarzyna; Lages, Martin
2014-01-01
Binocular rivalry (BR) and motion-induced blindness (MIB) are two phenomena of visual awareness where perception alternates between multiple states despite constant retinal input. Both phenomena have been extensively studied, but the underlying processing remains unclear. It has been suggested that BR and MIB involve the same neural mechanism, but how the two phenomena compete for visual awareness in the same stimulus has not been systematically investigated. Here we introduce BR in a dichoptic stimulus display that can also elicit MIB and examine fluctuations of visual awareness over the course of each trial. Exploiting this paradigm we manipulated stimulus characteristics that are known to influence MIB and BR. In two experiments we found that effects on multistable percepts were incompatible with the idea of a common oscillator. The results suggest instead that local and global stimulus attributes can affect the dynamics of each percept differently. We conclude that the two phenomena of visual awareness share basic temporal characteristics but are most likely influenced by processing at different stages within the visual system. PMID:25240063
Neurons with two sites of synaptic integration learn invariant representations.
Körding, K P; König, P
2001-12-01
Neurons in mammalian cerebral cortex combine specific responses with respect to some stimulus features with invariant responses to other stimulus features. For example, in primary visual cortex, complex cells code for orientation of a contour but ignore its position to a certain degree. In higher areas, such as the inferotemporal cortex, translation-invariant, rotation-invariant, and even view point-invariant responses can be observed. Such properties are of obvious interest to artificial systems performing tasks like pattern recognition. It remains to be resolved how such response properties develop in biological systems. Here we present an unsupervised learning rule that addresses this problem. It is based on a neuron model with two sites of synaptic integration, allowing qualitatively different effects of input to basal and apical dendritic trees, respectively. Without supervision, the system learns to extract invariance properties using temporal or spatial continuity of stimuli. Furthermore, top-down information can be smoothly integrated in the same framework. Thus, this model lends a physiological implementation to approaches of unsupervised learning of invariant-response properties.
Differential effects of ongoing EEG beta and theta power on memory formation
Scholz, Sebastian; Schneider, Signe Luisa
2017-01-01
Recently, elevated ongoing pre-stimulus beta power (13–17 Hz) at encoding has been associated with subsequent memory formation for visual stimulus material. It is unclear whether this activity is merely specific to visual processing or whether it reflects a state facilitating general memory formation, independent of stimulus modality. To answer that question, the present study investigated the relationship between neural pre-stimulus oscillations and verbal memory formation in different sensory modalities. For that purpose, a within-subject design was employed to explore differences between successful and failed memory formation in the visual and auditory modality. Furthermore, associative memory was addressed by presenting the stimuli in combination with background images. Results revealed that similar EEG activity in the low beta frequency range (13–17 Hz) is associated with subsequent memory success, independent of stimulus modality. Elevated power prior to stimulus onset differentiated successful from failed memory formation. In contrast, differential effects between modalities were found in the theta band (3–7 Hz), with an increased oscillatory activity before the onset of later remembered visually presented words. In addition, pre-stimulus theta power dissociated between successful and failed encoding of associated context, independent of the stimulus modality of the item itself. We therefore suggest that increased ongoing low beta activity reflects a memory promoting state, which is likely to be moderated by modality-independent attentional or inhibitory processes, whereas high ongoing theta power is suggested as an indicator of the enhanced binding of incoming interlinked information. PMID:28192459
Barrier Effects in Non-retinotopic Feature Attribution
Aydin, Murat; Herzog, Michael H.; Öğmen, Haluk
2011-01-01
When objects move in the environment, their retinal images can undergo drastic changes and features of different objects can be inter-mixed in the retinal image. Notwithstanding these changes and ambiguities, the visual system is capable of establishing correctly feature-object relationships as well as maintaining individual identities of objects through space and time. Recently, by using a Ternus-Pikler display, we have shown that perceived motion correspondences serve as the medium for non-retinotopic attribution of features to objects. The purpose of the work reported in this manuscript was to assess whether perceived motion correspondences provide a sufficient condition for feature attribution. Our results show that the introduction of a static “barrier” stimulus can interfere with the feature attribution process. Our results also indicate that the barrier stops feature attribution based on interferences related to the feature attribution process itself rather than on mechanisms related to perceived motion. PMID:21767561
Glowinski, Donald; Riolfo, Arianna; Shirole, Kanika; Torres-Eliard, Kim; Chiorri, Carlo; Grandjean, Didier
2014-01-01
Visual information is imperative when developing a concrete and context-sensitive understanding of how music performance is perceived. Recent studies highlight natural, automatic, and nonconscious dependence on visual cues that ultimately refer to body expressions observed in the musician. The current study investigated how the social context of a performing musician (e.g., playing alone or within an ensemble) and the musical expertise of the perceivers influence the strategies used to understand and decode the visual features of music performance. Results revealed that both perceiver groups, nonmusicians and musicians, have a higher sensitivity towards gaze information; therefore, an impoverished stimulus such as a point-light display is insufficient to understand the social context in which the musician is performing. Implications of these findings are discussed.
Cecere, Roberto; Gross, Joachim; Thut, Gregor
2016-06-01
The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans
2017-03-20
From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.
Phonological Concept Learning.
Moreton, Elliott; Pater, Joe; Pertsova, Katya
2017-01-01
Linguistic and non-linguistic pattern learning have been studied separately, but we argue for a comparative approach. Analogous inductive problems arise in phonological and visual pattern learning. Evidence from three experiments shows that human learners can solve them in analogous ways, and that human performance in both cases can be captured by the same models. We test GMECCS (Gradual Maximum Entropy with a Conjunctive Constraint Schema), an implementation of the Configural Cue Model (Gluck & Bower, ) in a Maximum Entropy phonotactic-learning framework (Goldwater & Johnson, ; Hayes & Wilson, ) with a single free parameter, against the alternative hypothesis that learners seek featurally simple algebraic rules ("rule-seeking"). We study the full typology of patterns introduced by Shepard, Hovland, and Jenkins () ("SHJ"), instantiated as both phonotactic patterns and visual analogs, using unsupervised training. Unlike SHJ, Experiments 1 and 2 found that both phonotactic and visual patterns that depended on fewer features could be more difficult than those that depended on more features, as predicted by GMECCS but not by rule-seeking. GMECCS also correctly predicted performance differences between stimulus subclasses within each pattern. A third experiment tried supervised training (which can facilitate rule-seeking in visual learning) to elicit simple rule-seeking phonotactic learning, but cue-based behavior persisted. We conclude that similar cue-based cognitive processes are available for phonological and visual concept learning, and hence that studying either kind of learning can lead to significant insights about the other. Copyright © 2015 Cognitive Science Society, Inc.
Perceptual expertise and top-down expectation of musical notation engages the primary visual cortex.
Wong, Yetta Kwailing; Peng, Cynthia; Fratus, Kristyn N; Woodman, Geoffrey F; Gauthier, Isabel
2014-08-01
Most theories of visual processing propose that object recognition is achieved in higher visual cortex. However, we show that category selectivity for musical notation can be observed in the first ERP component called the C1 (measured 40-60 msec after stimulus onset) with music-reading expertise. Moreover, the C1 note selectivity was observed only when the stimulus category was blocked but not when the stimulus category was randomized. Under blocking, the C1 activity for notes predicted individual music-reading ability, and behavioral judgments of musical stimuli reflected music-reading skill. Our results challenge current theories of object recognition, indicating that the primary visual cortex can be selective for musical notation within the initial feedforward sweep of activity with perceptual expertise and with a testing context that is consistent with the expertise training, such as blocking the stimulus category for music reading.
ERIC Educational Resources Information Center
Reyer, Howard S.; Sturmey, Peter
2009-01-01
Three adults with intellectual disabilities participated to investigate the effects of reinforcer deprivation on choice responding. The experimenter identified the most preferred audio-visual (A-V) stimulus and the least preferred visual-only stimulus for each participant. Participants did not have access to the A-V stimulus for 5 min, 5 and 24 h.…
Stimulus onset predictability modulates proactive action control in a Go/No-go task
Berchicci, Marika; Lucci, Giuliana; Spinelli, Donatella; Di Russo, Francesco
2015-01-01
The aim of the study was to evaluate whether the presence/absence of visual cues specifying the onset of an upcoming, action-related stimulus modulates pre-stimulus brain activity associated with the proactive control of goal-directed actions. To this aim, we asked 12 subjects to perform an equal-probability Go/No-go task with four stimulus configurations in two conditions: (1) uncued, i.e., without any external information about the timing of stimulus onset; and (2) cued, i.e., with external visual cues providing precise information about the timing of stimulus onset. During the task, both behavioral performance and event-related potentials (ERPs) were recorded. Behavioral results showed faster response times in the cued than the uncued condition, confirming existing literature. ERPs showed novel results in the proactive control stage, which started about 1 s before the motor response. We observed a slow rising prefrontal positive activity, more pronounced in the cued than the uncued condition. Further, pre-stimulus activity of premotor areas was also larger in the cued than the uncued condition. In the post-stimulus period, the P3 amplitude was enhanced when the time of stimulus onset was externally driven, confirming that external cueing enhances processing of stimulus evaluation and response monitoring. Our results suggest that different pre-stimulus processes come into play in the two conditions. We hypothesize that the large prefrontal and premotor activities recorded with external visual cues index the monitoring of the external stimuli in order to finely regulate the action. PMID:25964751
Face features and face configurations both contribute to visual crowding.
Sun, Hsin-Mei; Balas, Benjamin
2015-02-01
Crowding refers to the inability to recognize an object in peripheral vision when other objects are presented nearby (Whitney & Levi Trends in Cognitive Sciences, 15, 160-168, 2011). A popular explanation of crowding is that features of the target and flankers are combined inappropriately when they are located within an integration field, thus impairing target recognition (Pelli, Palomares, & Majaj Journal of Vision, 4(12), 12:1136-1169, 2004). However, it remains unclear which features of the target and flankers are combined inappropriately to cause crowding (Levi Vision Research, 48, 635-654, 2008). For example, in a complex stimulus (e.g., a face), to what extent does crowding result from the integration of features at a part-based level or at the level of global processing of the configural appearance? In this study, we used a face categorization task and different types of flankers to examine how much the magnitude of visual crowding depends on the similarity of face parts or of global configurations. We created flankers with face-like features (e.g., the eyes, nose, and mouth) in typical and scrambled configurations to examine the impacts of part appearance and global configuration on the visual crowding of faces. Additionally, we used "electrical socket" flankers that mimicked first-order face configuration but had only schematic features, to examine the extent to which global face geometry impacted crowding. Our results indicated that both face parts and configurations contribute to visual crowding, suggesting that face similarity as realized under crowded conditions includes both aspects of facial appearance.
Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks
Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.
2015-01-01
The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies. 
PMID:26496502
Ambrose, Joseph P; Wijeakumar, Sobanawartiny; Buss, Aaron T; Spencer, John P
2016-01-01
Visual working memory (VWM) is a key cognitive system that enables people to hold visual information in mind after a stimulus has been removed and to compare past and present to detect changes that have occurred. VWM is severely capacity limited to around 3-4 items, although there are robust individual differences in this limit. Importantly, these individual differences are evident in neural measures of VWM capacity. Here, we capitalized on recent work showing that capacity is lower for more complex stimulus dimensions. In particular, we asked whether individual differences in capacity remain consistent if capacity is shifted by a more demanding task, and, further, whether the correspondence between behavioral and neural measures holds across a shift in VWM capacity. Participants completed a change detection (CD) task with simple colors and complex shapes in an fMRI experiment. As expected, capacity was significantly lower for the shape dimension. Moreover, there were robust individual differences in behavioral estimates of VWM capacity across dimensions. Similarly, participants with a stronger BOLD response for color also showed a strong neural response for shape within the lateral occipital cortex, intraparietal sulcus (IPS), and superior IPS. Although there were robust individual differences in the behavioral and neural measures, we found little evidence of systematic brain-behavior correlations across feature dimensions. This suggests that behavioral and neural measures of capacity provide different views onto the processes that underlie VWM and CD. Recent theoretical approaches that attempt to bridge between behavioral and neural measures are well positioned to address these findings in future work.
Spatiotopic updating of visual feature information.
Zimmermann, Eckart; Weidner, Ralph; Fink, Gereon R
2017-10-01
Saccades shift the retina with high-speed motion. In order to compensate for the sudden displacement, the visuomotor system needs to combine saccade-related information and visual metrics. Many neurons in oculomotor but also in visual areas shift their receptive field shortly before the execution of a saccade (Duhamel, Colby, & Goldberg, 1992; Nakamura & Colby, 2002). These shifts supposedly enable the binding of information from before and after the saccade. It is a matter of current debate whether these shifts are merely location based (i.e., involve remapping of abstract spatial coordinates) or also comprise information about visual features. We have recently presented fMRI evidence for a feature-based remapping mechanism in visual areas V3, V4, and VO (Zimmermann, Weidner, Abdollahi, & Fink, 2016). In particular, we found fMRI adaptation in cortical regions representing a stimulus' retinotopic as well as its spatiotopic position. Here, we asked whether spatiotopic adaptation exists independently from retinotopic adaptation and which type of information is behaviorally more relevant after saccade execution. We first adapted at the saccade target location only and found a spatiotopic tilt aftereffect. Then, we simultaneously adapted both the fixation and the saccade target location but with opposite tilt orientations. As a result, adaptation from the fixation location was carried retinotopically to the saccade target position. The opposite tilt orientation at the retinotopic location altered the effects induced by spatiotopic adaptation. More precisely, it cancelled out spatiotopic adaptation at the saccade target location. We conclude that retinotopic and spatiotopic visual adaptation are independent effects.
Tao, Xiaofeng; Zhang, Bin; Shen, Guofu; Wensveen, Janice; Smith, Earl L.; Nishimoto, Shinji; Ohzawa, Izumi
2014-01-01
Experiencing different quality images in the two eyes soon after birth can cause amblyopia, a developmental vision disorder. Amblyopic humans show a reduced capacity for judging the relative position of a visual target in reference to nearby stimulus elements (position uncertainty) and often experience visual image distortion. Although abnormal pooling of local stimulus information by neurons beyond striate cortex (V1) is often suggested as a neural basis of these deficits, extrastriate neurons in the amblyopic brain have rarely been studied using microelectrode recording methods. The receptive field (RF) of neurons in visual area V2 in normal monkeys is made up of multiple subfields that are thought to reflect V1 inputs and are capable of encoding the spatial relationship between local stimulus features. We created primate models of anisometropic amblyopia and analyzed the RF subfield maps for multiple nearby V2 neurons of anesthetized monkeys by using dynamic two-dimensional noise stimuli and reverse correlation methods. Unlike in normal monkeys, the subfield maps of V2 neurons in amblyopic monkeys were severely disorganized: subfield maps showed higher heterogeneity within each neuron as well as across nearby neurons. Amblyopic V2 neurons exhibited robust binocular suppression, and the strength of the suppression was positively correlated with the degree of heterogeneity and the severity of amblyopia in individual monkeys. Our results suggest that the disorganized subfield maps and robust binocular suppression of amblyopic V2 neurons are likely to adversely affect the higher stages of cortical processing, resulting in position uncertainty and image distortion. PMID:25297110
Korinth, Sebastian Peter; Breznitz, Zvia
2014-01-01
Higher N170 amplitudes to words and to faces were recently reported for faster readers of German. Since the shallow German orthography allows phonological recoding of single letters, the reported speed advantages might have their origin in especially well-developed visual processing skills of faster readers. In contrast to German, adult readers of Hebrew are forced to process letter chunks up to whole words. This dependence on more complex visual processing might have created ceiling effects for this skill. Therefore, the current study examined whether, also in the deep Hebrew orthography, visual processing skills as reflected by N170 amplitudes explain reading speed differences. Forty university students, native speakers of Hebrew without reading impairments, accomplished a lexical decision task (i.e., deciding whether a visually presented stimulus represents a real or a pseudo word) and a face decision task (i.e., deciding whether a face was presented complete or with missing facial features) while their electroencephalogram was recorded from 64 scalp positions. In both tasks, stronger event-related potentials (ERPs) were observed for faster readers in time windows at about 200 ms. Unlike in previous studies, ERP waveforms in relevant time windows did not correspond to N170 scalp topographies. The results support the notion of visual processing ability as an orthography-independent marker of reading proficiency, which advances our understanding of regular and impaired reading development.
Harris, Joseph A.; McMahon, Alex R.; Woldorff, Marty G.
2015-01-01
Any information represented in the brain holds the potential to influence behavior. It is therefore of broad interest to determine the extent and quality of neural processing of stimulus input that occurs with and without awareness. The attentional blink is a useful tool for dissociating neural and behavioral measures of perceptual visual processing across conditions of awareness. The extent of higher-order visual information beyond basic sensory signaling that is processed during the attentional blink remains controversial. To determine what neural processing at the level of visual-object identification occurs in the absence of awareness, electrophysiological responses to images of faces and houses were recorded both within and outside of the attentional blink period during a rapid serial visual presentation (RSVP) stream. Electrophysiological results were sorted according to behavioral performance (correctly identified targets versus missed targets) within these blink and non-blink periods. An early index of face-specific processing (the N170, 140–220 ms post-stimulus) was observed regardless of whether the subject demonstrated awareness of the stimulus, whereas a later face-specific effect with the same topographic distribution (500–700 ms post-stimulus) was only seen for accurate behavioral discrimination of the stimulus content. The present findings suggest a multi-stage process of object-category processing, with only the later phase being associated with explicit visual awareness. PMID:23859644
Modality-dependent effect of motion information in sensory-motor synchronised tapping.
Ono, Kentaro
2018-05-14
Synchronised action is important for everyday life. Generally, the auditory domain is more sensitive for coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli, and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remains unknown. This study compared tapping accuracy with a stationary or moving stimulus in both auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired audio-motor synchronisation, an advantage of the auditory modality over the visual modality remained. These findings likely reflect the higher temporal resolution of the auditory domain, presumably due to physiological and structural differences between the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.
Hiding and finding: the relationship between visual concealment and visual search.
Smilek, Daniel; Weinheimer, Laura; Kwan, Donna; Reynolds, Mike; Kingstone, Alan
2009-11-01
As an initial step toward developing a theory of visual concealment, we assessed whether people would use factors known to influence visual search difficulty when the degree of concealment of objects among distractors was varied. In Experiment 1, participants arranged search objects (shapes, emotional faces, and graphemes) to create displays in which the targets were in plain sight but were either easy or hard to find. Analyses of easy and hard displays created during Experiment 1 revealed that the participants reliably used factors known to influence search difficulty (e.g., eccentricity, target-distractor similarity, presence/absence of a feature) to vary the difficulty of search across displays. In Experiment 2, a new participant group searched for the targets in the displays created by the participants in Experiment 1. Results indicated that search was more difficult in the hard than in the easy condition. In Experiments 3 and 4, participants used presence versus absence of a feature to vary search difficulty with several novel stimulus sets. Taken together, the results reveal a close link between the factors that govern concealment and the factors known to influence search difficulty, suggesting that a visual search theory can be extended to form the basis of a theory of visual concealment.
High-order statistics of Weber local descriptors for image representation.
Han, Xian-Hua; Chen, Yen-Wei; Xu, Gang
2015-06-01
Highly discriminant visual features play a key role in many image classification applications. This study develops a method for extracting highly discriminant features from images by exploring a robust local descriptor inspired by Weber's law. The investigated local descriptor is based on the fact that human perception of a pattern depends not only on the absolute intensity of the stimulus but also on the relative variance of the stimulus. Therefore, we first transform the original stimulus (the images in our study) into a differential excitation domain according to Weber's law, and then explore a local patch, called a micro-Texton, in the transformed domain as the Weber local descriptor (WLD). Furthermore, we propose to model the Weber local descriptors with a parametric probability process and to extract higher-order statistics of the model parameters for image representation. The proposed strategy can adaptively characterize the WLD space using a generative probability model, learning the parameters that best fit the training space, which leads to a more discriminant representation for images. To validate the efficiency of the proposed strategy, we apply it to three image classification tasks (texture, food image, and HEp-2 cell pattern recognition) and show that it has advantages over state-of-the-art approaches.
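The differential-excitation transform described above can be sketched in a few lines. This follows the standard WLD formulation (the arctangent of the summed neighbour-minus-centre differences divided by the centre intensity, over a 3x3 patch); the input image and the epsilon guard are illustrative choices, not taken from the paper.

```python
import numpy as np

def differential_excitation(img, eps=1e-6):
    """Map an image into the Weber differential-excitation domain:
    xi = arctan(sum_i (x_i - x_c) / x_c), summing over the
    8-neighbourhood of each centre pixel x_c."""
    img = img.astype(float)
    # Pad so every pixel has a full 8-neighbourhood
    p = np.pad(img, 1, mode="edge")
    diff = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # Neighbour-minus-centre intensity differences
            diff += p[1 + dy:1 + dy + img.shape[0],
                      1 + dx:1 + dx + img.shape[1]] - img
    # Relative (not absolute) variance of the stimulus, per Weber's law
    return np.arctan(diff / (img + eps))

patch = np.random.default_rng(0).integers(0, 256, (32, 32))
xi = differential_excitation(patch)
print(xi.shape)  # → (32, 32); values lie in (-pi/2, pi/2)
```

The arctangent bounds the excitation values, which keeps the descriptor robust to large local intensity swings.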
Neglect dyslexia: a review of the neuropsychological literature.
Vallar, Giuseppe; Burani, Cristina; Arduino, Lisa S
2010-10-01
Neglect dyslexia (ND) is reviewed, based on published single-patient and group studies. ND is frequently associated with right hemispheric damage and unilateral spatial neglect (USN), and typically involves the left side of the letter string. Left-brain-damaged patients showing ND, ipsilateral (left) or contralateral (right) to the side of the left-sided hemispheric lesion, have also been reported, as have a few patients with bilateral damage, with left ND more frequent than right. Like USN, ND is temporarily ameliorated by lateralized stimulations (vestibular caloric, visual prism adaptation). ND may occur independently of USN, suggesting damage to specific visuospatial representational/attentional systems supporting reading. ND errors comprise omission, substitution, and, less frequently, addition of letters on one side of the stimulus, resulting in words or nonwords, also with reference to the stimulus' linguistic features. Patients with ND may show preserved lexical-morphological effects and implicit processing, up to the semantic level, of the misread string. This preserved processing is a feature of ND shared with the USN syndrome. The mechanisms modulating error type and lexical-morphological effects are partly independent of each other. Different levels of representation of the letter string may be affected, giving rise to egocentric, stimulus-centred, and word-centred patterns of impairment. The anatomical correlates of ND include the temporo-parieto-occipital regions.
Caywood, Matthew S.; Roberts, Daniel M.; Colombe, Jeffrey B.; Greenwald, Hal S.; Weiland, Monica Z.
2017-01-01
There is increasing interest in real-time brain-computer interfaces (BCIs) for the passive monitoring of human cognitive state, including cognitive workload. Too often, however, effective BCIs based on machine learning techniques may function as “black boxes” that are difficult to analyze or interpret. In an effort toward more interpretable BCIs, we studied a family of N-back working memory tasks using a machine learning model, Gaussian Process Regression (GPR), which was both powerful and amenable to analysis. Participants performed the N-back task with three stimulus variants, auditory-verbal, visual-spatial, and visual-numeric, each at three working memory loads. GPR models were trained and tested on EEG data from all three task variants combined, in an effort to identify a model that could be predictive of mental workload demand regardless of stimulus modality. To provide a comparison for GPR performance, a model was additionally trained using multiple linear regression (MLR). The GPR model was effective when trained on individual participant EEG data, resulting in an average standardized mean squared error (sMSE) between true and predicted N-back levels of 0.44. In comparison, the MLR model using the same data resulted in an average sMSE of 0.55. We additionally demonstrate how GPR can be used to identify which EEG features are relevant for prediction of cognitive workload in an individual participant. A fraction of EEG features accounted for the majority of the model’s predictive power; using only the top 25% of features performed nearly as well as using 100% of features. Subsets of features identified by linear models (ANOVA) were not as efficient as subsets identified by GPR. This raises the possibility of BCIs that require fewer model features while capturing all of the information needed to achieve high predictive accuracy. PMID:28123359
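The GPR-versus-MLR comparison on the sMSE metric can be illustrated with a dependency-free sketch: the GP posterior mean is computed directly from an RBF kernel, and linear regression is fit by least squares. The synthetic "EEG features", the kernel length scale, and the noise level are placeholders, not the study's data or model settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-trial EEG features (e.g. band powers
# per channel): 90 trials x 20 features; labels are N-back loads 1-3.
X = rng.normal(size=(90, 20))
w = rng.normal(size=20)
y = np.clip(np.round(2.0 + 0.4 * X @ w), 1, 3)

Xtr, ytr, Xte, yte = X[:60], y[:60], X[60:], y[60:]

def rbf(A, B, length=4.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

# GP regression posterior mean (RBF kernel + observation noise)
noise = 0.1
K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
alpha = np.linalg.solve(K, ytr - ytr.mean())
gp_pred = rbf(Xte, Xtr) @ alpha + ytr.mean()

# Multiple linear regression baseline via least squares
A = np.c_[Xtr, np.ones(len(Xtr))]
coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
mlr_pred = np.c_[Xte, np.ones(len(Xte))] @ coef

def smse(y_true, y_pred):
    # Standardized MSE: MSE divided by the variance of the true labels,
    # so a constant-mean predictor scores 1.0
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

print("GP  sMSE:", round(smse(yte, gp_pred), 3))
print("MLR sMSE:", round(smse(yte, mlr_pred), 3))
```

Normalizing MSE by the target variance (as in the paper's sMSE of 0.44 vs 0.55) makes scores comparable across participants with different label distributions.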
Naber, Marnix; Vedder, Anneke; Brown, Stephen B R E; Nieuwenhuis, Sander
2016-01-01
The Stroop task is a popular neuropsychological test that measures executive control. Strong Stroop interference is commonly interpreted in neuropsychology as a diagnostic marker of impairment in executive control, possibly reflecting executive dysfunction. However, popular models of the Stroop task indicate that several other aspects of color and word processing may also account for individual differences in the Stroop task, independent of executive control. Here we use new approaches to investigate the degree to which individual differences in Stroop interference correlate with the relative processing speed of word and color stimuli, and the lateral inhibition between visual stimuli. We conducted an electrophysiological and behavioral experiment to measure (1) how quickly an individual's brain processes words and colors presented in isolation (P3 latency), and (2) the strength of an individual's lateral inhibition between visual representations with a visual illusion. Both measures explained at least 40% of the variance in Stroop interference across individuals. As these measures were obtained in contexts not requiring any executive control, we conclude that the Stroop effect also measures an individual's pre-set way of processing visual features such as words and colors. This study highlights the important contributions of stimulus processing speed and lateral inhibition to individual differences in Stroop interference, and challenges the general view that the Stroop task primarily assesses executive control.
Summation of visual motion across eye movements reflects a nonspatial decision mechanism.
Morris, Adam P; Liu, Charles C; Cropper, Simon J; Forte, Jason D; Krekelberg, Bart; Mattingley, Jason B
2010-07-21
Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g., "spatiotopic" receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.
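The "statistical advantage" of a second, independent motion signal can be illustrated with textbook probability summation and ideal-observer sample averaging; the probabilities and d' values below are illustrative, not the study's data.

```python
from math import erf, sqrt

# Independent-looks model: if a single signal supports a correct
# detection with probability p, two independent signals give at
# least one detection with probability 1 - (1 - p)^2.
p1 = 0.60
p2 = 1 - (1 - p1) ** 2
print(round(p2, 2))  # → 0.84

# Equivalently, an ideal observer that averages two equally
# reliable, independent samples gains a factor of sqrt(2) in d',
# with no spatiotopic integration required.
def pc_2afc(d):
    # 2AFC percent correct: Phi(d / sqrt(2)) = 0.5 * (1 + erf(d / 2))
    return 0.5 * (1 + erf(d / 2))

d_one = 1.0
d_two = sqrt(2) * d_one
print(pc_2afc(d_two) > pc_2afc(d_one))  # → True
```

This is the baseline a summation study must beat before attributing improvement to integration across feature-specific representations.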
Masking interrupts figure-ground signals in V1.
Lamme, Victor A F; Zipser, Karl; Spekreijse, Henk
2002-10-01
In a backward masking paradigm, a target stimulus is rapidly (<100 msec) followed by a second stimulus. This typically results in a dramatic decrease in the visibility of the target stimulus. It has been shown that masking reduces responses in V1. It is not known, however, which process in V1 is affected by the mask. In the past, we have shown that in V1, modulations of neural activity that are specifically related to figure-ground segregation can be recorded. Here, we recorded from awake macaque monkeys, engaged in a task where they had to detect figures from background in a pattern backward masking paradigm. We show that the V1 figure-ground signals are selectively and fully suppressed at target-mask intervals that psychophysically result in the target being invisible. Initial response transients, signalling the features that make up the scene, are not affected. As figure-ground modulations depend on feedback from extrastriate areas, these results suggest that masking selectively interrupts the recurrent interactions between V1 and higher visual areas.
Gamma and Beta Oscillations in Human MEG Encode the Contents of Vibrotactile Working Memory.
von Lautz, Alexander H; Herding, Jan; Ludwig, Simon; Nierhaus, Till; Maess, Burkhard; Villringer, Arno; Blankenburg, Felix
2017-01-01
Ample evidence suggests that oscillations in the beta band represent quantitative information about somatosensory features during stimulus retention. Visual and auditory working memory (WM) research, on the other hand, has indicated a predominant role of gamma oscillations for active WM processing. Here we reconciled these findings by recording whole-head magnetoencephalography during a vibrotactile frequency comparison task. A Braille stimulator presented healthy subjects with a vibration to the left fingertip that was retained in WM for comparison with a second stimulus presented after a short delay. During this retention interval spectral power in the beta band from the right intraparietal sulcus and inferior frontal gyrus (IFG) monotonically increased with the to-be-remembered vibrotactile frequency. In contrast, induced gamma power showed the inverse of this pattern and decreased with higher stimulus frequency in the right IFG. Together, these results expand the previously established role of beta oscillations for somatosensory WM to the gamma band and give further evidence that quantitative information may be processed in a fronto-parietal network.
Eramudugolla, Ranmalee; Mattingley, Jason B
2008-01-01
Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distracter sounds, regardless of whether these sounds are spatially separated. Spatial bias in attention associated with neglect should not affect auditory search based on spectro-temporal features of a sound target. We report that a right brain damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as an auditory search task in which the target was a frequency modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.
Decoding stimulus features in primate somatosensory cortex during perceptual categorization
Alvarez, Manuel; Zainos, Antonio; Romo, Ranulfo
2015-01-01
Neurons of the primary somatosensory cortex (S1) respond as functions of the frequency or amplitude of a vibrotactile stimulus. However, whether individual S1 neurons encode both frequency and amplitude, or whether each sensory feature is encoded by separate populations of S1 neurons, is not known. To address these questions, we recorded S1 neurons while trained monkeys categorized only one sensory feature of the vibrotactile stimulus: frequency, amplitude, or duration. The results suggest a hierarchical encoding scheme in S1: from neurons that encode all sensory features of the vibrotactile stimulus to neurons that encode only one sensory feature. We hypothesize that the dynamic representation of each sensory feature in S1 might serve further downstream processing that leads to the monkey's psychophysical behavior observed in these tasks. PMID:25825711
Optical images of visible and invisible percepts in the primary visual cortex of primates
Macknik, Stephen L.; Haglund, Michael M.
1999-01-01
We optically imaged a visual masking illusion in primary visual cortex (area V-1) of rhesus monkeys to ask whether activity in the early visual system more closely reflects the physical stimulus or the generated percept. Visual illusions can be a powerful way to address this question because they have the benefit of dissociating the stimulus from perception. We used an illusion in which a flickering target (a bar oriented in visual space) is rendered invisible by two counter-phase flickering bars, called masks, which flank and abut the target. The target and masks, when shown separately, each generated correlated activity on the surface of the cortex. During the illusory condition, however, optical signals generated in the cortex by the target disappeared although the image of the masks persisted. The optical image thus was correlated with perception but not with the physical stimulus. PMID:10611363
A description of discrete internal representation schemes for visual pattern discrimination.
Foster, D H
1980-01-01
A general description of a class of schemes for pattern vision is outlined in which the visual system is assumed to form a discrete internal representation of the stimulus. These representations are discrete in that they are considered to comprise finite combinations of "components" which are selected from a fixed and finite repertoire, and which designate certain simple pattern properties or features. In the proposed description it is supposed that the construction of an internal representation is a probabilistic process. A relationship is then formulated associating the probability density functions governing this construction and performance in visually discriminating patterns when differences in pattern shape are small. Some questions related to the application of this relationship to the experimental investigation of discrete internal representations are briefly discussed.
Model-based analysis of pattern motion processing in mouse primary visual cortex
Muir, Dylan R.; Roth, Morgane M.; Helmchen, Fritjof; Kampa, Björn M.
2015-01-01
Neurons in sensory areas of neocortex exhibit responses tuned to specific features of the environment. In visual cortex, information about features such as edges or textures with particular orientations must be integrated to recognize a visual scene or object. Connectivity studies in rodent cortex have revealed that neurons make specific connections within sub-networks sharing common input tuning. In principle, this sub-network architecture enables local cortical circuits to integrate sensory information. However, whether feature integration indeed occurs locally in rodent primary sensory areas has not been examined directly. We studied local integration of sensory features in primary visual cortex (V1) of the mouse by presenting drifting grating and plaid stimuli, while recording the activity of neuronal populations with two-photon calcium imaging. Using a Bayesian model-based analysis framework, we classified single-cell responses as being selective for either individual grating components or for moving plaid patterns. Rather than relying on trial-averaged responses, our model-based framework takes into account single-trial responses and can easily be extended to consider any number of arbitrary predictive models. Our analysis method was able to successfully classify significantly more responses than traditional partial correlation (PC) analysis, and provides a rigorous statistical framework to rank any number of models and reject poorly performing models. We also found a large proportion of cells that respond strongly to only one stimulus class. In addition, a quarter of selectively responding neurons had more complex responses that could not be explained by any simple integration model. Our results show that a broad range of pattern integration processes already take place at the level of V1. This diversity of integration is consistent with processing of visual inputs by local sub-networks within V1 that are tuned to combinations of sensory features. 
PMID:26300738
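The traditional partial-correlation (PC) analysis that the model-based framework is compared against classifies a cell as component- or pattern-selective by partialling the two (mutually correlated) model predictions out of each other. A minimal numpy sketch, using a synthetic tuning curve and illustrative model predictions rather than recorded data:

```python
import numpy as np

def partial_corr(r_xy, r_xz, r_yz):
    # Correlation between x and y with z partialled out
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Direction tuning of one cell to drifting plaids (synthetic), plus
# the predictions of the two candidate models: a pattern cell follows
# the plaid direction; a component cell follows the two gratings.
angles = np.arange(0, 360, 30)
rad = np.deg2rad(angles)
pattern = np.exp(np.cos(rad))
component = np.exp(np.cos(np.deg2rad(angles - 60))) + \
            np.exp(np.cos(np.deg2rad(angles + 60)))
observed = pattern + np.random.default_rng(1).normal(0, 0.1, len(angles))

r_p = np.corrcoef(observed, pattern)[0, 1]
r_c = np.corrcoef(observed, component)[0, 1]
r_pc = np.corrcoef(pattern, component)[0, 1]

Rp = partial_corr(r_p, r_c, r_pc)  # pattern partial correlation
Rc = partial_corr(r_c, r_p, r_pc)  # component partial correlation
print("pattern-classified:", Rp > Rc)
```

Because this approach works on a single (typically trial-averaged) tuning curve and only ranks two fixed models, it is easy to see why a single-trial Bayesian framework that can score arbitrary candidate models classifies more responses.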
Stimulus change as a factor in response maintenance with free food available.
Osborne, S R; Shelby, M
1975-01-01
Rats bar pressed for food on a reinforcement schedule in which every response was reinforced, even though a dish of pellets was present. Initially, auditory and visual stimuli accompanied response-produced food presentation. With stimulus feedback as an added consequence of bar pressing, responding was maintained in the presence of free food; without stimulus feedback, responding decreased to a low level. Auditory feedback maintained slightly more responding than did visual feedback, and both together maintained more responding than did either separately. Almost no responding occurred when the only consequence of bar pressing was stimulus feedback. The data indicated conditioned and sensory reinforcement effects of response-produced stimulus feedback. PMID:1202121
Tsatsishvili, Valeri; Burunat, Iballa; Cong, Fengyu; Toiviainen, Petri; Alluri, Vinoo; Ristaniemi, Tapani
2018-06-01
There has been growing interest in naturalistic neuroimaging experiments, which deepen our understanding of how the human brain processes and integrates incoming streams of multifaceted sensory information, as commonly occurs in the real world. Music is a good example of such a complex continuous phenomenon. In a few recent fMRI studies examining neural correlates of music in continuous listening settings, multiple perceptual attributes of the music stimulus were represented by a set of high-level features, produced as linear combinations of acoustic descriptors computationally extracted from the stimulus audio. Here, fMRI data from a naturalistic music-listening experiment were employed. Kernel principal component analysis (KPCA) was applied to the acoustic descriptors extracted from the stimulus audio to generate a set of nonlinear stimulus features. Subsequently, perceptual and neural correlates of the generated high-level features were examined. The generated features captured musical percepts that were hidden from the linear PCA features, namely Rhythmic Complexity and Event Synchronicity. Neural correlates of the new features revealed activations associated with the processing of complex rhythms, including auditory, motor, and frontal areas. Results were compared with the findings of a previously published study, which analyzed the same fMRI data but applied linear PCA to generate the stimulus features; to enable comparison of the results, the methodology for finding stimulus-driven functional maps was adopted from that study. Exploiting nonlinear relationships among acoustic descriptors can thus yield novel high-level stimulus features, which can in turn reveal new brain structures involved in music processing. Copyright © 2018 Elsevier B.V. All rights reserved.
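The kernel-PCA step can be sketched with plain numpy: an RBF kernel over frame-wise descriptors is centred in feature space and eigendecomposed, and the leading projections serve as nonlinear stimulus-feature time courses. The descriptor matrix here is synthetic, and gamma and the component count are placeholders, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for frame-wise acoustic descriptors of the
# stimulus audio (e.g. brightness, flux, pulse clarity): T x d.
T, d = 200, 6
A = rng.normal(size=(T, d))

def kernel_pca(X, n_components=2, gamma=0.1):
    # RBF kernel matrix over all frame pairs
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    # Centre the kernel in feature space
    one = np.ones((len(X), len(X))) / len(X)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition; keep the leading components
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Projections of the training frames: nonlinear feature time courses
    return vecs * np.sqrt(np.maximum(vals, 0))

features = kernel_pca(A)
print(features.shape)  # → (200, 2)
```

Unlike linear PCA, which can only recover linear combinations of the descriptors, the RBF kernel lets the leading components capture nonlinear interactions among them, which is what allowed percepts such as Rhythmic Complexity to surface.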
ERIC Educational Resources Information Center
Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vierhaus, Marc; Spangler, Sibylle; Borchert, Sonja; Freitag, Claudia; Goertz, Claudia; Graf, Frauke; Gudi, Helene; Kolling, Thorsten; Lamm, Bettina; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun
2012-01-01
This longitudinal study examined the influence of stimulus material on attention and expectation learning in the visual expectation paradigm. Female faces were used as attention-attracting stimuli, and non-meaningful visual stimuli of comparable complexity (Greebles) were used as low attention-attracting stimuli. Expectation learning performance…
Perception of emotion in abstract artworks: a multidisciplinary approach.
Melcher, David; Bacci, Francesca
2013-01-01
There is a long-standing and fundamental debate regarding how emotion can be expressed by fine art. Some artists and theorists have claimed that certain features of paintings, such as color, line, form, and composition, can consistently express an "objective" emotion, while others have argued that emotion perception is subjective and depends more on expertise of the observer. Here, we discuss two studies in which we have found evidence for consistency in observer ratings of emotion for abstract artworks. We have developed a stimulus set of abstract art images to test emotional priming, both between different painting images and between paintings and faces. The ratings were also used in a computational vision analysis of the visual features underlying emotion expression. Overall, these findings suggest that there is a strong bottom-up and objective aspect to perception of emotion in abstract artworks that may tap into basic visual mechanisms. © 2013 Elsevier B.V. All rights reserved.
Li, Fengling; Jiang, Weiqian; Wang, Tian-Yi; Xie, Taorong; Yao, Haishan
2018-05-21
In the primary visual cortex (V1), neuronal responses to stimuli within the receptive field (RF) are modulated by stimuli in the RF surround. A common effect of surround modulation is surround suppression, which depends on the feature difference between stimuli within and surrounding the RF and is suggested to be involved in the perceptual phenomenon of figure-ground segregation. In this study, we examined the relationship between feature-specific surround suppression of V1 neurons and figure detection behavior based on figure-ground feature difference. We trained freely moving mice to perform a figure detection task using figure and ground gratings that differed in spatial phase. The performance of figure detection increased with the figure-ground phase difference, and was modulated by stimulus contrast. Electrophysiological recordings from V1 in head-fixed mice showed that increasing the phase difference between stimuli within and surrounding the RF caused a reduction in surround suppression, which was associated with an increase in V1 neural discrimination between stimuli with and without RF-surround phase difference. Consistent with the behavioral performance, the sensitivity of V1 neurons to RF-surround phase difference could be influenced by stimulus contrast. Furthermore, inhibiting V1 by optogenetically activating either parvalbumin (PV)- or somatostatin (SOM)-expressing inhibitory neurons decreased the behavioral performance of figure detection. Thus, the phase-specific surround suppression in V1 represents a neural correlate of figure detection behavior based on figure-ground phase discontinuity. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
Rapid feature-driven changes in the attentional window.
Leonard, Carly J; Lopez-Calderon, Javier; Kreither, Johanna; Luck, Steven J
2013-07-01
Spatial attention must adjust around an object of interest in a manner that reflects the object's size on the retina as well as the proximity of distracting objects, a process often guided by nonspatial features. This study used ERPs to investigate how quickly the size of this type of "attentional window" can adjust around a fixated target object defined by its color and whether this variety of attention influences the feedforward flow of subsequent information through the visual system. The task involved attending either to a circular region at fixation or to a surrounding annulus region, depending on which region contained an attended color. The region containing the attended color varied randomly from trial to trial, so the spatial distribution of attention had to be adjusted on each trial. We measured the initial sensory ERP response elicited by an irrelevant probe stimulus that appeared in one of the two regions at different times after task display onset. This allowed us to measure the amount of time required to adjust spatial attention on the basis of the location of the task-relevant feature. We found that the probe-elicited sensory response was larger when the probe occurred within the region of the attended dots, and this effect required a delay of approximately 175 msec between the onset of the task display and the onset of the probe. Thus, the window of attention is rapidly adjusted around the point of fixation in a manner that reflects the spatial extent of a task-relevant stimulus, leading to changes in the feedforward flow of subsequent information through the visual system.
Schallmo, Michael-Paul; Grant, Andrea N; Burton, Philip C; Olman, Cheryl A
2016-08-01
Although V1 responses are driven primarily by elements within a neuron's receptive field, which subtends about 1° visual angle in parafoveal regions, previous work has shown that localized fMRI responses to visual elements reflect not only local feature encoding but also long-range pattern attributes. However, separating the response to an image feature from the response to the surrounding stimulus and studying the interactions between these two responses demands both spatial precision and signal independence, which may be challenging to attain with fMRI. The present study used 7 Tesla fMRI with 1.2-mm resolution to measure the interactions between small sinusoidal grating patches (targets) at 3° eccentricity and surrounds of various sizes and orientations to test the conditions under which localized, context-dependent fMRI responses could be predicted from either psychophysical or electrophysiological data. Targets were presented at 8%, 16%, and 32% contrast while manipulating (a) spatial extent of parallel (strongly suppressive) or orthogonal (weakly suppressive) surrounds, (b) locus of attention, (c) stimulus onset asynchrony between target and surround, and (d) blocked versus event-related design. In all experiments, the V1 fMRI signal was lower when target stimuli were flanked by parallel versus orthogonal context. Attention amplified fMRI responses to all stimuli but did not show a selective effect on central target responses or a measurable effect on orientation-dependent surround suppression. Suppression of the V1 fMRI response by parallel surrounds was stronger than predicted from psychophysics but showed a better match to previous electrophysiological reports.
Yashar, Amit; Denison, Rachel N
2017-12-01
Training can modify the visual system to produce a substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into processes underlying brain plasticity, but limits VPL's effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: intrinsic variation in the representation of features determines the transfer of VPL. Orientations around cardinal are represented more reliably than orientations around oblique in V1, which has been linked to behavioral consequences such as visual search asymmetries. We studied VPL for visual search of near-cardinal or oblique targets among distractors of the other orientation while controlling for other display and task attributes, including task precision, task difficulty, and stimulus exposure. Learning was the same in all training conditions; however, transfer depended on the orientation of the target, with full transfer of learning from near-cardinal to oblique targets but not the reverse. To evaluate the idea that representational reliability was the key difference between the orientations in determining VPL transfer, we created a model that combined orientation-dependent reliability, improvement of reliability with learning, and an optimal search strategy. Modeling suggested that not only search asymmetries but also the asymmetric transfer of VPL depended on preexisting differences between the reliability of near-cardinal and oblique representations. Transfer asymmetries in model behavior also depended on having different learning rates for targets and distractors, such that greater learning for low-reliability distractors facilitated transfer. 
These findings suggest that training on sensory features with intrinsically low reliability may maximize the generalizability of learning in complex visual environments.
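The modeling idea described in the abstract, orientation-dependent encoding reliability feeding an optimal (max-rule) search decision, can be illustrated with a small Monte Carlo sketch. The noise values, orientation offset, and display size below are arbitrary illustrations, not the authors' fitted model; the sketch only shows how reliable distractor encoding produces the classic search asymmetry.

```python
import numpy as np

# Illustrative encoding noise: near-cardinal orientations are represented
# more reliably (lower noise) than oblique ones. Values are assumptions.
SIGMA = {"cardinal": 1.0, "oblique": 2.0}

def localization_accuracy(target, distractor, delta=3.0, n_items=4,
                          n_trials=20000, seed=0):
    """P(correct target localization) under a max rule over noisy responses.

    Item 0 is the target, offset by `delta` from the distractor orientation;
    the observer picks the item with the largest noisy response.
    """
    rng = np.random.default_rng(seed)
    resp = rng.normal(0.0, SIGMA[distractor], size=(n_trials, n_items))
    resp[:, 0] = delta + rng.normal(0.0, SIGMA[target], size=n_trials)
    return float(np.mean(np.argmax(resp, axis=1) == 0))

# Classic search asymmetry: an oblique target among reliably encoded
# near-cardinal distractors is localized more often than the reverse,
# because low-noise distractors rarely produce spurious maxima.
print(localization_accuracy("oblique", "cardinal"),
      localization_accuracy("cardinal", "oblique"))
```

The asymmetry here arises purely from the distractors' reliability, consistent with the abstract's point that distractor learning plays a key role in transfer.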
Neural processing of visual information under interocular suppression: a critical review
Sterzer, Philipp; Stein, Timo; Ludwig, Karin; Rothkirch, Marcus; Hesselmann, Guido
2014-01-01
When dissimilar stimuli are presented to the two eyes, only one stimulus dominates at a time while the other stimulus is invisible due to interocular suppression. When both stimuli are equally potent in competing for awareness, perception alternates spontaneously between the two stimuli, a phenomenon called binocular rivalry. However, when one stimulus is much stronger, e.g., due to higher contrast, the weaker stimulus can be suppressed for prolonged periods of time. A technique that has recently become very popular for the investigation of unconscious visual processing is continuous flash suppression (CFS): High-contrast dynamic patterns shown to one eye can render a low-contrast stimulus shown to the other eye invisible for up to minutes. Studies using CFS have produced new insights but also controversies regarding the types of visual information that can be processed unconsciously as well as the neural sites and the relevance of such unconscious processing. Here, we review the current state of knowledge in regard to neural processing of interocularly suppressed information. Focusing on recent neuroimaging findings, we discuss whether and to what degree such suppressed visual information is processed at early and more advanced levels of the visual processing hierarchy. We review controversial findings related to the influence of attention on early visual processing under interocular suppression, the putative differential roles of dorsal and ventral areas in unconscious object processing, and evidence suggesting privileged unconscious processing of emotional and other socially relevant information. On a more general note, we discuss methodological and conceptual issues, from practical issues of how unawareness of a stimulus is assessed to the overarching question of what constitutes an adequate operational definition of unawareness. Finally, we propose approaches for future research to resolve current controversies in this exciting research area. PMID:24904469
Value-Driven Attentional Capture is Modulated by Spatial Context
Anderson, Brian A.
2014-01-01
When stimuli are associated with reward outcome, their visual features acquire high attentional priority such that stimuli possessing those features involuntarily capture attention. Whether a particular feature is predictive of reward, however, will vary with a number of contextual factors. One such factor is spatial location: for example, red berries are likely to be found in low-lying bushes, whereas yellow bananas are likely to be found on treetops. In the present study, I explore whether the attentional priority afforded to reward-associated features is modulated by such location-based contingencies. The results demonstrate that when a stimulus feature is associated with a reward outcome in one spatial location but not another, attentional capture by that feature is selective to when it appears in the rewarded location. This finding provides insight into how reward learning effectively modulates attention in an environment with complex stimulus–reward contingencies, thereby supporting efficient foraging. PMID:26069450
Learning-dependent plasticity with and without training in the human brain.
Zhang, Jiaxiang; Kourtzi, Zoe
2010-07-27
Long-term experience through development and evolution and shorter-term training in adulthood have both been suggested to contribute to the optimization of visual functions that mediate our ability to interpret complex scenes. However, the brain plasticity mechanisms that mediate the detection of objects in cluttered scenes remain largely unknown. Here, we combine behavioral and functional MRI (fMRI) measurements to investigate the human-brain mechanisms that mediate our ability to learn statistical regularities and detect targets in clutter. We show two different routes to visual learning in clutter with discrete brain plasticity signatures. Specifically, opportunistic learning of regularities typical in natural contours (i.e., collinearity) can occur simply through frequent exposure, generalize across untrained stimulus features, and shape processing in occipitotemporal regions implicated in the representation of global forms. In contrast, learning to integrate discontinuities (i.e., elements orthogonal to contour paths) requires task-specific training (bootstrap-based learning), is stimulus-dependent, and enhances processing in intraparietal regions implicated in attention-gated learning. We propose that long-term experience with statistical regularities may facilitate opportunistic learning of collinear contours, whereas learning to integrate discontinuities entails bootstrap-based training for the detection of contours in clutter. These findings provide insights in understanding how long-term experience and short-term training interact to shape the optimization of visual recognition processes.
Bolin, B. Levi; Singleton, Destiny L.; Akins, Chana K.
2014-01-01
Pavlovian drug discrimination (DD) procedures demonstrate that interoceptive drug stimuli may come to control behavior by informing the status of conditional relationships between stimuli and outcomes. This technique may provide insight into processes that contribute to drug-seeking, relapse, and other maladaptive behaviors associated with drug abuse. The purpose of the current research was to establish a model of Pavlovian DD in male Japanese quail. A Pavlovian conditioning procedure was used in which 3.0 mg/kg methamphetamine served as a feature-positive stimulus for brief periods of visual access to a female quail, and approach behavior was measured. After acquisition training, generalization tests were conducted with cocaine, nicotine, and haloperidol under extinction conditions. SCH 23390 was used to investigate the involvement of the dopamine D1 receptor subtype in the methamphetamine discriminative stimulus. Results showed that cocaine fully substituted for methamphetamine, but nicotine only partially substituted for methamphetamine in quail. Haloperidol dose-dependently decreased approach behavior. Pretreatment with SCH 23390 modestly attenuated the methamphetamine discrimination, suggesting that the D1 receptor subtype may be involved in the discriminative stimulus effects of methamphetamine. The findings are discussed in relation to drug abuse and associated negative health consequences. PMID:24965811
Adaptability and specificity of inhibition processes in distractor-induced blindness.
Winther, Gesche N; Niedeggen, Michael
2017-12-01
In a rapid serial visual presentation task, inhibition processes cumulatively impair processing of a target possessing distractor properties. This phenomenon, known as distractor-induced blindness, has thus far only been elicited using dynamic visual features, such as motion and orientation changes. In three ERP experiments, we used a visual object feature, color, to test the adaptability and specificity of the effect. In Experiment I, participants responded to a color change (target) in the periphery whose onset was signaled by a central cue. Presentation of irrelevant color changes prior to the cue (distractors) led to reduced target detection, accompanied by a frontal ERP negativity that increased with increasing number of distractors, similar to the effects previously found for dynamic targets. This suggests that distractor-induced blindness is adaptable to color features. In Experiment II, the target consisted of coherent motion contrasting with the color distractors. Correlates of distractor-induced blindness were found neither in the behavioral nor in the ERP data, indicating a feature specificity of the process. Experiment III confirmed the strict distinction between congruent and incongruent distractors: a single color distractor was embedded in a stream of motion distractors, with the target consisting of coherent motion. While behavioral performance was affected by the distractors, the color distractor did not elicit a frontal negativity. The experiments show that distractor-induced blindness is also triggered by visual stimuli predominantly processed in the ventral stream. The strict specificity of the central inhibition process also applies to these stimulus features. © 2017 Society for Psychophysiological Research.
Bressler, David W.; Fortenbaugh, Francesca C.; Robertson, Lynn C.; Silver, Michael A.
2013-01-01
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. PMID:23562388
Synchronization to auditory and visual rhythms in hearing and deaf individuals
Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen
2014-01-01
A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395
Square or sine: finding a waveform with high success rate of eliciting SSVEP.
Teng, Fei; Chen, Yixin; Choong, Aik Min; Gustafson, Scott; Reichley, Christopher; Lawhead, Pamela; Waddell, Dwight
2011-01-01
Steady state visual evoked potential (SSVEP) is the brain's natural electrical potential response to visual stimuli at specific frequencies. Using a visual stimulus flashing at a given frequency will entrain the SSVEP at the same frequency, thereby allowing determination of the subject's visual focus. The faster an SSVEP is identified, the higher the information transmission rate the system achieves. Thus, an effective stimulus, defined as one with a high success rate of eliciting SSVEP and a high signal-to-noise ratio, is desired. Also, researchers have observed that harmonic frequencies often appear in the SSVEP at a reduced magnitude. Are the harmonics in the SSVEP elicited by the fundamental stimulating frequency or by artifacts of the stimuli? In this paper, we compare the SSVEP responses to three periodic stimuli: square wave (with different duty cycles), triangle wave, and sine wave, to find an effective stimulus. We also demonstrate the connection between the strength of the harmonics in the SSVEP and the type of stimulus.
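The question of whether SSVEP harmonics originate from the stimulus itself has a simple spectral component: a 50% duty-cycle square wave physically contains odd harmonics (the 3rd at one third of the fundamental amplitude), whereas a sine wave concentrates all energy at the fundamental. The following sketch demonstrates this with an FFT; the sampling rate and flicker frequency are illustrative values, not those of the study.

```python
import numpy as np

fs = 1000.0                      # sampling rate (Hz), illustrative
f0 = 10.0                        # flicker frequency (Hz), illustrative
t = np.arange(0, 2.0, 1.0 / fs)

sine = np.sin(2 * np.pi * f0 * t)
square = np.sign(sine)           # 50% duty-cycle square wave

def spectrum_peak(x, freq):
    """Spectral magnitude of x at the FFT bin nearest `freq`."""
    mags = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return mags[np.argmin(np.abs(freqs - freq))]

# Ratio of 3rd-harmonic to fundamental magnitude: ~1/3 for the square
# wave (Fourier series of a square wave), ~0 for the sine wave.
ratio_square = spectrum_peak(square, 3 * f0) / spectrum_peak(square, f0)
ratio_sine = spectrum_peak(sine, 3 * f0) / spectrum_peak(sine, f0)
print(ratio_square, ratio_sine)
```

So any harmonic observed in an SSVEP driven by a sine-wave stimulus cannot be a stimulus artifact, which is what makes the waveform comparison diagnostic.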
Stimulus-dependent modulation of spontaneous low-frequency oscillations in the rat visual cortex.
Huang, Liangming; Liu, Yadong; Gui, Jianjun; Li, Ming; Hu, Dewen
2014-08-06
Research on spontaneous low-frequency oscillations is important to reveal underlying regulatory mechanisms in the brain. The mechanism for the stimulus modulation of low-frequency oscillations is not known. Here, we used the intrinsic optical imaging technique to examine stimulus-modulated low-frequency oscillation signals in the rat visual cortex. The stimulation was presented monocularly as a flashing light with different frequencies and intensities. The phases of low-frequency oscillations in different regions tended to be synchronized and the rhythms typically accelerated within a 30-s period after stimulation. These phenomena were confined to visual stimuli with specific flashing frequencies (12.5-17.5 Hz) and intensities (5-10 mA). The acceleration and synchronization induced by the flashing frequency were more marked than those induced by the intensity. These results show that spontaneous low-frequency oscillations can be modulated by parameter-dependent flashing lights and indicate the potential utility of the visual stimulus paradigm in exploring the origin and function of low-frequency oscillations.
Using Prosopagnosia to Test and Modify Visual Recognition Theory.
O'Brien, Alexander M
2018-02-01
Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) that people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions, or categorical object differences, in visual discrimination tasks, whereas control participants could additionally use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.
Temporal parameters and time course of perceptual latency priming.
Scharlau, Ingrid; Neumann, Odmar
2003-06-01
Visual stimuli (primes) reduce the perceptual latency of a target appearing at the same location (perceptual latency priming, PLP). Three experiments assessed the time course of PLP by masked and, in Experiment 3, unmasked primes. Experiments 1 and 2 investigated the temporal parameters that determine the size of priming. Stimulus onset asynchrony was found to exert the main influence, accompanied by a small effect of prime duration. Experiment 3 used a large range of priming onset asynchronies. We propose explaining PLP with the Asynchronous Updating Model, which relates it to the asynchrony of two central coding processes: preattentive coding of basic visual features and attentional orienting as a prerequisite for perceptual judgments and conscious perception.
Comparing different stimulus configurations for population receptive field mapping in human fMRI
Alvarez, Ivan; de Haas, Benjamin; Clark, Chris A.; Rees, Geraint; Schwarzkopf, D. Samuel
2015-01-01
Population receptive field (pRF) mapping is a widely used approach to measuring aggregate human visual receptive field properties by recording non-invasive signals using functional MRI. Despite growing interest, no study to date has systematically investigated the effects of different stimulus configurations on pRF estimates from human visual cortex. Here we compared the effects of three different stimulus configurations on a model-based approach to pRF estimation: size-invariant bars and eccentricity-scaled bars defined in Cartesian coordinates and traveling along the cardinal axes, and a novel simultaneous “wedge and ring” stimulus defined in polar coordinates, systematically covering polar and eccentricity axes. We found that the presence or absence of eccentricity scaling had a significant effect on goodness of fit and pRF size estimates. Further, variability in pRF size estimates was directly influenced by stimulus configuration, particularly for higher visual areas including V5/MT+. Finally, we compared eccentricity estimation between phase-encoded and model-based pRF approaches. We observed a tendency for more peripheral eccentricity estimates using phase-encoded methods, independent of stimulus size. We conclude that both eccentricity scaling and polar rather than Cartesian stimulus configuration are important considerations for optimal experimental design in pRF mapping. While all stimulus configurations produce adequate estimates, simultaneous wedge and ring stimulation produced higher fit reliability, with a significant advantage in reduced acquisition time. PMID:25750620
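The model-based pRF approach referenced above can be sketched in a few lines: a candidate 2D Gaussian pRF is overlapped with the binary stimulus aperture at each time point to predict a time course, and the candidate best correlated with the data wins. The sketch below is a toy illustration under stated assumptions (no HRF convolution or noise, an invented bar sweep, illustrative grid and function names), not the authors' pipeline.

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, xx, yy):
    """2D Gaussian pRF centered at (x0, y0) with size sigma (deg)."""
    return np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

def predict_timecourse(prf, apertures):
    """Predicted response: overlap of the pRF with each binary aperture.
    apertures has shape (time, ny, nx)."""
    return np.tensordot(apertures, prf, axes=([1, 2], [0, 1]))

# Toy visual field grid (degrees) and a bar sweeping left to right
xx, yy = np.meshgrid(np.linspace(-10, 10, 41), np.linspace(-10, 10, 41))
apertures = np.zeros((20, 41, 41))
for t in range(20):
    apertures[t, :, 2 * t : 2 * t + 2] = 1

# Synthesize noiseless "data" from a known pRF, then grid-search it back
true_prf = gaussian_prf(3.0, 0.0, 2.0, xx, yy)
data = predict_timecourse(true_prf, apertures)

best = max(
    ((x0, s) for x0 in np.linspace(-8, 8, 17) for s in (1.0, 2.0, 4.0)),
    key=lambda p: np.corrcoef(
        data, predict_timecourse(gaussian_prf(p[0], 0.0, p[1], xx, yy),
                                 apertures))[0, 1],
)
print(best)  # recovers the generating center and size, (3.0, 2.0)
```

Real implementations additionally convolve predictions with a hemodynamic response function and fit by nonlinear optimization; the stimulus-configuration comparisons in the abstract concern which `apertures` sequence best constrains this fit.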
[Microcomputer control of an LED stimulus display device].
Ohmoto, S; Kikuchi, T; Kumada, T
1987-02-01
A visual stimulus display system controlled by a microcomputer was constructed at low cost. The system consists of an LED stimulus display device, a microcomputer, two interface boards, a pointing device (a "mouse"), and two software packages. The first package, written in BASIC, is used to construct stimulus patterns with the mouse; to construct letter patterns (alphabetic characters, digits, symbols, and the Japanese kanji, hiragana, and katakana scripts); to modify the patterns; to store the patterns on a floppy disc; and to translate the patterns into the integer data used for display by the second package. The second package, written in BASIC and machine language, controls the display of sequences of stimulus patterns on predetermined time schedules in visual experiments.
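The "translate the patterns into integer data" step is the classic dot-matrix trick of packing each row of LEDs into one integer bitmask. The original ran in BASIC on a microcomputer; the Python sketch below (with a hypothetical `pack_pattern` helper and '.'/'#' notation) only illustrates the pattern-to-integer translation idea.

```python
# Hypothetical sketch: pack an LED dot-matrix pattern into one integer per
# row, with bit i of each integer set when the LED at column i is lit.
def pack_pattern(rows):
    """rows: strings of '.' (off) and '#' (on); returns row bitmasks."""
    return [sum(1 << i for i, ch in enumerate(row) if ch == "#")
            for row in rows]

pattern = ["..#..",
           ".###.",
           "#####"]
print(pack_pattern(pattern))  # [4, 14, 31]
```

Storing patterns this way keeps each frame compact and lets the display routine write one integer per row directly to the LED driver's output port.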
Nieuwenstein, Mark; Wyble, Brad
2014-06-01
While studies on visual memory commonly assume that the consolidation of a visual stimulus into working memory is interrupted by a trailing mask, studies on dual-task interference suggest that the consolidation of a stimulus can continue for several hundred milliseconds after a mask. As a result, estimates of the time course of working memory consolidation differ by more than an order of magnitude. Here, we contrasted these opposing views by examining whether and for how long the processing of a masked display of visual stimuli can be disturbed by a trailing 2-alternative forced choice task (2-AFC; a color discrimination task or a visual or auditory parity judgment task). The results showed that the presence of the 2-AFC task produced a pronounced retroactive interference effect that dissipated across stimulus onset asynchronies of 250-1,000 ms, indicating that the processing elicited by the 2-AFC task interfered with the gradual consolidation of the earlier shown stimuli. Furthermore, this interference effect occurred regardless of whether the to-be-remembered stimuli comprised a string of letters or an unfamiliar complex visual shape, and it occurred regardless of whether these stimuli were masked. Conversely, the interference effect was reduced when the memory load for the 1st task was reduced, or when the 2nd task was a color detection task that did not require decision making. Taken together, these findings show that the formation of a durable and consciously accessible working memory trace for a briefly shown visual stimulus can be disturbed by a trailing 2-AFC task for up to several hundred milliseconds after the stimulus has been masked. By implication, the current findings challenge the common view that working memory consolidation involves an immutable central processing bottleneck, and they also make clear that consolidation does not stop when a stimulus is masked. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Characterizing the effects of feature salience and top-down attention in the early visual system.
Poltoratski, Sonia; Ling, Sam; McCormack, Devin; Tong, Frank
2017-07-01
The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. We used high-resolution functional MRI (fMRI) at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or nonsalient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, whereas the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas. NEW & NOTEWORTHY While spatial attention allows for specific, goal-driven enhancement of stimuli, salient items outside of the current focus of attention must also be prioritized. We used 7T fMRI to compare salience and spatial attentional enhancement along the early visual hierarchy. 
We report additive effects of attention and bottom-up salience in early visual areas, suggesting that salience enhancement is not contingent on the observer's attentional state. Copyright © 2017 the American Physiological Society.
Memory-guided attention during active viewing of edited dynamic scenes.
Valuch, Christian; König, Peter; Ansorge, Ulrich
2017-01-01
Films, TV shows, and other edited dynamic scenes contain many cuts, which are abrupt transitions from one video shot to the next. Cuts occur within or between scenes, and often join together visually and semantically related shots. Here, we tested to which degree memory for the visual features of the precut shot facilitates shifting attention to the postcut shot. We manipulated visual similarity across cuts, and measured how this affected covert attention (Experiment 1) and overt attention (Experiments 2 and 3). In Experiments 1 and 2, participants actively viewed a target movie that randomly switched locations with a second, distractor movie at the time of the cuts. In Experiments 1 and 2, participants were able to deploy attention more rapidly and accurately to the target movie's continuation when visual similarity was high than when it was low. Experiment 3 tested whether this could be explained by stimulus-driven (bottom-up) priming by feature similarity, using one clip at screen center that was followed by two alternative continuations to the left and right. Here, even the highest similarity across cuts did not capture attention. We conclude that following cuts of high visual similarity, memory-guided attention facilitates the deployment of attention, but this effect is (top-down) dependent on the viewer's active matching of scene content across cuts.
Neural Pathways Conveying Novisual Information to the Visual Cortex
2013-01-01
The visual cortex has traditionally been considered a stimulus-driven, unimodal system with a hierarchical organization. However, recent animal and human studies have shown that the visual cortex responds to non-visual stimuli, especially in individuals with congenital visual deprivation, indicating the supramodal nature of functional representation in the visual cortex. To understand the neural substrates of the cross-modal processing of non-visual signals in the visual cortex, we first showed the supramodal nature of the visual cortex. We then reviewed how non-visual signals reach the visual cortex. Moreover, we discussed whether these non-visual pathways are reshaped by early visual deprivation. Finally, the open question of the nature (stimulus-driven or top-down) of non-visual signals is also discussed. PMID:23840972
A neural correlate of working memory in the monkey primary visual cortex.
Supèr, H; Spekreijse, H; Lamme, V A
2001-07-06
The brain frequently needs to store information for short periods. In vision, this means that the perceptual correlate of a stimulus has to be maintained temporarily once the stimulus has been removed from the visual scene. However, it is not known how the visual system transfers sensory information into a memory component. Here, we identify a neural correlate of working memory in the monkey primary visual cortex (V1). We propose that this component may link sensory activity with memory activity.
Preattentive visual search and perceptual grouping in schizophrenia.
Carr, V J; Dewis, S A; Lewin, T J
1998-06-15
To help determine whether patients with schizophrenia show deficits in the stimulus-based aspects of preattentive processing, we undertook a series of experiments within the framework of feature integration theory. Thirty subjects with a DSM-III-R diagnosis of schizophrenia and 30 age-, gender-, and education-matched normal control subjects completed two computerized experimental tasks, a visual search task assessing parallel and serial information processing (Experiment 1) and a task which examined the effects of perceptual grouping on visual search strategies (Experiment 2). We also assessed current symptomatology and its relationship to task performance. While the schizophrenia subjects had longer reaction times in Experiment 1, their overall pattern of performance across both experimental tasks was similar to that of the control subjects, and generally unrelated to current symptomatology. Predictions from feature integration theory about the impact of varying display size (Experiment 1) and number of perceptual groups (Experiment 2) on the detection of feature and conjunction targets were strongly supported. This study revealed no firm evidence that schizophrenia is associated with a preattentive abnormality in visual search using stimuli that differ on the basis of physical characteristics. While subject and task characteristics may partially account for differences between this and previous studies, it is more likely that preattentive processing abnormalities in schizophrenia may occur only under conditions involving selected 'top-down' factors such as context and meaning.
Non-conscious processing of motion coherence can boost conscious access.
Kaunitz, Lisandro; Fracasso, Alessio; Lingnau, Angelika; Melcher, David
2013-01-01
Research on the scope and limits of non-conscious vision can advance our understanding of the functional and neural underpinnings of visual awareness. Here we investigated whether distributed local features can be bound, outside of awareness, into coherent patterns. We used continuous flash suppression (CFS) to create interocular suppression, and thus lack of awareness, for a moving dot stimulus that varied in terms of coherence with an overall pattern (radial flow). Our results demonstrate that for radial motion, coherence favors the detection of patterns of moving dots even under interocular suppression. Coherence caused dots to break through the masks more often: this indicates that the visual system was able to integrate low-level motion signals into a coherent pattern outside of visual awareness. In contrast, in an experiment using meaningful or scrambled biological motion we did not observe any increase in the sensitivity of detection for meaningful patterns. Overall, our results are in agreement with previous studies on face processing and with the hypothesis that certain features are spatiotemporally bound into coherent patterns even outside of attention or awareness.
Critical and maximally informative encoding between neural populations in the retina
Kastner, David B.; Baccus, Stephen A.; Sharpee, Tatyana O.
2015-01-01
Computation in the brain involves multiple types of neurons, yet the organizing principles for how these neurons work together remain unclear. Information theory has offered explanations for how different types of neurons can maximize the transmitted information by encoding different stimulus features. However, recent experiments indicate that separate neuronal types exist that encode the same filtered version of the stimulus, but then the different cell types signal the presence of that stimulus feature with different thresholds. Here we show that the emergence of these neuronal types can be quantitatively described by the theory of transitions between different phases of matter. The two key parameters that control the separation of neurons into subclasses are the mean and standard deviation (SD) of noise affecting neural responses. The average noise across the neural population plays the role of temperature in the classic theory of phase transitions, whereas the SD is equivalent to pressure or magnetic field, in the case of liquid–gas and magnetic transitions, respectively. Our results account for properties of two recently discovered types of salamander Off retinal ganglion cells, as well as the absence of multiple types of On cells. We further show that, across visual stimulus contrasts, retinal circuits continued to operate near the critical point whose quantitative characteristics matched those expected near a liquid–gas critical point and described by the nearest-neighbor Ising model in three dimensions. By operating near a critical point, neural circuits can maximize information transmission in a given environment while retaining the ability to quickly adapt to a new environment. PMID:25675497
Barack Obama Blindness (BOB): Absence of Visual Awareness to a Single Object
Persuh, Marjan; Melara, Robert D.
2016-01-01
In two experiments, we evaluated whether a perceiver's prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision. PMID:27047362
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811
The Naked Truth: The Face and Body Sensitive N170 Response Is Enhanced for Nude Bodies
Hietanen, Jari K.; Nummenmaa, Lauri
2011-01-01
Recent event-related potential studies have shown that the occipitotemporal N170 component - best known for its sensitivity to faces - is also sensitive to perception of human bodies. Considering that in the timescale of evolution clothing is a relatively new invention that hides the bodily features relevant for sexual selection and arousal, we investigated whether the early N170 brain response would be enhanced to nude over clothed bodies. In two experiments, we measured N170 responses to nude bodies, bodies wearing swimsuits, clothed bodies, faces, and control stimuli (cars). We found that the N170 amplitude was larger to opposite and same-sex nude vs. clothed bodies. Moreover, the N170 amplitude increased linearly as the amount of clothing decreased from full clothing via swimsuits to nude bodies. Strikingly, the N170 response to nude bodies was even greater than that to faces, and the N170 amplitude to bodies was independent of whether the face of the bodies was visible or not. All human stimuli evoked greater N170 responses than did the control stimulus. Autonomic measurements and self-evaluations showed that nude bodies were affectively more arousing compared to the other stimulus categories. We conclude that the early visual processing of human bodies is sensitive to the visibility of the sex-related features of human bodies and that the visual processing of other people's nude bodies is enhanced in the brain. This enhancement is likely to reflect affective arousal elicited by nude bodies. Such facilitated visual processing of other people's nude bodies is possibly beneficial in identifying potential mating partners and competitors, and for triggering sexual behavior. PMID:22110574
Out of sight, out of mind: Categorization learning and normal aging.
Schenk, Sabrina; Minda, John P; Lech, Robert K; Suchan, Boris
2016-10-01
The present combined EEG and eye tracking study examined the process of categorization learning at different age ranges and investigated to what degree categorization learning is mediated by visual attention and perceptual strategies. Seventeen young subjects and ten elderly subjects performed a visual categorization task with two abstract categories. Each category consisted of prototypical stimuli and an exception. The categorization of prototypical stimuli was learned very early during the experiment, while the learning of exceptions was delayed. The categorization of exceptions was accompanied by higher P150, P250 and P300 amplitudes. In contrast to younger subjects, elderly subjects had problems in the categorization of exceptions, but showed intact categorization performance for prototypical stimuli. Moreover, elderly subjects showed higher fixation rates for important stimulus features and higher P150 amplitudes, which were positively correlated with categorization performance. These results indicate that elderly subjects compensate for cognitive decline through enhanced perceptual and attentional processing of individual stimulus features. Additionally, a computational approach showed a transition away from purely abstraction-based learning to exemplar-based learning in the middle block for both groups. However, the calculated models provide a better fit for younger subjects than for elderly subjects. The current study demonstrates that human categorization learning is based on early abstraction-based processing followed by an exemplar-memorization stage. This strategy combination facilitates the learning of real-world categories with a nuanced category structure. In addition, the present study suggests that categorization learning is affected by normal aging and modulated by perceptual processing and visual attention. Copyright © 2016 Elsevier Ltd. All rights reserved.
Zivcevska, Marija; Lei, Shaobo; Blakeman, Alan; Goltz, Herbert C; Wong, Agnes M F
2018-03-01
To develop an objective psychophysical method to quantify light-induced visual discomfort, and to measure the effects of viewing condition and stimulus wavelength. Eleven visually normal subjects participated in the study. Their pupils were dilated (2.5% phenylephrine) before the experiment. A Ganzfeld system presented either red (1.5, 19.1, 38.2, 57.3, 76.3, 152.7, 305.3 cd/m2) or blue (1.4, 7.1, 14.3, 28.6, 42.9, 57.1, 71.4 cd/m2) randomized light intensities (1 s each) in four blocks. Constant white-light stimuli (3 cd/m2, 4 s duration) were interleaved with the chromatic trials. Participants reported each stimulus as either "uncomfortably bright" or "not uncomfortably bright." The experiment was done binocularly and monocularly in separate sessions, and the order of color/viewing condition sequence was randomized across participants. The proportion of "uncomfortable" responses was used to generate individual psychometric functions, from which 50% discomfort thresholds were calculated. Light-induced discomfort was higher under blue compared with red light stimulation, both during binocular (t(10) = 3.58, P < 0.01) and monocular viewing (t(10) = 3.15, P = 0.01). There was also a significant difference in discomfort between viewing conditions, with binocular viewing inducing more discomfort than monocular viewing for blue (P < 0.001), but not for red light stimulation. The light-induced discomfort characteristics reported here are consistent with features of the melanopsin-containing intrinsically photosensitive retinal ganglion cell light irradiance pathway, which may mediate photophobia, a prominent feature in many clinical disorders. This is the first psychometric assessment designed around melanopsin spectral properties that can be customized further to assess photophobia in different clinical populations.
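The 50% discomfort threshold described above is read off a psychometric function built from the proportion of "uncomfortably bright" responses at each tested luminance. As an illustration only (the authors' exact fitting procedure is not specified here), a minimal sketch that locates the criterion crossing by linear interpolation between bracketing stimulus levels:

```python
def discomfort_threshold(intensities, p_uncomfortable, criterion=0.5):
    """Estimate the luminance at which the proportion of 'uncomfortably
    bright' responses crosses a criterion (default 50%), by linear
    interpolation between the two bracketing stimulus levels."""
    pairs = sorted(zip(intensities, p_uncomfortable))
    for (x0, p0), (x1, p1) in zip(pairs, pairs[1:]):
        if p0 <= criterion <= p1:
            if p1 == p0:
                return x0
            return x0 + (criterion - p0) * (x1 - x0) / (p1 - p0)
    return None  # criterion never crossed within the tested range
```

With hypothetical response proportions at the blue-light intensities used in the study, `discomfort_threshold([1.4, 7.1, 14.3, 28.6], [0.0, 0.2, 0.4, 0.8])` returns a threshold between 14.3 and 28.6 cd/m2; a full analysis would instead fit a sigmoid (e.g., logistic) psychometric function to all levels.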
Griffeth, Valerie E M; Simon, Aaron B; Buxton, Richard B
2015-01-01
Quantitative functional MRI (fMRI) experiments to measure blood flow and oxygen metabolism coupling in the brain typically rely on simple repetitive stimuli. Here we compared such stimuli with a more naturalistic stimulus. Previous work on the primary visual cortex showed that direct attentional modulation evokes a blood flow (CBF) response with a relatively large oxygen metabolism (CMRO2) response in comparison to an unattended stimulus, which evokes a much smaller metabolic response relative to the flow response. We hypothesized that a similar effect would be associated with a more engaging stimulus, and tested this by measuring the primary human visual cortex response to two contrast levels of a radial flickering checkerboard in comparison to the response to free viewing of brief movie clips. We did not find a significant difference in the blood flow-metabolism coupling (n=%ΔCBF/%ΔCMRO2) between the movie stimulus and the flickering checkerboards employing two different analysis methods: a standard analysis using the Davis model and a new analysis using a heuristic model dependent only on measured quantities. This finding suggests that in the primary visual cortex a naturalistic stimulus (in comparison to a simple repetitive stimulus) is either not sufficient to provoke a change in flow-metabolism coupling by attentional modulation as hypothesized, that the experimental design disrupted the cognitive processes underlying the response to a more natural stimulus, or that the technique used is not sensitive enough to detect a small difference. Copyright © 2014 Elsevier Inc. All rights reserved.
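For readers unfamiliar with the standard analysis mentioned above: the Davis model relates the fractional BOLD change to flow and metabolism changes as ΔBOLD/BOLD0 = M(1 − f^(α−β) r^β), where f = CBF/CBF0 and r = CMRO2/CMRO2_0. Given measured BOLD and CBF changes, the model can be inverted for r, from which the coupling ratio n = %ΔCBF/%ΔCMRO2 follows. A hedged sketch of that inversion (the parameter values M, α, and β below are common illustrative defaults, not the study's calibrated values):

```python
def cmro2_change(delta_bold, f, M, alpha=0.38, beta=1.5):
    """Invert the Davis model, delta_bold = M * (1 - f**(alpha-beta) * r**beta),
    for r = CMRO2/CMRO2_0, given the fractional BOLD change and f = CBF/CBF0."""
    return ((1.0 - delta_bold / M) * f ** (beta - alpha)) ** (1.0 / beta)

def coupling_ratio(f, r):
    """Flow-metabolism coupling n = %dCBF / %dCMRO2."""
    return (f - 1.0) / (r - 1.0)
```

For example, a 40% flow increase (f = 1.4) combined with a 15% metabolic increase (r = 1.15) yields n ≈ 2.7, in the range typically reported for visual cortex.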
Tao, Xiaofeng; Zhang, Bin; Shen, Guofu; Wensveen, Janice; Smith, Earl L; Nishimoto, Shinji; Ohzawa, Izumi; Chino, Yuzo M
2014-10-08
Experiencing different quality images in the two eyes soon after birth can cause amblyopia, a developmental vision disorder. Amblyopic humans show a reduced capacity for judging the relative position of a visual target in reference to nearby stimulus elements (position uncertainty) and often experience visual image distortion. Although abnormal pooling of local stimulus information by neurons beyond striate cortex (V1) is often suggested as a neural basis of these deficits, extrastriate neurons in the amblyopic brain have rarely been studied using microelectrode recording methods. The receptive field (RF) of neurons in visual area V2 in normal monkeys is made up of multiple subfields that are thought to reflect V1 inputs and are capable of encoding the spatial relationship between local stimulus features. We created primate models of anisometropic amblyopia and analyzed the RF subfield maps for multiple nearby V2 neurons of anesthetized monkeys by using dynamic two-dimensional noise stimuli and reverse correlation methods. Unlike in normal monkeys, the subfield maps of V2 neurons in amblyopic monkeys were severely disorganized: subfield maps showed higher heterogeneity within each neuron as well as across nearby neurons. Amblyopic V2 neurons exhibited robust binocular suppression, and the strength of the suppression was positively correlated with the degree of heterogeneity and the severity of amblyopia in individual monkeys. Our results suggest that the disorganized subfield maps and robust binocular suppression of amblyopic V2 neurons are likely to adversely affect the higher stages of cortical processing, resulting in position uncertainty and image distortion. Copyright © 2014 the authors.
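The reverse-correlation approach used above estimates a neuron's receptive-field map by averaging the noise frames that preceded each spike (the spike-triggered average). A minimal sketch of the core computation, with stimulus frames flattened to pixel lists; the variable names and the single fixed lag are illustrative assumptions, not the authors' analysis pipeline:

```python
def spike_triggered_average(frames, spikes, lag):
    """Average the stimulus frames that occurred `lag` time steps before
    each spike; with white-noise stimuli this approximates the neuron's
    linear receptive field.  `frames` is a list of flattened pixel lists
    and `spikes` a parallel list of spike counts (0/1) per time step."""
    triggered = [frames[t - lag] for t, s in enumerate(spikes)
                 if s and t - lag >= 0]
    if not triggered:
        return None  # no spikes with a valid preceding frame
    n = len(triggered)
    return [sum(frame[i] for frame in triggered) / n
            for i in range(len(triggered[0]))]
```

In practice the average is computed at several lags to recover the spatiotemporal RF, and subfield structure is then read off the resulting map.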
Fukatsu, Y; Miyake, Y; Sugita, S; Saito, A; Watanabe, S
1990-11-01
To analyze the electrically evoked response (EER) in relation to the central visual pathway, the authors studied the properties of wave patterns and peak latencies of the EER in 35 anesthetized adult cats. The cat EER showed two early positive waves on outward current (cornea cathode) stimulation and three or four early positive waves on inward current (cornea anode) stimulation. These waves were recorded within 50 ms after stimulus onset and were the most consistent components of the cat EER. The stimulus threshold for the EER showed less individual variation than the amplitude. The difference in stimulus threshold between outward and inward current stimulation was also essentially negligible. The stimulus threshold was higher for early components than for late components. The peak latency of the EER became shorter, and the amplitude higher, as the stimulus intensity was increased. However, this tendency was reversed, and some wavelets started to appear, when the stimulus was extremely strong. Recording with a short stimulus duration and bipolar electrodes enabled us to reduce the electrical artifact in the EER. These results obtained from cats were compared with those of humans and rabbits.
Prestimulus EEG Power Predicts Conscious Awareness But Not Objective Visual Performance
Veniero, Domenica
2017-01-01
Prestimulus oscillatory neural activity has been linked to perceptual outcomes during performance of psychophysical detection and discrimination tasks. Specifically, the power and phase of low frequency oscillations have been found to predict whether an upcoming weak visual target will be detected or not. However, the mechanisms by which baseline oscillatory activity influences perception remain unclear. Recent studies suggest that the frequently reported negative relationship between α power and stimulus detection may be explained by changes in detection criterion (i.e., increased target present responses regardless of whether the target was present/absent) driven by the state of neural excitability, rather than changes in visual sensitivity (i.e., more veridical percepts). Here, we recorded EEG while human participants performed a luminance discrimination task on perithreshold stimuli in combination with single-trial ratings of perceptual awareness. Our aim was to investigate whether the power and/or phase of prestimulus oscillatory activity predict discrimination accuracy and/or perceptual awareness on a trial-by-trial basis. Prestimulus power (3–28 Hz) was inversely related to perceptual awareness ratings (i.e., higher ratings in states of low prestimulus power/high excitability) but did not predict discrimination accuracy. In contrast, prestimulus oscillatory phase did not predict awareness ratings or accuracy in any frequency band. These results provide evidence that prestimulus α power influences the level of subjective awareness of threshold visual stimuli but does not influence visual sensitivity when a decision has to be made regarding stimulus features. Hence, we find a clear dissociation between the influence of ongoing neural activity on conscious awareness and objective performance. PMID:29255794
Effect of ethanol on the visual-evoked potential in rat: dynamics of ON and OFF responses.
Dulinskas, Redas; Buisas, Rokas; Vengeliene, Valentina; Ruksenas, Osvaldas
2017-01-01
The effect of acute ethanol administration on the flash visual-evoked potential (VEP) was investigated in numerous studies. However, it is still unclear which brain structures are responsible for the differences observed in stimulus onset (ON) and offset (OFF) responses and how these responses are modulated by ethanol. The aim of our study was to investigate the pattern of ON and OFF responses in the visual system, measured as amplitude and latency of each VEP component following acute administration of ethanol. VEPs were recorded at the onset and offset of a 500 ms visual stimulus in anesthetized male Wistar rats. The effect of alcohol on VEP latency and amplitude was measured for one hour after injection of 2 g/kg ethanol dose. Three VEP components - N63, P89 and N143 - were analyzed. Our results showed that, except for component N143, ethanol increased the latency of both ON and OFF responses in a similar manner. The latency of N143 during OFF response was not affected by ethanol but its amplitude was reduced. Our study demonstrated that the activation of the visual system during the ON response to a 500 ms visual stimulus is qualitatively different from that during the OFF response. Ethanol interfered with processing of the stimulus duration at the level of the visual cortex and reduced the activation of cortical regions.
Role of somatosensory and vestibular cues in attenuating visually induced human postural sway
NASA Technical Reports Server (NTRS)
Peterka, Robert J.; Benolken, Martha S.
1993-01-01
The purpose was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomenon was observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal and vestibular loss subjects were nearly identical, implying that (1) normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) vestibular loss subjects did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system 'gain' was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost three times greater than the amplitude of the visual stimulus in normal and vestibular loss subjects. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about four in tests where somatosensory cues provided accurate versus inaccurate orientation information.
This implied that (1) the vestibular loss subjects did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system 'gain' were not used to compensate for a vestibular deficit, and (2) the threshold for the use of vestibular cues in normals was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.
Is nevtral NEUTRAL? Visual similarity effects in the early phases of written-word recognition.
Marcet, Ana; Perea, Manuel
2017-08-01
For simplicity, contemporary models of written-word recognition and reading have unspecified feature/letter levels: they predict that the visually similar substituted-letter nonword PEQPLE is as effective at activating the word PEOPLE as the visually dissimilar substituted-letter nonword PEYPLE. Previous empirical evidence on the effects of visual similarity across letters during written-word recognition is scarce and inconclusive. To examine whether visual similarity across letters plays a role early in word processing, we conducted two masked priming lexical decision experiments (stimulus-onset asynchrony = 50 ms). The substituted-letter primes were visually very similar to the target letters (u/v in Experiment 1 and i/j in Experiment 2; e.g., nevtral-NEUTRAL). For comparison purposes, we included an identity prime condition (neutral-NEUTRAL) and a dissimilar-letter prime condition (neztral-NEUTRAL). Results showed that the similar-letter prime condition produced faster word identification times than the dissimilar-letter prime condition. We discuss how models of written-word recognition should be amended to capture visual similarity effects across letters.
Spatiotemporal proximity effects in visual short-term memory examined by target-nontarget analysis.
Sapkota, Raju P; Pardhan, Shahina; van der Linde, Ian
2016-08-01
Visual short-term memory (VSTM) is a limited-capacity system that holds a small number of objects online simultaneously, implying that competition for limited storage resources occurs (Phillips, 1974). How the spatial and temporal proximity of stimuli affects this competition is unclear. In this 2-experiment study, we examined the effect of the spatial and temporal separation of real-world memory targets and erroneously selected nontarget items examined during location-recognition and object-recall tasks. In Experiment 1 (the location-recognition task), our test display comprised either the picture or name of 1 previously examined memory stimulus (rendered above the stimulus-display area), together with numbered square boxes at each of the memory-stimulus locations used in that trial. Participants were asked to report the number inside the square box corresponding to the location at which the cued object was originally presented. In Experiment 2 (the object-recall task), the test display comprised a single empty square box presented at 1 memory-stimulus location. Participants were asked to report the name of the object presented at that location. In both experiments, nontarget objects that were spatially and temporally proximal to the memory target were confused more often than nontarget objects that were spatially and temporally distant (i.e., a spatiotemporal proximity effect); this effect generalized across memory tasks and across the object feature (picture or name) that cued the test-display memory target. Our findings are discussed in terms of spatial and temporal confusion "fields" in VSTM, wherein objects occupy diffuse loci in a spatiotemporal coordinate system and neighboring locations are more susceptible to confusion. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Moors, Pieter; Wagemans, Johan; de-Wit, Lee
2014-01-01
Continuous flash suppression (CFS) is a powerful interocular suppression technique, which is often described as an effective means to reliably suppress stimuli from visual awareness. Suppression through CFS has been assumed to depend upon a reduction in (retinotopically specific) neural adaptation caused by the continual updating of the contents of the visual input to one eye. In this study, we started from the observation that suppressing a moving stimulus through CFS appeared to be more effective when using a mask that was actually more prone to retinotopically specific neural adaptation, but in which the properties of the mask were more similar to those of the to-be-suppressed stimulus. In two experiments, we find that using a moving Mondrian mask (i.e., one that includes motion) is more effective in suppressing a moving stimulus than a regular CFS mask. The observed pattern of results cannot be explained by a simple simulation that computes the degree of retinotopically specific neural adaptation over time, suggesting that this kind of neural adaptation does not play a large role in predicting the differences between conditions in this context. We also find some evidence consistent with the idea that the most effective CFS mask is the one that matches the properties (speed) of the suppressed stimulus. These results question the general importance of retinotopically specific neural adaptation in CFS, and potentially help to explain an implicit trend in the literature to adapt one's CFS mask to match one's to-be-suppressed stimuli. Finally, the results should help to guide the methodological development of future research where continuous suppression of moving stimuli is desired.
On the role of covarying functions in stimulus class formation and transfer of function.
Markham, Rebecca G; Markham, Michael R
2002-01-01
This experiment investigated whether directly trained covarying functions are necessary for stimulus class formation and transfer of function in humans. Initial class training was designed to establish two respondent-based stimulus classes by pairing two visual stimuli with shock and two other visual stimuli with no shock. Next, two operant discrimination functions were trained to one stimulus of each putative class. The no-shock group received the same training and testing in all phases, except no stimuli were ever paired with shock. The data indicated that skin conductance response conditioning did not occur for the shock groups or for the no-shock group. Tests showed transfer of the established discriminative functions, however, only for the shock groups, indicating the formation of two stimulus classes only for those participants who received respondent class training. The results suggest that transfer of function does not depend on first covarying the stimulus class functions. PMID:12507017
Bressler, David W; Fortenbaugh, Francesca C; Robertson, Lynn C; Silver, Michael A
2013-06-07
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. Copyright © 2013 Elsevier Ltd. All rights reserved.
Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry
2017-07-01
Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated whether, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, to facilitate the processing of visual stimuli occurring in the same side of space as the limb on which the nociceptive stimulus was applied. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to either hand on either side of space, had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally, to one single hand, or bilaterally, to both hands simultaneously. Results show that, as compared to the bilateral condition, participants' judgments were biased to the advantage of the visual stimuli that occurred in the same side of space as the hand on which a unilateral nociceptive stimulus was applied. This effect was present in a time window ranging from 200 to 600 ms, but importantly, biases increased with decreasing time interval. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.
Perceived duration decreases with increasing eccentricity.
Kliegl, Katrin M; Huckauf, Anke
2014-07-01
Previous studies examining the influence of stimulus location on temporal perception yield inhomogeneous and contradictory results. The aim of the present study was therefore to examine the effect of stimulus eccentricity systematically. In a series of five experiments, subjects compared the duration of foveal disks to disks presented at different retinal eccentricities on the horizontal meridian. The results show that the perceived duration of a visual stimulus declines with increasing eccentricity. The effect was replicated with various stimulus orders (Experiments 1-3), as well as with cortically magnified stimuli (Experiments 4-5), ruling out that the effect was merely caused by different cortical representation sizes. The apparent decrease in perceived duration with increasing eccentricity is discussed with respect to current models of time perception, the possible influence of visual attention, and the respective underlying physiological characteristics of the visual system. Copyright © 2014 Elsevier B.V. All rights reserved.
Encoding of Target Detection during Visual Search by Single Neurons in the Human Brain.
Wang, Shuo; Mamelak, Adam N; Adolphs, Ralph; Rutishauser, Ueli
2018-06-08
Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While the MTL is classically associated with long-term memory, recent lesion and neuroimaging studies show that it also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ∼200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also task-specified stimulus relevance arising from top-down goals. Copyright © 2018 Elsevier Ltd. All rights reserved.
Inverse target- and cue-priming effects of masked stimuli.
Mattler, Uwe
2007-02-01
The processing of a visual target that follows a briefly presented prime stimulus can be facilitated if prime and target stimuli are similar. In contrast to these positive priming effects, inverse priming effects (or negative compatibility effects) have been found when a mask follows prime stimuli before the target stimulus is presented: Responses are facilitated after dissimilar primes. Previous studies on inverse priming effects examined target-priming effects, which arise when the prime and the target stimuli share features that are critical for the response decision. In contrast, 3 experiments of the present study demonstrate inverse priming effects in a nonmotor cue-priming paradigm. Inverse cue-priming effects exhibited time courses comparable to inverse target-priming effects. Results suggest that inverse priming effects do not arise from specific processes of the response system but follow from operations that are more general.
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with differing efficiency depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Unconscious Familiarity-based Color-Form Binding: Evidence from Visual Extinction.
Rappaport, Sarah J; Riddoch, M Jane; Chechlacz, Magda; Humphreys, Glyn W
2016-03-01
There is good evidence that early visual processing involves the coding of different features in independent brain regions. A major question, then, is how we see the world in an integrated manner, in which the different features are "bound" together. A standard account of this has been that feature binding depends on attention to the stimulus, which enables only the relevant features to be linked together [Treisman, A., & Gelade, G. A feature-integration theory of attention. Cognitive Psychology, 12, 97-136, 1980]. Here we test this influential idea by examining whether, in patients showing visual extinction, the processing of otherwise unconscious (extinguished) stimuli is modulated by presenting objects in their correct (familiar) color. Extinction was reduced for correctly colored objects when they had a learned color that matched across the ipsi- and contralesional items (red strawberry + red tomato). In contrast, there was no reduction in extinction under the same conditions when the stimuli were colored incorrectly (blue strawberry + blue tomato; Experiment 1). The result was not due to the speeded identification of a correctly colored ipsilesional item, as there was no benefit from having correctly colored objects in different colors (red strawberry + yellow lemon; Experiment 2). There was also no benefit to extinction from presenting the correct colors in the background of each item (Experiment 3). The data suggest that learned color-form binding can reduce extinction even when color is irrelevant for the task. The result is consistent with preattentive binding of color and shape for familiar stimuli.
Electrophysiological evidence for phenomenal consciousness.
Revonsuo, Antti; Koivisto, Mika
2010-09-01
Recent evidence from event-related brain potentials (ERPs) lends support to two central theses in Lamme's theory. The earliest ERP correlate of visual consciousness appears over posterior visual cortex around 100-200 ms after stimulus onset. Its scalp topography and time window are consistent with recurrent processing in the visual cortex. This electrophysiological correlate of visual consciousness is mostly independent of later ERPs reflecting selective attention and working memory functions. Overall, the ERP evidence supports the view that phenomenal consciousness of a visual stimulus emerges earlier than access consciousness, and that attention and awareness are served by distinct neural processes.
O'Connor, Constance M; Reddon, Adam R; Odetunde, Aderinsola; Jindal, Shagun; Balshine, Sigal
2015-12-01
Predation is one of the primary drivers of fitness for prey species. Therefore, there should be strong selection for accurate assessment of predation risk, and whenever possible, individuals should use all available information to fine-tune their response to the current threat of predation. Here, we used a controlled laboratory experiment to assess the responses of individual Neolamprologus pulcher, a social cichlid fish, to a live predator stimulus, to the odour of damaged conspecifics, or to both indicators of predation risk combined. We found that fish in the presence of the visual predator stimulus showed typical antipredator behaviour. Namely, these fish decreased activity and exploration, spent more time seeking shelter, and more time near conspecifics. Surprisingly, there was no effect of the chemical cue alone, and fish showed a reduced response to the combination of the visual predator stimulus and the odour of damaged conspecifics relative to the visual predator stimulus alone. These results demonstrate that N. pulcher adjust their anti-predator behaviour to the information available about current predation risk, and we suggest a possible role for the use of social information in the assessment of predation risk in a cooperatively breeding fish. Copyright © 2015. Published by Elsevier B.V.
Hales, J. B.; Brewer, J. B.
2018-01-01
Given the diversity of stimuli encountered in daily life, a variety of strategies must be used for learning new information. Relating and encoding visual and verbal stimuli into memory has been probed using various tasks and stimulus types. Engagement of specific subsequent memory and cortical processing regions depends on the stimulus modality of studied material; however, it remains unclear whether different encoding strategies similarly influence regional activity when stimulus type is held constant. In this study, subjects encoded object pairs using a visual or verbal associative strategy during functional magnetic resonance imaging (fMRI), and subsequent memory was assessed for pairs encoded under each strategy. Each strategy elicited distinct regional processing and subsequent memory effects: middle/superior frontal, lateral parietal, and lateral occipital for visually-associated pairs and inferior frontal, medial frontal, and medial occipital for verbally-associated pairs. This regional selectivity mimics the effects of stimulus modality, suggesting that cortical involvement in associative encoding is driven by strategy, and not simply by stimulus type. The clinical relevance of these findings, probed in two patients with recent aphasic strokes, suggests that training with strategies utilizing unaffected cortical regions might improve memory ability in patients with brain damage. PMID:22390467
Memorable Audiovisual Narratives Synchronize Sensory and Supramodal Neural Responses
2016-01-01
Our brains integrate information across sensory modalities to generate perceptual experiences and form memories. However, it is difficult to determine the conditions under which multisensory stimulation will benefit or hinder the retrieval of everyday experiences. We hypothesized that the determining factor is the reliability of information processing during stimulus presentation, which can be measured through intersubject correlation of stimulus-evoked activity. We therefore presented biographical auditory narratives and visual animations to 72 human subjects visually, auditorily, or combined, while neural activity was recorded using electroencephalography. Memory for the narrated information, contained in the auditory stream, was tested 3 weeks later. While the visual stimulus alone led to no meaningful retrieval, this related stimulus improved memory when it was combined with the story, even when it was temporally incongruent with the audio. Further, individuals with better subsequent memory elicited neural responses during encoding that were more correlated with their peers. Surprisingly, portions of this predictive synchronized activity were present regardless of the sensory modality of the stimulus. These data suggest that the strength of sensory and supramodal activity is predictive of memory performance after 3 weeks, and that neural synchrony may explain the mnemonic benefit of the functionally uninformative visual context observed for these real-world stimuli. PMID:27844062
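The intersubject correlation measure used in this study can be summarized simply: for a given channel, correlate every subject's stimulus-evoked time course with every other subject's and average the pairwise coefficients. A minimal sketch of that idea (not the authors' pipeline; the array shapes and the toy shared-signal data are assumptions):

```python
import numpy as np

def intersubject_correlation(data):
    """Mean pairwise Pearson correlation across subjects.

    data: (n_subjects, n_timepoints) array holding one channel's
    stimulus-evoked time course per subject.
    """
    n = data.shape[0]
    r = np.corrcoef(data)          # subject-by-subject correlation matrix
    iu = np.triu_indices(n, k=1)   # each subject pair counted once
    return r[iu].mean()

# Toy check: subjects sharing a stimulus-driven component plus
# independent noise yield a clearly positive ISC.
rng = np.random.default_rng(0)
shared = np.sin(np.linspace(0, 8 * np.pi, 500))
subjects = shared + 0.5 * rng.standard_normal((10, 500))
isc = intersubject_correlation(subjects)
```

In this construction the shared sinusoid is the "reliable" stimulus-driven component, so the ISC rises as the noise amplitude shrinks, mirroring the paper's use of ISC as an index of reliable encoding.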
Cecere, Roberto; Gross, Joachim; Willis, Ashleigh; Thut, Gregor
2017-05-24
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory-visual (AV) or visual-auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV-VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV-VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AV maps = VA maps versus AV maps ≠ VA maps. The tRSA results favored the AV maps ≠ VA maps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems.
SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). Copyright © 2017 Cecere et al.
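A simplified reading of the topographical comparison at the heart of the tRSA approach is a per-time-point spatial correlation between the two conditions' channel maps: values near 1 support an "AV maps = VA maps" model, values near 0 the alternative. The sketch below illustrates only that simplification, not the authors' full cross-correlation-matrix analysis; the channel count and toy data are assumptions:

```python
import numpy as np

def topographic_similarity(av_erp, va_erp):
    """Per-time-point spatial (across-channel) Pearson correlation
    between two ERP topographies.

    av_erp, va_erp: (n_channels, n_timepoints) arrays.
    Returns an array of length n_timepoints.
    """
    av = av_erp - av_erp.mean(axis=0)   # remove per-time-point mean
    va = va_erp - va_erp.mean(axis=0)
    num = (av * va).sum(axis=0)
    den = np.sqrt((av**2).sum(axis=0) * (va**2).sum(axis=0))
    return num / den

rng = np.random.default_rng(1)
n_ch, n_t = 64, 100
av = rng.standard_normal((n_ch, n_t))
va_same = av + 0.1 * rng.standard_normal((n_ch, n_t))  # nearly identical maps
va_diff = rng.standard_normal((n_ch, n_t))             # unrelated maps
sim_same = topographic_similarity(av, va_same)
sim_diff = topographic_similarity(av, va_diff)
```

Sustained low similarity across the post-stimulus window, as in the study's result, would favor distinct AV and VA generators rather than a single shared binding mechanism.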
Leon-Carrion, Jose; Martín-Rodríguez, Juan Francisco; Damas-López, Jesús; Pourrezai, Kambiz; Izzetoglu, Kurtulus; Barroso Y Martin, Juan Manuel; Dominguez-Morales, M Rosario
2007-04-06
A fundamental question in human sexuality regards the neural substrate underlying sexually-arousing representations. Lesion and neuroimaging studies suggest that dorsolateral pre-frontal cortex (DLPFC) plays an important role in regulating the processing of visual sexual stimulation. The aim of this functional near-infrared spectroscopy (fNIRS) study was to explore DLPFC structures involved in the processing of erotic and non-sexual films. fNIRS was used to image the evoked-cerebral blood oxygenation (CBO) response in 15 male and 15 female subjects. Our hypothesis is that a sexual stimulus would produce DLPFC activation during the period of direct stimulus perception ("on" period), and that this activation would continue after stimulus cessation ("off" period). A new paradigm was used to measure the relative oxygenated hemoglobin (oxyHb) concentrations in DLPFC while subjects viewed the two selected stimuli (Roman orgy and a non-sexual film clip), and also immediately following stimulus cessation. Viewing of the non-sexual stimulus produced no overshoot in DLPFC, whereas exposure to the erotic stimulus produced rapidly ascendant overshoot, which became even more pronounced following stimulus cessation. We also report on gender differences in the timing and intensity of DLPFC activation in response to a sexually explicit visual stimulus. We found evidence indicating that men experience greater and more rapid sexual arousal when exposed to erotic stimuli than do women. Our results indicate that self-regulation of DLPFC activation is modulated by subjective arousal and that cognitive appraisal of the sexual stimulus (valence) plays a secondary role in this regulation.
Fischmeister, Florian Ph.S.; Leodolter, Ulrich; Windischberger, Christian; Kasess, Christian H.; Schöpf, Veronika; Moser, Ewald; Bauer, Herbert
2010-01-01
Throughout recent years there has been an increasing interest in studying unconscious visual processes. Such conditions of unawareness are typically achieved by either a sufficient reduction of the stimulus presentation time or visual masking. However, there are growing concerns about the reliability of the presentation devices used. As all these devices show great variability in presentation parameters, the processing of visual stimuli becomes dependent on the display device; e.g., minimal changes in the physical stimulus properties may have an enormous impact on stimulus processing by the sensory system and on the actual experience of the stimulus. Here we present a custom-built three-way LC-shutter-tachistoscope which allows experimental setups with both precise and reliable stimulus delivery and millisecond resolution. This tachistoscope consists of three LCD-projectors equipped with zoom lenses to enable stimulus presentation via a built-in mirror-system onto a back projection screen from an adjacent room. Two high-speed liquid crystal shutters are mounted serially in front of each projector to control the stimulus duration. To verify the intended properties empirically, different sequences of presentation times were run while changes in optical power were measured using a photoreceiver. The obtained results demonstrate that interfering variabilities in stimulus parameters and stimulus rendering are markedly reduced. Together with the possibility to collect external signals and to send trigger-signals to other devices, this tachistoscope represents a highly flexible and easy-to-set-up research tool not only for the study of unconscious processing in the brain but for vision research in general. PMID:20122963
Overgaard, Morten; Lindeløv, Jonas; Svejstrup, Stinna; Døssing, Marianne; Hvid, Tanja; Kauffmann, Oliver; Mouridsen, Kim
2013-01-01
This paper reports an experiment intended to test a particular hypothesis derived from blindsight research, which we name the “source misidentification hypothesis.” According to this hypothesis, a subject may be correct about a stimulus without being correct about how she had access to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness Scale (PAS). We suggest that knowledge about perceptual modality may be a necessary precondition in order to issue correct reports of which stimulus was presented. Furthermore, we find that PAS ratings correlate with correctness, and that subjects are at chance level when reporting no conscious experience of the stimulus. To demonstrate that particular levels of reporting accuracy are obtained, we employ a statistical strategy, which operationally tests the hypothesis of non-equality, such that the usual rejection of the null-hypothesis admits the conclusion of equivalence. PMID:23508677
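The statistical strategy of rejecting non-equality so that equivalence can be concluded is commonly implemented as two one-sided tests (TOST). The sketch below is a generic large-sample version using a normal approximation, not necessarily the exact procedure used in this paper; the chance level, equivalence margin, and sample values are illustrative assumptions:

```python
import math
import random

def norm_sf(z):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def tost_one_sample(xs, target, margin):
    """Two one-sided tests (TOST) for equivalence: reject both
    'mean <= target - margin' and 'mean >= target + margin'.
    Returns the larger one-sided p-value (normal approximation,
    adequate for large samples); p < alpha lets us conclude the
    mean is equivalent to `target` within +/- `margin`.
    """
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    se = math.sqrt(var / n)
    z_low = (mean - (target - margin)) / se
    z_high = ((target + margin) - mean) / se
    return max(norm_sf(z_low), norm_sf(z_high))

# Accuracy hovering tightly around chance (0.5): equivalence to
# chance within +/- 0.05 should be concluded (small p).
random.seed(2)
acc = [0.5 + 0.01 * random.gauss(0, 1) for _ in range(60)]
p = tost_one_sample(acc, target=0.5, margin=0.05)
```

The logic matches the abstract's description: instead of failing to reject "accuracy differs from chance," one actively rejects "accuracy differs from chance by at least the margin," which licenses a positive claim of chance-level performance.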
Nakajima, S
2000-03-14
Pigeons were trained with the A+, AB-, ABC+, AD- and ADE+ task where each of stimulus A and stimulus compounds ABC and ADE signalled food (positive trials), and each of stimulus compounds AB and AD signalled no food (negative trials). Stimuli A, B, C and E were small visual figures localised on a response key, and stimulus D was a white noise. Stimulus B was more effective than D as an inhibitor of responding to A during the training. After the birds learned to respond exclusively on the positive trials, effects of B and D on responding to C and E, respectively, were tested by comparing C, BC, E and DE trials. Stimulus B continuously facilitated responding to C on the BC test trials, but D's facilitative effect was observed only on the first DE test trial. Stimulus B also facilitated responding to E on BE test trials. Implications for the Rescorla-Wagner elemental model and the Pearce configural model of Pavlovian conditioning were discussed.
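The Rescorla-Wagner elemental model against which these results are discussed predicts each trial's outcome as the sum of the associative strengths of the cues present, and adjusts every present cue by a share of the prediction error. A minimal sketch of the A+, AB-, ABC+, AD- and ADE+ schedule (equal salience for all cues is a simplifying assumption; the paper's finding that B was a more effective inhibitor than D would correspond to unequal alphas):

```python
def rescorla_wagner(schedule, alpha=0.3, n_epochs=1000):
    """Elemental Rescorla-Wagner model: the prediction on each trial
    is the summed associative strength of the cues present; every
    present cue is nudged by alpha * (lambda - prediction).

    schedule: list of (cues, lambda) pairs, e.g. ('A', 1.0) for A+
    and ('AB', 0.0) for AB-.
    """
    V = {}
    for _ in range(n_epochs):
        for cues, lam in schedule:
            error = lam - sum(V.get(c, 0.0) for c in cues)
            for c in cues:
                V[c] = V.get(c, 0.0) + alpha * error
    return V

# The pigeons' training schedule: A+, AB-, ABC+, AD-, ADE+.
V = rescorla_wagner([('A', 1.0), ('AB', 0.0), ('ABC', 1.0),
                     ('AD', 0.0), ('ADE', 1.0)])
# At the model's solution, B and D become inhibitors (strength -1)
# that exactly cancel A on the non-reinforced compounds.
```

Under these assumptions the elemental model assigns B and D identical inhibitory strength, so the observed asymmetry between the visual B and auditory D, and B's transfer to E, is exactly the kind of result that motivates the comparison with Pearce's configural model.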
Shades of yellow: interactive effects of visual and odour cues in a pest beetle
Stevenson, Philip C.; Belmain, Steven R.
2016-01-01
Background: The visual ecology of pest insects is poorly studied compared to the role of odour cues in determining their behaviour. Furthermore, the combined effects of both odour and vision on insect orientation are frequently ignored, but could impact behavioural responses. Methods: A locomotion compensator was used to evaluate use of different visual stimuli by a major coleopteran pest of stored grains (Sitophilus zeamais), with and without the presence of host odours (known to be attractive to this species), in an open-loop setup. Results: Some visual stimuli—in particular, one shade of yellow, solid black and high-contrast black-against-white stimuli—elicited positive orientation behaviour from the beetles in the absence of odour stimuli. When host odours were also present, at 90° to the source of the visual stimulus, the beetles presented with yellow and vertical black-on-white grating patterns changed their walking course and typically adopted a path intermediate between the two stimuli. The beetles presented with a solid black-on-white target continued to orient more strongly towards the visual than the odour stimulus. Discussion: Visual stimuli can strongly influence orientation behaviour, even in species where use of visual cues is sometimes assumed to be unimportant, while the outcomes from exposure to multimodal stimuli are unpredictable and need to be determined under differing conditions. The importance of the two modalities of stimulus (visual and olfactory) in food location is likely to depend upon relative stimulus intensity and motivational state of the insect. PMID:27478707
Visual Categorization of Natural Movies by Rats
Vinken, Kasper; Vermaercke, Ben
2014-01-01
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. PMID:25100598
The Spotlight of Attention Illuminates Failed Feature-based Expectancies
Bengson, Jesse J.; Lopez-Calderon, Javier; Mangun, George R.
2012-01-01
A well-replicated finding is that visual stimuli presented at an attended location are afforded a processing benefit in the form of speeded reaction times and increased accuracy (Posner, 1979; Mangun, 1995). This effect has been described using a spotlight metaphor, in which all stimuli within the focus of spatial attention receive facilitated processing, irrespective of other stimulus parameters. However, the spotlight metaphor has been brought into question by a series of combined expectancy studies that demonstrated that the behavioral benefits of spatial attention are contingent upon secondary feature-based expectancies (Kingstone, 1992). The present work used an event-related potential (ERP) approach to reveal that the early neural signature of the spotlight of spatial attention is not sensitive to the validity of secondary feature-based expectancies. PMID:22775503
Top-Down Beta Enhances Bottom-Up Gamma
Thompson, William H.
2017-01-01
Several recent studies have demonstrated that the bottom-up signaling of a visual stimulus is subserved by interareal gamma-band synchronization, whereas top-down influences are mediated by alpha-beta band synchronization. These processes may implement top-down control of stimulus processing if top-down and bottom-up mediating rhythms are coupled via cross-frequency interaction. To test this possibility, we investigated Granger-causal influences among awake macaque primary visual area V1, higher visual area V4, and parietal control area 7a during attentional task performance. Top-down 7a-to-V1 beta-band influences enhanced visually driven V1-to-V4 gamma-band influences. This enhancement was spatially specific and largest when beta-band activity preceded gamma-band activity by ∼0.1 s, suggesting a causal effect of top-down processes on bottom-up processes. We propose that this cross-frequency interaction mechanistically subserves the attentional control of stimulus selection. SIGNIFICANCE STATEMENT Contemporary research indicates that the alpha-beta frequency band underlies top-down control, whereas the gamma-band mediates bottom-up stimulus processing. This arrangement inspires an attractive hypothesis, which posits that top-down beta-band influences directly modulate bottom-up gamma-band influences via cross-frequency interaction. We evaluate this hypothesis by determining that beta-band top-down influences from parietal area 7a to visual area V1 are correlated with bottom-up gamma-frequency influences from V1 to area V4, in a spatially specific manner, and that this correlation is maximal when top-down activity precedes bottom-up activity. These results show that for top-down processes such as spatial attention, elevated top-down beta-band influences directly enhance feedforward stimulus-induced gamma-band processing, leading to enhancement of the selected stimulus. PMID:28592697
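The claim that top-down beta-band influences precede bottom-up gamma-band influences by ∼0.1 s amounts to finding the lag that maximizes the correlation between the two influence time series. A generic lagged-correlation sketch of that step (the sampling rate, lag window, and toy signals are assumptions; the study itself used Granger-causal influence estimates, not raw band power):

```python
import numpy as np

def peak_lag(x, y, fs, max_lag_s=0.5):
    """Lag (in seconds) at which x(t - lag) best correlates with
    y(t); a positive value means x leads y.

    x, y: 1-D time series sampled at fs Hz.
    """
    max_lag = int(max_lag_s * fs)
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:lag]
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag / fs

# Toy series in which the 'beta' influence leads the 'gamma'
# influence by 0.1 s at fs = 100 Hz.
fs = 100
rng = np.random.default_rng(3)
beta = rng.standard_normal(2000)
gamma = np.roll(beta, int(0.1 * fs)) + 0.3 * rng.standard_normal(2000)
lag = peak_lag(beta, gamma, fs)
```

A clearly positive peak lag, as sketched here, is the signature consistent with a causal top-down-to-bottom-up direction of influence.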
ERIC Educational Resources Information Center
Mullen, Stuart; Dixon, Mark R.; Belisle, Jordan; Stanley, Caleb
2017-01-01
The current study sought to evaluate the efficacy of a stimulus equivalence training procedure in establishing auditory-tactile-visual stimulus classes with 2 children with autism and developmental delays. Participants were exposed to vocal-tactile (A-B) and tactile-picture (B-C) conditional discrimination training and were tested for the…
Components of Attention Modulated by Temporal Expectation
ERIC Educational Resources Information Center
Sørensen, Thomas Alrik; Vangkilde, Signe; Bundesen, Claus
2015-01-01
By varying the probabilities that a stimulus would appear at particular times after the presentation of a cue and modeling the data by the theory of visual attention (Bundesen, 1990), Vangkilde, Coull, and Bundesen (2012) provided evidence that the speed of encoding a singly presented stimulus letter into visual short-term memory (VSTM) is…
Discriminating External and Internal Causes for Heading Changes in Freely Flying Drosophila
Sayaman, Rosalyn W.; Murray, Richard M.; Dickinson, Michael H.
2013-01-01
As animals move through the world in search of resources, they change course in reaction to both external sensory cues and internally-generated programs. Elucidating the functional logic of complex search algorithms is challenging because the observable actions of the animal cannot be unambiguously assigned to externally- or internally-triggered events. We present a technique that addresses this challenge by assessing quantitatively the contribution of external stimuli and internal processes. We apply this technique to the analysis of rapid turns (“saccades”) of freely flying Drosophila melanogaster. We show that a single scalar feature computed from the visual stimulus experienced by the animal is sufficient to explain a majority (93%) of the turning decisions. We automatically estimate this scalar value from the observable trajectory, without any assumption regarding the sensory processing. A posteriori, we show that the estimated feature field is consistent with previous results measured in other experimental conditions. The remaining turning decisions, not explained by this feature of the visual input, may be attributed to a combination of deterministic processes based on unobservable internal states and purely stochastic behavior. We cannot distinguish these contributions using external observations alone, but we are able to provide a quantitative bound of their relative importance with respect to stimulus-triggered decisions. Our results suggest that comparatively few saccades in free-flying conditions are a result of an intrinsic spontaneous process, contrary to previous suggestions. We discuss how this technique could be generalized for use in other systems and employed as a tool for classifying effects into sensory, decision, and motor categories when used to analyze data from genetic behavioral screens. PMID:23468601
Blur adaptation: contrast sensitivity changes and stimulus extent.
Venkataraman, Abinaya Priya; Winter, Simon; Unsbo, Peter; Lundström, Linda
2015-05-01
A prolonged exposure to foveal defocus is well known to affect the visual functions in the fovea. However, the effects of peripheral blur adaptation on foveal vision, or vice versa, are still unclear. In this study, we therefore examined the changes in contrast sensitivity function from baseline, following blur adaptation to small as well as laterally extended stimuli in four subjects. The small-field stimulus (7.5° visual field) was a 30-min video of forest scenery projected on a screen, and the large-field stimulus consisted of seven tiles of the 7.5° stimulus stacked horizontally. Both stimuli were used for adaptation with optical blur (+2.00D trial lens) as well as for clear control conditions. After small-field blur adaptation, foveal contrast sensitivity improved in the mid spatial frequency region. However, these changes neither spread to the periphery nor occurred for the large-field blur adaptation. To conclude, visual performance after adaptation is dependent on the lateral extent of the adaptation stimulus. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Pooresmaeili, Arezoo; Arrighi, Roberto; Biagi, Laura; Morrone, Maria Concetta
2016-01-01
In natural scenes, objects rarely occur in isolation but appear within a spatiotemporal context. Here, we show that the perceived size of a stimulus is significantly affected by the context of the scene: brief previous presentation of larger or smaller adapting stimuli at the same region of space changes the perceived size of a test stimulus, with larger adapting stimuli causing the test to appear smaller than veridical and vice versa. In a human fMRI study, we measured the blood oxygen level-dependent activation (BOLD) responses of the primary visual cortex (V1) to the contours of large-diameter stimuli and found that activation closely matched the perceptual rather than the retinal stimulus size: the activated area of V1 increased or decreased, depending on the size of the preceding stimulus. A model based on local inhibitory V1 mechanisms simulated the inward or outward shifts of the stimulus contours and hence the perceptual effects. Our findings suggest that area V1 is actively involved in reshaping our perception to match the short-term statistics of the visual scene. PMID:24089504
Response properties of ON-OFF retinal ganglion cells to high-order stimulus statistics.
Xiao, Lei; Gong, Han-Yan; Gong, Hai-Qing; Liang, Pei-Ji; Zhang, Pu-Ming
2014-10-17
Visual stimulus statistics are fundamental parameters that provide a reference for studying visual coding rules. In this study, multi-electrode extracellular recording experiments were designed and implemented on bullfrog retinal ganglion cells to explore the neural response properties to changes in stimulus statistics. Changes in low-order stimulus statistics, such as intensity and contrast, were clearly reflected in the neuronal firing rate. However, it was difficult to distinguish changes in high-order statistics, such as skewness and kurtosis, based only on the neuronal firing rate. The neuronal temporal filtering and sensitivity characteristics were further analyzed. We observed that the peak-to-peak amplitude of the temporal filter and the neuronal sensitivity, obtained from either neuronal ON spikes or OFF spikes, could exhibit significant changes when the high-order stimulus statistics were changed. These results indicate that in the retina, the neuronal response properties may be reliable and powerful in carrying some complex and subtle visual information. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
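The low-order (mean, contrast) and high-order (skewness, kurtosis) statistics manipulated in such stimulus ensembles can be computed directly from the intensity samples. The following is a minimal Python sketch, not taken from the study; the population-style normalization is an illustrative assumption.

```python
import math

def sample_moments(samples):
    """Mean, variance, skewness, and excess kurtosis of a stimulus
    intensity ensemble -- the low- and high-order statistics varied
    in experiments of the kind described above."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    sd = math.sqrt(var)
    skew = sum(((s - mean) / sd) ** 3 for s in samples) / n
    kurt = sum(((s - mean) / sd) ** 4 for s in samples) / n - 3.0
    return mean, var, skew, kurt
```

Two ensembles can share mean and variance (and hence drive similar firing rates) while differing in skewness or kurtosis, which is what makes the higher moments a separate axis of stimulus manipulation.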
A versatile stereoscopic visual display system for vestibular and oculomotor research.
Kramer, P D; Roberts, D C; Shelhamer, M; Zee, D S
1998-01-01
Testing of the vestibular system requires a vestibular stimulus (motion) and/or a visual stimulus. We have developed a versatile, low cost, stereoscopic visual display system, using "virtual reality" (VR) technology. The display system can produce images for each eye that correspond to targets at any virtual distance relative to the subject, and so require the appropriate ocular vergence. We elicited smooth pursuit, "stare" optokinetic nystagmus (OKN) and after-nystagmus (OKAN), vergence for targets at various distances, and short-term adaptation of the vestibulo-ocular reflex (VOR), using both conventional methods and the stereoscopic display. Pursuit, OKN, and OKAN were comparable with both methods. When used with a vestibular stimulus, VR induced appropriate adaptive changes of the phase and gain of the angular VOR. In addition, using the VR display system and a human linear acceleration sled, we adapted the phase of the linear VOR. The VR-based stimulus system not only offers an alternative to more cumbersome means of stimulating the visual system in vestibular experiments, it also can produce visual stimuli that would otherwise be impractical or impossible. Our techniques provide images without the latencies encountered in most VR systems. Its inherent versatility allows it to be useful in several different types of experiments, and because it is software driven it can be quickly adapted to provide a new stimulus. These two factors allow VR to provide considerable savings in time and money, as well as flexibility in developing experimental paradigms.
Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex
Singer, Wolf; Maass, Wolfgang
2009-01-01
It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (≤∼20 ms), and persisted for several hundred milliseconds beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs. PMID:20027205
Wessel, Jan R.; Aron, Adam R.
2014-01-01
Much research has modeled action-stopping using the stop-signal task (SST), in which an impending response has to be stopped when an explicit stop-signal occurs. A limitation of the SST is that real-world action-stopping rarely involves explicit stop-signals. Instead, the stopping-system engages when environmental features match more complex stopping goals. For example, when stepping into the street, one monitors path, velocity, size, and types of objects; and only stops if there is a vehicle approaching. Here, we developed a task in which participants compared the visual features of a multidimensional go-stimulus to a complex stopping-template, and stopped their go-response if all features matched the template. We used independent component analysis of EEG data to show that the same motor inhibition brain network that explains action-stopping in the SST also implements motor inhibition in the complex-stopping task. Furthermore, we found that partial feature overlap between go-stimulus and stopping-template led to motor slowing, which also corresponded with greater stopping-network activity. This shows that the same brain system for action-stopping to explicit stop-signals is recruited to slow or stop behavior when stimuli match a complex stopping goal. The results imply a generalizability of the brain’s network for simple action-stopping to more ecologically valid scenarios. PMID:25270603
Diagnostic Features of Emotional Expressions Are Processed Preferentially
Scheller, Elisa; Büchel, Christian; Gamer, Matthias
2012-01-01
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this aim, fearful, happy and neutral faces were presented to healthy individuals in two experiments while measuring eye movements. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were either presented for 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more pronouncedly when fearful or neutral faces were shown whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. 
This mechanism might crucially depend on amygdala functioning and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders. PMID:22848607
Sketchy Rendering for Information Visualization.
Wood, J; Isenberg, P; Isenberg, T; Dykes, J; Boukhelifa, N; Slingsby, A
2012-12-01
We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These primitives allow higher-level graphical features such as bar charts, line charts, treemaps and node-link diagrams to be drawn in a sketchy style with a specified degree of sketchiness. The framework is designed to be easily integrated into existing visualization implementations with minimal programming modification or design effort. We show examples of use for statistical graphics, conveying spatial imprecision and for enhancing aesthetic and narrative qualities of visualization. We evaluate user perception of sketchiness of areal features through a series of stimulus-response tests in order to assess users' ability to place sketchiness on a ratio scale, and to estimate area. Results suggest relative area judgment is compromised by sketchy rendering and that its influence is dependent on the shape being rendered. They show that degree of sketchiness may be judged on an ordinal scale but that its judgement varies strongly between individuals. We evaluate higher-level impacts of sketchiness through user testing of scenarios that encourage user engagement with data visualization and willingness to critique visualization design. Results suggest that where a visualization is clearly sketchy, engagement may be increased and that attitudes to participating in visualization annotation are more positive. The results of our work have implications for effective information visualization design that go beyond the traditional role of sketching as a tool for prototyping or its use for an indication of general uncertainty.
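The idea of redefining a low-level drawing primitive can be illustrated outside the Processing environment as well. The Python function below is an illustrative analogue of a sketchy line primitive, not the framework's actual renderer; the perpendicular-jitter scheme, parameter names, and defaults are all assumptions.

```python
import random

def sketchy_line(x1, y1, x2, y2, sketchiness=1.0, segments=8, seed=0):
    """Approximate a hand-drawn line: sample points along the segment
    and displace interior points perpendicular to the line, with the
    displacement scaled by a 'sketchiness' parameter. Endpoints stay
    fixed so connected shapes still close."""
    rng = random.Random(seed)
    dx, dy = y1 - y2, x2 - x1              # perpendicular direction
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    pts = []
    for i in range(segments + 1):
        t = i / segments
        jitter = sketchiness * rng.uniform(-1.0, 1.0) if 0 < i < segments else 0.0
        pts.append((x1 + t * (x2 - x1) + jitter * dx / norm,
                    y1 + t * (y2 - y1) + jitter * dy / norm))
    return pts
```

Higher-level marks (bars, polygons, ellipses) can then be assembled from this primitive, which is why perceptual effects of sketchiness on area judgment propagate to whole charts.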
The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.
Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng
2017-01-01
Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and they have potential applications for brain-computer interface (BCI) systems. However, the mVEP waveform is heavily masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features to perform task recognition for BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information to improve mVEP BCI performance. The deep learning and compressed sensing approach generates multi-modality features that effectively improve BCI performance, with an accuracy improvement of approximately 3.5% across all 11 subjects, and it is more effective for those subjects with relatively poor performance when using the conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach has a higher classification accuracy and is more effective for subjects with relatively poor performance. According to the results, the deep learning and compressed sensing approach is more effective for extracting mVEP features to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP), and P300. Copyright © 2016 Elsevier B.V. All rights reserved.
Kaarthigeyan, J; Dharmaretnam, Meena
2005-04-15
Cerebral lateralisation, once thought to be confined to humans, has now been reported for a range of vertebrate species. We report here biases in visual perceptual processing in a teleost fish. Female guppy fish preferentially used the right eye to view a familiar stimulus. This bias reversed on presentation of a strange female guppy, the left eye being used more to view it. This pattern of viewing is probably associated with the right eye system, which is used to view a stimulus with an intention to approach it. The increase in left eye use to view a stranger may be associated with the role of the left eye in comparing the features of a strange conspecific. In the second experiment, lateralisation of viewing visual stimuli that could evoke different levels of motivation to biologically relevant stimuli was tested. It is known that female guppies prefer to approach orange-coloured males. Lateralisation of the detour response, as well as eye use after the detour to view a dull or an orange male stimulus, was recorded in deprived female fish. There was a bias to detour to the left side, which was more significant for the orange than the dull male. Once the female guppies had detoured around the cage, they preferentially used the left eye to view the male conspecific, this being significant for the deeply orange male. Thus, colouration of males evoking different levels of motivation can be used to measure lateralisation in guppies.
Tao, X.; Zhang, B.; Smith, E. L.; Nishimoto, S.; Ohzawa, I.
2012-01-01
We used dynamic dense noise stimuli and local spectral reverse correlation methods to reveal the local sensitivities of neurons in visual area 2 (V2) of macaque monkeys to orientation and spatial frequency within their receptive fields. This minimized the potentially confounding assumptions that are inherent in stimulus selections. The majority of neurons exhibited a relatively high degree of homogeneity for the preferred orientations and spatial frequencies in the spatial matrix of facilitatory subfields. However, about 20% of all neurons showed maximum orientation differences between neighboring subfields that were greater than 25 deg. The neurons preferring horizontal or vertical orientations showed less inhomogeneity in space than the neurons preferring oblique orientations. Over 50% of all units also exhibited suppressive profiles, and those were more heterogeneous than facilitatory profiles. The preferred orientation and spatial frequency of suppressive profiles differed substantially from those of facilitatory profiles, and the neurons with suppressive subfields had greater orientation selectivity than those without suppressive subfields. The peak suppression occurred with longer delays than the peak facilitation. These results suggest that the receptive field profiles of the majority of V2 neurons reflect the orderly convergence of V1 inputs over space, but that a subset of V2 neurons exhibit more complex response profiles having both suppressive and facilitatory subfields. These V2 neurons with heterogeneous subfield profiles could play an important role in the initial processing of complex stimulus features. PMID:22114163
Effects of set-size and lateral masking in visual search.
Põder, Endel
2004-01-01
In the present research, the roles of lateral masking and central processing limitations in visual search were studied. Two search conditions were used: (1) target differed from distractors by presence/absence of a simple feature; (2) target differed by relative position of the same components only. The number of displayed stimuli (set-size) and the distance between neighbouring stimuli were varied as independently as possible in order to measure the effect of both. The effect of distance between stimuli (lateral masking) was found to be similar in both conditions. The effect of set-size was much larger for relative position stimuli. The results support the view that perception of relative position of stimulus components is limited mainly by the capacity of central processing.
Adaptive Acceleration of Visually Evoked Smooth Eye Movements in Mice
2016-01-01
The optokinetic response (OKR) consists of smooth eye movements following global motion of the visual surround, which suppress image slip on the retina for visual acuity. The effective performance of the OKR is limited to rather slow and low-frequency visual stimuli, although it can be adaptively improved by cerebellum-dependent mechanisms. To better understand circuit mechanisms constraining OKR performance, we monitored how distinct kinematic features of the OKR change over the course of OKR adaptation, and found that eye acceleration at stimulus onset primarily limited OKR performance but could be dramatically potentiated by visual experience. Eye acceleration in the temporal-to-nasal direction depended more on the ipsilateral floccular complex of the cerebellum than did that in the nasal-to-temporal direction. Gaze-holding following the OKR was also modified in parallel with eye-acceleration potentiation. Optogenetic manipulation revealed that synchronous excitation and inhibition of floccular complex Purkinje cells could effectively accelerate eye movements in the nasotemporal and temporonasal directions, respectively. These results collectively delineate multiple motor pathways subserving distinct aspects of the OKR in mice and constrain hypotheses regarding cellular mechanisms of the cerebellum-dependent tuning of movement acceleration. SIGNIFICANCE STATEMENT Although visually evoked smooth eye movements, known as the optokinetic response (OKR), have been studied in various species for decades, circuit mechanisms of oculomotor control and adaptation remain elusive. In the present study, we assessed kinematics of the mouse OKR through the course of adaptation training. Our analyses revealed that eye acceleration at visual-stimulus onset primarily limited working velocity and frequency range of the OKR, yet could be dramatically potentiated during OKR adaptation.
Potentiation of eye acceleration exhibited different properties between the nasotemporal and temporonasal OKRs, indicating distinct visuomotor circuits underlying the two. Lesions and optogenetic manipulation of the cerebellum provide constraints on neural circuits mediating visually driven eye acceleration and its adaptation. PMID:27335412
Short-term memory for event duration: modality specificity and goal dependency.
Takahashi, Kohske; Watanabe, Katsumi
2012-11-01
Time perception is involved in various cognitive functions. This study investigated the characteristics of short-term memory for event duration by examining how the length of the retention period affects inter- and intramodal duration judgment. On each trial, a sample stimulus was followed by a comparison stimulus, after a variable delay period (0.5-5 s). The sample and comparison stimuli were presented in the visual or auditory modality. The participants determined whether the comparison stimulus was longer or shorter than the sample stimulus. The distortion pattern of subjective duration during the delay period depended on the sensory modality of the comparison stimulus but was not affected by that of the sample stimulus. When the comparison stimulus was visually presented, the retained duration of the sample stimulus was shortened as the delay period increased. In contrast, when the comparison stimulus was presented in the auditory modality, the delay period had little to no effect on the retained duration. Furthermore, when the participants did not know the sensory modality of the comparison stimulus beforehand, the effect of the delay period disappeared. These results suggest that the memory process for event duration is specific to sensory modality and that its performance depends on the sensory modality in which the retained duration will subsequently be used.
An investigation of the spatial selectivity of the duration after-effect.
Maarseveen, Jim; Hogendoorn, Hinze; Verstraten, Frans A J; Paffen, Chris L E
2017-01-01
Adaptation to the duration of a visual stimulus causes the perceived duration of a subsequently presented stimulus with a slightly different duration to be skewed away from the adapted duration. This pattern of repulsion following adaptation is similar to that observed for other visual properties, such as orientation, and is considered evidence for the involvement of duration-selective mechanisms in duration encoding. Here, we investigated whether the encoding of duration - by duration-selective mechanisms - occurs early on in the visual processing hierarchy. To this end, we investigated the spatial specificity of the duration after-effect in two experiments. We measured the duration after-effect at adapter-test distances ranging between 0 and 15° of visual angle and for within- and between-hemifield presentations. We replicated the duration after-effect: the test stimulus was perceived to have a longer duration following adaptation to a shorter duration, and a shorter duration following adaptation to a longer duration. Importantly, this duration after-effect occurred at all measured distances, with no evidence for a decrease in the magnitude of the after-effect at larger distances or across hemifields. This shows that adaptation to duration does not result from adaptation occurring early on in the visual processing hierarchy. Instead, it seems likely that duration information is a high-level stimulus property that is encoded later on in the visual processing hierarchy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
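The model-based reading of MID can be made concrete by writing out the LNP log-likelihood that maximum-likelihood filter estimation would ascend. The sketch below fixes the nonlinearity to an exponential purely for illustration; in the setting the abstract describes, the nonlinearity is estimated non-parametrically, and the time-bin width `dt` is an assumed parameter.

```python
import math

def lnp_log_likelihood(stimuli, spike_counts, w, dt=1.0):
    """Log-likelihood of binned spike counts under a linear-nonlinear-
    Poisson (LNP) model with an (assumed) exponential nonlinearity:
    rate = exp(w . x), counts ~ Poisson(rate * dt)."""
    ll = 0.0
    for x, n in zip(stimuli, spike_counts):
        mu = math.exp(sum(wi * xi for wi, xi in zip(w, x))) * dt
        # Poisson log-pmf: n*log(mu) - mu - log(n!)
        ll += n * math.log(mu) - mu - math.lgamma(n + 1)
    return ll
```

The equivalence claimed above means that ranking candidate filters `w` by this quantity and ranking them by empirical single-spike information coincide when spiking really is Poisson; when it is not (e.g., Bernoulli counts in fine time bins), the two can disagree.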
Lee, Hannah; Kim, Jejoong
2017-06-01
It has been reported that visual perception can be influenced not only by the physical features of a stimulus but also by the emotional valence of the stimulus, even without explicit emotion recognition. Some previous studies reported an anger superiority effect while others found a happiness superiority effect during visual perception. It thus remains unclear as to which emotion is more influential. In the present study, we conducted two experiments using biological motion (BM) stimuli to examine whether emotional valence of the stimuli would affect BM perception; and if so, whether a specific type of emotion is associated with a superiority effect. Point-light walkers with three emotion types (anger, happiness, and neutral) were used, and the threshold to detect BM within noise was measured in Experiment 1. Participants showed higher performance in detecting happy walkers compared with the angry and neutral walkers. Follow-up motion velocity analysis revealed that physical difference among the stimuli was not the main factor causing the effect. The results of the emotion recognition task in Experiment 2 also showed a happiness superiority effect, as in Experiment 1. These results show that emotional valence (happiness) of the stimuli can facilitate the processing of BM.
Simon Effect with and without Awareness of the Accessory Stimulus
ERIC Educational Resources Information Center
Treccani, Barbara; Umilta, Carlo; Tagliabue, Mariaelena
2006-01-01
The authors investigated whether a Simon effect could be observed in an accessory-stimulus Simon task when participants were unaware of the task-irrelevant accessory cue. In Experiment 1A a central visual target was accompanied by a suprathreshold visual lateral cue. A regular Simon effect (i.e., faster cue-response corresponding reaction times…
Nelson, D E; Takahashi, J S
1991-01-01
1. Light-induced phase shifts of the circadian rhythm of wheel-running activity were used to measure the photic sensitivity of a circadian pacemaker and the visual pathway that conveys light information to it in the golden hamster (Mesocricetus auratus). The sensitivity to stimulus irradiance and duration was assessed by measuring the magnitude of phase-shift responses to photic stimuli of different irradiance and duration. The visual sensitivity was also measured at three different phases of the circadian rhythm. 2. The stimulus-response curves measured at different circadian phases suggest that the maximum phase-shift is the only aspect of visual responsivity to change as a function of the circadian day. The half-saturation constants (sigma) for the stimulus-response curves are not significantly different over the three circadian phases tested. The photic sensitivity to irradiance (1/sigma) appears to remain constant over the circadian day. 3. The hamster circadian pacemaker and the photoreceptive system that subserves it are more sensitive to the irradiance of longer-duration stimuli than to irradiance of briefer stimuli. The system is maximally sensitive to the irradiance of stimuli of 300 s and longer in duration. A quantitative model is presented to explain the changes that occur in the stimulus-response curves as a function of photic stimulus duration. 4. The threshold for photic stimulation of the hamster circadian pacemaker is also quite high. The threshold irradiance (the minimum irradiance necessary to induce statistically significant responses) is approximately 10(11) photons cm-2 s-1 for optimal stimulus durations. This threshold is equivalent to a luminance at the cornea of 0.1 cd m-2. 5. We also measured the sensitivity of this visual pathway to the total number of photons in a stimulus. This system is maximally sensitive to photons in stimuli between 30 and 3600 s in duration. 
The maximum quantum efficiency of photic integration occurs in 300 s stimuli. 6. These results suggest that the visual pathways that convey light information to the mammalian circadian pacemaker possess several unique characteristics. These pathways are relatively insensitive to light irradiance and also integrate light inputs over relatively long durations. This visual system, therefore, possesses an optimal sensitivity of 'tuning' to total photons delivered in stimuli of several minutes in duration. Together these characteristics may make this visual system unresponsive to environmental 'noise' that would interfere with the entrainment of circadian rhythms to light-dark cycles. PMID:1895235
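The abstract names a half-saturation constant sigma but not the model's closed form; a hyperbolic saturating function is the conventional shape with that parameterization, so the sketch below should be read as an illustrative stand-in, with the r_max and sigma values hypothetical.

```python
def phase_shift(irradiance, r_max, sigma):
    """Saturating (hyperbolic) stimulus-response function: the response
    reaches half of r_max when irradiance equals the half-saturation
    constant sigma. Parameter values here are illustrative only."""
    return r_max * irradiance / (irradiance + sigma)
```

Duration dependence would enter by letting sigma fall as stimulus duration grows toward roughly 300 s, capturing the finding that sensitivity to irradiance (1/sigma) is higher for longer stimuli while the maximum phase-shift varies with circadian phase.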
Tachistoscopic exposure and masking of real three-dimensional scenes
Pothier, Stephen; Philbeck, John; Chichka, David; Gajewski, Daniel A.
2010-01-01
Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking. PMID:19182129
NASA Astrophysics Data System (ADS)
Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin
2016-05-01
One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of the visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to a ‘complex’ visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain as the main part of the nervous system engaged in eye movements, we also analyzed the electroencephalogram (EEG) signal recorded during fixation. We found that there is a coupling between the fractality of the image, the EEG, and the fixational eye movements. The capability observed in this research can be further investigated and applied to the treatment of different vision disorders.
Improved Discrimination of Visual Stimuli Following Repetitive Transcranial Magnetic Stimulation
Waterston, Michael L.; Pack, Christopher C.
2010-01-01
Background Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently rTMS is often assumed to introduce a “virtual lesion” in stimulated brain regions, with correspondingly diminished behavioral performance. Methodology/Principal Findings Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. Conclusions/Significance Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception. PMID:20442776
Evaluation of an organic light-emitting diode display for precise visual stimulation.
Ito, Hiroyuki; Ogawa, Masaki; Sunaga, Shoji
2013-06-11
A new type of visual display for presentation of a visual stimulus with high quality was assessed. The characteristics of an organic light-emitting diode (OLED) display (Sony PVM-2541, 24.5 in.; Sony Corporation, Tokyo, Japan) were measured in detail from the viewpoint of its applicability to visual psychophysics. We found the new display to be superior to other display types in terms of spatial uniformity, color gamut, and contrast ratio. Changes in the intensity of luminance were sharper on the OLED display than those on a liquid crystal display. Therefore, such OLED displays could replace conventional cathode ray tube displays in vision research for high quality stimulus presentation. Benefits of using OLED displays in vision research were especially apparent in the fields of low-level vision, where precise control and description of the stimulus are needed, e.g., in mesopic or scotopic vision, color vision, and motion perception.
Emotional facilitation of sensory processing in the visual cortex.
Schupp, Harald T; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2003-01-01
A key function of emotion is the preparation for action. However, organization of successful behavioral strategies depends on efficient stimulus encoding. The present study tested the hypothesis that perceptual encoding in the visual cortex is modulated by the emotional significance of visual stimuli. Event-related brain potentials were measured while subjects viewed pleasant, neutral, and unpleasant pictures. Early selective encoding of pleasant and unpleasant images was associated with a posterior negativity, indicating primary sources of activation in the visual cortex. The study also replicated previous findings in that affective cues also elicited enlarged late positive potentials, indexing increased stimulus relevance at higher-order stages of stimulus processing. These results support the hypothesis that sensory encoding of affective stimuli is facilitated implicitly by natural selective attention. Thus, the affect system not only modulates motor output (i.e., favoring approach or avoidance dispositions), but already operates at an early level of sensory encoding.
High-resolution eye tracking using V1 neuron activity
McFarland, James M.; Bondy, Adrian G.; Cumming, Bruce G.; Butts, Daniel A.
2014-01-01
Studies of high-acuity visual cortical processing have been limited by the inability to track eye position with sufficient accuracy to precisely reconstruct the visual stimulus on the retina. As a result, studies on primary visual cortex (V1) have been performed almost entirely on neurons outside the high-resolution central portion of the visual field (the fovea). Here we describe a procedure for inferring eye position using multi-electrode array recordings from V1 coupled with nonlinear stimulus processing models. We show that this method can be used to infer eye position with one arc-minute accuracy – significantly better than conventional techniques. This allows for analysis of foveal stimulus processing, and provides a means to correct for eye-movement induced biases present even outside the fovea. This method could thus reveal critical insights into the role of eye movements in cortical coding, as well as their contribution to measures of cortical variability. PMID:25197783
The stimulus-evoked population response in visual cortex of awake monkey is a propagating wave
Muller, Lyle; Reynaud, Alexandre; Chavane, Frédéric; Destexhe, Alain
2014-01-01
Propagating waves occur in many excitable media and were recently found in neural systems from retina to neocortex. While propagating waves are clearly present under anaesthesia, whether they also appear during awake and conscious states remains unclear. One possibility is that these waves are systematically missed in trial-averaged data, due to variability. Here we present a method for detecting propagating waves in noisy multichannel recordings. Applying this method to single-trial voltage-sensitive dye imaging data, we show that the stimulus-evoked population response in primary visual cortex of the awake monkey propagates as a travelling wave, with consistent dynamics across trials. A network model suggests that this reliability is the hallmark of the horizontal fibre network of superficial cortical layers. Propagating waves with similar properties occur independently in secondary visual cortex, but maintain precise phase relations with the waves in primary visual cortex. These results show that, in response to a visual stimulus, propagating waves are systematically evoked in several visual areas, generating a consistent spatiotemporal frame for further neuronal interactions. PMID:24770473
Visual memory performance for color depends on spatiotemporal context.
Olivers, Christian N L; Schreij, Daniel
2014-10-01
Performance on visual short-term memory for features has been known to depend on stimulus complexity, spatial layout, and feature context. However, with few exceptions, memory capacity has been measured for abruptly appearing, single-instance displays. In everyday life, objects often have a spatiotemporal history as they or the observer move around. In three experiments, we investigated the effect of spatiotemporal history on explicit memory for color. Observers saw a memory display emerge from behind a wall, after which it disappeared again. The test display then emerged from either the same side as the memory display or the opposite side. In the first two experiments, memory improved for intermediate set sizes when the test display emerged in the same way as the memory display. A third experiment then showed that the benefit was tied to the original motion trajectory and not to the display object per se. The results indicate that memory for color is embedded in a richer episodic context that includes the spatiotemporal history of the display.
Taylor, Kirsten I.; Devereux, Barry J.; Acres, Kadia; Randall, Billi; Tyler, Lorraine K.
2013-01-01
Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model the performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system. PMID:22137770
Griffis, Joseph C.; Elkhetali, Abdurahman S.; Burge, Wesley K.; Chen, Richard H.; Visscher, Kristina M.
2015-01-01
Attention facilitates the processing of task-relevant visual information and suppresses interference from task-irrelevant information. Modulations of neural activity in visual cortex depend on attention, and likely result from signals originating in fronto-parietal and cingulo-opercular regions of cortex. Here, we tested the hypothesis that attentional facilitation of visual processing is accomplished in part by changes in how brain networks involved in attentional control interact with sectors of V1 that represent different retinal eccentricities. We measured the strength of background connectivity between fronto-parietal and cingulo-opercular regions with different eccentricity sectors in V1 using functional MRI data that were collected while participants performed tasks involving attention to either a centrally presented visual stimulus or a simultaneously presented auditory stimulus. We found that when the visual stimulus was attended, background connectivity between V1 and the left frontal eye fields (FEF), left intraparietal sulcus (IPS), and right IPS varied strongly across different eccentricity sectors in V1 so that foveal sectors were more strongly connected than peripheral sectors. This retinotopic gradient was weaker when the visual stimulus was ignored, indicating that it was driven by attentional effects. Greater task-driven differences between foveal and peripheral sectors in background connectivity to these regions were associated with better performance on the visual task and faster response times on correct trials. These findings are consistent with the notion that attention drives the configuration of task-specific functional pathways that enable the prioritized processing of task-relevant visual information, and show that the prioritization of visual information by attentional processes may be encoded in the retinotopic gradient of connectivity between V1 and fronto-parietal regions. PMID:26106320
Visual motion perception predicts driving hazard perception ability.
Lacherez, Philippe; Au, Sandra; Wood, Joanne M
2014-02-01
To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields and two tests of motion perception including sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving, which has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
Perceptual Learning Induces Persistent Attentional Capture by Nonsalient Shapes.
Qu, Zhe; Hillyard, Steven A; Ding, Yulong
2017-02-01
Visual attention can be attracted automatically by salient simple features, but whether and how nonsalient complex stimuli such as shapes may capture attention in humans remains unclear. Here, we present strong electrophysiological evidence that a nonsalient shape presented among similar shapes can provoke a robust and persistent capture of attention as a consequence of extensive training in visual search (VS) for that shape. Strikingly, this attentional capture that followed perceptual learning (PL) was evident even when the trained shape was task-irrelevant, was presented outside the focus of top-down spatial attention, and was undetected by the observer. Moreover, this attentional capture persisted for at least 3-5 months after training had been terminated. This involuntary capture of attention was indexed by electrophysiological recordings of the N2pc component of the event-related brain potential, which was localized to ventral extrastriate visual cortex, and was highly predictive of stimulus-specific improvement in VS ability following PL. These findings provide the first evidence that nonsalient shapes can capture visual attention automatically following PL and challenge the prominent view that detection of feature conjunctions requires top-down focal attention. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Neural Correlates of Changes in a Visual Search Task due to Cognitive Training in Seniors
Wild-Wall, Nele; Falkenstein, Michael; Gajewski, Patrick D.
2012-01-01
This study aimed to elucidate the underlying neural sources of near transfer after a multidomain cognitive training in older participants in a visual search task. Participants were randomly assigned to a social control group, a no-contact control group, or a training group receiving a 4-month paper-and-pencil and PC-based, trainer-guided cognitive intervention. All participants were tested in a before and after session with a conjunction visual search task. Performance and event-related potentials (ERPs) suggest that the cognitive training improved feature processing of the stimuli, which was expressed in an increased rate of target detection compared to the control groups. This was paralleled by enhanced amplitudes of the frontal P2 in the ERP and by higher activation in lingual and parahippocampal brain areas, which are discussed to support visual feature processing. Enhanced N1 and N2 potentials in the ERP for nontarget stimuli after cognitive training additionally suggest improved attention and subsequent processing of arrays which were not immediately recognized as targets. Possible test repetition effects were confined to processes of stimulus categorisation as suggested by the P3b potential. The results show neurocognitive plasticity in aging after a broad cognitive training and allow pinpointing the functional loci of effects induced by cognitive training. PMID:23029625
Does bimodal stimulus presentation increase ERP components usable in BCIs?
NASA Astrophysics Data System (ADS)
Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Blankertz, Benjamin; Werkhoven, Peter J.
2012-08-01
Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, BCI performance (classification accuracies and bitrates) and participants’ task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile or visual-tactile stimuli. We observe that attending to visual-tactile (compared to either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared to visual stimuli, which is consistent with the nonsignificant negative trend of participants’ task performance. We discuss these findings in the light of affected spatial attention at high-level compared to low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.
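The bitrates mentioned above are not defined in the abstract; a widely used definition in the BCI literature is the Wolpaw information-transfer rate, which assumes equiprobable targets and uniformly distributed errors. A minimal sketch:

```python
import math

def wolpaw_bitrate(n_classes, accuracy, selections_per_min):
    """Wolpaw information-transfer rate in bits/min for an N-class BCI.

    Assumes all targets are equally likely and errors are spread
    uniformly over the N-1 incorrect classes.
    """
    n, p = n_classes, accuracy
    if p <= 1.0 / n:          # at or below chance, no information transferred
        return 0.0
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# e.g. a 4-target speller at 90% accuracy, 10 selections per minute
rate = wolpaw_bitrate(4, 0.9, 10)   # roughly 13.7 bits/min
```

A small accuracy change moves the bitrate nonlinearly, which is why the nonsignificant classification trends in the study translate only cautiously into bitrate differences.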
Brockmole, James R; Boot, Walter R
2009-06-01
Distinctive aspects of a scene can capture attention even when they are irrelevant to one's goals. The authors address whether visually unique, unexpected, but task-irrelevant features also tend to hold attention. Observers searched through displays in which the color of each item was irrelevant. At the start of search, all objects changed color. Critically, the foveated item changed to an unexpected color (it was novel), became a color singleton (it was unique), or both. Saccade latency revealed the time required to disengage overt attention from this object. Singletons resulted in longer latencies, but only if they were unexpected. Conversely, unexpected items only delayed disengagement if they were singletons. Thus, the time spent overtly attending to an object is determined, at least in part, by task-irrelevant stimulus properties, but this depends on the confluence of expectation and visual salience. (c) 2009 APA, all rights reserved.
Stimulus encoding and feature extraction by multiple sensory neurons.
Krahe, Rüdiger; Kreiman, Gabriel; Gabbiani, Fabrizio; Koch, Christof; Metzner, Walter
2002-03-15
Neighboring cells in topographical sensory maps may transmit similar information to the next higher level of processing. How information transmission by groups of nearby neurons compares with the performance of single cells is a very important question for understanding the functioning of the nervous system. To tackle this problem, we quantified stimulus-encoding and feature extraction performance by pairs of simultaneously recorded electrosensory pyramidal cells in the hindbrain of weakly electric fish. These cells constitute the output neurons of the first central nervous stage of electrosensory processing. Using random amplitude modulations (RAMs) of a mimic of the fish's own electric field within behaviorally relevant frequency bands, we found that pyramidal cells with overlapping receptive fields exhibit strong stimulus-induced correlations. To quantify the encoding of the RAM time course, we estimated the stimuli from simultaneously recorded spike trains and found significant improvements over single spike trains. The quality of stimulus reconstruction, however, was still inferior to the one measured for single primary sensory afferents. In an analysis of feature extraction, we found that spikes of pyramidal cell pairs coinciding within a time window of a few milliseconds performed significantly better at detecting upstrokes and downstrokes of the stimulus compared with isolated spikes and even spike bursts of single cells. Coincident spikes can thus be considered "distributed bursts." Our results suggest that stimulus encoding by primary sensory afferents is transformed into feature extraction at the next processing stage. There, stimulus-induced coincident activity can improve the extraction of behaviorally relevant features from the stimulus.
Balcarras, Matthew; Ardid, Salva; Kaping, Daniel; Everling, Stefan; Womelsdorf, Thilo
2016-02-01
Attention includes processes that evaluate stimuli relevance, select the most relevant stimulus against less relevant stimuli, and bias choice behavior toward the selected information. It is not clear how these processes interact. Here, we captured these processes in a reinforcement learning framework applied to a feature-based attention task that required macaques to learn and update the value of stimulus features while ignoring nonrelevant sensory features, locations, and action plans. We found that value-based reinforcement learning mechanisms could account for feature-based attentional selection and choice behavior but required a value-independent stickiness selection process to explain selection errors while at asymptotic behavior. By comparing different reinforcement learning schemes, we found that trial-by-trial selections were best predicted by a model that only represents expected values for the task-relevant feature dimension, with nonrelevant stimulus features and action plans having only a marginal influence on covert selections. These findings show that attentional control subprocesses can be described by (1) the reinforcement learning of feature values within a restricted feature space that excludes irrelevant feature dimensions, (2) a stochastic selection process on feature-specific value representations, and (3) value-independent stickiness toward previous feature selections akin to perseveration in the motor domain. We speculate that these three mechanisms are implemented by distinct but interacting brain circuits and that the proposed formal account of feature-based stimulus selection will be important to understand how attentional subprocesses are implemented in primate brain networks.
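The three mechanisms listed above can be sketched in a toy simulation: a Rescorla-Wagner update of feature values restricted to the relevant dimension, a softmax (stochastic) selection over those values, and a value-independent stickiness bonus for the previously chosen feature. All parameter values here are illustrative, not the fitted ones from the study.

```python
import math
import random

def choose(values, last_choice, beta=3.0, kappa=0.5, rng=random):
    """Softmax choice over feature values plus a value-independent
    stickiness bonus (kappa) toward the previously selected feature."""
    util = [v + (kappa if i == last_choice else 0.0)
            for i, v in enumerate(values)]
    exps = [math.exp(beta * u) for u in util]
    z = sum(exps)
    r, acc = rng.random() * z, 0.0
    for i, e in enumerate(exps):
        acc += e
        if acc >= r:
            return i
    return len(exps) - 1

def update(values, choice, reward, alpha=0.2):
    """Rescorla-Wagner update, applied only to the chosen feature value."""
    values[choice] += alpha * (reward - values[choice])

# toy task: feature 0 is rewarded, feature 1 is not
values, last = [0.0, 0.0], None
rng = random.Random(1)
for _ in range(300):
    c = choose(values, last, rng=rng)
    update(values, c, 1.0 if c == 0 else 0.0)
    last = c
```

After a few hundred trials the value of the rewarded feature approaches 1 and choices concentrate on it, with occasional stickiness-driven repeats that value alone would not predict.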
Role of somatosensory and vestibular cues in attenuating visually induced human postural sway
NASA Technical Reports Server (NTRS)
Peterka, R. J.; Benolken, M. S.
1995-01-01
The purpose of this study was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal subjects and subjects experiencing vestibular loss were nearly identical, implying (1) that normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) that subjects with vestibular loss did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system "gain" was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost 3 times greater than the amplitude of the visual stimulus in normal subjects and subjects with vestibular loss. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. 
For both subject groups, the amplitude of visually induced sway was smaller by a factor of about 4 in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied (1) that the subjects experiencing vestibular loss did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system "gain" were not used to compensate for a vestibular deficit, and (2) that the threshold for the use of vestibular cues in normal subjects was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.
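The saturation behaviour reported for normal subjects can be caricatured as a linear visual gain (which can exceed 1, consistent with sway up to ~3 times the stimulus amplitude) capped by a vestibular-related ceiling that falls with stimulus frequency. The gain value, ceiling value, and the assumed 1/f fall-off below are purely illustrative, not the authors' fitted control-system model.

```python
def sway_amplitude(stim_amp, freq_hz, gain=2.8, ceiling_at_1hz=1.5):
    """Toy saturation model of visually induced sway (illustrative numbers).

    Response grows linearly with stimulus amplitude until a
    frequency-dependent ceiling, taken here to fall off as 1/f.
    """
    ceiling = ceiling_at_1hz / freq_hz   # assumed fall-off with frequency
    return min(gain * stim_amp, ceiling)

# below saturation, sway exceeds the stimulus amplitude (gain ~ 2.8)
low = sway_amplitude(0.2, 0.5)    # 0.56, under the 3.0 ceiling at 0.5 Hz
# larger stimuli hit the vestibular-related ceiling
high = sway_amplitude(2.0, 0.5)   # capped at 3.0 rather than 5.6
```

Removing the ceiling entirely reproduces the qualitative behaviour of the vestibular-loss subjects, whose sway showed no saturation.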
Natural image sequences constrain dynamic receptive fields and imply a sparse code.
Häusler, Chris; Susemihl, Alex; Nawrot, Martin P
2013-11-06
In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
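"Temporal and population sparseness" can be quantified in several ways; one standard index (not necessarily the one used in the paper) is the Treves-Rolls sparseness of a non-negative response vector, which is 1 for a uniformly active population and approaches 1/N when a single unit carries all the activity.

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls sparseness: (mean r)^2 / mean(r^2).

    1.0  -> all units equally active (dense code)
    ~1/N -> a single unit active (maximally sparse code)
    """
    r = np.asarray(rates, dtype=float)
    return (r.mean() ** 2) / (r ** 2).mean()

dense = treves_rolls_sparseness(np.ones(100))      # 1.0: uniform activity
sparse = treves_rolls_sparseness(np.eye(100)[0])   # 0.01: one active unit
```

Applied across neurons at a fixed time it measures population sparseness; applied across time for one neuron it measures lifetime (temporal) sparseness.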
Spatial Correlations in Natural Scenes Modulate Response Reliability in Mouse Visual Cortex
Rikhye, Rajeev V.
2015-01-01
Intrinsic neuronal variability significantly limits information encoding in the primary visual cortex (V1). Certain stimuli can suppress this intertrial variability to increase the reliability of neuronal responses. In particular, responses to natural scenes, which have broadband spatiotemporal statistics, are more reliable than responses to stimuli such as gratings. However, very little is known about which stimulus statistics modulate reliable coding and how this occurs at the neural ensemble level. Here, we sought to elucidate the role that spatial correlations in natural scenes play in reliable coding. We developed a novel noise-masking method to systematically alter spatial correlations in natural movies, without altering their edge structure. Using high-speed two-photon calcium imaging in vivo, we found that responses in mouse V1 were much less reliable at both the single neuron and population level when spatial correlations were removed from the image. This change in reliability was due to a reorganization of between-neuron correlations. Strongly correlated neurons formed ensembles that reliably and accurately encoded visual stimuli, whereas reducing spatial correlations reduced the activation of these ensembles, leading to an unreliable code. Together with an ensemble-specific normalization model, these results suggest that the coordinated activation of specific subsets of neurons underlies the reliable coding of natural scenes. SIGNIFICANCE STATEMENT The natural environment is rich with information. To process this information with high fidelity, V1 neurons have to be robust to noise and, consequentially, must generate responses that are reliable from trial to trial. While several studies have hinted that both stimulus attributes and population coding may reduce noise, the details remain unclear. Specifically, what features of natural scenes are important and how do they modulate reliability? 
This study is the first to investigate the role of spatial correlations, which are a fundamental attribute of natural scenes, in shaping stimulus coding by V1 neurons. Our results provide new insights into how stimulus spatial correlations reorganize the correlated activation of specific ensembles of neurons to ensure accurate information processing in V1. PMID:26511254
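Trial-to-trial reliability of the kind measured here is commonly indexed as the mean pairwise correlation between single-trial response time courses; the sketch below uses that generic index on synthetic data, not the paper's calcium-imaging pipeline.

```python
import numpy as np

def response_reliability(trials):
    """Mean pairwise Pearson correlation between rows of `trials`
    (each row = one trial's response time course)."""
    c = np.corrcoef(trials)
    n = c.shape[0]
    return c[~np.eye(n, dtype=bool)].mean()   # average off-diagonal entries

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 200))   # stimulus-locked response
reliable = np.array([signal + 0.1 * rng.standard_normal(200) for _ in range(10)])
unreliable = np.array([signal + 2.0 * rng.standard_normal(200) for _ in range(10)])
r_hi = response_reliability(reliable)     # near 1: low intertrial variability
r_lo = response_reliability(unreliable)   # near 0: noise swamps the signal
```

In the study's terms, removing spatial correlations from the movies moved V1 responses from the first regime toward the second.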
De Loof, Esther; Van Opstal, Filip; Verguts, Tom
2016-04-01
Theories on visual awareness claim that predicted stimuli reach awareness faster than unpredicted ones. In the current study, we disentangle whether prior information about the upcoming stimulus affects visual awareness of stimulus location (i.e., individuation) by modulating processing efficiency or threshold setting. Analogous research on stimulus identification revealed that prior information modulates threshold setting. However, as identification and individuation are two functionally and neurally distinct processes, the mechanisms underlying identification cannot simply be extrapolated directly to individuation. The goal of this study was therefore to investigate how individuation is influenced by prior information about the upcoming stimulus. To do so, a drift diffusion model was fitted to estimate the processing efficiency and threshold setting for predicted versus unpredicted stimuli in a cued individuation paradigm. Participants were asked to locate a picture, following a cue that was congruent, incongruent or neutral with respect to the picture's identity. Pictures were individuated faster in the congruent and neutral condition compared to the incongruent condition. In the diffusion model analysis, the processing efficiency was not significantly different across conditions. However, the threshold setting was significantly higher following an incongruent cue compared to both congruent and neutral cues. Our results indicate that predictive information about the upcoming stimulus influences visual awareness by shifting the threshold for individuation rather than by enhancing processing efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
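The dissociation the authors draw (threshold setting vs processing efficiency) maps onto the boundary separation and drift rate of a diffusion model. A minimal simulation, with illustrative parameter values, shows how raising only the threshold slows responses while drift is held constant:

```python
import random

def diffusion_trial(drift, threshold, dt=0.001, noise=1.0, rng=random):
    """Simulate one drift-diffusion trial with symmetric bounds at
    +/- threshold; returns (decision time, reached upper bound?)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return t, x > 0

def mean_rt(drift, threshold, n=200, seed=0):
    rng = random.Random(seed)
    return sum(diffusion_trial(drift, threshold, rng=rng)[0]
               for _ in range(n)) / n

# same processing efficiency (drift), different threshold settings:
rt_congruent = mean_rt(drift=2.0, threshold=1.0)     # lower threshold
rt_incongruent = mean_rt(drift=2.0, threshold=1.5)   # raised threshold
```

The raised-threshold condition yields reliably longer decision times, mirroring the slower individuation after an incongruent cue without any change in efficiency.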
Fortier-Gauthier, Ulysse; Moffat, Nicolas; Dell'Acqua, Roberto; McDonald, John J; Jolicœur, Pierre
2012-07-01
We studied brain activity during retention and retrieval phases of two visual short-term memory (VSTM) experiments. Experiment 1 used a balanced memory array, with one color stimulus in each hemifield, followed by a retention interval and a central probe at the fixation point that designated the target stimulus in memory about which an orientation judgment was to be made. Retrieval of information from VSTM was associated with an event-related lateralization (ERL) with a contralateral negativity relative to the visual field from which the probed stimulus was originally encoded, suggesting a lateralized organization of VSTM. The scalp distribution of the retrieval ERL was more anterior than what is usually associated with simple maintenance activity, which is consistent with the involvement of different brain structures for these distinct visual memory mechanisms. Experiment 2 was like Experiment 1, but used an unbalanced memory array consisting of one lateral color stimulus in a hemifield and one color stimulus on the vertical mid-line. This design enabled us to separate lateralized activity related to target retrieval from distractor processing. Target retrieval was found to generate a negative-going ERL at the electrode sites identified in Experiment 1, and suggested representations were retrieved from anterior cortical structures. Distractor processing elicited a positive-going ERL at posterior electrode sites, which could be indicative of a return to baseline of retention activity for the discarded memory of the now-irrelevant stimulus, or an active inhibition mechanism mediating distractor suppression. Copyright © 2012 Elsevier Ltd. All rights reserved.
Hellmann, B; Güntürkün, O
2001-01-01
Visual information processing within the ascending tectofugal pathway to the forebrain undergoes essential rearrangements between the mesencephalic tectum opticum and the diencephalic nucleus rotundus of birds. The outer tectal layers constitute a two-dimensional map of the visual surrounding, whereas nucleus rotundus is characterized by functional domains in which different visual features such as movement, color, or luminance are processed in parallel. Morphologic correlates of this reorganization were investigated by means of focal injections of the neuronal tracer choleratoxin subunit B into different regions of the nuclei rotundus and triangularis of the pigeon. Dependent on the thalamic injection site, variations in the retrograde labeling pattern of ascending tectal efferents were observed. All rotundal projecting neurons were located within the deep tectal layer 13. Five different cell populations were distinguished that could be differentiated according to their dendritic ramifications within different retinorecipient laminae and their axons projecting to different subcomponents of the nucleus rotundus. Because retinorecipient tectal layers differ in their input from distinct classes of retinal ganglion cells, each tectorotundal cell type probably processes different aspects of the visual surrounding. Therefore, the differential input/output connections of the five tectorotundal cell groups might constitute the structural basis for spatially segregated parallel information processing of different stimulus aspects within the tectofugal visual system. Because two of five rotundal projecting cell groups additionally exhibited quantitative shifts along the dorsoventral extension of the tectum, data also indicate visual field-dependent alterations in information processing for particular visual features. Copyright 2001 Wiley-Liss, Inc.
Henriksson, Linda; Karvonen, Juha; Salminen-Vaparanta, Niina; Railo, Henry; Vanni, Simo
2012-01-01
The localization of visual areas in the human cortex is typically based on mapping the retinotopic organization with functional magnetic resonance imaging (fMRI). The most common approach is to encode the response phase for a slowly moving visual stimulus and to present the result on an individual's reconstructed cortical surface. The main aims of this study were to develop complementary general linear model (GLM)-based retinotopic mapping methods and to characterize the inter-individual variability of the visual area positions on the cortical surface. We studied 15 subjects with two methods: a 24-region multifocal checkerboard stimulus and a blocked presentation of object stimuli at different visual field locations. The retinotopic maps were based on weighted averaging of the GLM parameter estimates for the stimulus regions. In addition to localizing visual areas, both methods could be used to localize multiple retinotopic regions-of-interest. The two methods yielded consistent retinotopic maps in the visual areas V1, V2, V3, hV4, and V3AB. In the higher-level areas IPS0, VO1, LO1, LO2, TO1, and TO2, retinotopy could only be mapped with the blocked stimulus presentation. The gradual widening of spatial tuning and an increase in the responses to stimuli in the ipsilateral visual field along the hierarchy of visual areas likely reflected the increase in the average receptive field size. Finally, after registration to Freesurfer's surface-based atlas of the human cerebral cortex, we calculated the mean and variability of the visual area positions in the spherical surface-based coordinate system and generated probability maps of the visual areas on the average cortical surface. The inter-individual variability in the area locations decreased when the midpoints were calculated along the spherical cortical surface compared with volumetric coordinates. 
These results can facilitate both analysis of individual functional anatomy and comparisons of visual cortex topology across studies. PMID:22590626
The Verriest Lecture: Color lessons from space, time, and motion
Shevell, Steven K.
2012-01-01
The appearance of a chromatic stimulus depends on more than the wavelengths composing it. The scientific literature has countless examples showing that spatial and temporal features of light influence the colors we see. Studying chromatic stimuli that vary over space, time or direction of motion has a further benefit beyond predicting color appearance: the unveiling of otherwise concealed neural processes of color vision. Spatial or temporal stimulus variation uncovers multiple mechanisms of brightness and color perception at distinct levels of the visual pathway. Spatial variation in chromaticity and luminance can change perceived three-dimensional shape, an example of chromatic signals that affect a percept other than color. Chromatic objects in motion expose the surprisingly weak link between the chromaticity of objects and their physical direction of motion, and the role of color in inducing an illusory motion direction. Space, time and motion – color’s colleagues – reveal the richness of chromatic neural processing. PMID:22330398
Reward alters the perception of time.
Failing, Michel; Theeuwes, Jan
2016-03-01
Recent findings indicate that monetary rewards have a powerful effect on cognitive performance. In order to maximize overall gain, the prospect of earning reward biases visual attention to specific locations or stimulus features, improving perceptual sensitivity and processing. The question we addressed in this study is whether the prospect of reward also affects the subjective perception of time. Here, participants performed a prospective timing task using temporal oddballs. The results show that temporal oddballs, displayed for varying durations within a sequence of standard stimuli, were perceived to last longer when they signaled a relatively high reward than when they signaled no or low reward. When the standards rather than the oddball signaled reward, the perception of the temporal oddball remained unaffected. We argue that by signaling reward, a stimulus becomes subjectively more salient, thereby modulating attentional deployment and distorting how the stimulus is perceived in time. Copyright © 2015 Elsevier B.V. All rights reserved.
Wilbiks, Jonathan M. P.; Dyson, Benjamin J.
2016-01-01
Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus. PMID:27977790
Ellingson, Roger M; Oken, Barry
2010-01-01
This report contains the design overview and key performance measurements demonstrating the feasibility of generating and recording ambulatory visual stimulus evoked potentials using the previously reported custom Complementary and Alternative Medicine physiologic data collection and monitoring system, CAMAS. The methods used to generate visual stimuli on a PDA device and the design of an optical coupling device that converts the display output to an electrical waveform recorded by the CAMAS base unit are presented. The optical sensor signal, synchronized to the visual stimulus, emulates the synchronized EEG signal input that CAMAS normally reviews for the evoked potential response. Most importantly, the PDA also sends a marker message over the wireless Bluetooth connection to the CAMAS base unit, synchronized to the visual stimulus, which is the critical averaging reference needed to obtain VEP results. Results show that the variance in the latency of the wireless marker-messaging link is consistent enough to support the generation and recording of visual evoked potentials. Averaged sensor waveforms at multiple CPU speeds are presented and demonstrate the suitability of the Bluetooth interface for portable ambulatory visual evoked potential implementation on our CAMAS platform.
Todd, J Jay; Fougnie, Daryl; Marois, René
2005-12-01
The right temporo-parietal junction (TPJ) is critical for stimulus-driven attention and visual awareness. Here we show that as the visual short-term memory (VSTM) load of a task increases, activity in this region is increasingly suppressed. Correspondingly, increasing VSTM load impairs the ability of subjects to consciously detect the presence of a novel, unexpected object in the visual field. These results not only demonstrate that VSTM load suppresses TPJ activity and induces inattentional blindness, but also offer a plausible neural mechanism for this perceptual deficit: suppression of the stimulus-driven attentional network.
Donohue, Sarah E; Todisco, Alexandra E; Woldorff, Marty G
2013-04-01
Neuroimaging work on multisensory conflict suggests that the relevant modality receives enhanced processing in the face of incongruency. However, the degree of stimulus processing in the irrelevant modality and the temporal cascade of the attentional modulations in either the relevant or irrelevant modalities are unknown. Here, we employed an audiovisual conflict paradigm with a sensory probe in the task-irrelevant modality (vision) to gauge the attentional allocation to that modality. ERPs were recorded as participants attended to and discriminated spoken auditory letters while ignoring simultaneous bilateral visual letter stimuli that were either fully congruent, fully incongruent, or partially incongruent (one side incongruent, one congruent) with the auditory stimulation. Half of the audiovisual letter stimuli were followed 500-700 msec later by a bilateral visual probe stimulus. As expected, ERPs to the audiovisual stimuli showed an incongruency ERP effect (fully incongruent versus fully congruent) of an enhanced, centrally distributed, negative-polarity wave starting ∼250 msec. More critically here, the sensory ERP components to the visual probes were larger when they followed fully incongruent versus fully congruent multisensory stimuli, with these enhancements greatest on fully incongruent trials with the slowest RTs. In addition, on the slowest-response partially incongruent trials, the P2 sensory component to the visual probes was larger contralateral to the preceding incongruent visual stimulus. These data suggest that, in response to conflicting multisensory stimulus input, the initial cognitive effect is a capture of attention by the incongruent irrelevant-modality input, pulling neural processing resources toward that modality, resulting in rapid enhancement, rather than rapid suppression, of that input.
Swalve, Natashia; Barrett, Scott T.; Bevins, Rick A.; Li, Ming
2015-01-01
Nicotine is a widely abused drug, yet its primary reinforcing effect does not seem as potent as that of other stimulants such as cocaine. Recent research on the contributing factors toward chronic use of nicotine-containing products has implicated the reinforcement-enhancing effects of nicotine. The present study investigates whether phencyclidine (PCP) may also possess a reinforcement-enhancement effect and how this may interact with the reinforcement-enhancement effect of nicotine. PCP was tested for two reasons: 1) it produces discrepant results on overall reward, similar to those seen with nicotine, and 2) it may elucidate how other compounds interact with the reinforcement-enhancement of nicotine. Adult male Sprague-Dawley rats were trained to lever press for brief visual stimulus presentations under fixed-ratio (FR) schedules of reinforcement and were then tested with nicotine (0.2 or 0.4 mg/kg) and/or PCP (2.0 mg/kg) over six increasing FR values. A selective increase in active lever-pressing for the visual stimulus with drug treatment was considered evidence of a reinforcement-enhancement effect. PCP and nicotine separately increased active lever pressing for a visual stimulus in a dose-dependent manner and across the different FR schedules. The addition of PCP to nicotine did not increase lever-pressing for the visual stimulus, possibly due to a ceiling effect. The effect of PCP may be driven largely by its locomotor stimulant effects, whereas the effect of nicotine was independent of locomotor stimulation. This dissociation emphasizes that distinct pharmacological properties contribute to the reinforcement-enhancement effects of substances. PMID:26026783
2017-01-01
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory–visual (AV) or visual–auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV–VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV–VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV–VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps versus AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems.
SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). PMID:28450537
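The core of the tRSA logic, correlating AV and VA scalp topographies time point by time point, can be sketched with synthetic data. Everything below is a hypothetical toy example (the electrode count, time axis, and built-in divergence at the 50th sample are assumptions, not the study's data); it only illustrates how point-by-point spatial correlation separates an AVmaps = VAmaps regime from an AVmaps ≠ VAmaps regime.

```python
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_times = 64, 100

# Toy ERP topographies: AV and VA share a common map for the first 50
# samples, then diverge (hypothetical data standing in for recordings).
common = rng.standard_normal((n_electrodes, n_times))
av = common + 0.1 * rng.standard_normal((n_electrodes, n_times))
va = np.where(np.arange(n_times) < 50,
              common + 0.1 * rng.standard_normal((n_electrodes, n_times)),
              rng.standard_normal((n_electrodes, n_times)))

def map_similarity(a, b):
    """Spatial correlation between two topography series, per time point."""
    a = a - a.mean(axis=0)  # average-reference each map
    b = b - b.mean(axis=0)
    num = (a * b).sum(axis=0)
    den = np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))
    return num / den

r = map_similarity(av, va)
print(r[:50].mean(), r[50:].mean())  # high early, near zero late
```

In the study the analogous similarity values are compared against the two model matrices across the full 500 ms window; here the toy data make the contrast visible directly.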
10-Month-Olds Visually Anticipate an Outcome Contingent on Their Own Action
ERIC Educational Resources Information Center
Kenward, Ben
2010-01-01
It is known that young infants can learn to perform an action that elicits a reinforcer, and that they can visually anticipate a predictable stimulus by looking at its location before it begins. Here, in an investigation of the display of these abilities in tandem, I report that 10-month-olds anticipate a reward stimulus that they generate through…
Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions
Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.
2014-01-01
Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, onset movement, movement duration and electromyography from pectoralis and tricep muscles were recorded. The SS condition induced an increase in the RFD and peak velocity and a reduction in the movement onset and duration, in comparison with the VS and AS condition. The onset activation of the pectoralis and tricep muscles was shorter for the SS than for the VS and AS conditions. These findings point out to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967
Multiple-stage ambiguity in motion perception reveals global computation of local motion directions.
Rider, Andrew T; Nishida, Shin'ya; Johnston, Alan
2016-12-01
The motion of a 1D image feature, such as a line, seen through a small aperture, or the small receptive field of a neural motion sensor, is underconstrained, and it is not possible to derive the true motion direction from a single local measurement. This is referred to as the aperture problem. How the visual system solves the aperture problem is a fundamental question in visual motion research. In the estimation of motion vectors through integration of ambiguous local motion measurements at different positions, conventional theories assume that the object motion is a rigid translation, with motion signals sharing a common motion vector within the spatial region over which the aperture problem is solved. However, this strategy fails for global rotation. Here we show that the human visual system can estimate global rotation directly through spatial pooling of locally ambiguous measurements, without an intervening step that computes local motion vectors. We designed a novel ambiguous global flow stimulus, which is globally as well as locally ambiguous. The global ambiguity implies that the stimulus is simultaneously consistent with both a global rigid translation and an infinite number of global rigid rotations. By the standard view, the motion should always be seen as a global translation, but it appears to shift from translation to rotation as observers shift fixation. This finding indicates that the visual system can estimate local vectors using a global rotation constraint, and suggests that local motion ambiguity may not be resolved until consistencies with multiple global motion patterns are assessed.
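The conventional rigid-translation strategy that this study challenges can be written down directly: each 1D feature seen through an aperture yields only the velocity component along its normal, and pooling several such constraints by least squares (the intersection-of-constraints solution) recovers a single shared translation. A minimal sketch, with an assumed true velocity:

```python
import numpy as np

# True global translation (an assumed value for illustration).
true_v = np.array([2.0, 1.0])

# Each 1D feature constrains only the normal component of motion:
# normals[i] @ v = speeds[i]  -- this is the aperture problem.
angles = np.linspace(0, np.pi, 8, endpoint=False)
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
speeds = normals @ true_v  # locally ambiguous measurements

# Least-squares intersection of constraints recovers the shared vector.
v_hat, *_ = np.linalg.lstsq(normals, speeds, rcond=None)
print(v_hat)  # recovers [2.0, 1.0]
```

Because this pooling rule forces one common vector on all measurements, it can never yield a rotation percept; the finding above implies an additional pooling stage compatible with global rotation.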
Spatial attention improves the quality of population codes in human visual cortex.
Saproo, Sameer; Serences, John T
2010-08-01
Selective attention enables sensory input from behaviorally relevant stimuli to be processed in greater detail, so that these stimuli can more accurately influence thoughts, actions, and future goals. Attention has been shown to modulate the spiking activity of single feature-selective neurons that encode basic stimulus properties (color, orientation, etc.). However, the combined output from many such neurons is required to form stable representations of relevant objects and little empirical work has formally investigated the relationship between attentional modulations on population responses and improvements in encoding precision. Here, we used functional MRI and voxel-based feature tuning functions to show that spatial attention induces a multiplicative scaling in orientation-selective population response profiles in early visual cortex. In turn, this multiplicative scaling correlates with an improvement in encoding precision, as evidenced by a concurrent increase in the mutual information between population responses and the orientation of attended stimuli. These data therefore demonstrate how multiplicative scaling of neural responses provides at least one mechanism by which spatial attention may improve the encoding precision of population codes. Increased encoding precision in early visual areas may then enhance the speed and accuracy of perceptual decisions computed by higher-order neural mechanisms.
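The claimed link between multiplicative gain and encoding precision can be illustrated with a toy Poisson population. The parameters below (tuning width, baseline rate, gain values) are arbitrary assumptions, not fits to the fMRI data; the sketch only shows that scaling all tuning curves by a common gain improves discriminability of two nearby orientations.

```python
import numpy as np

rng = np.random.default_rng(3)

def dprime(gain, dtheta=0.1, n_neurons=32, trials=4000):
    """Discriminability of two nearby orientations decoded from a
    Poisson population whose tuning curves share a common gain."""
    prefs = np.linspace(0, np.pi, n_neurons, endpoint=False)

    def rates(theta):
        # Von Mises-like orientation tuning, peak rate 20 * gain.
        return 20 * gain * np.exp(2.0 * (np.cos(2 * (theta - prefs)) - 1))

    w = rates(dtheta) - rates(0.0)  # simple linear readout weights
    a = rng.poisson(rates(0.0), (trials, n_neurons)) @ w
    b = rng.poisson(rates(dtheta), (trials, n_neurons)) @ w
    return (b.mean() - a.mean()) / np.sqrt(0.5 * (a.var() + b.var()))

print(dprime(1.0), dprime(1.5))  # higher gain yields higher d'
```

With Poisson noise the signal grows linearly with gain while the noise grows only as its square root, so a multiplicative attentional gain raises d'; this is one simple reading of the precision increase reported above.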
Serial dependence in the perception of attractiveness.
Xia, Ye; Leib, Allison Yamanashi; Whitney, David
2016-12-01
The perception of attractiveness is essential for choices of food, object, and mate preference. Like perception of other visual features, perception of attractiveness is stable despite constant changes of image properties due to factors like occlusion, visual noise, and eye movements. Recent results demonstrate that perception of low-level stimulus features and even more complex attributes like human identity are biased towards recent percepts. This effect is often called serial dependence. Some recent studies have suggested that serial dependence also exists for perceived facial attractiveness, though there is also concern that the reported effects are due to response bias. Here we used an attractiveness-rating task to test the existence of serial dependence in perceived facial attractiveness. Our results demonstrate that perceived face attractiveness was pulled by the attractiveness level of facial images encountered up to 6 s prior. This effect was not due to response bias and did not rely on the previous motor response. This perceptual pull increased as the difference in attractiveness between previous and current stimuli increased. Our results reconcile previously conflicting findings and extend previous work, demonstrating that sequential dependence in perception operates across different levels of visual analysis, even at the highest levels of perceptual interpretation.
Aural, visual, and pictorial stimulus formats in false recall.
Beauchamp, Heather M
2002-12-01
The present investigation is an initial simultaneous examination of the influence of three stimulus formats on false memories. Several pilot tests were conducted to develop new category associate stimulus lists. 73 women and 26 men (M age=21.1 yr.) were in one of three conditions: they either heard words, were shown words, or were shown pictures highly related to critical nonpresented items. As expected, recall of critical nonpresented stimuli was significantly greater for aural lists than for visually presented words and pictorial images. These findings demonstrate that the accuracy of memory is influenced by the format of the information encoded.
Visual and proprioceptive interaction in patients with bilateral vestibular loss☆
Cutfield, Nicholas J.; Scott, Gregory; Waldman, Adam D.; Sharp, David J.; Bronstein, Adolfo M.
2014-01-01
Following bilateral vestibular loss (BVL), patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high-frequency (100 Hz) stimulus and a low-frequency (30 Hz) control stimulus were applied over the left splenius capitis; only the high-frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high-level neck proprioceptive input had a greater cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions, but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute for the known visuo-vestibular interaction reported in fMRI studies of normal subjects. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. 
The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients. PMID:25061564
Visual adaptation and novelty responses in the superior colliculus
Boehnke, Susan E.; Berg, David J.; Marino, Robert M.; Baldi, Pierre F.; Itti, Laurent; Munoz, Douglas P.
2011-01-01
The brain's ability to ignore repeating, often redundant, information while enhancing novel information processing is paramount to survival. When stimuli are repeatedly presented, the response of visually-sensitive neurons decreases in magnitude, i.e. neurons adapt or habituate, although the mechanism is not yet known. We monitored activity of visual neurons in the superior colliculus (SC) of rhesus monkeys who actively fixated while repeated visual events were presented. We dissociated adaptation from habituation as mechanisms of the response decrement by using a Bayesian model of adaptation, and by employing a paradigm including rare trials that included an oddball stimulus that was either brighter or dimmer. If the mechanism is adaptation, response recovery should be seen only for the brighter stimulus; if habituation, response recovery (‘dishabituation’) should be seen for both the brighter and dimmer stimulus. We observed a reduction in the magnitude of the initial transient response and an increase in response onset latency with stimulus repetition for all visually responsive neurons in the SC. Response decrement was successfully captured by the adaptation model which also predicted the effects of presentation rate and rare luminance changes. However, in a subset of neurons with sustained activity to visual stimuli, a novelty signal akin to dishabituation was observed late in the visual response profile to both brighter and dimmer stimuli and was not captured by the model. This suggests that SC neurons integrate both rapidly discounted information about repeating stimuli and novelty information about oddball events, to support efficient selection in a cluttered dynamic world. PMID:21864319
Norman, J Farley; Phillips, Flip; Holmin, Jessica S; Norman, Hideko F; Beers, Amanda M; Boswell, Alexandria M; Cheeseman, Jacob R; Stethen, Angela G; Ronning, Cecilia
2012-10-01
A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.
Rademaker, Rosanne L; van de Ven, Vincent G; Tong, Frank; Sack, Alexander T
2017-01-01
Neuroimaging studies have demonstrated that activity patterns in early visual areas predict stimulus properties actively maintained in visual working memory. Yet, the mechanisms by which such information is represented remain largely unknown. In this study, observers remembered the orientations of 4 briefly presented gratings, one in each quadrant of the visual field. A 10Hz Transcranial Magnetic Stimulation (TMS) triplet was applied directly at stimulus offset, or midway through a 2-second delay, targeting early visual cortex corresponding retinotopically to a sample item in the lower hemifield. Memory for one of the four gratings was probed at random, and participants reported this orientation via method of adjustment. Recall errors were smaller when the visual field location targeted by TMS overlapped with that of the cued memory item, compared to errors for stimuli probed diagonally to TMS. This implied topographic storage of orientation information, and a memory-enhancing effect at the targeted location. Furthermore, early pulses impaired performance at all four locations, compared to late pulses. Next, response errors were fit empirically using a mixture model to characterize memory precision and guess rates. Memory was more precise for items proximal to the pulse location, irrespective of pulse timing. Guesses were more probable with early TMS pulses, regardless of stimulus location. Thus, while TMS administered at the offset of the stimulus array might disrupt early-phase consolidation in a non-topographic manner, TMS also boosts the precise representation of an item at its targeted retinotopic location, possibly by increasing attentional resources or by injecting a beneficial amount of noise.
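The mixture-model analysis mentioned above (fitting recall errors to obtain memory precision and guess rates) can be sketched as a maximum-likelihood fit of a von Mises-plus-uniform mixture. The grid-search fitter below is a minimal illustration of this class of model, not the authors' actual analysis code; the function name and parameter ranges are assumptions.

```python
import numpy as np
from scipy.stats import vonmises

def fit_mixture(errors):
    """Grid-search ML fit of a two-component model of recall errors:
    with probability 1 - g the response is von Mises around the target
    (concentration kappa = precision); with probability g it is a
    uniform guess on the circle."""
    best_k, best_g, best_ll = None, None, -np.inf
    for k in np.linspace(0.5, 30.0, 60):
        dens = vonmises.pdf(errors, k)  # von Mises density at each error
        for g in np.linspace(0.0, 0.95, 40):
            ll = np.log((1 - g) * dens + g / (2 * np.pi)).sum()
            if ll > best_ll:
                best_k, best_g, best_ll = k, g, ll
    return best_k, best_g

# Recover parameters from synthetic errors (radians): kappa = 8, guess rate 0.2.
rng = np.random.default_rng(0)
n = 2000
is_guess = rng.random(n) < 0.2
errors = np.where(is_guess,
                  rng.uniform(-np.pi, np.pi, n),
                  rng.vonmises(0.0, 8.0, n))
k_hat, g_hat = fit_mixture(errors)
print(round(k_hat, 1), round(g_hat, 2))
```

On synthetic data the fit recovers the generating precision and guess rate to within sampling error, which is the logic behind separating "more precise memory" from "more guessing" in the study above.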
Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A
2013-06-01
Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.
Van Ombergen, Angelique; Lubeck, Astrid J; Van Rompaey, Vincent; Maes, Leen K; Stins, John F; Van de Heyning, Paul H; Wuyts, Floris L; Bos, Jelte E
2016-01-01
Vestibular patients occasionally report aggravation or triggering of their symptoms by visual stimuli, a phenomenon called visual vestibular mismatch (VVM). These patients therefore experience discomfort, disorientation, dizziness and postural unsteadiness. Firstly, we aimed to gain better insight into the underlying mechanism of VVM by examining perceptual and postural symptoms. Secondly, we wanted to investigate whether roll-motion is necessary to evoke these symptoms or whether a complex but stationary visual pattern provokes them equally. Nine VVM patients and a matched group of healthy controls were examined by exposing both groups to a stationary stimulus as well as to an optokinetic stimulus rotating around the naso-occipital axis for a prolonged period of time. Subjective visual vertical (SVV) measurements, posturography and relevant questionnaires were administered. No significant differences between the groups were found for SVV measurements. Patients consistently swayed more and reported more symptoms than healthy controls. Prolonged exposure to roll-motion increased postural sway and symptoms in both patients and controls. However, only VVM patients reported significantly more symptoms after prolonged exposure to the optokinetic stimulus than after exposure to a stationary stimulus. VVM patients thus differ from healthy controls in postural and subjective symptoms, and motion is a crucial factor in provoking these symptoms. A possible explanation is a central visual-vestibular integration deficit, which has implications for diagnosis and clinical rehabilitation. Future research should focus on the underlying central mechanism of VVM and the effectiveness of optokinetic stimulation in resolving it.
Modulation of visual physiology by behavioral state in monkeys, mice, and flies.
Maimon, Gaby
2011-08-01
When a monkey attends to a visual stimulus, neurons in visual cortex respond differently to that stimulus than when the monkey attends elsewhere. In the 25 years since the initial discovery, the study of attention in primates has been central to understanding flexible visual processing. Recent experiments demonstrate that visual neurons in mice and fruit flies are modulated by locomotor behaviors, like running and flying, in a manner that resembles attention-based modulations in primates. The similar findings across species argue for a more generalized view of state-dependent sensory processing and for a renewed dialogue among vertebrate and invertebrate research communities. Copyright © 2011 Elsevier Ltd. All rights reserved.
Bressler, David W.; Silver, Michael A.
2010-01-01
Spatial attention improves visual perception and increases the amplitude of neural responses in visual cortex. In addition, spatial attention tasks and fMRI have been used to discover topographic visual field representations in regions outside visual cortex. We therefore hypothesized that requiring subjects to attend to a retinotopic mapping stimulus would facilitate the characterization of visual field representations in a number of cortical areas. In our study, subjects attended either a central fixation point or a wedge-shaped stimulus that rotated about the fixation point. Response reliability was assessed by computing coherence between the fMRI time series and a sinusoid with the same frequency as the rotating wedge stimulus. When subjects attended to the rotating wedge instead of ignoring it, the reliability of retinotopic mapping signals increased by approximately 50% in early visual cortical areas (V1, V2, V3, V3A/B, V4) and ventral occipital cortex (VO1) and by approximately 75% in lateral occipital (LO1, LO2) and posterior parietal (IPS0, IPS1 and IPS2) cortical areas. Additionally, one 5-minute run of retinotopic mapping in the attention-to-wedge condition produced responses as reliable as the average of three to five (early visual cortex) or more than five (lateral occipital, ventral occipital, and posterior parietal cortex) attention-to-fixation runs. These results demonstrate that allocating attention to the retinotopic mapping stimulus substantially reduces the amount of scanning time needed to determine the visual field representations in occipital and parietal topographic cortical areas. Attention significantly increased response reliability in every cortical area we examined and may therefore be a general mechanism for improving the fidelity of neural representations of sensory stimuli at multiple levels of the cortical processing hierarchy. PMID:20600961
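The reliability measure used above, coherence between a voxel's fMRI time series and a sinusoid at the rotating-wedge frequency, can be sketched numerically. A common definition in phase-encoded retinotopy is the Fourier amplitude at the stimulus frequency divided by the root-sum-square amplitude over all non-zero frequencies; the function name and details below are illustrative assumptions, not necessarily the authors' exact estimator.

```python
import numpy as np

def retinotopy_coherence(ts, n_cycles):
    """Coherence of a voxel time series with a sinusoid at the stimulus
    frequency: amplitude at the stimulus-frequency bin divided by the
    root-sum-square amplitude across all non-DC frequency bins."""
    ts = ts - ts.mean()
    amps = np.abs(np.fft.rfft(ts))
    return amps[n_cycles] / np.sqrt((amps[1:] ** 2).sum())

# A noiseless response at the wedge frequency yields coherence 1.
t = np.arange(120)                         # 120 TRs in a run
signal = np.sin(2 * np.pi * 8 * t / 120)   # 8 wedge cycles per run
print(round(retinotopy_coherence(signal, 8), 3))  # prints 1.0
```

Adding noise to the time series lowers the coherence, which is why attending to the wedge (boosting the stimulus-locked response) raises this reliability measure.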
Qin, Pengmin; Duncan, Niall W; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J; Northoff, Georg
2012-01-01
Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimuli; however, the change from closed to open eyes also represents a simple visual stimulus and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABA(A) receptors, in the changes in brain activity between the eyes-closed (EC) and eyes-open (EO) states, in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modeling of the haemodynamic response, followed by longer periods of EC and EO to allow the measurement of functional connectivity. The same subjects also underwent [(18)F]Flumazenil PET to measure GABA(A) receptor binding potentials. The local-to-global ratio of GABA(A) receptor binding potential in the visual cortex predicted the degree of change in neural activity from EC to EO. The same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABA(A) receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.
Miki, Kensaku; Takeshima, Yasuyuki; Watanabe, Shoko; Honda, Yukiko; Kakigi, Ryusuke
2011-04-06
We investigated the effects of inverting facial contour (hair and chin) and features (eyes, nose and mouth) on processing for static and dynamic face perception using magnetoencephalography (MEG). We used apparent motion, in which the first stimulus (S1) was replaced by a second stimulus (S2) with no interstimulus interval and subjects perceived visual motion, and presented three conditions as follows: (1) U&U: Upright contour and Upright features, (2) U&I: Upright contour and Inverted features, and (3) I&I: Inverted contour and Inverted features. In static face perception (S1 onset), the peak latency of the fusiform area's activity, which was related to static face perception, was significantly longer for U&I and I&I than for U&U in the right hemisphere and for U&I than for U&U and I&I in the left. In dynamic face perception (S2 onset), the strength (moment) of the occipitotemporal area's activity, which was related to dynamic face perception, was significantly larger for I&I than for U&U and U&I in the right hemisphere, but not the left. These results can be summarized as follows: (1) in static face perception, the activity of the right fusiform area was more affected by the inversion of features while that of the left fusiform area was more affected by the disruption of the spatial relation between the contour and features, and (2) in dynamic face perception, the activity of the right occipitotemporal area was affected by the inversion of the facial contour. Copyright © 2011 Elsevier B.V. All rights reserved.
A neural measure of precision in visual working memory.
Ester, Edward F; Anderson, David E; Serences, John T; Awh, Edward
2013-05-01
Recent studies suggest that the temporary storage of visual detail in working memory is mediated by sensory recruitment or sustained patterns of stimulus-specific activation within feature-selective regions of visual cortex. According to a strong version of this hypothesis, the relative "quality" of these patterns should determine the clarity of an individual's memory. Here, we provide a direct test of this claim. We used fMRI and a forward encoding model to characterize population-level orientation-selective responses in visual cortex while human participants held an oriented grating in memory. This analysis, which enables a precise quantitative description of multivoxel, population-level activity measured during working memory storage, revealed graded response profiles whose amplitudes were greatest for the remembered orientation and fell monotonically as the angular distance from this orientation increased. Moreover, interparticipant differences in the dispersion-but not the amplitude-of these response profiles were strongly correlated with performance on a concurrent memory recall task. These findings provide important new evidence linking the precision of sustained population-level responses in visual cortex and memory acuity.
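The forward encoding model referred to above can be sketched on synthetic data: express each voxel as a weighted sum of orientation-tuned channels, estimate the weights from training trials, then invert the model on test trials to recover a channel-response profile peaking at the remembered orientation. The basis, channel count, and estimation steps below are illustrative assumptions in the style of standard inverted-encoding analyses, not the authors' exact pipeline.

```python
import numpy as np

def channel_responses(orientations, centers, power=5):
    """Idealized orientation channels in the 180-degree space:
    sinusoids raised to the 5th power, wrapped so that 179 and 0
    degrees are treated as 1 degree apart."""
    d = (orientations[:, None] - centers[None, :] + 90.0) % 180.0 - 90.0
    return np.cos(np.pi * d / 180.0) ** power

rng = np.random.default_rng(0)
centers = np.arange(0.0, 180.0, 30.0)   # 6 channels
n_vox = 50
W = rng.random((len(centers), n_vox))   # ground-truth voxel weights

def simulate_bold(orientations):
    """Synthetic voxel responses: channel mix plus Gaussian noise."""
    C = channel_responses(np.asarray(orientations, dtype=float), centers)
    return C @ W + rng.normal(0, 0.1, (len(orientations), n_vox))

# Train: estimate channel-to-voxel weights by least squares.
train_oris = rng.uniform(0, 180, 300)
W_hat = np.linalg.pinv(channel_responses(train_oris, centers)) \
        @ simulate_bold(train_oris)

# Test: invert the model to recover the channel profile for one trial.
C_test = simulate_bold([60.0]) @ np.linalg.pinv(W_hat)
peak = centers[np.argmax(C_test)]
print(peak)
```

The recovered profile is graded, with the largest response at the channel matching the stimulus orientation and monotonically smaller responses at more distant channels, which is the response-profile shape the abstract describes.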
Visual scan paths are abnormal in deluded schizophrenics.
Phillips, M L; David, A S
1997-01-01
One explanation for delusion formation is that delusions result from a distorted appreciation of complex stimuli. This study investigated delusions in schizophrenia using a physiological marker of visual attention and information processing, the visual scan path: a map tracing the direction and duration of gaze as an individual views a stimulus. The aim was to demonstrate the presence of a specific deficit in processing meaningful stimuli (e.g. human faces) in deluded schizophrenics (DS) by relating it to abnormal viewing strategies. Visual scan paths were measured in acutely deluded (n = 7) and non-deluded (n = 7) schizophrenics matched for medication, illness duration and negative symptoms, plus 10 age-matched normal controls. DS employed abnormal strategies for viewing single faces and face pairs in a recognition task, staring at fewer points and fixating non-feature areas to a significantly greater extent than both control groups (P < 0.05). The results indicate that DS direct their attention to less salient visual information when viewing faces. Future paradigms employing more complex stimuli, and testing DS when less deluded, will allow further clarification of the relationship between viewing strategies and delusions.
Kinsey, K; Anderson, S J; Hadjipapas, A; Holliday, I E
2011-03-01
The perception of an object as a single entity within a visual scene requires that its features are bound together and segregated from the background and/or other objects. Here, we used magnetoencephalography (MEG) to assess the hypothesis that coherent percepts may arise from the synchronized high frequency (gamma) activity between neurons that code features of the same object. We also assessed the role of low frequency (alpha, beta) activity in object processing. The target stimulus (i.e. object) was a small patch of a concentric grating of 3c/°, viewed eccentrically. The background stimulus was either a blank field or a concentric grating of 3c/° periodicity, viewed centrally. With patterned backgrounds, the target stimulus emerged--through rotation about its own centre--as a circular subsection of the background. Data were acquired using a 275-channel whole-head MEG system and analyzed using Synthetic Aperture Magnetometry (SAM), which allows one to generate images of task-related cortical oscillatory power changes within specific frequency bands. Significant oscillatory activity across a broad range of frequencies was evident at the V1/V2 border, and subsequent analyses were based on a virtual electrode at this location. When the target was presented in isolation, we observed that: (i) contralateral stimulation yielded a sustained power increase in gamma activity; and (ii) both contra- and ipsilateral stimulation yielded near identical transient power changes in alpha (and beta) activity. When the target was presented against a patterned background, we observed that: (i) contralateral stimulation yielded an increase in high-gamma (>55 Hz) power together with a decrease in low-gamma (40-55 Hz) power; and (ii) both contra- and ipsilateral stimulation yielded a transient decrease in alpha (and beta) activity, though the reduction tended to be greatest for contralateral stimulation. 
The opposing power changes across different regions of the gamma spectrum with 'figure/ground' stimulation suggest a possible dual role for gamma rhythms in visual object coding, and provide general support of the binding-by-synchronization hypothesis. As the power changes in alpha and beta activity were largely independent of the spatial location of the target, however, we conclude that their role in object processing may relate principally to changes in visual attention. Copyright © 2010 Elsevier B.V. All rights reserved.
Deacon, D; Nousak, J M; Pilotti, M; Ritter, W; Yang, C M
1998-07-01
The effects of global and feature-specific probabilities of auditory stimuli were manipulated to determine their effects on the mismatch negativity (MMN) of the human event-related potential. The question of interest was whether the automatic comparison of stimuli indexed by the MMN was performed on representations of individual stimulus features or on gestalt representations of their combined attributes. The design of the study was such that both feature and gestalt representations could have been available to the comparator mechanism generating the MMN. The data were consistent with the interpretation that the MMN was generated following an analysis of stimulus features.
Sneve, Markus H; Magnussen, Svein; Alnæs, Dag; Endestad, Tor; D'Esposito, Mark
2013-11-01
Visual STM of simple features is achieved through interactions between retinotopic visual cortex and a set of frontal and parietal regions. In the present fMRI study, we investigated effective connectivity between central nodes in this network during the different task epochs of a modified delayed orientation discrimination task. Our univariate analyses demonstrate that the inferior frontal junction (IFJ) is preferentially involved in memory encoding, whereas activity in the putative FEFs and anterior intraparietal sulcus (aIPS) remains elevated throughout periods of memory maintenance. We have earlier reported, using the same task, that areas in visual cortex sustain information about task-relevant stimulus properties during delay intervals [Sneve, M. H., Alnæs, D., Endestad, T., Greenlee, M. W., & Magnussen, S. Visual short-term memory: Activity supporting encoding and maintenance in retinotopic visual cortex. Neuroimage, 63, 166-178, 2012]. To elucidate the temporal dynamics of the IFJ-FEF-aIPS-visual cortex network during memory operations, we estimated Granger causality effects between these regions with fMRI data representing memory encoding/maintenance as well as during memory retrieval. We also investigated a set of control conditions involving active processing of stimuli not associated with a memory task and passive viewing. In line with the developing understanding of IFJ as a region critical for control processes with a possible initiating role in visual STM operations, we observed influence from IFJ to FEF and aIPS during memory encoding. Furthermore, FEF predicted activity in a set of higher-order visual areas during memory retrieval, a finding consistent with its suggested role in top-down biasing of sensory cortex.
Levichkina, Ekaterina; Saalmann, Yuri B; Vidyasagar, Trichur R
2017-03-01
Primate posterior parietal cortex (PPC) is known to be involved in controlling spatial attention. Neurons in one part of the PPC, the lateral intraparietal area (LIP), show enhanced responses to objects at attended locations. Although many are selective for object features, such as the orientation of a visual stimulus, it is not clear how LIP circuits integrate feature-selective information when providing attentional feedback about behaviorally relevant locations to the visual cortex. We studied the relationship between object feature and spatial attention properties of LIP cells in two macaques by measuring the cells' orientation selectivity and the degree of attentional enhancement while performing a delayed match-to-sample task. Monkeys had to match both the location and orientation of two visual gratings presented separately in time. We found a wide range in orientation selectivity and degree of attentional enhancement among LIP neurons. However, cells with significant attentional enhancement had much less orientation selectivity in their response than cells which showed no significant modulation by attention. Additionally, orientation-selective cells showed working memory activity for their preferred orientation, whereas cells showing attentional enhancement also synchronized with local neuronal activity. These results are consistent with models of selective attention incorporating two stages, where an initial feature-selective process guides a second stage of focal spatial attention. We suggest that LIP contributes to both stages, where the first stage involves orientation-selective LIP cells that support working memory of the relevant feature, and the second stage involves attention-enhanced LIP cells that synchronize to provide feedback on spatial priorities. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.
Short-term perceptual learning in visual conjunction search.
Su, Yuling; Lai, Yunpeng; Huang, Wanyi; Tan, Wei; Qu, Zhe; Ding, Yulong
2014-08-01
Although some studies showed that training can improve the ability of cross-dimension conjunction search, less is known about the underlying mechanism. Specifically, it remains unclear whether training of visual conjunction search can successfully bind different features of separated dimensions into a new function unit at early stages of visual processing. In the present study, we utilized stimulus specificity and generalization to provide a new approach to investigate the mechanisms underlying perceptual learning (PL) in visual conjunction search. Five experiments consistently showed that after 40 to 50 min of training of color-shape/orientation conjunction search, the ability to search for a certain conjunction target improved significantly and the learning effects did not transfer to a new target that differed from the trained target in both color and shape/orientation features. However, the learning effects were not strictly specific. In color-shape conjunction search, although the learning effect could not transfer to a same-shape different-color target, it almost completely transferred to a same-color different-shape target. In color-orientation conjunction search, the learning effect partly transferred to a new target that shared same color or same orientation with the trained target. Moreover, the sum of transfer effects for the same color target and the same orientation target in color-orientation conjunction search was algebraically equivalent to the learning effect for trained target, showing an additive transfer effect. The different transfer patterns in color-shape and color-orientation conjunction search learning might reflect the different complexity and discriminability between feature dimensions. These results suggested a feature-based attention enhancement mechanism rather than a unitization mechanism underlying the short-term PL of color-shape/orientation conjunction search.
Visual categorization of natural movies by rats.
Vinken, Kasper; Vermaercke, Ben; Op de Beeck, Hans P
2014-08-06
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. Copyright © 2014 the authors.
How Are Bodies Special? Effects of Body Features on Spatial Reasoning
Yu, Alfred B.; Zacks, Jeffrey M.
2015-01-01
Embodied views of cognition argue that cognitive processes are influenced by bodily experience. This implies that when people make spatial judgments about human bodies, they bring to bear embodied knowledge that affects spatial reasoning performance. Here, we examined the specific contribution to spatial reasoning of visual features associated with the human body. We used two different tasks to elicit distinct visuospatial transformations: object-based transformations, as elicited in typical mental rotation tasks, and perspective transformations, used in tasks in which people deliberately adopt the egocentric perspective of another person. Body features facilitated performance in both tasks. This result suggests that observers are particularly sensitive to the presence of a human head and body, and that these features allow observers to quickly recognize and encode the spatial configuration of a figure. Contrary to prior reports, this facilitation was not related to the transformation component of task performance. These results suggest that body features facilitate task components other than spatial transformation, including the encoding of stimulus orientation. PMID:26252072
ERIC Educational Resources Information Center
Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus
2012-01-01
The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…
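The Poisson-rate categorization process described here (tentative categorizations of stimulus i as category j arriving at constant rate v(i, j)) can be sketched as a counter simulation. The decision rule (respond with the category that accrued the most tentative categorizations, ties broken at random), the rate values, and the durations below are illustrative assumptions; the truncated abstract does not specify them.

```python
import numpy as np

def simulate_accuracy(rates, durations, trials=5000, rng=None):
    """Poisson counter model: on each trial, category j accrues
    Poisson(v[i, j] * t) tentative categorizations; the response is
    the category with the highest count (ties broken at random).
    Returns accuracy for stimulus i = 0 at each exposure duration."""
    rng = rng or np.random.default_rng(0)
    acc = []
    for t in durations:
        counts = rng.poisson(rates[0] * t, size=(trials, len(rates[0])))
        # Tiny random jitter implements random tie-breaking.
        winners = (counts + rng.random(counts.shape) * 1e-6).argmax(axis=1)
        acc.append((winners == 0).mean())
    return acc

# Stimulus 0 favours category 0 (rate 30/s) over category 1 (rate 15/s).
rates = np.array([[30.0, 15.0]])
print(simulate_accuracy(rates, durations=[0.02, 0.2]))
```

Because counts accumulate at constant rates, accuracy rises with exposure duration, which is the qualitative time-course behaviour such pure-accuracy models are built to capture.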
ERIC Educational Resources Information Center
Fortier-Gauthier, Ulysse; Moffat, Nicolas; Dell'Acqua, Robert; McDonald, John J.; Jolicoeur, Pierre
2012-01-01
We studied brain activity during retention and retrieval phases of two visual short-term memory (VSTM) experiments. Experiment 1 used a balanced memory array, with one color stimulus in each hemifield, followed by a retention interval and a central probe, at the fixation point that designated the target stimulus in memory about which to make a…
Seno, Takeharu; Fukuda, Haruaki
2012-01-01
Over the last 100 years, numerous studies have examined the visual stimulus properties effective in inducing illusory self-motion (known as vection). Vection is often experienced more strongly in daily life than under controlled experimental conditions. One well-known example of vection in real life is the so-called 'train illusion'. In the present study, we showed that this train illusion can also be generated in the laboratory using virtual, computer-graphics-based motion stimuli. We also demonstrated that this vection can be modified by altering the meaning of the visual stimuli (i.e., top-down effects). Importantly, we show that the semantic meaning of a stimulus can inhibit or facilitate vection, even when there is no physical change to the stimulus.
Object activation in semantic memory from visual multimodal feature input.
Kraut, Michael A; Kremen, Sarah; Moo, Lauren R; Segal, Jessica B; Calhoun, Vincent; Hart, John
2002-01-01
The human brain's representation of objects has been proposed to exist as a network of coactivated neural regions present in multiple cognitive systems. However, it is not known if there is a region specific to the process of activating an integrated object representation in semantic memory from multimodal feature stimuli (e.g., picture-word). A previous study using word-word feature pairs as stimulus input showed that the left thalamus is integrally involved in object activation (Kraut, Kremen, Segal, et al., this issue). In the present study, participants were presented picture-word pairs that are features of objects, with the task being to decide if together they "activated" an object not explicitly presented (e.g., picture of a candle and the word "icing" activate the internal representation of a "cake"). For picture-word pairs that combine to elicit an object, signal change was detected in the ventral temporo-occipital regions, pre-SMA, left primary somatomotor cortex, both caudate nuclei, and the dorsal thalami bilaterally. These findings suggest that the left thalamus is engaged for either picture or word stimuli, but the right thalamus appears to be involved when picture stimuli are also presented with words in semantic object activation tasks. The somatomotor signal changes are likely secondary to activation of the semantic object representations from multimodal visual stimuli.
Bottlenecks of Motion Processing during a Visual Glance: The Leaky Flask Model
Öğmen, Haluk; Ekiz, Onur; Huynh, Duong; Bedell, Harold E.; Tripathy, Srimant P.
2013-01-01
Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing. PMID:24391806
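The analysis step in which performance decays exponentially with cue delay, and the fitted time constant demarcates sensory memory from VSTM, can be sketched as a three-parameter curve fit. The delay and performance values below are hypothetical numbers for illustration only, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(delay, p_sens, p_vstm, tau):
    """Partial-report performance: exponential decay from an initial
    sensory-memory level p_sens to a VSTM asymptote p_vstm with
    time constant tau (seconds)."""
    return p_vstm + (p_sens - p_vstm) * np.exp(-delay / tau)

# Hypothetical cue delays (s) and proportions correct, for illustration.
delays = np.array([0.0, 0.1, 0.3, 0.6, 1.0, 2.0])
perf = np.array([0.90, 0.78, 0.62, 0.52, 0.48, 0.47])

(p_sens, p_vstm, tau), _ = curve_fit(decay, delays, perf, p0=[0.9, 0.5, 0.3])
# Delays much shorter than tau probe sensory memory; delays much
# longer than tau probe the VSTM asymptote.
print(round(tau, 2), round(p_vstm, 2))
```

Once tau is estimated, cue delays well below it can be attributed to sensory (iconic) memory and delays well above it to VSTM, which is how the exponential time constant serves as the demarcation point described above.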
Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T
2012-05-01
In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.
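One common way to formalize the simultaneity-judgment result is a temporal-binding-window model, in which the probability of a "simultaneous" response falls off as a Gaussian function of SOA. The window widths below are hypothetical, chosen only to illustrate the reported pattern of more "simultaneous" judgments at large SOAs in the periphery:

```python
import math

def p_simultaneous(soa, sigma):
    # Probability of a "simultaneous" response as a Gaussian function of
    # SOA (ms); sigma indexes the width of the temporal binding window.
    return math.exp(-(soa ** 2) / (2 * sigma ** 2))

# Hypothetical binding-window widths: wider in the periphery.
sigma_central, sigma_peripheral = 120.0, 200.0

soa = 250.0  # a large visual-auditory asynchrony (ms)
central = p_simultaneous(soa, sigma_central)
peripheral = p_simultaneous(soa, sigma_peripheral)
# peripheral > central: the same asynchrony is more often judged
# simultaneous for peripheral stimulus pairs.
```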
A frontal but not parietal neural correlate of auditory consciousness.
Brancucci, Alfredo; Lugli, Victor; Perrucci, Mauro Gianni; Del Gratta, Cosimo; Tommasi, Luca
2016-01-01
Hemodynamic correlates of consciousness were investigated in humans during the presentation of a dichotic sequence inducing illusory auditory percepts with features analogous to visual multistability. The sequence consisted of a variation of the original stimulation eliciting the Deutsch's octave illusion, created to maintain a stable illusory percept long enough to allow the detection of the underlying hemodynamic activity using functional magnetic resonance imaging (fMRI). Two specular 500 ms dichotic stimuli (400 and 800 Hz) presented in alternation by means of earphones cause an illusory segregation of pitch and ear of origin which can yield up to four different auditory percepts per dichotic stimulus. Such percepts are maintained stable when one of the two dichotic stimuli is presented repeatedly for 6 s, immediately after the alternation. We observed hemodynamic activity specifically accompanying conscious experience of pitch in a bilateral network including the superior frontal gyrus (SFG, BA9 and BA10), medial frontal gyrus (MFG, BA6 and BA9), insula (BA13), and posterior lateral nucleus of the thalamus. Conscious experience of side (ear of origin) was instead specifically accompanied by bilateral activity in the MFG (BA6), superior temporal gyrus (STG, BA41), parahippocampal gyrus (BA28), and insula (BA13). These results suggest that the neural substrate of auditory consciousness, differently from that of visual consciousness, may rest upon a fronto-temporal rather than upon a fronto-parietal network. Moreover, they indicate that the neural correlates of consciousness depend on the specific features of the stimulus and suggest the SFG-MFG and the insula as important cortical nodes for auditory conscious experience.
Reimer, Christina B; Strobach, Tilo; Schubert, Torsten
2017-12-01
Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features, resulting in a serial search process across the items in the search display (i.e., set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP) that reflects lateralized visual attention processes. If the response selection processes in Task 1 influenced the visual attention processes in Task 2, N2pc latency and amplitude would be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was deployed concurrently with response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.
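The underadditive interaction of SOA and set size is the classic locus-of-slack signature: if the serial search stage precedes the response-selection bottleneck, its set-size cost can be absorbed into the waiting time at short SOA. A minimal sketch of that logic, with all stage durations hypothetical:

```python
# Hypothetical stage durations (ms) in a central-bottleneck model.
T1_PRE, T1_RS = 100, 200       # Task 1 perception and response selection
T2_PRE_PER_ITEM = 30           # serial search cost per display item (Task 2)
T2_RS, T2_POST = 150, 50       # Task 2 response selection and motor stages

def rt2(soa, set_size):
    # Task 2 reaction time measured from Task 2 stimulus onset.
    search_end = soa + set_size * T2_PRE_PER_ITEM  # pre-bottleneck search
    bottleneck_free = T1_PRE + T1_RS               # Task 1 releases bottleneck
    start_rs2 = max(search_end, bottleneck_free)   # wait if bottleneck is busy
    return start_rs2 + T2_RS + T2_POST - soa

# Set-size effect shrinks at short SOA (underadditive interaction),
# because the extra search time is absorbed into the slack period.
effect_short = rt2(50, 8) - rt2(50, 4)
effect_long = rt2(1000, 8) - rt2(1000, 4)
```

With these (made-up) durations the set-size effect is fully absorbed at short SOA and fully expressed at long SOA, which is the qualitative pattern the underadditive interaction reflects.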
Is improved contrast sensitivity a natural consequence of visual training?
Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.
2015-01-01
Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736
Visual adaptation enhances action sound discrimination.
Barraclough, Nick E; Page, Steve A; Keefe, Bruce D
2017-01-01
Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.