Stimulus onset predictability modulates proactive action control in a Go/No-go task
Berchicci, Marika; Lucci, Giuliana; Spinelli, Donatella; Di Russo, Francesco
2015-01-01
The aim of the study was to evaluate whether the presence/absence of visual cues specifying the onset of an upcoming, action-related stimulus modulates pre-stimulus brain activity associated with the proactive control of goal-directed actions. To this aim, we asked 12 subjects to perform an equal-probability Go/No-go task with four stimulus configurations in two conditions: (1) uncued, i.e., without any external information about the timing of stimulus onset; and (2) cued, i.e., with external visual cues providing precise information about the timing of stimulus onset. During the task, both behavioral performance and event-related potentials (ERPs) were recorded. Behavioral results showed faster response times in the cued than the uncued condition, confirming the existing literature. ERPs showed novel results in the proactive control stage, which started about 1 s before the motor response. We observed a slowly rising prefrontal positive activity, more pronounced in the cued than the uncued condition. Further, pre-stimulus activity of premotor areas was also larger in the cued than the uncued condition. In the post-stimulus period, the P3 amplitude was enhanced when the time of stimulus onset was externally driven, confirming that external cueing enhances stimulus evaluation and response monitoring. Our results suggest that different pre-stimulus processes come into play in the two conditions. We hypothesize that the large prefrontal and premotor activities recorded with external visual cues index the monitoring of the external stimuli in order to finely regulate the action. PMID:25964751
Visual awareness suppression by pre-stimulus brain stimulation: a neural effect.
Jacobs, Christianne; Goebel, Rainer; Sack, Alexander T
2012-01-02
Transcranial magnetic stimulation (TMS) has established the functional relevance of early visual cortex (EVC) for visual awareness with great temporal specificity, non-invasively, in conscious human volunteers. Many studies have found a suppressive effect when TMS was applied over EVC 80-100 ms after the onset of the visual stimulus (post-stimulus TMS time window). Yet, a few studies found task performance to also suffer when TMS was applied even before visual stimulus presentation (pre-stimulus TMS time window). This pre-stimulus TMS effect, however, remains controversial, and its origin has mainly been ascribed to TMS-induced eye-blinking artifacts. Here, we applied chronometric TMS over EVC during the execution of a visual discrimination task, covering an exhaustive range of visual stimulus-locked TMS time windows ranging from 80 ms pre-stimulus to 300 ms post-stimulus onset. Electrooculographic (EOG) recordings, sham TMS, and vertex TMS controlled for different types of non-neural TMS effects. Our findings clearly reveal TMS-induced masking effects for both pre- and post-stimulus time windows, and for both objective visual discrimination performance and subjective visibility. Importantly, all effects proved to be still present after post hoc removal of eye-blink trials, suggesting a neural origin for the pre-stimulus TMS suppression effect on visual awareness. We speculate based on our data that TMS exerts its pre-stimulus effect via generation of a neural state which interacts with subsequent visual input. Copyright © 2011 Elsevier Inc. All rights reserved.
Visual and auditory accessory stimulus offset and the Simon effect.
Nishimura, Akio; Yokosawa, Kazuhiko
2010-10-01
We investigated how the disappearance of a task-irrelevant stimulus located on the right or left side affects right and left responses. Participants pressed a right or left response key on the basis of the color of a centrally located visual target. Visual (Experiment 1) or auditory (Experiment 2) task-irrelevant accessory stimuli appeared or disappeared at locations to the right or left of the central target. In Experiment 1, responses were faster when onset or offset of the visual accessory stimulus was spatially congruent with the response. In Experiment 2, responses were again faster when onset of the auditory accessory stimulus and the response were on the same side. However, responses were slightly slower when offset of the auditory accessory stimulus and the response were on the same side than when they were on opposite sides. These findings indicate that transient change information is crucial for a visual Simon effect, whereas sustained stimulation from an ongoing stimulus also contributes to an auditory Simon effect.
Kavcic, Voyko; Triplett, Regina L.; Das, Anasuya; Martin, Tim; Huxlin, Krystel R.
2015-01-01
Partial cortical blindness is a visual deficit caused by unilateral damage to the primary visual cortex, a condition previously considered beyond hopes of rehabilitation. However, recent data demonstrate that patients may recover both simple and global motion discrimination following intensive training in their blind field. The present experiments characterized motion-induced neural activity of cortically blind (CB) subjects prior to the onset of visual rehabilitation. This was done to provide information about visual processing capabilities available to mediate training-induced visual improvements. Visual Evoked Potentials (VEPs) were recorded from two experimental groups consisting of 9 CB subjects and 9 age-matched, visually-intact controls. VEPs were collected following lateralized stimulus presentation to each of the 4 visual field quadrants. VEP waveforms were examined for both stimulus-onset (SO) and motion-onset (MO) related components in postero-lateral electrodes. While stimulus presentation to intact regions of the visual field elicited normal SO-P1, SO-N1, SO-P2 and MO-N2 amplitudes and latencies in contralateral brain regions of CB subjects, these components were not observed contralateral to stimulus presentation in blind quadrants of the visual field. In damaged brain hemispheres, SO-VEPs were only recorded following stimulus presentation to intact visual field quadrants, via inter-hemispheric transfer. MO-VEPs were only recorded from damaged left brain hemispheres, possibly reflecting a native left/right asymmetry in inter-hemispheric connections. The present findings suggest that damaged brain hemispheres contain areas capable of responding to visual stimulation. However, in the absence of training or rehabilitation, these areas only generate detectable VEPs in response to stimulation of the intact hemifield of vision. PMID:25575450
Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions
Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.
2014-01-01
Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography (EMG) from the pectoralis and triceps muscles were recorded. The SS condition induced an increase in RFD and peak velocity and a reduction in movement onset and duration, in comparison with the VS and AS conditions. Activation onset of the pectoralis and triceps muscles occurred earlier for the SS than for the VS and AS conditions. These findings point to a specific enhancement effect of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967
Chen, Yi-Chuan; Spence, Charles
2013-01-01
The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.
Recalibration of the Multisensory Temporal Window of Integration Results from Changing Task Demands
Mégevand, Pierre; Molholm, Sophie; Nayak, Ashabari; Foxe, John J.
2013-01-01
The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands. PMID:23951203
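The "race model violation" criterion mentioned in this abstract refers to Miller's race model inequality: multisensory facilitation is inferred where the audiovisual reaction-time CDF exceeds the sum of the unimodal CDFs. A minimal sketch of that test follows; the reaction-time samples are hypothetical illustrations, not data from the study.

```python
def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times at time t (ms)."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violations(rt_a, rt_v, rt_av, times):
    """Time points where the audiovisual CDF exceeds Miller's bound,
    i.e. where F_AV(t) > min(1, F_A(t) + F_V(t))."""
    return [t for t in times
            if ecdf(rt_av, t) > min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))]

# Hypothetical reaction times (ms) for auditory, visual and audiovisual trials;
# the audiovisual responses are faster than either unimodal distribution.
rt_a = [320, 340, 360, 380, 400]
rt_v = [310, 330, 350, 370, 390]
rt_av = [240, 250, 260, 270, 280]
print(race_model_violations(rt_a, rt_v, rt_av, range(200, 450, 10)))
```

A non-empty result indicates facilitation beyond what two independent unimodal channels racing to threshold could produce, which is the sense in which the abstract treats violation as evidence of integration.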
ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke
2013-01-01
Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110
Differential effects of ongoing EEG beta and theta power on memory formation
Scholz, Sebastian; Schneider, Signe Luisa
2017-01-01
Recently, elevated ongoing pre-stimulus beta power (13–17 Hz) at encoding has been associated with subsequent memory formation for visual stimulus material. It is unclear whether this activity is merely specific to visual processing or whether it reflects a state facilitating general memory formation, independent of stimulus modality. To answer that question, the present study investigated the relationship between neural pre-stimulus oscillations and verbal memory formation in different sensory modalities. For that purpose, a within-subject design was employed to explore differences between successful and failed memory formation in the visual and auditory modality. Furthermore, associative memory was addressed by presenting the stimuli in combination with background images. Results revealed that similar EEG activity in the low beta frequency range (13–17 Hz) is associated with subsequent memory success, independent of stimulus modality. Elevated power prior to stimulus onset differentiated successful from failed memory formation. In contrast, differential effects between modalities were found in the theta band (3–7 Hz), with an increased oscillatory activity before the onset of later remembered visually presented words. In addition, pre-stimulus theta power dissociated between successful and failed encoding of associated context, independent of the stimulus modality of the item itself. We therefore suggest that increased ongoing low beta activity reflects a memory promoting state, which is likely to be moderated by modality-independent attentional or inhibitory processes, whereas high ongoing theta power is suggested as an indicator of the enhanced binding of incoming interlinked information. PMID:28192459
Visual Salience in the Change Detection Paradigm: The Special Role of Object Onset
Cole, Geoff G.; Kentridge, Robert W.; Heywood, Charles A.
2004-01-01
The relative efficacy with which appearance of a new object orients visual attention was investigated. At issue is whether the visual system treats onset as being of particular importance or only 1 of a number of stimulus events equally likely to summon attention. Using the 1-shot change detection paradigm, the authors compared detectability of…
Top-down knowledge modulates onset capture in a feedforward manner.
Becker, Stefanie I; Lewis, Amanda J; Axtens, Jenna E
2017-04-01
How do we select behaviourally important information from cluttered visual environments? Previous research has shown that both top-down, goal-driven factors and bottom-up, stimulus-driven factors determine which stimuli are selected. However, it is still debated when top-down processes modulate visual selection. According to a feedforward account, top-down processes modulate visual processing even before the appearance of any stimuli, whereas others claim that top-down processes modulate visual selection only at a late stage, via feedback processing. In line with such a dual stage account, some studies found that eye movements to an irrelevant onset distractor are not modulated by its similarity to the target stimulus, especially when eye movements are launched early (within 150 ms post-stimulus onset). However, in these studies the target transiently changed colour due to a colour after-effect that occurred during premasking, and the time course analyses were incomplete. The present study tested the feedforward account against the dual stage account in two eye tracking experiments, with and without colour after-effects (Exp. 1), as well as when the target colour varied randomly and observers were informed of the target colour with a word cue (Exp. 2). The results showed that top-down processes modulated the earliest eye movements to the onset distractors (<150 ms latencies), without incurring any costs for selection of target-matching distractors. These results unambiguously support a feedforward account of top-down modulation.
Effect of ethanol on the visual-evoked potential in rat: dynamics of ON and OFF responses.
Dulinskas, Redas; Buisas, Rokas; Vengeliene, Valentina; Ruksenas, Osvaldas
2017-01-01
The effect of acute ethanol administration on the flash visual-evoked potential (VEP) has been investigated in numerous studies. However, it is still unclear which brain structures are responsible for the differences observed in stimulus onset (ON) and offset (OFF) responses and how these responses are modulated by ethanol. The aim of our study was to investigate the pattern of ON and OFF responses in the visual system, measured as amplitude and latency of each VEP component following acute administration of ethanol. VEPs were recorded at the onset and offset of a 500 ms visual stimulus in anesthetized male Wistar rats. The effect of alcohol on VEP latency and amplitude was measured for one hour after injection of a 2 g/kg ethanol dose. Three VEP components (N63, P89 and N143) were analyzed. Our results showed that, except for component N143, ethanol increased the latency of both ON and OFF responses in a similar manner. The latency of N143 during the OFF response was not affected by ethanol, but its amplitude was reduced. Our study demonstrated that the activation of the visual system during the ON response to a 500 ms visual stimulus is qualitatively different from that during the OFF response. Ethanol interfered with processing of the stimulus duration at the level of the visual cortex and reduced the activation of cortical regions.
The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.
Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano
2017-12-01
Recent findings have shown that sounds improve visual detection in low-vision individuals when pairs of audiovisual stimuli are presented simultaneously and from the same spatial position. The present study aimed to investigate the temporal aspects of the audiovisual enhancement effect previously reported. Low-vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously with or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind, the visual stimulus (i.e., SOAs 0, ±250, ±400 ms). The visual detection enhancement was reduced in magnitude and limited only to the synchronous condition and to the condition in which the sound was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study seems to suggest that audiovisual interaction in low-vision individuals is highly modulated by top-down mechanisms.
Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray
2014-11-01
The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.
Yeh, Su-Ling; Liao, Hsin-I
2010-10-01
The contingent orienting hypothesis (Folk, Remington, & Johnston, 1992) states that attentional capture is contingent on top-down control settings induced by task demands. Past studies supporting this hypothesis have identified three kinds of top-down control settings: for target-specific features, for the strategy to search for a singleton, and for visual features in the target display as a whole. Previously, we have found stimulus-driven capture by onset that was not contingent on the first two kinds of settings (Yeh & Liao, 2008). The current study aims to test the third kind: the displaywide contingent orienting hypothesis (Gibson & Kelsey, 1998). Specifically, we ask whether an onset stimulus can still capture attention in the spatial cueing paradigm when attentional control settings for the displaywide onset of the target are excluded by making all letters in the target display emerge from placeholders. Results show that a preceding uninformative onset cue still captured attention to its location in a stimulus-driven fashion, whereas a color cue captured attention only when it was contingent on the setting for displaywide color. These results raise doubts as to the generality of the displaywide contingent orienting hypothesis and help delineate the boundary conditions on this hypothesis. Copyright © 2010 Elsevier B.V. All rights reserved.
Tapia, Evelina; Beck, Diane M
2014-01-01
A number of influential theories posit that visual awareness relies not only on the initial, stimulus-driven (i.e., feedforward) sweep of activation but also on recurrent feedback activity within and between brain regions. These theories of awareness draw heavily on data from masking paradigms in which visibility of one stimulus is reduced due to the presence of another stimulus. More recently transcranial magnetic stimulation (TMS) has been used to study the temporal dynamics of visual awareness. TMS over occipital cortex affects performance on visual tasks at distinct time points and in a manner that is comparable to visual masking. We draw parallels between these two methods and examine evidence for the neural mechanisms by which visual masking and TMS suppress stimulus visibility. Specifically, both methods have been proposed to affect feedforward as well as feedback signals when applied at distinct time windows relative to stimulus onset and as a result modify visual awareness. Most recent empirical evidence, moreover, suggests that while visual masking and TMS impact stimulus visibility comparably, the processes these methods affect may not be as similar as previously thought. In addition to reviewing both masking and TMS studies that examine feedforward and feedback processes in vision, we raise questions to guide future studies and further probe the necessary conditions for visual awareness.
The role of the right posterior parietal cortex in temporal order judgment.
Woo, Sung-Ho; Kim, Ki-Hyun; Lee, Kyoung-Min
2009-03-01
Perceived order of two consecutive stimuli may not correspond to the order of their physical onsets. Such a disagreement presumably results from a difference in the speed of stimulus processing toward central decision mechanisms. Since previous evidence suggests that the right posterior parietal cortex (PPC) plays a role in modulating the processing speed of a visual target, we applied single-pulse TMS over the region in 14 normal subjects while they judged the temporal order of two consecutive visual stimuli. Stimulus onset asynchrony (SOA) randomly varied between -100 and 100 ms in 20-ms steps (with a positive SOA when a target appeared in the right hemifield before the other in the left), and a point of subjective simultaneity was measured for individual subjects. TMS was time-locked at 50, 100, 150, and 200 ms after the onset of the first stimulus, and results in trials with TMS on the right PPC were compared with those in trials without TMS. TMS over the right PPC delayed the detection of a visual target in the contralateral (i.e., left) hemifield by 24 (±7 SE) ms and 16 (±4 SE) ms when the stimulation was given at 50 and 100 ms after the first target onset. In contrast, TMS on the left PPC was not effective. These results show that the right PPC is important for the timely detection of a target appearing in the left visual field, especially in competition with another target simultaneously appearing in the opposite field.
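The point of subjective simultaneity (PSS) measured in this kind of temporal order judgment task is the SOA at which both orders are reported equally often. One common estimate is to interpolate the psychometric function at the 50% point; a minimal sketch follows, where the SOA values and response proportions are hypothetical and the function name is illustrative (full psychometric fits, e.g. logistic or cumulative Gaussian, are also widely used).

```python
def pss_by_interpolation(soas, p_right_first):
    """Estimate the PSS as the SOA where P('right stimulus first') crosses 0.5,
    linearly interpolating between the two neighbouring SOAs."""
    pairs = list(zip(soas, p_right_first))
    for (s0, p0), (s1, p1) in zip(pairs, pairs[1:]):
        if (p0 - 0.5) * (p1 - 0.5) <= 0:  # 0.5 lies between p0 and p1
            if p1 == p0:
                return (s0 + s1) / 2
            return s0 + (0.5 - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("response proportions never cross 0.5")

# Hypothetical SOAs (ms; positive = right stimulus first) and the proportion
# of 'right first' reports at each SOA.
soas = [-100, -60, -20, 20, 60, 100]
p_right_first = [0.05, 0.10, 0.30, 0.60, 0.85, 0.95]
print(pss_by_interpolation(soas, p_right_first))
```

A nonzero PSS means one stimulus must physically lead for the pair to be perceived as simultaneous; a TMS-induced shift in this value is the kind of processing-speed change the study measures.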
Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.
Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas
2017-06-01
Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.
Electrophysiological evidence for phenomenal consciousness.
Revonsuo, Antti; Koivisto, Mika
2010-09-01
Recent evidence from event-related brain potentials (ERPs) lends support to two central theses in Lamme's theory. The earliest ERP correlate of visual consciousness appears over posterior visual cortex around 100-200 ms after stimulus onset. Its scalp topography and time window are consistent with recurrent processing in the visual cortex. This electrophysiological correlate of visual consciousness is mostly independent of later ERPs reflecting selective attention and working memory functions. Overall, the ERP evidence supports the view that phenomenal consciousness of a visual stimulus emerges earlier than access consciousness, and that attention and awareness are served by distinct neural processes.
Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan
2016-01-15
Post-perceptual cues can enhance visual short term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered as well as the mechanisms by which post-perceptual selective attention enhances short term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100 ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300 ms or 850 ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e., sensory stimulus characteristics were available for up to 850 ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short term memory encoding. While N700 could reflect visual post-processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.
Perceptual expertise and top-down expectation of musical notation engages the primary visual cortex.
Wong, Yetta Kwailing; Peng, Cynthia; Fratus, Kristyn N; Woodman, Geoffrey F; Gauthier, Isabel
2014-08-01
Most theories of visual processing propose that object recognition is achieved in higher visual cortex. However, we show that category selectivity for musical notation can be observed in the first ERP component called the C1 (measured 40-60 msec after stimulus onset) with music-reading expertise. Moreover, the C1 note selectivity was observed only when the stimulus category was blocked but not when the stimulus category was randomized. Under blocking, the C1 activity for notes predicted individual music-reading ability, and behavioral judgments of musical stimuli reflected music-reading skill. Our results challenge current theories of object recognition, indicating that the primary visual cortex can be selective for musical notation within the initial feedforward sweep of activity with perceptual expertise and with a testing context that is consistent with the expertise training, such as blocking the stimulus category for music reading.
Tapia, Evelina; Beck, Diane M.
2014-01-01
A number of influential theories posit that visual awareness relies not only on the initial, stimulus-driven (i.e., feedforward) sweep of activation but also on recurrent feedback activity within and between brain regions. These theories of awareness draw heavily on data from masking paradigms in which visibility of one stimulus is reduced due to the presence of another stimulus. More recently transcranial magnetic stimulation (TMS) has been used to study the temporal dynamics of visual awareness. TMS over occipital cortex affects performance on visual tasks at distinct time points and in a manner that is comparable to visual masking. We draw parallels between these two methods and examine evidence for the neural mechanisms by which visual masking and TMS suppress stimulus visibility. Specifically, both methods have been proposed to affect feedforward as well as feedback signals when applied at distinct time windows relative to stimulus onset and as a result modify visual awareness. Most recent empirical evidence, moreover, suggests that while visual masking and TMS impact stimulus visibility comparably, the processes these methods affect may not be as similar as previously thought. In addition to reviewing both masking and TMS studies that examine feedforward and feedback processes in vision, we raise questions to guide future studies and further probe the necessary conditions for visual awareness. PMID:25374548
Rolfs, Martin; Carrasco, Marisa
2012-01-01
Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ~300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength. PMID:23035086
Sjöstrand, F S
2002-01-01
Each rod is connected to one depolarizing and one hyperpolarizing bipolar cell. The synaptic connections of cone processes to each bipolar cell and presynaptically to the two rod-bipolar cell synapses establish conditions for lateral interaction at this level. Thus, the cones raise the threshold for bipolar cell depolarization, which is the basis for spatial brightness contrast enhancement and consequently for high visual acuity (Sjöstrand, 2001a). The cones facilitate ganglion cell depolarization by the bipolar cells, and cone input prevents horizontal cell blocking of depolarization of the depolarizing bipolar cell, extending rod vision to low illumination. The combination of reduced cone input and transient hyperpolarization of the hyperpolarizing bipolar cell at onset of a light stimulus facilitates ganglion cell depolarization extensively at stimulus onset, while no corresponding enhancement applies to the ganglion cell response at cessation of the stimulus, possibly establishing conditions for discrimination between on- vs. off-signals in the visual centre. Reduced cone input and hyperpolarization of the hyperpolarizing bipolar cell at onset of a light stimulus account for Granit's (1941) 'preexcitatory inhibition'. Presynaptic inhibition maintains a low transmitter concentration in the synaptic gap at rod-bipolar cell and bipolar cell-ganglion cell synapses, securing proportional and amplified postsynaptic responses at these synapses. Perfect timing of variations in facilitatory and inhibitory input to the ganglion cell confines the duration of ganglion cell depolarization at onset and at cessation of a light stimulus to that of a single synaptic transmission.
Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong
2013-10-11
Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following the VSC were faster and more accurate than those following the VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial and temporal orienting of attention were different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811
Preparatory attention in visual cortex.
Battistoni, Elisa; Stein, Timo; Peelen, Marius V
2017-05-01
Top-down attention is the mechanism that allows us to selectively process goal-relevant aspects of a scene while ignoring irrelevant aspects. A large body of research has characterized the effects of attention on neural activity evoked by a visual stimulus. However, attention also includes a preparatory phase before stimulus onset in which the attended dimension is internally represented. Here, we review neurophysiological, functional magnetic resonance imaging, magnetoencephalography, electroencephalography, and transcranial magnetic stimulation (TMS) studies investigating the neural basis of preparatory attention, both when attention is directed to a location in space and when it is directed to nonspatial stimulus attributes (content-based attention) ranging from low-level features to object categories. Results show that both spatial and content-based attention lead to increased baseline activity in neural populations that selectively code for the attended attribute. TMS studies provide evidence that this preparatory activity is causally related to subsequent attentional selection and behavioral performance. Attention thus acts by preactivating selective neurons in the visual cortex before stimulus onset. This appears to be a general mechanism that can operate on multiple levels of representation. We discuss the functional relevance of this mechanism, its limitations, and its relation to working memory, imagery, and expectation. We conclude by outlining open questions and future directions. © 2017 New York Academy of Sciences.
Kasties, Nils; Starosta, Sarah; Güntürkün, Onur; Stüttgen, Maik C.
2016-01-01
Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects’ strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells’ firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters which differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations, e.g., those requiring value-based choices. PMID:27762287
Coggan, David D; Baker, Daniel H; Andrews, Timothy J
2016-01-01
Brain-imaging studies have found distinct spatial and temporal patterns of response to different object categories across the brain. However, the extent to which these categorical patterns of response reflect higher-level semantic or lower-level visual properties of the stimulus remains unclear. To address this question, we measured patterns of EEG response to intact and scrambled images in the human brain. Our rationale for using scrambled images is that they have many of the visual properties found in intact images, but do not convey any semantic information. Images from different object categories (bottle, face, house) were briefly presented (400 ms) in an event-related design. A multivariate pattern analysis revealed categorical patterns of response to intact images emerged ∼80-100 ms after stimulus onset and were still evident when the stimulus was no longer present (∼800 ms). Next, we measured the patterns of response to scrambled images. Categorical patterns of response to scrambled images also emerged ∼80-100 ms after stimulus onset. However, in contrast to the intact images, distinct patterns of response to scrambled images were mostly evident while the stimulus was present (∼400 ms). Moreover, scrambled images were able to account for the variance in the intact images only at early stages of processing. This direct manipulation of visual and semantic content provides new insights into the temporal dynamics of object perception and the extent to which different stages of processing are dependent on lower-level or higher-level properties of the image.
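The multivariate pattern analysis summarized in this abstract can be illustrated with a toy sketch: at one simulated time point, a nearest-centroid classifier is trained on half of the trials and tested on the other half. All numbers, the channel count, and the split-half scheme below are assumptions for illustration only, not the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_trials = 32, 60
categories = np.repeat([0, 1, 2], n_trials // 3)   # bottle / face / house labels

# Single-time-point patterns: category-specific channel template plus noise.
templates = rng.normal(0.0, 1.0, (3, n_channels))
data = templates[categories] + rng.normal(0.0, 1.0, (n_trials, n_channels))

# Split-half cross-validation: centroids from one half, test on the other.
order = rng.permutation(n_trials)
train, test = order[: n_trials // 2], order[n_trials // 2 :]
centroids = np.stack([data[train][categories[train] == c].mean(axis=0)
                      for c in range(3)])

# Assign each held-out trial to the nearest centroid (Euclidean distance).
dists = np.linalg.norm(data[test][:, None, :] - centroids[None], axis=2)
accuracy = float(np.mean(dists.argmin(axis=1) == categories[test]))
```

In a full time-resolved analysis this classification would be repeated at every sample after stimulus onset to trace when categorical information emerges and decays.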
Stimulus-Driven Attentional Capture by a Static Discontinuity between Perceptual Groups
ERIC Educational Resources Information Center
Burnham, Bryan R.; Neely, James H.; Naginsky, Yelena; Thomas, Matthew
2010-01-01
After C. L. Folk, R. W. Remington, and J. C. Johnston (1992) proposed their contingent-orienting hypothesis, there has been an ongoing debate over whether purely stimulus-driven attentional capture can occur for visual events that are salient by virtue of a distinctive static property (as opposed to a dynamic property such as abrupt onset). The…
van Laarhoven, Thijs; Stekelenburg, Jeroen J; Vroomen, Jean
2017-04-15
A rare omission of a sound that is predictable by anticipatory visual information induces an early negative omission response (oN1) in the EEG during the period of silence where the sound was expected. It was previously suggested that the oN1 was primarily driven by the identity of the anticipated sound. Here, we examined the role of temporal prediction in conjunction with identity prediction of the anticipated sound in the evocation of the auditory oN1. With incongruent audiovisual stimuli (a video of a handclap that is consistently combined with the sound of a car horn) we demonstrate in Experiment 1 that a natural match in identity between the visual and auditory stimulus is not required for inducing the oN1, and that the perceptual system can adapt predictions to unnatural stimulus events. In Experiment 2 we varied either the auditory onset (relative to the visual onset) or the identity of the sound across trials in order to hamper temporal and identity predictions. Relative to the natural stimulus with correct auditory timing and matching audiovisual identity, the oN1 was abolished when either the timing or the identity of the sound could not be predicted reliably from the video. Our study demonstrates the flexibility of the perceptual system in predictive processing (Experiment 1) and also shows that precise predictions of timing and content are both essential elements for inducing an oN1 (Experiment 2). Copyright © 2017 Elsevier B.V. All rights reserved.
Properties of visual evoked potentials to onset of movement on a television screen.
Kubová, Z; Kuba, M; Hubacek, J; Vít, F
1990-08-01
In 80 subjects the dependence of movement-onset visual evoked potentials on several stimulation parameters was examined, and these responses were compared with pattern-reversal visual evoked potentials to verify the effectiveness of pattern movement for visual evoked potential acquisition. Horizontally moving vertical gratings were generated on a television screen. The typical movement-onset reactions were characterized by a single marked negative peak, with a peak time between 140 and 200 ms. In all subjects a stimulus duration of 100 ms was sufficient for acquisition of movement-onset visual evoked potentials; in some cases 20 ms sufficed. The higher velocity (5.6 degrees/s) produced higher amplitudes of movement-onset visual evoked potentials than did the lower velocity (2.8 degrees/s). In 80% of subjects, the more distinct reactions were found in the leads from lateral occipital areas (in 60% from the right hemisphere), with no correlation to the handedness of subjects. Unlike pattern-reversal visual evoked potentials, the movement-onset responses tended to be larger to extramacular stimulation (annular target of 5-9 degrees) than to macular stimulation (circular target of 5 degrees diameter).
Warabi, Tateo; Furuyama, Hiroyasu; Sugai, Eri; Kato, Masamichi; Yanagisawa, Nobuo
2018-01-01
This study examined how gait bradykinesia is changed by motor programming in Parkinson's disease. Thirty-five idiopathic Parkinson's disease patients and nine age-matched healthy subjects participated in this study. After the patients fixated on a visual-fixation target (conditioning-stimulus), the voluntary gait was triggered by a visual on-stimulus. While the subject walked on a level floor, soleus and tibialis anterior EMG latencies and the y-axis vector of the sole-floor reaction force were examined. Three paradigms were used to distinguish between the off-/on-latencies. The gap-task: the visual-fixation target was turned off 200 ms before the on-stimulus was engaged (resulting in a 200-ms gap); EMG latency was not influenced by the visual-fixation target. The overlap-task: the on-stimulus was turned on during the visual-fixation target presentation (200-ms overlap). The no-gap-task: the fixation target was turned off and the on-stimulus was turned on simultaneously. The onset of the EMG pause following the tonic soleus EMG was defined as the off-latency of posture (termination). The onset of the tibialis anterior EMG burst was defined as the on-latency of gait (initiation). In the gap-task, the on-latency was unchanged in all of the subjects. In Parkinson's disease, the visual-fixation target prolonged both the off- and on-latencies in the overlap-task. In all tasks, the off-latency was prolonged and the off-/on-latencies were unsynchronized, which changed the synergic movement to a slow, short-step gait. The synergy of gait was regulated by two independent sensory-motor programs at the off- and on-latency levels. In Parkinson's disease, the delayed gait initiation was due to the difficulty in terminating the sensory-motor program which controls the subject's fixation. Dynamic gait bradykinesia arose from the difficulty (long off-latency) in terminating the motor program of the prior posture/movement.
Kaneoke, Y; Urakawa, T; Kakigi, R
2009-05-19
We investigated whether direction information is represented in the population-level neural response evoked by visual motion stimuli, as measured by magnetoencephalography. Coherent motions with varied speed, varied direction, and different coherence levels were presented using random dot kinematography. Peak latency of responses to motion onset was inversely related to speed in all directions, as previously reported, but no significant effect of direction on latency was identified. Mutual information entropy (IE) calculated using four-direction response data increased significantly (>2.14) after motion onset in 41.3% of response data, and maximum IE occurred at approximately 20 ms after peak response latency. When response waveforms whose distributions of three parameters (peak amplitude, peak latency, and 75% waveform width) differed significantly across stimulus directions (by multivariate discriminant analysis) were analyzed, the stimulus directions of 87 waveforms (80.6%) were correctly estimated from these parameters. The correct estimation rate was unaffected by stimulus speed, but was affected by coherence level, even though both speed and coherence affected response amplitude similarly. Our results indicate that the speed and direction of stimulus motion are represented in distinct properties of the response waveform, suggesting that the human brain processes speed and direction separately, at least in part.
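The mutual-information computation described in this abstract can be sketched with a simple plug-in estimator on simulated four-direction data. The binning scheme, sample sizes, and amplitude model below are assumptions for illustration, not the authors' method:

```python
import numpy as np

def mutual_information(directions, responses, n_bins=8):
    """Plug-in estimate of I(direction; response) in bits.

    Responses are discretized into equal-width bins, then
    I = H(D) + H(R) - H(D, R) is computed from empirical frequencies.
    """
    edges = np.histogram_bin_edges(responses, bins=n_bins)
    r = np.digitize(responses, edges[1:-1])      # binned response, 0..n_bins-1
    d = np.asarray(directions)

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log2(p)))

    joint = [f"{a}|{b}" for a, b in zip(d, r)]   # paired (direction, bin) labels
    return entropy(d) + entropy(r) - entropy(joint)

# Toy data: four motion directions, each with a different mean response
# amplitude (arbitrary units), 50 trials per direction.
rng = np.random.default_rng(0)
directions = np.repeat([0, 90, 180, 270], 50)
amplitudes = np.concatenate([rng.normal(10 + 2 * k, 1.0, 50) for k in range(4)])
mi = mutual_information(directions, amplitudes)  # bits, at most log2(4) = 2
```

With well-separated amplitude distributions the estimate approaches the 2-bit ceiling for four equiprobable directions; overlapping distributions drive it toward zero.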
The role of prestimulus activity in visual extinction
Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl
2013-01-01
Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398
Enhanced Access to Early Visual Processing of Perceptual Simultaneity in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Falter, Christine M.; Braeutigam, Sven; Nathan, Roger; Carrington, Sarah; Bailey, Anthony J.
2013-01-01
We compared judgements of the simultaneity or asynchrony of visual stimuli in individuals with autism spectrum disorders (ASD) and typically-developing controls using Magnetoencephalography (MEG). Two vertical bars were presented simultaneously or non-simultaneously with two different stimulus onset delays. Participants with ASD distinguished…
Kaganovich, Natalya; Schumaker, Jennifer
2016-01-01
Sensitivity to the temporal relationship between auditory and visual stimuli is key to efficient audiovisual integration. However, even adults vary greatly in their ability to detect audiovisual temporal asynchrony. What underlies this variability is currently unknown. We recorded event-related potentials (ERPs) while participants performed a simultaneity judgment task on a range of audiovisual (AV) and visual-auditory (VA) stimulus onset asynchronies (SOAs) and compared ERP responses in good and poor performers to the 200 ms SOA, which showed the largest individual variability in the number of synchronous perceptions. Analysis of ERPs to the VA200 stimulus yielded no significant results. However, those individuals who were more sensitive to the AV200 SOA had significantly more positive voltage between 210 and 270 ms following the sound onset. In a follow-up analysis, we showed that the mean voltage within this window predicted approximately 36% of variability in sensitivity to AV temporal asynchrony in a larger group of participants. The relationship between the ERP measure in the 210-270 ms window and accuracy on the simultaneity judgment task also held for two other AV SOAs with significant individual variability - 100 and 300 ms. Because the identified window was time-locked to the onset of sound in the AV stimulus, we conclude that sensitivity to AV temporal asynchrony is shaped to a large extent by the efficiency in the neural encoding of sound onsets. PMID:27094850
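The reported "approximately 36% of variability" corresponds to the R² of regressing behavioral sensitivity on the mean ERP voltage in the 210-270 ms window. A minimal sketch on simulated data (the effect size, sample size, and variable scales are assumptions, not the study's values):

```python
import numpy as np

# Simulated participants: sensitivity to AV asynchrony is partly linearly
# related to the mean ERP voltage in the 210-270 ms post-sound window.
rng = np.random.default_rng(1)
n = 40
mean_voltage = rng.normal(0.0, 2.0, n)                   # microvolts (simulated)
sensitivity = 0.3 * mean_voltage + rng.normal(0.0, 1.0, n)

# Ordinary least-squares fit and the variance explained (R^2).
slope, intercept = np.polyfit(mean_voltage, sensitivity, 1)
predicted = slope * mean_voltage + intercept
ss_res = np.sum((sensitivity - predicted) ** 2)
ss_tot = np.sum((sensitivity - sensitivity.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

For a simple regression like this, R² equals the squared Pearson correlation between the ERP measure and the behavioral score.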
Iconic-memory processing of unfamiliar stimuli by retarded and nonretarded individuals.
Hornstein, H A; Mosley, J L
1979-07-01
The iconic-memory processing of unfamiliar stimuli was undertaken employing a visually cued partial-report procedure and a visual masking procedure. Subjects viewed stimulus arrays consisting of six Chinese characters arranged in a circular pattern for 100 msec. At variable stimulus-onset asynchronies, a teardrop indicator or an annulus was presented for 100 msec. Immediately upon cue offset, the subject was required to recognize the cued stimulus from a card containing a single character. Retarded subjects' performance was comparable to that of MA- and CA-matched subjects. We suggested that earlier reported iconic-memory differences between retarded and nonretarded individuals may be attributable to processes other than iconic memory.
On the Role of Mentalizing Processes in Aesthetic Appreciation: An ERP Study.
Beudt, Susan; Jacobsen, Thomas
2015-01-01
We used event-related brain potentials to explore the impact of mental perspective taking on processes of aesthetic appreciation of visual art. Participants (non-experts) were first presented with information about the life and attitudes of a fictitious artist. Subsequently, they were cued trial-wise to make an aesthetic judgment regarding an image depicting a piece of abstract art either from their own perspective or from the imagined perspective of the fictitious artist [i.e., theory of mind (ToM) condition]. Positive self-referential judgments were made more quickly and negative self-referential judgments were made more slowly than the corresponding judgments from the imagined perspective. Event-related potential analyses revealed significant differences between the two tasks both within the preparation period (i.e., during the cue-stimulus interval) and within the stimulus presentation period. For the ToM condition we observed a relative centro-parietal negativity during the preparation period (700-330 ms preceding picture onset) and a relative centro-parietal positivity during the stimulus presentation period (700-1100 ms after stimulus onset). These findings suggest that different subprocesses are involved in aesthetic appreciation and judgment of visual abstract art from one's own vs. from another person's perspective.
Modality-dependent effect of motion information in sensory-motor synchronised tapping.
Ono, Kentaro
2018-05-14
Synchronised action is important for everyday life. Generally, the auditory domain is more sensitive for coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remains unknown. This study compared tapping accuracy with a stationary or moving stimulus in both auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired audio-motor synchronisation, an advantage of auditory modality compared to visual modality still existed. These findings are likely the result of higher temporal resolution in the auditory domain, which is likely due to the physiological and structural differences in the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.
Fukatsu, Y; Miyake, Y; Sugita, S; Saito, A; Watanabe, S
1990-11-01
To analyze the electrically evoked response (EER) in relation to the central visual pathway, the authors studied the wave patterns and peak latencies of the EER in 35 anesthetized adult cats. The cat EER showed two early positive waves on outward current (cornea cathode) stimulation and three or four early positive waves on inward current (cornea anode) stimulation. These waves were recorded within 50 ms after stimulus onset and were the most consistent components of the cat EER. The stimulus threshold for the EER showed less individual variation than the amplitude. The difference in stimulus threshold between outward and inward current stimulation was also essentially negligible. The stimulus threshold was higher for early components than for late components. The peak latency of the EER became shorter and the amplitude higher as the stimulus intensity was increased. However, this tendency was reversed and some wavelets started to appear when the stimulus was extremely strong. Recording with short stimulus durations and bipolar electrodes enabled us to reduce the electrical artifact in the EER. These results obtained from cats were compared with those of humans and rabbits.
Identifiable Orthographically Similar Word Primes Interfere in Visual Word Identification
ERIC Educational Resources Information Center
Burt, Jennifer S.
2009-01-01
University students participated in five experiments concerning the effects of unmasked, orthographically similar, primes on visual word recognition in the lexical decision task (LDT) and naming tasks. The modal prime-target stimulus onset asynchrony (SOA) was 350 ms. When primes were words that were orthographic neighbors of the targets, and…
Gaspelin, Nicholas; Ruthruff, Eric; Lien, Mei-Ching
2016-08-01
Researchers are sharply divided regarding whether irrelevant abrupt onsets capture spatial attention. Numerous studies report that they do and a roughly equal number report that they do not. This puzzle has inspired numerous attempts at reconciliation, none gaining general acceptance. The authors propose that abrupt onsets routinely capture attention, but the size of observed capture effects depends critically on how long attention dwells on distractor items which, in turn, depends critically on search difficulty. In a series of spatial cuing experiments, the authors show that irrelevant abrupt onsets produce robust capture effects when visual search is difficult, but not when search is easy. Critically, this effect occurs even when search difficulty varies randomly across trials, preventing any strategic adjustments of the attentional set that could modulate probability of capture by the onset cue. The authors argue that easy visual search provides an insensitive test for stimulus-driven capture by abrupt onsets: even though onsets truly capture attention, the effects of capture can be latent. This observation helps to explain previous failures to find capture by onsets, nearly all of which used an easy visual search. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
The time course of protecting a visual memory representation from perceptual interference
van Moorselaar, Dirk; Gunseli, Eren; Theeuwes, Jan; Olivers, Christian N. L.
2015-01-01
Cueing a remembered item during the delay of a visual memory task leads to enhanced recall of the cued item compared to when an item is not cued. This cueing benefit has been proposed to reflect attention within visual memory being shifted from a distributed mode to a focused mode, thus protecting the cued item against perceptual interference. Here we investigated the dynamics of building up this mnemonic protection against visual interference by systematically varying the stimulus onset asynchrony (SOA) between cue onset and a subsequent visual mask in an orientation memory task. Experiment 1 showed that a cue counteracted the deteriorating effect of pattern masks. Experiment 2 demonstrated that building up this protection is a continuous process that is completed in approximately half a second after cue onset. The similarities between shifting attention in perceptual and remembered space are discussed. PMID:25628555
Cid-Fernández, Susana; Lindín, Mónica; Díaz, Fernando
2014-01-01
The main aim of the present study was to assess whether aging modulates the effects of involuntary capture of attention by novel stimuli on performance, and on event-related potentials (ERPs) associated with target processing (N2b and P3b) and subsequent response processes (stimulus-locked Lateralized Readiness Potential, sLRP, and response-locked Lateralized Readiness Potential, rLRP). An auditory-visual distraction-attention task was performed by 77 healthy participants, divided into three age groups (Young: 21–29, Middle-aged: 51–64, Old: 65–84 years old). Participants were asked to attend to visual stimuli and to ignore auditory stimuli. Aging was associated with slowed reaction times, slowed target stimulus processing in working memory (WM; longer N2b and P3b latencies), and slowed selection and preparation of the motor response (longer sLRP and earlier rLRP onset latencies). In the novel relative to the standard condition we observed, in all three age groups: (1) a distraction effect, reflected in a slowing of reaction times, of stimulus categorization in WM (longer P3b latency), and of motor response selection (longer sLRP onset latency); (2) a facilitation effect on response preparation (later rLRP onset latency); and (3) an increase in arousal (larger amplitudes of all ERPs evaluated, except for N2b amplitude in the Old group). A distraction effect on stimulus evaluation processes (longer N2b latency) was also observed, but only in middle-aged and old participants, indicating that attentional capture slows stimulus evaluation in WM from middle age onwards (from 50 years, without differences between middle-aged and older adults), but not in young adults. PMID:25294999
Spatio-Temporal Brain Mapping of Motion-Onset VEPs Combined with fMRI and Retinotopic Maps
Pitzalis, Sabrina; Strappini, Francesca; De Gasperis, Marco; Bultrini, Alessandro; Di Russo, Francesco
2012-01-01
Neuroimaging studies have identified several motion-sensitive visual areas in the human brain, but the time course of their activation cannot be measured with these techniques. In the present study, we combined electrophysiological and neuroimaging methods (including retinotopic brain mapping) to determine the spatio-temporal profile of motion-onset visual evoked potentials for slow and fast motion stimuli and to localize its neural generators. We found that cortical activity initiates in the primary visual area (V1) for slow stimuli, peaking 100 ms after the onset of motion. Subsequently, activity in the mid-temporal motion-sensitive areas, MT+, peaked at 120 ms, followed by peaks in activity in the more dorsal area, V3A, at 160 ms and the lateral occipital complex at 180 ms. Approximately 250 ms after stimulus onset, activity was predominant in area V6 along the parieto-occipital sulcus. Finally, at 350 ms (100 ms after motion offset), brain activity was visible again in area V1. For fast motion stimuli, the spatio-temporal brain pattern was similar, except that the first activity was detected at 70 ms in area MT+. Comparing functional magnetic resonance data for slow vs. fast motion, we found signs of slow-fast motion stimulus topography along the posterior brain in at least three cortical regions (MT+, V3A and LOR). PMID:22558222
Effects of Temporal Integration on the Shape of Visual Backward Masking Functions
ERIC Educational Resources Information Center
Francis, Gregory; Cho, Yang Seok
2008-01-01
Many studies of cognition and perception use a visual mask to explore the dynamics of information processing of a target. Especially important in these applications is the time between the target and mask stimuli. A plot of some measure of target visibility against stimulus onset asynchrony is called a masking function, which can sometimes be…
On the Role of Mentalizing Processes in Aesthetic Appreciation: An ERP Study
Beudt, Susan; Jacobsen, Thomas
2015-01-01
We used event-related brain potentials to explore the impact of mental perspective taking on processes of aesthetic appreciation of visual art. Participants (non-experts) were first presented with information about the life and attitudes of a fictitious artist. Subsequently, they were cued trial-wise to make an aesthetic judgment regarding an image depicting a piece of abstract art either from their own perspective or from the imagined perspective of the fictitious artist [i.e., theory of mind (ToM) condition]. Positive self-referential judgments were made more quickly and negative self-referential judgments were made more slowly than the corresponding judgments from the imagined perspective. Event-related potential analyses revealed significant differences between the two tasks both within the preparation period (i.e., during the cue-stimulus interval) and within the stimulus presentation period. For the ToM condition we observed a relative centro-parietal negativity during the preparation period (700–330 ms preceding picture onset) and a relative centro-parietal positivity during the stimulus presentation period (700–1100 ms after stimulus onset). These findings suggest that different subprocesses are involved in aesthetic appreciation and judgment of visual abstract art from one’s own vs. from another person’s perspective. PMID:26617506
Coppens, Milou J M; Roelofs, Jolanda M B; Donkers, Nicole A J; Nonnekes, Jorik; Geurts, Alexander C H; Weerdesteyn, Vivian
2018-05-14
A startling acoustic stimulus (SAS) involuntarily releases prepared movements at accelerated latencies, known as the StartReact effect. Previous work has demonstrated intact StartReact in paretic upper extremity movements in people after stroke, suggesting preserved motor preparation. The question remains whether motor preparation of lower extremity movements is also unaffected after stroke. Here, we investigated StartReact effects on ballistic lower extremity movements and on automatic postural responses (APRs) following perturbations to standing balance. These APRs are particularly interesting as they are critical to prevent a fall following balance perturbations, but show substantial delays and poor muscle coordination after stroke. Twelve chronic stroke patients and 12 healthy controls performed voluntary ankle dorsiflexion movements in response to a visual stimulus, and responded to backward balance perturbations evoking APRs. Twenty-five percent of all trials contained a SAS (120 dB) delivered simultaneously with the visual stimulus or balance perturbation. As expected, in the absence of a SAS, muscle and movement onset latencies at the paretic side were delayed compared to the non-paretic leg and to controls. The SAS accelerated ankle dorsiflexion onsets in both legs of the stroke subjects and in controls. Following perturbations, the SAS accelerated bilateral APR onsets not only in controls, but for the first time, we also demonstrated this effect in people after stroke. Moreover, APR inter- and intra-limb muscle coordination was rather weak in our stroke subjects, but substantially improved when the SAS was applied. These findings show preserved movement preparation, suggesting that there is residual (subcortical) capacity for motor recovery.
Rolke, Bettina; Festl, Freya; Seibold, Verena C
2016-11-01
We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.
Incomplete cortical reorganization in macular degeneration.
Liu, Tingting; Cheung, Sing-Hang; Schuchard, Ronald A; Glielmi, Christopher B; Hu, Xiaoping; He, Sheng; Legge, Gordon E
2010-12-01
Activity in regions of the visual cortex corresponding to central scotomas in subjects with macular degeneration (MD) is considered evidence for functional reorganization in the brain. Three unresolved issues related to cortical activity in subjects with MD were addressed: Is the cortical response to stimuli presented to the preferred retinal locus (PRL) different from other retinal loci at the same eccentricity? What effect does the role of age of onset and etiology of MD have on cortical responses? How do functional responses in an MD subject's visual cortex vary for task and stimulus conditions? Eight MD subjects-four with age-related onset (AMD) and four with juvenile onset (JMD)-and two age-matched normal vision controls, participated in three testing conditions while undergoing functional magnetic resonance imaging (fMRI). First, subjects viewed a small stimulus presented at the PRL compared with a non-PRL control location to investigate the role of the PRL. Second, they viewed a full-field flickering checkerboard compared with a small stimulus in the original fovea to investigate brain activation with passive viewing. Third, they performed a one-back task with scene images to investigate brain activation with active viewing. A small stimulus at the PRL generated more extensive cortical activation than at a non-PRL location, but neither yielded activation in the foveal cortical projection. Both passive and active viewing of full-field stimuli left a silent zone at the posterior pole of the occipital cortex, implying a lack of complete cortical reorganization. The silent zone was smaller in the task requiring active viewing compared with the task requiring passive viewing, especially in JMD subjects. The PRL for MD subjects has more extensive cortical representation than a retinal region with matched eccentricity. There is evidence for incomplete functional reorganization of early visual cortex in both JMD and AMD. 
Functional reorganization is more prominent in JMD. Feedback signals, possibly associated with attention, play an important role in the reorganization.
Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A
2013-06-01
Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.
Temporal Integration Windows in Neural Processing and Perception Aligned to Saccadic Eye Movements.
Wutz, Andreas; Muschter, Evelyn; van Koningsbruggen, Martijn G; Weisz, Nathan; Melcher, David
2016-07-11
When processing dynamic input, the brain balances the opposing needs of temporal integration and sensitivity to change. We hypothesized that the visual system might resolve this challenge by aligning integration windows to the onset of newly arriving sensory samples. In a series of experiments, human participants observed the same sequence of two displays separated by a brief blank delay when performing either an integration or segregation task. First, using magneto-encephalography (MEG), we found a shift in the stimulus-evoked time courses by a 150-ms time window between task signals. After stimulus onset, multivariate pattern analysis (MVPA) decoding of task in occipital-parietal sources remained above chance for almost 1 s, and the task-decoding pattern interacted with task outcome. In the pre-stimulus period, the oscillatory phase in the theta frequency band was informative about both task processing and behavioral outcome for each task separately, suggesting that the post-stimulus effects were caused by a theta-band phase shift. Second, when aligning stimulus presentation to the onset of eye fixations, there was a similar phase shift in behavioral performance according to task demands. In both MEG and behavioral measures, task processing was optimal first for segregation and then integration, with opposite phase in the theta frequency range (3-5 Hz). The best fit to neurophysiological and behavioral data was given by a dampened 3-Hz oscillation from stimulus or eye fixation onset. The alignment of temporal integration windows to input changes found here may serve to actively organize the temporal processing of continuous sensory input. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
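The dampened 3-Hz oscillation that best fit the neurophysiological and behavioral data above can be sketched as a simple function. The exponential-decay form and all parameter values below are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def damped_oscillation(t, amp, freq, decay, phase, baseline):
    """Exponentially damped sinusoid from stimulus (or fixation) onset."""
    return baseline + amp * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t + phase)

# Time axis: 0-1 s after onset.
t = np.linspace(0.0, 1.0, 201)

# Hypothetical integration-task performance: 3 Hz carrier decaying over ~0.5 s.
y_integration = damped_oscillation(t, amp=0.1, freq=3.0, decay=2.0,
                                   phase=0.0, baseline=0.75)

# Segregation task in opposite phase (offset of pi), as the abstract describes.
y_segregation = damped_oscillation(t, amp=0.1, freq=3.0, decay=2.0,
                                   phase=np.pi, baseline=0.75)
```

With opposite phases, the two hypothetical performance time courses mirror each other around the shared baseline, matching the alternation between segregation-optimal and integration-optimal moments described above.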
Encoding of Target Detection during Visual Search by Single Neurons in the Human Brain.
Wang, Shuo; Mamelak, Adam N; Adolphs, Ralph; Rutishauser, Ueli
2018-06-08
Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While classically associated with long-term memory, recent lesion and neuroimaging studies show that the MTL also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ∼200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also task-specified stimulus relevance due to top-down goal relevance. Copyright © 2018 Elsevier Ltd. All rights reserved.
Pastukhov, Alexander
2016-02-01
We investigated the relation between perception and sensory memory of multi-stable structure-from-motion displays. The latter is an implicit visual memory that reflects a recent history of perceptual dominance and influences only the initial perception of multi-stable displays. First, we established the earliest time point when the direction of an illusory rotation can be reversed after the display onset (29-114 ms). Because our display manipulation did not bias perception towards a specific direction of illusory rotation but only signaled the change in motion, this means that perceptual dominance was established no later than 29-114 ms after stimulus onset. Second, we used the orientation-selectivity of sensory memory to establish which display orientation produced the strongest memory trace and when this orientation was presented during the preceding prime interval (80-140 ms). Surprisingly, both estimates point towards the time interval immediately after the display onset, indicating that perception and sensory memory form at approximately the same time. This suggests a tighter integration between perception and sensory memory than previously thought, warrants a reconsideration of the role of sensory memory in visual perception, and indicates that sensory memory could be a unique behavioral correlate of the earlier perceptual inference that can be studied post hoc.
Modulation of Neuronal Responses by Exogenous Attention in Macaque Primary Visual Cortex.
Wang, Feng; Chen, Minggui; Yan, Yin; Zhaoping, Li; Li, Wu
2015-09-30
Visual perception is influenced by attention deployed voluntarily or triggered involuntarily by salient stimuli. Modulation of visual cortical processing by voluntary or endogenous attention has been extensively studied, but much less is known about how involuntary or exogenous attention affects responses of visual cortical neurons. Using implanted microelectrode arrays, we examined the effects of exogenous attention on neuronal responses in the primary visual cortex (V1) of awake monkeys. A bright annular cue was flashed either around the receptive fields of recorded neurons or in the opposite visual field to capture attention. A subsequent grating stimulus probed the cue-induced effects. In a fixation task, when the cue-to-probe stimulus onset asynchrony (SOA) was <240 ms, the cue induced a transient increase of neuronal responses to the probe at the cued location during 40-100 ms after the onset of neuronal responses to the probe. This facilitation diminished and disappeared after repeated presentations of the same cue but recurred for a new cue of a different color. In another task to detect the probe, relative shortening of monkey's reaction times for the validly cued probe depended on the SOA in a way similar to the cue-induced V1 facilitation, and the behavioral and physiological cueing effects remained after repeated practice. Flashing two cues simultaneously in the two opposite visual fields weakened or diminished both the physiological and behavioral cueing effects. Our findings indicate that exogenous attention significantly modulates V1 responses and that the modulation strength depends on both novelty and task relevance of the stimulus. Significance statement: Visual attention can be involuntarily captured by a sudden appearance of a conspicuous object, allowing rapid reactions to unexpected events of significance. The current study discovered a correlate of this effect in monkey primary visual cortex. 
An abrupt, salient flash enhanced neuronal responses, and shortened the animal's reaction time, to a subsequent visual probe stimulus at the same location. However, the enhancement of the neural responses diminished after repeated exposures to this flash if the animal was not required to react to the probe. Moreover, a second, simultaneous flash at another location weakened the neuronal and behavioral effects of the first one. These findings revealed, beyond the observations reported so far, the effects of exogenous attention in the brain. Copyright © 2015 the authors.
Coherent modulation of stimulus colour can affect visually induced self-motion perception.
Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji
2010-01-01
The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. The colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that the observer's perception of illumination is critical for self-motion perception, and that rapid variation of perceived illumination impairs the reliability of visual information in determining self-motion.
Setting and changing feature priorities in visual short-term memory.
Kalogeropoulou, Zampeta; Jagadeesh, Akshay V; Ohl, Sven; Rolfs, Martin
2017-04-01
Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM protecting information that would otherwise be forgotten.
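The performance measures above (guess rate and report precision) reflect the standard mixture-model view of report errors in visual short-term memory. The simulation below is a minimal sketch of that decomposition; all parameter values, and the use of a Gaussian rather than a circular error distribution, are assumptions for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_report_errors(n_trials, guess_rate, sd_deg):
    """Orientation report errors as a mixture of target-centred
    Gaussian responses and uniform random guesses over +/- 90 deg."""
    errors = rng.normal(0.0, sd_deg, n_trials)
    guesses = rng.random(n_trials) < guess_rate
    errors[guesses] = rng.uniform(-90.0, 90.0, guesses.sum())
    return errors

# Assumed parameters: a valid retro-cue reduces guessing, while an
# invalid pre-cue increases it; report precision is held constant.
valid_cue = simulate_report_errors(10_000, guess_rate=0.05, sd_deg=12.0)
invalid_cue = simulate_report_errors(10_000, guess_rate=0.30, sd_deg=12.0)

# More guessing widens the overall spread of report errors.
spread_valid, spread_invalid = valid_cue.std(), invalid_cue.std()
```

Fitting such a mixture to observed errors (rather than simulating it, as here) is how guess rate and precision are typically separated in cueing studies of this kind.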
Unique sudden onsets capture attention even when observers are in feature-search mode.
Spalek, Thomas M; Yanko, Matthew R; Poiese, Paola; Lagroix, Hayley E P
2012-01-01
Two sources of attentional capture have been proposed: stimulus-driven (exogenous) and goal-oriented (endogenous). A resolution between these modes of capture has not been straightforward. Even such a clearly exogenous event as the sudden onset of a stimulus can be said to capture attention endogenously if observers operate in singleton-detection mode rather than feature-search mode. In four experiments we show that a unique sudden onset captures attention even when observers are in feature-search mode. The displays were rapid serial visual presentation (RSVP) streams of differently coloured letters with the target letter defined by a specific colour. Distractors were four #s, one of the target colour, surrounding one of the non-target letters. Capture was substantially reduced when the onset of the distractor array was not unique because it was preceded by other sets of four grey # arrays in the RSVP stream. This provides unambiguous evidence that attention can be captured both exogenously and endogenously within a single task.
Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong
2017-02-01
Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration changed from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alteration with the expansion of SOA was similar to that for younger adults; however, older adults showed significantly delayed onset of the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that responses were slowed in older adults and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.
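The SOA-limited facilitation described above can be captured in a trivial predicate. The ±50 ms window is taken loosely from the abstract, and the sign convention (negative SOA = auditory leads) is an assumption for illustration.

```python
def in_integration_window(soa_ms: float, window_ms: float = 50.0) -> bool:
    """True when the auditory-visual asynchrony falls inside the
    assumed facilitative time-window-of-integration (+/- window_ms)."""
    return abs(soa_ms) <= window_ms

# Negative SOA: auditory leads; positive SOA: visual leads (assumed convention).
facilitative = [soa for soa in (-150, -50, 0, 50, 150)
                if in_integration_window(soa)]
```

Under these assumptions only the -50, 0, and +50 ms SOAs fall inside the facilitative window, while ±150 ms SOAs fall outside it, mirroring the enhancement-to-depression shift reported above.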
Nieuwenstein, Mark; Wyble, Brad
2014-06-01
While studies on visual memory commonly assume that the consolidation of a visual stimulus into working memory is interrupted by a trailing mask, studies on dual-task interference suggest that the consolidation of a stimulus can continue for several hundred milliseconds after a mask. As a result, estimates of the time course of working memory consolidation differ by more than an order of magnitude. Here, we contrasted these opposing views by examining if and for how long the processing of a masked display of visual stimuli can be disturbed by a trailing 2-alternative forced choice task (2-AFC; a color discrimination task or a visual or auditory parity judgment task). The results showed that the presence of the 2-AFC task produced a pronounced retroactive interference effect that dissipated across stimulus onset asynchronies of 250-1,000 ms, indicating that the processing elicited by the 2-AFC task interfered with the gradual consolidation of the earlier shown stimuli. Furthermore, this interference effect occurred regardless of whether the to-be-remembered stimuli comprised a string of letters or an unfamiliar complex visual shape, and it occurred regardless of whether these stimuli were masked. Conversely, the interference effect was reduced when the memory load for the 1st task was reduced, or when the 2nd task was a color detection task that did not require decision making. Taken together, these findings show that the formation of a durable and consciously accessible working memory trace for a briefly shown visual stimulus can be disturbed by a trailing 2-AFC task for up to several hundred milliseconds after the stimulus has been masked. By implication, the current findings challenge the common view that working memory consolidation involves an immutable central processing bottleneck, and they also make clear that consolidation does not stop when a stimulus is masked. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Neurophysiology underlying influence of stimulus reliability on audiovisual integration.
Shatzer, Hannah; Shen, Stanley; Kerlin, Jess R; Pitt, Mark A; Shahin, Antoine J
2018-01-24
We tested the predictions of the dynamic reweighting model (DRM) of audiovisual (AV) speech integration, which posits that spectrotemporally reliable (informative) AV speech stimuli induce a reweighting of processing from low-level to high-level auditory networks. This reweighting decreases sensitivity to acoustic onsets and in turn increases tolerance to AV onset asynchronies (AVOA). EEG was recorded while subjects watched videos of a speaker uttering trisyllabic nonwords that varied in spectrotemporal reliability and asynchrony of the visual and auditory inputs. Subjects judged the stimuli as in-sync or out-of-sync. Results showed that subjects exhibited greater AVOA tolerance for non-blurred than for blurred visual speech and for less degraded than for more degraded acoustic speech. Increased AVOA tolerance was reflected in reduced amplitude of the P1-P2 auditory evoked potentials, a neurophysiological indication of reduced sensitivity to acoustic onsets and successful AV integration. There was also sustained visual alpha band (8-14 Hz) suppression (desynchronization) following acoustic speech onsets for non-blurred vs. blurred visual speech, consistent with continuous engagement of the visual system as the speech unfolds. The current findings suggest that increased spectrotemporal reliability of acoustic and visual speech promotes robust AV integration, partly by suppressing sensitivity to acoustic onsets, in support of the DRM's reweighting mechanism. Increased visual signal reliability also sustains the engagement of the visual system with the auditory system to maintain alignment of information across modalities. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Choe, Kyoung Whan; Blake, Randolph
2014-01-01
Primary visual cortex (V1) forms the initial cortical representation of objects and events in our visual environment, and it distributes information about that representation to higher cortical areas within the visual hierarchy. Decades of work have established tight linkages between neural activity occurring in V1 and features comprising the retinal image, but it remains debatable how that activity relates to perceptual decisions. An actively debated question is the extent to which V1 responses determine, on a trial-by-trial basis, perceptual choices made by observers. By inspecting the population activity of V1 from human observers engaged in a difficult visual discrimination task, we tested one essential prediction of the deterministic view: if choice-related activity exists in V1, it should occur in the same ensemble of neurons, and at the same time, as stimulus-related activity. Our findings do not support this prediction: while cortical activity signifying the variability in choice behavior was indeed found in V1, that activity was dissociated from activity representing stimulus differences relevant to the task, being advanced in time and carried by a different neural ensemble. The spatiotemporal dynamics of population responses suggest that short-term priors, perhaps formed in higher cortical areas involved in perceptual inference, act to modulate V1 activity prior to stimulus onset without modifying subsequent activity that actually represents stimulus features within V1. PMID:24523561
Visual adaptation and novelty responses in the superior colliculus
Boehnke, Susan E.; Berg, David J.; Marino, Robert M.; Baldi, Pierre F.; Itti, Laurent; Munoz, Douglas P.
2011-01-01
The brain's ability to ignore repeating, often redundant, information while enhancing novel information processing is paramount to survival. When stimuli are repeatedly presented, the response of visually-sensitive neurons decreases in magnitude, i.e. neurons adapt or habituate, although the mechanism is not yet known. We monitored activity of visual neurons in the superior colliculus (SC) of rhesus monkeys who actively fixated while repeated visual events were presented. We dissociated adaptation from habituation as mechanisms of the response decrement by using a Bayesian model of adaptation, and by employing a paradigm in which rare trials included an oddball stimulus that was either brighter or dimmer. If the mechanism is adaptation, response recovery should be seen only for the brighter stimulus; if habituation, response recovery (‘dishabituation’) should be seen for both the brighter and dimmer stimulus. We observed a reduction in the magnitude of the initial transient response and an increase in response onset latency with stimulus repetition for all visually responsive neurons in the SC. Response decrement was successfully captured by the adaptation model which also predicted the effects of presentation rate and rare luminance changes. However, in a subset of neurons with sustained activity to visual stimuli, a novelty signal akin to dishabituation was observed late in the visual response profile to both brighter and dimmer stimuli and was not captured by the model. This suggests that SC neurons integrate both rapidly discounted information about repeating stimuli and novelty information about oddball events, to support efficient selection in a cluttered dynamic world. PMID:21864319
Saccadic eye movements do not disrupt the deployment of feature-based attention.
Kalogeropoulou, Zampeta; Rolfs, Martin
2017-07-01
The tight link of saccades to covert spatial attention has been firmly established, yet their relation to other forms of visual selection remains poorly understood. Here we studied the temporal dynamics of feature-based attention (FBA) during fixation and across saccades. Participants reported the orientation (on a continuous scale) of one of two sets of spatially interspersed Gabors (black or white). We tested performance at different intervals between the onset of a colored cue (black or white, indicating which stimulus was the most probable target; red: neutral condition) and the stimulus. FBA built up after cue onset: Benefits (errors for valid vs. neutral cues), costs (invalid vs. neutral), and the overall cueing effect (valid vs. invalid) increased with the cue-stimulus interval. Critically, we also tested visual performance at different intervals after a saccade, when FBA had been fully deployed before saccade initiation. Cueing effects were evident immediately after the saccade and were predicted most accurately and most precisely by fully deployed FBA, indicating that FBA was continuous throughout saccades. Finally, a decomposition of orientation reports into target reports and random guesses confirmed continuity of report precision and guess rates across the saccade. We discuss the role of FBA in perceptual continuity across saccades.
Iconic Memory Deficit of Mildly Mentally Retarded Individuals.
ERIC Educational Resources Information Center
Hornstein, Henry A.; Mosley, James L.
1987-01-01
Ten mildly retarded young adult males and nonretarded subjects matched for chronological age or mental age were required to recognize both verbal and nonverbal stimuli presented tachistoscopically. Results of a backward visual masking paradigm varying stimulus onset asynchrony (SOA) indicated the retarded subjects performed poorer at the longest…
ERIC Educational Resources Information Center
Zhang, Qingfang; Chen, Hsuan-Chih; Weekes, Brendan Stuart; Yang, Yufang
2009-01-01
A picture-word interference paradigm with visually presented distractors was used to investigate the independent effects of orthographic and phonological facilitation on Mandarin monosyllabic word production. Both the stimulus-onset asynchrony (SOA) and the picture-word relationship along different lexical dimensions were varied. We observed a…
Park, Jason C.; McAnany, J. Jason
2015-01-01
This study determined if the pupillary light reflex (PLR) driven by brief stimulus presentations can be accounted for by the product of stimulus luminance and area (i.e., corneal flux density, CFD) under conditions biased toward the rod, cone, and melanopsin pathways. Five visually normal subjects participated in the study. Stimuli consisted of 1-s short- and long-wavelength flashes that spanned a large range of luminance and angular subtense. The stimuli were presented in the central visual field in the dark (rod and melanopsin conditions) and against a rod-suppressing short-wavelength background (cone condition). Rod- and cone-mediated PLRs were measured at the maximum constriction after stimulus onset whereas the melanopsin-mediated PLR was measured 5–7 s after stimulus offset. The rod- and melanopsin-mediated PLRs were well accounted for by CFD, such that doubling the stimulus luminance had the same effect on the PLR as doubling the stimulus area. Melanopsin-mediated PLRs were elicited only by short-wavelength, large (>16°) stimuli with luminance greater than 10 cd/m2, but when present, the melanopsin-mediated PLR was well accounted for by CFD. In contrast, CFD could not account for the cone-mediated PLR because the PLR was approximately independent of stimulus size but strongly dependent on stimulus luminance. These findings highlight important differences in how stimulus luminance and size combine to govern the PLR elicited by brief flashes under rod-, cone-, and melanopsin-mediated conditions. PMID:25788707
Putative mechanisms mediating tolerance for audiovisual stimulus onset asynchrony.
Bhat, Jyoti; Miller, Lee M; Pitt, Mark A; Shahin, Antoine J
2015-03-01
Audiovisual (AV) speech perception is robust to temporal asynchronies between visual and auditory stimuli. We investigated the neural mechanisms that facilitate tolerance for audiovisual stimulus onset asynchrony (AVOA) with EEG. Individuals were presented with AV words that were asynchronous in onsets of voice and mouth movement and judged whether they were synchronous or not. Behaviorally, individuals tolerated (perceived as synchronous) longer AVOAs when mouth movement preceded the speech (V-A) stimuli than when the speech preceded mouth movement (A-V). Neurophysiologically, the P1-N1-P2 auditory evoked potentials (AEPs), time-locked to sound onsets and known to arise in and surrounding the primary auditory cortex (PAC), were smaller for the in-sync than the out-of-sync percepts. Spectral power of oscillatory activity in the beta band (14-30 Hz) following the AEPs was larger during the in-sync than out-of-sync perception for both A-V and V-A conditions. However, alpha power (8-14 Hz), also following AEPs, was larger for the in-sync than out-of-sync percepts only in the V-A condition. These results demonstrate that AVOA tolerance is enhanced by inhibiting low-level auditory activity (e.g., AEPs representing generators in and surrounding PAC) that code for acoustic onsets. By reducing sensitivity to acoustic onsets, visual-to-auditory onset mapping is weakened, allowing for greater AVOA tolerance. In contrast, beta and alpha results suggest the involvement of higher-level neural processes that may code for language cues (phonetic, lexical), selective attention, and binding of AV percepts, allowing for wider neural windows of temporal integration, i.e., greater AVOA tolerance. Copyright © 2015 the American Physiological Society.
Masking reduces orientation selectivity in rat visual cortex
Alwis, Dasuni S.; Richards, Katrina L.
2016-01-01
In visual masking the perception of a target stimulus is impaired by a preceding (forward) or succeeding (backward) mask stimulus. The illusion is of interest because it allows uncoupling of the physical stimulus, its neuronal representation, and its perception. To understand the neuronal correlates of masking, we examined how masks affected the neuronal responses to oriented target stimuli in the primary visual cortex (V1) of anesthetized rats (n = 37). Target stimuli were circular gratings with 12 orientations; mask stimuli were plaids created as a binarized sum of all possible target orientations. Spatially, masks were presented either overlapping or surrounding the target. Temporally, targets and masks were presented for 33 ms, but the stimulus onset asynchrony (SOA) of their relative appearance was varied. For the first time, we examine how spatially overlapping and center-surround masking affect orientation discriminability (rather than visibility) in V1. Regardless of the spatial or temporal arrangement of stimuli, the greatest reductions in firing rate and orientation selectivity occurred for the shortest SOAs. Interestingly, analyses conducted separately for transient and sustained target response components showed that changes in orientation selectivity do not always coincide with changes in firing rate. Given the near-instantaneous reductions observed in orientation selectivity even when target and mask do not spatially overlap, we suggest that monotonic visual masking is explained by a combination of neural integration and lateral inhibition. PMID:27535373
Coding “What” and “When” in the Archer Fish Retina
Vasserman, Genadiy; Shamir, Maoz; Ben Simon, Avi; Segev, Ronen
2010-01-01
Traditionally, the information content of the neural response is quantified using statistics of the responses relative to stimulus onset time with the assumption that the brain uses onset time to infer stimulus identity. However, stimulus onset time must also be estimated by the brain, making the utility of such an approach questionable. How can stimulus onset be estimated from the neural responses with sufficient accuracy to ensure reliable stimulus identification? We address this question using the framework of colour coding by the archer fish retinal ganglion cell. We found that stimulus identity, “what”, can be estimated from the responses of best single cells with an accuracy comparable to that of the animal's psychophysical estimation. However, to extract this information, an accurate estimation of stimulus onset is essential. We show that stimulus onset time, “when”, can be estimated using a linear-nonlinear readout mechanism that requires the response of a population of 100 cells. Thus, stimulus onset time can be estimated using a relatively simple readout. However, large nerve cell populations are required to achieve sufficient accuracy. PMID:21079682
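The abstract does not give the internals of its linear-nonlinear readout, but the general scheme it names — pool the population response, apply a linear temporal filter, and pass the result through a threshold nonlinearity to mark stimulus onset — can be sketched as follows. The function name, kernel, and threshold below are purely illustrative, not taken from the paper:

```python
import numpy as np

def estimate_onset(spike_counts, kernel, theta, dt=1.0):
    """Linear-nonlinear (LN) estimate of stimulus onset time.

    spike_counts : (n_cells, n_bins) binned population response
    kernel       : linear temporal filter applied to the pooled rate
    theta        : threshold of the static nonlinearity
    dt           : bin width (ms)
    Returns the first threshold-crossing time (ms), or None.
    """
    pooled = spike_counts.sum(axis=0)                  # pool across the population
    drive = np.convolve(pooled, kernel)[:len(pooled)]  # linear stage (causal)
    crossed = np.flatnonzero(drive >= theta)           # static threshold
    return float(crossed[0]) * dt if crossed.size else None
```

Because the pooled rate averages out single-cell noise, enlarging the population sharpens the onset estimate — consistent with the abstract's claim that roughly 100 cells are needed for sufficient accuracy.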
Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T
2012-05-01
In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance of systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.
Berchicci, M; Pontifex, M B; Drollette, E S; Pesce, C; Hillman, C H; Di Russo, F
2015-07-09
The association between a fit body and a fit brain in children has led to a rise in behavioral and neuroscientific research. Yet, the relation of cardiorespiratory fitness to premotor neurocognitive preparation and early visual processing has received little attention. Here, 41 healthy, lower and higher fit preadolescent children were administered a modified version of the Eriksen flanker task while electroencephalography (EEG) and behavioral measures were recorded. Event-related potentials (ERPs) locked to the stimulus onset with an earlier than usual baseline (-900/-800 ms) allowed investigation of both the usual post-stimulus (i.e., the P1, N1 and P2) as well as the pre-stimulus ERP components, such as the Bereitschaftspotential (BP) and the prefrontal negativity (pN component). At the behavioral level, aerobic fitness was associated with response accuracy, with higher fit children being more accurate than lower fit children. Fitness-related differences selectively emerged at prefrontal brain regions during response preparation, with larger pN amplitude for higher than lower fit children, and at early perceptual stages after stimulus onset, with larger P1 and N1 amplitudes in higher relative to lower fit children. Collectively, the results suggest that the benefits of being aerobically fit appear at the stage of cognitive preparation prior to stimulus presentation and the behavioral response during the performance of a task that challenges cognitive control. Further, it is likely that enhanced activity in prefrontal brain areas may improve cognitive control of visuo-motor tasks, allowing for stronger proactive inhibition and larger early allocation of selective attention resources on relevant external stimuli. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Temporal parameters and time course of perceptual latency priming.
Scharlau, Ingrid; Neumann, Odmar
2003-06-01
Visual stimuli (primes) reduce the perceptual latency of a target appearing at the same location (perceptual latency priming, PLP). Three experiments assessed the time course of PLP by masked and, in Experiment 3, unmasked primes. Experiments 1 and 2 investigated the temporal parameters that determine the size of priming. Stimulus onset asynchrony was found to exert the main influence, accompanied by a small effect of prime duration. Experiment 3 used a large range of priming onset asynchronies. We propose to explain PLP with the Asynchronous Updating Model, which relates it to the asynchrony of 2 central coding processes: preattentive coding of basic visual features and attentional orienting as a prerequisite for perceptual judgments and conscious perception.
Gottschalk, Caroline; Fischer, Rico
2017-03-01
Different contexts with high versus low conflict frequencies require a specific attentional control involvement, i.e., strong attentional control for high conflict contexts and less attentional control for low conflict contexts. While it is assumed that the corresponding control set can be activated upon stimulus presentation at the respective context (e.g., upper versus lower location), the actual features that trigger control-set activation have not yet been described. Here, we ask whether the perceptual priming of the location context by an abrupt onset of irrelevant stimuli is sufficient to activate the context-specific attentional control set. For example, the mere onset of a stimulus might disambiguate the relevant location context and thus serve as a low-level perceptual trigger mechanism that activates the context-specific attentional control set. In Experiments 1 and 2, the onsets of task-relevant and task-irrelevant (distracter) stimuli were manipulated at each context location to compete for triggering the activation of the appropriate control set. In Experiment 3, a prior training session enabled distracter stimuli to establish contextual control associations of their own before entering the test session. Results consistently showed that the mere onset of a task-irrelevant stimulus (with or without a context-control association) is not sufficient to activate the context-associated attentional control set by disambiguating the relevant context location. Instead, we argue that the identification of the relevant stimulus at the respective context is a precondition for triggering the activation of the context-associated attentional control set.
Wittevrongel, Benjamin; Van Wolputte, Elia; Van Hulle, Marc M
2017-11-08
When encoding visual targets using various lagged versions of a pseudorandom binary sequence of luminance changes, the EEG signal recorded over the viewer's occipital pole exhibits so-called code-modulated visual evoked potentials (cVEPs), the phase lags of which can be tied to these targets. The cVEP paradigm has enjoyed interest in the brain-computer interfacing (BCI) community for its reported high information transfer rates (ITR, in bits/min). In this study, we introduce a novel decoding algorithm based on spatiotemporal beamforming, and show that this algorithm is able to accurately identify the gazed target. Especially for a small number of repetitions of the coding sequence, our beamforming approach significantly outperforms an optimised support vector machine (SVM)-based classifier, which is considered state-of-the-art in cVEP-based BCI. In addition to the traditional 60 Hz stimulus presentation rate for the coding sequence, we also explore the 120 Hz rate, and show that the latter enables faster communication, with a maximal median ITR of 172.87 bits/min. Finally, we also report on a transition effect in the EEG signal following the onset of the stimulus sequence, and recommend excluding the first 150 ms of the trials from decoding when relying on a single presentation of the stimulus sequence.
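ITR figures like the 172.87 bits/min quoted above are conventionally computed with the Wolpaw formula; the abstract does not spell out which definition was used, so the sketch below assumes the Wolpaw convention (the function name is illustrative):

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_secs):
    """Wolpaw information transfer rate for an N-class selection task.

    n_targets  : number of selectable targets (N)
    accuracy   : probability of a correct selection (P)
    trial_secs : time needed for one selection, in seconds
    """
    n, p = n_targets, accuracy
    if p >= 1.0:                # perfect accuracy carries log2(N) bits per trial
        bits = math.log2(n)
    elif p <= 1.0 / n:          # at or below chance: no information transferred
        bits = 0.0
    else:
        bits = (math.log2(n)
                + p * math.log2(p)
                + (1.0 - p) * math.log2((1.0 - p) / (n - 1)))
    return bits * 60.0 / trial_secs
```

Note that `trial_secs` appears in the denominator: a faster presentation rate (e.g. 120 Hz instead of 60 Hz) shortens each selection and thereby raises ITR even at unchanged accuracy, consistent with the faster communication the study reports.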
Salience Is Only Briefly Represented: Evidence from Probe-Detection Performance
ERIC Educational Resources Information Center
Donk, Mieke; Soesman, Leroy
2010-01-01
Salient objects in the visual field tend to capture attention. The present study aimed to examine the time-course of salience effects using a probe-detection task. Eight experiments investigated how the salience of different orientation singletons affected probe reaction time as a function of stimulus onset asynchrony (SOA) between the…
Submillisecond unmasked subliminal visual stimuli evoke electrical brain responses.
Sperdin, Holger F; Spierer, Lucas; Becker, Robert; Michel, Christoph M; Landis, Theodor
2015-04-01
Subliminal perception is strongly associated with the processing of meaningful or emotional information and has mostly been studied using visual masking. In this study, we used high density 256-channel EEG coupled with a liquid crystal display (LCD) tachistoscope to characterize the spatio-temporal dynamics of the brain response to visual checkerboard stimuli (Experiment 1) or blank stimuli (Experiment 2) presented without a mask for 1 ms (visible), 500 µs (partially visible), and 250 µs (subliminal) by applying time-wise, assumption-free nonparametric randomization statistics on the strength and on the topography of the high-density scalp-recorded electric field. Stimulus visibility was assessed in a third, separate behavioral experiment. Results revealed that unmasked checkerboards presented subliminally for 250 µs evoked weak but detectable visual evoked potential (VEP) responses. When the checkerboards were replaced by blank stimuli, there was no longer any evidence of an evoked response. Furthermore, the checkerboard VEPs were modulated topographically between 243 and 296 ms post-stimulus onset as a function of stimulus duration, indicative of the engagement of distinct configurations of active brain networks. A distributed electrical source analysis localized this modulation within the right superior parietal lobule near the precuneus. These results show the presence of a brain response to submillisecond unmasked subliminal visual stimuli independently of their emotional saliency or meaningfulness and open an avenue for new investigations of subliminal stimulation without using visual masking. © 2014 Wiley Periodicals, Inc.
Figure-ground processing during fixational saccades in V1: indication for higher-order stability.
Gilad, Ariel; Pesoa, Yair; Ayzenshtat, Inbal; Slovin, Hamutal
2014-02-26
In a typical visual scene we continuously perceive a "figure" that is segregated from the surrounding "background" despite ongoing microsaccades and small saccades that are performed when attempting fixation (fixational saccades [FSs]). Previously reported neuronal correlates of figure-ground (FG) segregation in the primary visual cortex (V1) showed enhanced activity in the "figure" along with suppressed activity in the noisy "background." However, it is unknown how this FG modulation in V1 is affected by FSs. To investigate this question, we trained two monkeys to detect a contour embedded in a noisy background while simultaneously imaging V1 using voltage-sensitive dyes. During stimulus presentation, the monkeys typically performed 1-3 FSs, which displaced the contour over the retina. Using eye position and a 2D analytical model to map the stimulus onto V1, we were able to compute FG modulation before and after each FS. On the spatial cortical scale, we found that, after each FS, FG modulation follows the stimulus retinal displacement and "hops" within the V1 retinotopic map, suggesting visual instability. On the temporal scale, FG modulation is initiated in the new retinotopic position before it disappeared from the old retinotopic position. Moreover, the FG modulation developed faster after an FS, compared with after stimulus onset, which may contribute to visual stability of FG segregation, along the timeline of stimulus presentation. Therefore, despite spatial discontinuity of FG modulation in V1, the higher-order stability of FG modulation along time may enable our stable and continuous perception.
Fast Coding of Orientation in Primary Visual Cortex
Shriki, Oren; Kohn, Adam; Shamir, Maoz
2012-01-01
Understanding how populations of neurons encode sensory information is a major goal of systems neuroscience. Attempts to answer this question have focused on responses measured over several hundred milliseconds, a duration much longer than that frequently used by animals to make decisions about the environment. How reliably sensory information is encoded on briefer time scales, and how best to extract this information, is unknown. Although it has been proposed that neuronal response latency provides a major cue for fast decisions in the visual system, this hypothesis has not been tested systematically and in a quantitative manner. Here we use a simple ‘race to threshold’ readout mechanism to quantify the information content of spike time latency of primary visual (V1) cortical cells to stimulus orientation. We find that many V1 cells show pronounced tuning of their spike latency to stimulus orientation and that almost as much information can be extracted from spike latencies as from firing rates measured over much longer durations. To extract this information, stimulus onset must be estimated accurately. We show that the responses of cells with weak tuning of spike latency can provide a reliable onset detector. We find that spike latency information can be pooled from a large neuronal population, provided that the decision threshold is scaled linearly with the population size, yielding a processing time of the order of a few tens of milliseconds. Our results provide a novel mechanism for extracting information from neuronal populations over the very brief time scales in which behavioral judgments must sometimes be made. PMID:22719237
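As a rough illustration of the 'race to threshold' readout described above, the sketch below declares a decision as soon as any orientation-tuned pool accumulates a fixed number of spikes. All names and values are hypothetical; per the abstract, the threshold `theta` should scale linearly with the pool size:

```python
import numpy as np

def race_to_threshold(spike_times, labels, theta):
    """First-past-the-post readout of first-spike latencies.

    spike_times : first-spike latency of each neuron (ms)
    labels      : preferred orientation of each neuron
    theta       : spike count a pool must accumulate to win
    Returns (winning orientation, decision time in ms), or (None, None).
    """
    order = np.argsort(spike_times)            # process spikes in temporal order
    counts = {}
    for i in order:
        lab = labels[i]
        counts[lab] = counts.get(lab, 0) + 1
        if counts[lab] >= theta:               # this pool crosses threshold first
            return lab, float(spike_times[i])
    return None, None

# neurons tuned to the true orientation fire earliest, so their pool wins
winner, t_dec = race_to_threshold(
    np.array([5.0, 6.0, 7.0, 20.0, 21.0, 22.0]),
    ["0 deg", "0 deg", "0 deg", "90 deg", "90 deg", "90 deg"],
    theta=3)
# winner == "0 deg", decided at t_dec == 7.0 ms
```

Because the decision time is set by the earliest-firing pool, such a readout can commit within a few tens of milliseconds of stimulus onset, which is the time scale the abstract argues behavioral judgments sometimes require.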
NASA Technical Reports Server (NTRS)
Huebner, W. P.; Leigh, R. J.; Seidman, S. H.; Thomas, C. W.; Billian, C.; DiScenna, A. O.; Dell'Osso, L. F.
1992-01-01
1. We used a modeling approach to test the hypothesis that, in humans, the smooth pursuit (SP) system provides the primary signal for cancelling the vestibuloocular reflex (VOR) during combined eye-head tracking (CEHT) of a target moving smoothly in the horizontal plane. Separate models for SP and the VOR were developed. The optimal values of parameters of the two models were calculated using measured responses of four subjects to trials of SP and the visually enhanced VOR. After optimal parameter values were specified, each model generated waveforms that accurately reflected the subjects' responses to SP and vestibular stimuli. The models were then combined into a CEHT model wherein the final eye movement command signal was generated as the linear summation of the signals from the SP and VOR pathways. 2. The SP-VOR superposition hypothesis was tested using two types of CEHT stimuli, both of which involved passive rotation of subjects in a vestibular chair. The first stimulus consisted of a "chair brake" or sudden stop of the subject's head during CEHT; the visual target continued to move. The second stimulus consisted of a sudden change from the visually enhanced VOR to CEHT ("delayed target onset" paradigm); as the vestibular chair rotated past the angular position of the stationary visual stimulus, the latter started to move in synchrony with the chair. Data collected during experiments that employed these stimuli were compared quantitatively with predictions made by the CEHT model. 3. During CEHT, when the chair was suddenly and unexpectedly stopped, the eye promptly began to move in the orbit to track the moving target. Initially, however, gaze velocity did not completely match target velocity; matching occurred approximately 100 ms after brake onset. The model did predict the prompt onset of eye-in-orbit motion after the brake, but it did not predict that gaze velocity would initially be only approximately 70% of target velocity.
One possible explanation for this discrepancy is that VOR gain can be dynamically modulated and, during sustained CEHT, it may assume a lower value. Consequently, during CEHT, a smaller-amplitude SP signal would be needed to cancel the lower-gain VOR. This reduction of the SP signal could account for the attenuated tracking response observed immediately after the brake. We found evidence for the dynamic modulation of VOR gain by noting differences in responses to the onset and offset of head rotation in trials of the visually enhanced VOR.(ABSTRACT TRUNCATED AT 400 WORDS).
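The steady-state arithmetic behind this explanation can be sketched in a few lines (a toy superposition model with illustrative velocities and gains; not the authors' fitted model):

```python
# Toy steady-state sketch of the SP-VOR superposition account (illustrative
# only; velocities and gains are made up, not the study's fitted parameters).
# During sustained CEHT the head tracks the target, and the smooth pursuit
# (SP) command settles at whatever value keeps gaze on target given the
# (possibly reduced-gain) VOR:
#   gaze = head + eye,  eye = -vor_gain * head + sp
#   on target: sp = target_vel - (1 - vor_gain) * head_vel
# Immediately after a chair brake, head velocity drops to zero and gaze
# velocity equals the lingering SP command.

def gaze_after_brake(target_vel, vor_gain):
    """Gaze velocity (deg/s) right after an unexpected chair brake."""
    head_vel = target_vel  # head was tracking the target during CEHT
    sp = target_vel - (1.0 - vor_gain) * head_vel
    return sp  # with the head stopped, gaze velocity == SP command
```

With unity VOR gain the lingering SP command already matches target velocity, so post-brake gaze tracks the target immediately; with a dynamically reduced gain of 0.7, the smaller SP command leaves gaze at only 70% of target velocity, the attenuation described above.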
Rutishauser, Ueli; Kotowicz, Andreas; Laurent, Gilles
2013-01-01
Brain activity often consists of interactions between internal—or on-going—and external—or sensory—activity streams, resulting in complex, distributed patterns of neural activity. Investigation of such interactions could benefit from closed-loop experimental protocols in which one stream can be controlled depending on the state of the other. We describe here methods to present rapid and precisely timed visual stimuli to awake animals, conditional on features of the animal’s on-going brain state; those features are the presence, power and phase of oscillations in local field potentials (LFP). The system can process up to 64 channels in real time. We quantified its performance using simulations, synthetic data and animal experiments (chronic recordings in the dorsal cortex of awake turtles). The delay from detection of an oscillation to the onset of a visual stimulus on an LCD screen was 47.5 ms and visual-stimulus onset could be locked to the phase of ongoing oscillations at any frequency ≤40 Hz. Our software’s architecture is flexible, allowing on-the-fly modifications by experimenters and the addition of new closed-loop control and analysis components through plugins. The source code of our system “StimOMatic” is available freely as open-source. PMID:23473800
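The timing arithmetic of phase-locked stimulus delivery can be sketched as follows (a hypothetical helper that assumes the oscillation frequency is already detected and uses the 47.5-ms display latency reported above; the real system detects oscillations from multichannel LFP in real time):

```python
import math

def next_trigger_time(last_zero_cross, freq_hz, target_phase_rad,
                      latency_s=0.0475):
    """Return the time to issue a stimulus command so that the stimulus
    APPEARS on screen at target_phase_rad of an ongoing oscillation,
    compensating for display latency. Toy sketch of phase-locked closed-loop
    timing; not the StimOMatic implementation."""
    period = 1.0 / freq_hz
    # Time at which the oscillation next reaches the target phase.
    appear = last_zero_cross + (target_phase_rad / (2 * math.pi)) * period
    # The command must be issued latency_s early; if that lies in the past,
    # wait for the next cycle.
    while appear - latency_s <= last_zero_cross:
        appear += period
    return appear - latency_s
```

For a 10 Hz oscillation the 47.5-ms latency is nearly half a cycle, which illustrates why accurate phase-locking at frequencies up to 40 Hz requires predicting phase into the future rather than reacting to it.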
Forder, Lewis; He, Xun; Franklin, Anna
2017-01-01
Debate exists about the time course of the effect of colour categories on visual processing. We investigated the effect of colour categories for two groups who differed in whether they categorised a blue-green boundary colour as the same- or different-category to a reliably-named blue colour and a reliably-named green colour. Colour differences were equated in just-noticeable differences to be equally discriminable. We analysed event-related potentials for these colours elicited on a passive visual oddball task and investigated the time course of categorical effects on colour processing. Support for category effects was found 100 ms after stimulus onset, and over frontal sites around 250 ms, suggesting that colour naming affects both early sensory and later stages of chromatic processing. PMID:28542426
Visual but not motor processes predict simple visuomotor reaction time of badminton players.
Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas
2018-03-01
The athlete's brain exhibits significant functional adaptations that facilitate visuomotor reaction performance. However, it is currently unclear if the same neurophysiological processes that differentiate athletes from non-athletes also determine performance within a homogeneous group of athletes. This information can provide valuable help for athletes and coaches aiming to optimize existing training regimes. Therefore, this study aimed to identify the neurophysiological correlates of visuomotor reaction performance in a group of skilled athletes. In 36 skilled badminton athletes, electroencephalography (EEG) was used to investigate pattern reversal and motion onset visual-evoked potentials (VEPs) as well as visuomotor reaction time (VMRT) during a simple reaction task. Stimulus-locked and response-locked event-related potentials (ERPs) in visual and motor regions as well as the onset of muscle activation (EMG onset) were determined. Correlation and multiple regression analyses identified the neurophysiological parameters predicting EMG onset and VMRT. For pattern reversal stimuli, the P100 latency and age best predicted EMG onset (r = 0.43; p = .003) and VMRT (r = 0.62; p = .001). In the motion onset experiment, EMG onset (r = 0.80; p < .001) and VMRT (r = 0.78; p < .001) were predicted by N2 latency and age. In both conditions, cortical potentials in motor regions were not correlated with EMG onset or VMRT. It is concluded that previously identified neurophysiological parameters differentiating athletes from non-athletes do not necessarily determine performance within a homogeneous group of athletes. Specifically, the speed of visual perception/processing predicts EMG onset and VMRT in skilled badminton players, while motor-related processes, although differentiating athletes from non-athletes, are not associated with simple visuomotor reaction performance.
Inukai, Tomoe; Kumada, Takatsune; Kawahara, Jun-ichiro
2010-05-01
The identification of a central visual target is impaired by the onset of a peripheral distractor. This impairment is said to occur because attentional focus is diverted to the peripheral distractor. We examined whether distractor offset would enhance or reduce attentional capture by manipulating the duration of the distractor. Observers identified a color singleton among a rapid stream of homogeneous nontargets. Peripheral distractors disappeared 43 or 172 msec after onset (the short- and long-duration conditions, respectively). Identification accuracy was greater in the long-duration condition than in the short-duration condition. The same pattern of results was obtained when participants identified a target of a designated color among heterogeneous nontargets when the color of the distractor was the same as that of the target. These findings suggest that attentional capture consists of stimulus onset and offset, both of which are susceptible to top-down attentional set.
Sundberg, Kristy A.; Mitchell, Jude F.; Gawne, Timothy J.
2012-01-01
Many previous studies have demonstrated that changes in selective attention can alter the response magnitude of visual cortical neurons, but there has been little evidence for attention affecting response latency. Small latency differences, though hard to detect, can potentially be of functional importance, and may also give insight into the mechanisms of neuronal computation. We therefore reexamined the effect of attention on the response latency of both single units and the local field potential (LFP) in primate visual cortical area V4. We find that attention does produce small (1–2 ms) but significant reductions in the latency of both the spiking and LFP responses. Though attention, like contrast elevation, reduces response latencies, we find that the two have different effects on the magnitude of the LFP. Contrast elevations increase and attention decreases the magnitude of the initial deflection of the stimulus-evoked LFP. Both contrast elevation and attention increase the magnitude of the spiking response. We speculate that latencies may be reduced at higher contrast because stronger stimulus inputs drive neurons more rapidly to spiking threshold, while attention may reduce latencies by placing neurons in a more depolarized state closer to threshold before stimulus onset. PMID:23136440
Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David
2014-01-22
Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.
Retter, Talia L; Jiang, Fang; Webster, Michael A; Rossion, Bruno
2018-04-01
Fast periodic visual stimulation combined with electroencephalography (FPVS-EEG) has unique sensitivity and objectivity in measuring rapid visual categorization processes. It constrains image processing time by presenting stimuli rapidly through brief stimulus presentation durations and short inter-stimulus intervals. However, the selective impact of these temporal parameters on visual categorization is largely unknown. Here, we presented natural images of objects at a rate of 10 or 20 per second (10 or 20 Hz), with faces appearing once per second (1 Hz), leading to two distinct frequency-tagged EEG responses. Twelve observers were tested with three squarewave image presentation conditions: 1) with an ISI, a traditional 50% duty cycle at 10 Hz (50-ms stimulus duration separated by a 50-ms ISI); 2) removing the ISI and matching the rate, a 100% duty cycle at 10 Hz (100-ms duration with 0-ms ISI); 3) removing the ISI and matching the stimulus presentation duration, a 100% duty cycle at 20 Hz (50-ms duration with 0-ms ISI). The face categorization response was significantly decreased in the 20 Hz 100% condition. The conditions at 10 Hz showed similar face-categorization responses, peaking maximally over the right occipito-temporal (ROT) cortex. However, the onset of the 10 Hz 100% response was delayed by about 20 ms over the ROT region relative to the 10 Hz 50% condition, likely due to immediate forward-masking by preceding images. Taken together, these results help to interpret how the FPVS-EEG paradigm sets temporal constraints on visual image categorization. Copyright © 2018 Elsevier Ltd. All rights reserved.
Chen, Juan; Yu, Qing; Zhu, Ziyun; Peng, Yujia; Fang, Fang
2016-01-01
In natural scenes, multiple objects are usually presented simultaneously. How do specific areas of the brain respond to multiple objects based on their responses to each individual object? Previous functional magnetic resonance imaging (fMRI) studies have shown that the activity induced by a multiobject stimulus in the primary visual cortex (V1) can be predicted by the linear or nonlinear sum of the activities induced by its component objects. However, there has been little evidence from electroencephalogram (EEG) studies so far. Here we explored how V1 responded to multiple objects by comparing the EEG signals evoked by a three-grating stimulus with those evoked by its two components (the central grating and 2 flanking gratings). We focused on the earliest visual component C1 (onset latency of ∼50 ms) because it has been shown to reflect the feedforward responses of neurons in V1. We found that when the stimulus was unattended, the amplitude of the C1 evoked by the three-grating stimulus roughly equaled the sum of the amplitudes of the C1s evoked by its two components, regardless of the distances between these gratings. When the stimulus was attended, this linear spatial summation existed only when the three gratings were far apart from each other. When the three gratings were close to each other, the spatial summation became compressed. These results suggest that the earliest visual responses in V1 follow a linear summation rule when attention is not involved and that attention can affect the earliest interactions between multiple objects. Copyright © 2016 the American Physiological Society.
Rapid discrimination of visual scene content in the human brain.
Anokhin, Andrey P; Golosheykin, Simon; Sirevaag, Erik; Kristjansson, Sean; Rohrbaugh, John W; Heath, Andrew C
2006-06-06
The rapid evaluation of complex visual environments is critical for an organism's adaptation and survival. Previous studies have shown that emotionally significant visual scenes, both pleasant and unpleasant, elicit a larger late positive wave in the event-related brain potential (ERP) than emotionally neutral pictures. The purpose of the present study was to examine whether neuroelectric responses elicited by complex pictures discriminate between specific, biologically relevant contents of the visual scene and to determine how early in the picture processing this discrimination occurs. Subjects (n = 264) viewed 55 color slides differing in both scene content and emotional significance. No categorical judgments or responses were required. Consistent with previous studies, we found that emotionally arousing pictures, regardless of their content, produce a larger late positive wave than neutral pictures. However, when pictures were further categorized by content, anterior ERP components in a time window between 200 and 600 ms following stimulus onset showed a high selectivity for pictures with erotic content compared to other pictures regardless of their emotional valence (pleasant, neutral, and unpleasant) or emotional arousal. The divergence of ERPs elicited by erotic and non-erotic contents started at 185 ms post-stimulus in the fronto-central midline region, with a later onset in parietal regions. This rapid, selective, and content-specific processing of erotic materials and its dissociation from other pictures (including emotionally positive pictures) suggests the existence of a specialized neural network for prioritized processing of a distinct category of biologically relevant stimuli with high adaptive and evolutionary significance. PMID:16712815
Neural Dynamics Underlying Target Detection in the Human Brain
Bansal, Arjun K.; Madhavan, Radhika; Agam, Yigal; Golby, Alexandra; Madsen, Joseph R.
2014-01-01
Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started approximately 250 ms after stimulus onset, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses. PMID:24553944
Accessing long-term memory representations during visual change detection.
Beck, Melissa R; van Lamsweerde, Amanda E
2011-04-01
In visual change detection tasks, providing a cue to the change location concurrent with the test image (post-cue) can improve performance, suggesting that, without a cue, not all encoded representations are automatically accessed. Our studies examined the possibility that post-cues can encourage the retrieval of representations stored in long-term memory (LTM). Participants detected changes in images composed of familiar objects. Performance was better when the cue directed attention to the post-change object. Supporting the role of LTM in the cue effect, the effect was similar regardless of whether the cue was presented during the inter-stimulus interval, concurrent with the onset of the test image, or after the onset of the test image. Furthermore, the post-cue effect and LTM performance were similarly influenced by encoding time. These findings demonstrate that monitoring the visual world for changes does not automatically engage LTM retrieval.
Effects of set-size and selective spatial attention on motion processing.
Dobkins, K R; Bosworth, R G
2001-05-01
In order to investigate the effects of divided attention and selective spatial attention on motion processing, we obtained direction-of-motion thresholds using a stochastic motion display under various attentional manipulations and stimulus durations (100-600 ms). To investigate divided attention, we compared motion thresholds obtained when a single motion stimulus was presented in the visual field (set-size=1) to those obtained when the motion stimulus was presented amongst three confusable noise distractors (set-size=4). The magnitude of the observed detriment in performance with an increase in set-size from 1 to 4 could be accounted for by a simple decision model based on signal detection theory, which assumes that attentional resources are not limited in capacity. To investigate selective attention, we compared motion thresholds obtained when a valid pre-cue alerted the subject to the location of the to-be-presented motion stimulus to those obtained when no pre-cue was provided. As expected, the effect of pre-cueing was large when the visual field contained noise distractors, an effect we attribute to "noise reduction" (i.e. the pre-cue allows subjects to exclude irrelevant distractors that would otherwise impair performance). In the single motion stimulus display, we found a significant benefit of pre-cueing only at short durations (≤150 ms), a result that can potentially be explained by a "time-to-orient" hypothesis (i.e. the pre-cue improves performance by eliminating the time it takes to orient attention to a peripheral stimulus at its onset, thereby increasing the time spent processing the stimulus). Thus, our results suggest that the visual motion system can analyze several stimuli simultaneously without limitations on sensory processing per se, and that spatial pre-cueing serves to reduce the effects of distractors and perhaps increase the effective processing time of the stimulus.
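The unlimited-capacity decision model invoked here can be sketched as a max-rule Monte Carlo simulation (the d' values, trial counts, and localization read-out are assumptions for illustration, not the authors' exact threshold model):

```python
import random

def percent_correct(set_size, d_prime, n_trials=20000, seed=1):
    """Unlimited-capacity signal-detection model of set-size effects: each
    location yields an independent noisy observation (signal ~ N(d', 1),
    noise ~ N(0, 1)) and the observer picks the location with the maximum
    value. Accuracy declines with set size purely through decision noise,
    with no capacity limit on sensory processing."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        signal = rng.gauss(d_prime, 1.0)
        # Correct only if the signal location beats every noise location.
        if all(signal > rng.gauss(0.0, 1.0) for _ in range(set_size - 1)):
            correct += 1
    return correct / n_trials
```

Under this sketch, performance drops from set-size 1 to 4 simply because more noise samples compete at the decision stage, which is the sense in which the observed detriment needs no sensory capacity limit.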
Slavutskaia, M V; Shul'govskiĭ, V V
2003-01-01
EEG epochs preceding saccades with near-mean latencies were selected and averaged in 10 right-handed subjects. Two standard paradigms of visual stimulus presentation (central fixation stimulus followed by a peripheral target) were used: one with a 200-ms interstimulus interval (GAP) and one with a successive single step (SS). During the period of central fixation, two kinds of positive potentials were observed: fast potentials of "intermediate" positivity (IP) developing 600-400 ms prior to saccade onset, and fast potentials of "leading" positivity (LP), which immediately preceded the offset of the central fixation stimulus. Peak latency of the LP potentials was 300 ms prior to saccade onset in the SS paradigm and 400 ms in the GAP paradigm. These potentials were predominantly recorded in the frontal and frontosagittal cortical areas. The 30-50-ms decrease in saccadic latency in the GAP paradigm was associated with more pronounced positive potentials during the fixation period and with the absence of the initiation potential P-1' (or a decrease in its amplitude). The obtained evidence suggests that the fast positive presaccadic potentials are of a complex nature, related to attention, anticipation, motor preparation, decision making, saccade initiation, and backward afferentation.
Oculomotor Capture by New and Unannounced Color Singletons during Visual Search.
Retell, James D; Venini, Dustin; Becker, Stefanie I
2015-07-01
The surprise capture hypothesis states that a stimulus will capture attention to the extent that it is preattentively available and deviates from task-expectancies. Interestingly, it has been noted by Horstmann (Psychological Science, 13, 499-505, 2002, doi: 10.1111/1467-9280.00488; Human Perception and Performance, 31, 1039-1060, 2005, doi: 10.1037/0096-1523.31.5.1039; Psychological Research, 70, 13-25, 2006) that the time course of capture by such classes of stimuli appears distinct from that of capture by expected stimuli. Specifically, attention shifts to an unexpected stimulus are delayed relative to an expected stimulus (delayed onset account). Across two experiments, we investigated this claim under conditions of unguided (Exp. 1) and guided (Exp. 2) search, using eye movements as the primary index of attentional selection. In both experiments, we found strong evidence of surprise capture for the first presentation of an unannounced color singleton. However, in both experiments the pattern of eye movements was not consistent with a delayed onset account of attention capture. Rather, we observed costs associated with the unexpected stimulus only once the target had been selected. We propose an interference account of surprise capture to explain our data and argue that this account can also explain existing patterns of data in the literature.
Neural Dynamics of Audiovisual Synchrony and Asynchrony Perception in 6-Month-Old Infants
Kopp, Franziska; Dietrich, Claudia
2013-01-01
Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071
Independent effects of motivation and spatial attention in the human visual cortex.
Bayer, Mareike; Rossi, Valentina; Vanlessen, Naomi; Grass, Annika; Schacht, Annekathrin; Pourtois, Gilles
2017-01-01
Motivation and attention constitute major determinants of human perception and action. Nonetheless, it remains a matter of debate whether motivation effects on the visual cortex depend on the spatial attention system, or rely on independent pathways. This study investigated the impact of motivation and spatial attention on the activity of the human primary and extrastriate visual cortex by employing a factorial manipulation of the two factors in a cued pattern discrimination task. During stimulus presentation, we recorded event-related potentials and pupillary responses. Motivational relevance increased the amplitudes of the C1 component at ∼70 ms after stimulus onset. This modulation occurred independently of spatial attention effects, which were evident at the P1 level. Furthermore, motivation and spatial attention had independent effects on preparatory activation as measured by the contingent negative variation; and pupil data showed increased activation in response to incentive targets. Taken together, these findings suggest independent pathways for the influence of motivation and spatial attention on the activity of the human visual cortex. © The Author (2016). Published by Oxford University Press.
Śmigasiewicz, Kamila; Hasan, Gabriel Sami; Verleger, Rolf
2017-01-01
In dynamically changing environments, spatial attention is not equally distributed across the visual field. For instance, when two streams of stimuli are presented left and right, the second target (T2) is better identified in the left visual field (LVF) than in the right visual field (RVF). Recently, it has been shown that this bias is related to weaker stimulus-driven orienting of attention toward the RVF: The RVF disadvantage was reduced with salient task-irrelevant valid cues and increased with invalid cues. Here we studied if also endogenous orienting of attention may compensate for this unequal distribution of stimulus-driven attention. Explicit information was provided about the location of T1 and T2. Effectiveness of the cue manipulation was confirmed by EEG measures: decreasing alpha power before stream onset with informative cues, earlier latencies of potentials evoked by T1-preceding distractors at the right than at the left hemisphere when T1 was cued left, and decreasing T1- and T2-evoked N2pc amplitudes with informative cues. Importantly, informative cues reduced (though did not completely abolish) the LVF advantage, indicated by improved identification of right T2, and reflected by earlier N2pc latency evoked by right T2 and larger decrease in alpha power after cues indicating right T2. Overall, these results suggest that endogenously driven attention facilitates stimulus-driven orienting of attention toward the RVF, thereby partially overcoming the basic LVF bias in spatial attention.
Nunez, Michael D.; Vandekerckhove, Joachim; Srinivasan, Ramesh
2016-01-01
Perceptual decision making can be accounted for by drift-diffusion models, a class of decision-making models that assume a stochastic accumulation of evidence on each trial. Fitting response time and accuracy to a drift-diffusion model produces evidence accumulation rate and non-decision time parameter estimates that reflect cognitive processes. Our goal is to elucidate the effect of attention on visual decision making. In this study, we show that measures of attention obtained from simultaneous EEG recordings can explain per-trial evidence accumulation rates and perceptual preprocessing times during a visual decision making task. Models assuming linear relationships between diffusion model parameters and EEG measures as external inputs were fit in a single step in a hierarchical Bayesian framework. The EEG measures were features of the evoked potential (EP) to the onset of a masking noise and the onset of a task-relevant signal stimulus. Single-trial evoked EEG responses, P200s to the onsets of visual noise and N200s to the onsets of visual signal, explain single-trial evidence accumulation and preprocessing times. Within-trial evidence accumulation variance was not found to be influenced by attention to the signal or noise. Single-trial measures of attention lead to better out-of-sample predictions of accuracy and correct reaction time distributions for individual subjects. PMID:28435173
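The modeling idea, a diffusion process whose per-trial drift rate is a linear function of a single-trial EEG measure, can be sketched as follows (the coefficients and parameter values are invented for illustration; the study fit everything jointly in a hierarchical Bayesian framework rather than simulating forward):

```python
import math
import random

def simulate_ddm_trial(drift, boundary=1.0, ndt=0.3, dt=0.001,
                       noise_sd=1.0, rng=None):
    """One drift-diffusion trial: evidence starts at 0 and accumulates with
    Gaussian noise until it hits +boundary (correct) or -boundary (error).
    Returns (response_time_s, correct). Toy sketch of the model class the
    authors fit; not their hierarchical Bayesian implementation."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise_sd * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return ndt + t, x > 0  # non-decision time + decision time

def drift_from_eeg(n200_latency_s, beta0=4.0, beta1=-10.0):
    """Assumed linear link between a single-trial EEG measure and that
    trial's accumulation rate (hypothetical coefficients): an earlier N200
    to the signal onset yields a higher drift rate."""
    return beta0 + beta1 * (n200_latency_s - 0.2)
```

Simulating many trials with EEG-derived drift rates then yields per-trial predictions of accuracy and response-time distributions, which is the sense in which single-trial EEG measures improve out-of-sample prediction.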
A user-friendly SSVEP-based brain-computer interface using a time-domain classifier.
Luo, An; Sullivan, Thomas J
2010-04-01
We introduce a user-friendly steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) system. Single-channel EEG is recorded using a low-noise dry electrode. Compared to traditional gel-based multi-sensor EEG systems, a dry sensor proves to be more convenient, comfortable and cost-effective. A hardware system was built that displays four LED light panels flashing at different frequencies and synchronizes with EEG acquisition. The visual stimuli have been carefully designed such that potential risk to photosensitive people is minimized. We describe a novel stimulus-locked inter-trace correlation (SLIC) method for SSVEP classification using EEG time-locked to stimulus onsets. We studied how the performance of the algorithm is affected by different selections of parameters. Using the SLIC method, the average light detection rate is 75.8% with very low error rates (an 8.4% false positive rate and a 1.3% misclassification rate). Compared to a traditional frequency-domain-based method, the SLIC method is more robust (resulting in less annoyance to the users) and is also suitable for irregular stimulus patterns.
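The core idea of stimulus-locked inter-trace correlation can be sketched as follows. This is an illustrative reconstruction from the abstract, not the authors' exact algorithm: epoch the single-channel EEG at each onset of a candidate flicker, then average the pairwise correlations between epochs; attended flicker should produce reproducible, hence highly correlated, traces.

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def slic_score(eeg, onsets, epoch_len):
    """Average pairwise correlation of stimulus-locked EEG epochs."""
    epochs = [eeg[t:t + epoch_len] for t in onsets if t + epoch_len <= len(eeg)]
    pairs = [(i, j) for i in range(len(epochs)) for j in range(i + 1, len(epochs))]
    return sum(pearson(epochs[i], epochs[j]) for i, j in pairs) / len(pairs)

# toy check: a wave perfectly locked to the stimulus onsets scores near 1
eeg = [math.sin(2 * math.pi * t / 20) for t in range(200)]
score = slic_score(eeg, onsets=list(range(0, 200, 20)), epoch_len=20)
```

Classification would then compare such scores across the four candidate flicker frequencies, picking the one whose onsets yield the most mutually consistent traces.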
Inhibition of voluntary saccadic eye movement commands by abrupt visual onsets.
Edelman, Jay A; Xu, Kitty Z
2009-03-01
Saccadic eye movements are made both to explore the visual world and to react to sudden sensory events. We studied the ability of humans to execute a voluntary (i.e., non-stimulus-driven) saccade command in the face of a suddenly appearing visual stimulus. Subjects were required to make a saccade to a memorized location when a central fixation point disappeared. At varying times relative to fixation point disappearance, a visual distractor appeared at a random location. When the distractor appeared at locations distant from the target, virtually no saccades were initiated in a 30- to 40-ms interval beginning 70-80 ms after the appearance of the distractor. If the distractor was presented slightly earlier relative to saccade initiation, saccades tended to have smaller amplitudes, with velocity profiles suggesting that the distractor terminated them prematurely. In contrast, distractors appearing close to the saccade target elicited express saccade-like movements 70-100 ms after their appearance, although the saccade endpoint was generally scarcely affected by the distractor. An additional experiment showed that these effects were weaker when the saccade was made to a visible target in a delayed task, and weaker still when the saccade itself was made in response to the abrupt appearance of a visual stimulus. A final experiment revealed that the effect is smaller, but quite evident, for very small stimuli. These results suggest that the transient component of a visual response can briefly but almost completely suppress a voluntary saccade command, but only when the stimulus evoking that response is distant from the saccade goal.
Adaptive Acceleration of Visually Evoked Smooth Eye Movements in Mice
2016-01-01
The optokinetic response (OKR) consists of smooth eye movements following global motion of the visual surround, which suppress image slip on the retina for visual acuity. The effective performance of the OKR is limited to rather slow and low-frequency visual stimuli, although it can be adaptably improved by cerebellum-dependent mechanisms. To better understand circuit mechanisms constraining OKR performance, we monitored how distinct kinematic features of the OKR change over the course of OKR adaptation, and found that eye acceleration at stimulus onset primarily limited OKR performance but could be dramatically potentiated by visual experience. Eye acceleration in the temporal-to-nasal direction depended more on the ipsilateral floccular complex of the cerebellum than did that in the nasal-to-temporal direction. Gaze-holding following the OKR was also modified in parallel with eye-acceleration potentiation. Optogenetic manipulation revealed that synchronous excitation and inhibition of floccular complex Purkinje cells could effectively accelerate eye movements in the nasotemporal and temporonasal directions, respectively. These results collectively delineate multiple motor pathways subserving distinct aspects of the OKR in mice and constrain hypotheses regarding cellular mechanisms of the cerebellum-dependent tuning of movement acceleration. SIGNIFICANCE STATEMENT Although visually evoked smooth eye movements, known as the optokinetic response (OKR), have been studied in various species for decades, circuit mechanisms of oculomotor control and adaptation remain elusive. In the present study, we assessed kinematics of the mouse OKR through the course of adaptation training. Our analyses revealed that eye acceleration at visual-stimulus onset primarily limited working velocity and frequency range of the OKR, yet could be dramatically potentiated during OKR adaptation. Potentiation of eye acceleration exhibited different properties between the nasotemporal and temporonasal OKRs, indicating distinct visuomotor circuits underlying the two. Lesions and optogenetic manipulation of the cerebellum provide constraints on neural circuits mediating visually driven eye acceleration and its adaptation. PMID:27335412
Zhao, Di; Ku, Yixuan
2018-05-01
Neural activity in the dorsolateral prefrontal cortex (DLPFC) has been suggested to integrate information from distinct sensory areas. However, how the DLPFC interacts with the bilateral primary somatosensory cortices (SIs) in tactile-visual cross-modal working memory has not yet been established. In the present study, we applied single-pulse transcranial magnetic stimulation (sp-TMS) over the contralateral DLPFC and bilateral SIs of human participants at various time points, while they performed a tactile-visual delayed matching-to-sample task with a 2-second delay. sp-TMS over the contralateral DLPFC or the contralateral SI at either a sensory encoding stage [i.e., 100 ms after the onset of a vibrotactile sample stimulus (200-ms duration)] or an early maintenance stage (i.e., 300 ms after the onset) significantly impaired the accuracy of task performance; sp-TMS over the contralateral DLPFC or the ipsilateral SI at a late maintenance stage (1600 ms and 1900 ms) also significantly disrupted performance. Furthermore, at 300 ms after the onset of the vibrotactile sample stimulus, there was a significant correlation between the deteriorating effects of sp-TMS over the contralateral SI and over the contralateral DLPFC. These results imply that the DLPFC and the bilateral SIs play causal roles at distinct stages of cross-modal working memory, with the contralateral DLPFC communicating with the contralateral SI in the early delay and cooperating with the ipsilateral SI in the late delay. Copyright © 2018 Elsevier B.V. All rights reserved.
Temporal Binding Window of the Sound-Induced Flash Illusion in Amblyopia.
Narinesingh, Cindy; Goltz, Herbert C; Wong, Agnes M F
2017-03-01
Amblyopia is a neurodevelopmental visual disorder caused by abnormal visual experience in childhood. In addition to known visual deficits, there is evidence for changes in audiovisual integration in amblyopia using explicit tasks. We examined audiovisual integration in amblyopia using an implicit task that is more relevant in a real-world context. A total of 11 participants with amblyopia and 16 controls were tested binocularly and monocularly on the sound-induced flash illusion, in which flashes and beeps are presented concurrently and the perceived number of flashes is influenced by the number of beeps. The task used 1 to 2 rapid peripheral flashes presented with 0 to 2 beeps, at 5 stimulus onset asynchronies, that is, beep (-200 milliseconds, -100 milliseconds) or flash leading (100 milliseconds, 200 milliseconds) or simultaneous (0 milliseconds). Participants reported the number of perceived flashes. Susceptibility was indicated by a "2 flashes" response to "fission" (1 flash, 2 beeps) or "1 flash" to "fusion" (2 flashes, 1 beep). For fission with the beep leading during binocular viewing, controls showed an expected decrease in illusion strength as stimulus onset asynchronies increased, whereas the illusion strength remained constant in participants with amblyopia, indicating a wider temporal binding window in amblyopia (P = 0.007). For fusion, participants with amblyopia showed reduced illusion strength during amblyopic eye viewing (P = 0.044) with the flash leading. Amblyopia is associated with the widening of the temporal binding window, specifically for fission when viewing binocularly with the beep leading. This suggests a developmental adaptation to delayed amblyopic eye visual processing to optimize audiovisual integration.
Liao, Hsin-I; Yeh, Su-Ling
2013-11-01
Attentional orienting can be involuntarily directed to task-irrelevant stimuli, but it remains unresolved whether such attentional capture is contingent on top-down settings or can be purely stimulus-driven. We propose that attentional capture depends on the stimulus property, because transient and static features are processed differently and thus might be modulated differently by top-down control. To test this hybrid account, we adopted a spatial cuing paradigm in which a noninformative onset or color cue preceded an onset or color target at various stimulus onset asynchronies (SOAs). Results showed that the onset cue captured attention regardless of target type at short, but not long, SOAs. In contrast, the color cue captured attention at short and long SOAs, but only with a color target. The overall pattern of results corroborates our hypothesis, suggesting that different mechanisms are at work for stimulus-driven capture (by onset) and contingent capture (by color): stimulus-driven capture elicits reflexive, involuntary orienting, and contingent capture elicits voluntary feature-based enhancement.
Yum, Yen Na; Holcomb, Phillip J.; Grainger, Jonathan
2011-01-01
Comparisons of word and picture processing using Event-Related Potentials (ERPs) are contaminated by gross physical differences between the two types of stimuli. In the present study, we tackle this problem by comparing picture processing with word processing in an alphabetic and a logographic script, which are also characterized by gross physical differences. Native Mandarin Chinese speakers viewed pictures (line drawings) and Chinese characters (Experiment 1), native English speakers viewed pictures and English words (Experiment 2), and naïve Chinese readers (native English speakers) viewed pictures and Chinese characters (Experiment 3) in a semantic categorization task. The varying pattern of differences in the ERPs elicited by pictures and words across the three experiments provided evidence for i) script-specific processing arising between 150–200 ms post-stimulus onset, ii) domain-specific but script-independent processing arising between 200–300 ms post-stimulus onset, and iii) processing that depended on stimulus meaningfulness in the N400 time window. The results are interpreted in terms of differences in the way visual features are mapped onto higher-level representations for pictures and words in alphabetic and logographic writing systems. PMID:21439991
Multiple foci of spatial attention in multimodal working memory.
Katus, Tobias; Eimer, Martin
2016-11-15
The maintenance of sensory information in working memory (WM) is mediated by the attentional activation of stimulus representations that are stored in perceptual brain regions. Using event-related potentials (ERPs), we measured tactile and visual contralateral delay activity (tCDA/CDA components) in a bimodal WM task to concurrently track the attention-based maintenance of information stored in anatomically segregated (somatosensory and visual) brain areas. Participants received tactile and visual sample stimuli on both sides, and in different blocks, memorized these samples on the same side or on opposite sides. After a retention delay, memory was unpredictably tested for touch or vision. In the same side blocks, tCDA and CDA components simultaneously emerged over the same hemisphere, contralateral to the memorized tactile/visual sample set. In opposite side blocks, these two components emerged over different hemispheres, but had the same sizes and onset latencies as in the same side condition. Our results reveal distinct foci of tactile and visual spatial attention that were concurrently maintained on task-relevant stimulus representations in WM. The independence of spatially-specific biasing mechanisms for tactile and visual WM content suggests that multimodal information is stored in distributed perceptual brain areas that are activated through modality-specific processes that can operate simultaneously and largely independently of each other. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Wu, Shu-Chieh; Remington, Roger W.
2003-01-01
Five visual search experiments found oculomotor and attentional capture consistent with predictions of contingent orienting, contrary to claims that oculomotor capture is purely stimulus driven. Separate saccade and attend-only conditions contained a color target appearing either singly, with an onset or color distractor, or both. In singleton mode, onsets produced oculomotor and attentional capture. In feature mode, capture was absent or greatly reduced, providing evidence for top-down modulation of both types of capture. Although attentional capture by color distractors was present throughout, oculomotor capture by color occurred only when accompanied by transient change, providing evidence for a dissociation between oculomotor and attentional capture. Oculomotor and attentional capture appear to be mediated by top-down attentional control settings, but transient change may be necessary for oculomotor capture. (© 2003 APA, all rights reserved).
Neuronal responses to face-like stimuli in the monkey pulvinar.
Nguyen, Minh Nui; Hori, Etsuro; Matsumoto, Jumpei; Tran, Anh Hai; Ono, Taketoshi; Nishijo, Hisao
2013-01-01
The pulvinar nuclei appear to function as the subcortical visual pathway that bypasses the striate cortex, rapidly processing coarse facial information. We investigated responses from monkey pulvinar neurons during a delayed non-matching-to-sample task, in which monkeys were required to discriminate five categories of visual stimuli [photos of faces with different gaze directions, line drawings of faces, face-like patterns (three dark blobs on a bright oval), eye-like patterns and simple geometric patterns]. Of 401 neurons recorded, 165 neurons responded differentially to the visual stimuli. These visual responses were suppressed by scrambling the images. Although these neurons exhibited a broad response latency distribution, face-like patterns elicited responses with the shortest latencies (approximately 50 ms). Multidimensional scaling analysis indicated that the pulvinar neurons could specifically encode face-like patterns during the first 50-ms period after stimulus onset and classify the stimuli into one of the five different categories during the next 50-ms period. The amount of stimulus information conveyed by the pulvinar neurons and the number of stimulus-differentiating neurons were consistently higher during the second 50-ms period than during the first 50-ms period. These results suggest that responsiveness to face-like patterns during the first 50-ms period might be attributed to ascending inputs from the superior colliculus or the retina, while responsiveness to the five different stimulus categories during the second 50-ms period might be mediated by descending inputs from cortical regions. These findings provide neurophysiological evidence for pulvinar involvement in social cognition and, specifically, rapid coarse facial information processing. © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo
2009-04-01
In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspects of the functional processing of visual attention. Although the right posterior parietal cortex (PPC) is known to play a role in certain visual search tasks, little is known about the timing of its involvement. Three visual search tasks of differing difficulty were carried out: an "easy feature task," a "hard feature task," and a "conjunction task." To investigate the temporal involvement of the PPC in visual search, we applied various stimulus onset asynchronies (SOAs) and measured visual search reaction times. Magnetic stimulation was applied over the right or left PPC with a figure-eight coil. The results show that reaction times in the hard feature task were longer than those in the easy feature task. At SOA = 150 ms, target-present reaction times increased significantly when TMS pulses were applied, compared with the no-TMS condition. We conclude that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation: magnetic stimulation of the right PPC disturbed the processing of the visual search, whereas stimulation of the left PPC had no effect.
Fixation not required: characterizing oculomotor attention capture for looming stimuli.
Lewis, Joanna E; Neider, Mark B
2015-10-01
A stimulus moving toward us, such as a ball being thrown in our direction or a vehicle braking suddenly in front of ours, often represents a stimulus that requires a rapid response. Using a visual search task in which target and distractor items were systematically associated with a looming object, we explored whether this sort of looming motion captures attention, the nature of such capture using eye movement measures (overt/covert), and the extent to which such capture effects are more closely tied to motion onset or the motion itself. We replicated previous findings indicating that looming motion induces response time benefits and costs during visual search (Lin, Franconeri, & Enns, Psychological Science, 19(7), 686-693, 2008). These differences in response times were independent of fixation, indicating that these capture effects did not necessitate overt attentional shifts to a looming object for search benefits or costs to occur. Interestingly, we found no differences in capture benefits and costs associated with differences in looming motion type. Combined, our results suggest that capture effects associated with looming motion are more likely subserved by covert attentional mechanisms rather than overt mechanisms, and attention capture for looming motion is likely related to the motion itself rather than the onset of motion.
Ruthmann, Katja; Schacht, Annekathrin
2017-01-01
Emotional stimuli attract attention and lead to increased activity in the visual cortex. The present study investigated the impact of personal relevance on emotion processing by presenting emotional words within sentences that referred to participants’ significant others or to unknown agents. In event-related potentials, personal relevance increased visual cortex activity within 100 ms after stimulus onset and the amplitudes of the Late Positive Complex (LPC). Moreover, personally relevant contexts gave rise to augmented pupillary responses and higher arousal ratings, suggesting a general boost of attention and arousal. Finally, personal relevance increased emotion-related ERP effects starting around 200 ms after word onset; effects for negative words compared to neutral words were prolonged in duration. Source localizations of these interactions revealed activations in prefrontal regions, in the visual cortex and in the fusiform gyrus. Taken together, these results demonstrate the high impact of personal relevance on reading in general and on emotion processing in particular. PMID:28541505
Gamma oscillation maintains stimulus structure-dependent synchronization in cat visual cortex.
Samonds, Jason M; Bonds, A B
2005-01-01
Visual cortical cells demonstrate both oscillation and synchronization, although the underlying causes and functional significance of these behaviors remain uncertain. We simultaneously recorded single-unit activity with microelectrode arrays in supragranular layers of area 17 of cats paralyzed and anesthetized with propofol and N₂O. Rate-normalized autocorrelograms of 24 cells reveal bursting (100%) and gamma oscillation (63%). Renewal density analysis, used to explore the source of oscillation, suggests a contribution from extrinsic influences such as feedback. However, a bursting refractory period, presumably membrane-based, could also encourage oscillatory firing. When we investigated the source of synchronization for 60 cell pairs we found only moderate correlation of synchrony with bursts and oscillation. We did, nonetheless, discover a possible functional role for oscillation. In all cases of cross-correlograms that exhibited oscillation, the strength of the synchrony was maintained throughout the stimulation period. When no oscillation was apparent, 75% of the cell pairs showed decay in their synchronization. The synchrony between cells is strongly dependent on similar response onset latencies. We therefore propose that structured input, which yields tight organization of latency, is a more likely candidate for the source of synchronization than oscillation. The reliable synchrony at response onset could be driven by spatial and temporal correlation of the stimulus that is preserved through the earlier stages of the visual system. Oscillation then contributes to maintenance of the synchrony to enhance reliable transmission of the information for higher cognitive processing.
Representational dynamics of object recognition: Feedforward and feedback information flows.
Goddard, Erin; Carlson, Thomas A; Dermody, Nadene; Woolgar, Alexandra
2016-03-01
Object perception involves a range of visual and cognitive processes, and is known to include both a feedforward flow of information from early visual cortical areas to higher cortical areas, along with feedback from areas such as prefrontal cortex. Previous studies have found that low and high spatial frequency information regarding object identity may be processed over different timescales. Here we used the high temporal resolution of magnetoencephalography (MEG) combined with multivariate pattern analysis to measure information specifically related to object identity in peri-frontal and peri-occipital areas. Using stimuli closely matched in their low-level visual content, we found that activity in peri-occipital cortex could be used to decode object identity from ~80 ms post stimulus onset, and activity in peri-frontal cortex could also be used to decode object identity from a later time (~265 ms post stimulus onset). Low spatial frequency information related to object identity was present in the MEG signal at an earlier time than high spatial frequency information for peri-occipital cortex, but not for peri-frontal cortex. We additionally used Granger causality analysis to compare feedforward and feedback influences on representational content, and found evidence of both an early feedforward flow and later feedback flow of information related to object identity. We discuss our findings in relation to existing theories of object processing and propose how the methods we use here could be used to address further questions of the neural substrates underlying object perception. Copyright © 2016 Elsevier Inc. All rights reserved.
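The logic of the Granger causality analysis mentioned above can be illustrated with a toy lag-1 comparison: series x "Granger-causes" series y if adding lagged x to an autoregressive model of y reduces the residual error. This is a didactic sketch with synthetic data, not the authors' MEG pipeline, which operates on decoded representational time courses.

```python
import random

def ar_sse(y, preds):
    """SSE of OLS regression of y on one or two predictor columns (no intercept)."""
    if len(preds) == 1:
        p = preds[0]
        beta = sum(a * b for a, b in zip(p, y)) / sum(a * a for a in p)
        resid = [yi - beta * pi for yi, pi in zip(y, p)]
    else:
        p, q = preds
        spp = sum(a * a for a in p); sqq = sum(a * a for a in q)
        spq = sum(a * b for a, b in zip(p, q))
        spy = sum(a * b for a, b in zip(p, y)); sqy = sum(a * b for a, b in zip(q, y))
        det = spp * sqq - spq * spq          # solve 2x2 normal equations
        b1 = (spy * sqq - sqy * spq) / det
        b2 = (sqy * spp - spy * spq) / det
        resid = [yi - b1 * pi - b2 * qi for yi, pi, qi in zip(y, p, q)]
    return sum(r * r for r in resid)

def granger_gain(cause, effect):
    """Fractional SSE reduction from adding lagged 'cause' to an AR(1) model of 'effect'."""
    y, lag_y, lag_x = effect[1:], effect[:-1], cause[:-1]
    full = ar_sse(y, [lag_y, lag_x])
    restricted = ar_sse(y, [lag_y])
    return 1 - full / restricted

# synthetic data: y is driven by the previous sample of x, but not vice versa
random.seed(0)
x = [random.gauss(0, 1) for _ in range(500)]
y = [0.0]
for t in range(1, 500):
    y.append(0.6 * x[t - 1] + 0.3 * random.gauss(0, 1))
```

With this construction, `granger_gain(x, y)` is large while `granger_gain(y, x)` is near zero, the asymmetry that licenses a directional (feedforward vs. feedback) interpretation.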
Altered figure-ground perception in monkeys with an extra-striate lesion.
Supèr, Hans; Lamme, Victor A F
2007-11-05
The visual system binds and segments the elements of an image into coherent objects and their surroundings. Recent findings demonstrate that primary visual cortex is involved in this process of figure-ground organization. In the primary visual cortex the late part of a neural response to a stimulus correlates with figure-ground segregation and perception. Such a late onset indicates an involvement of feedback projections from higher visual areas. To investigate the possible role of feedback in figure-ground perception, we removed dorsal extra-striate areas of the monkey visual cortex. The findings show that figure-ground perception is reduced when the figure is presented in the lesioned hemifield and perception is normal when the figure appears in the intact hemifield. In conclusion, our observations show the importance of recurrent processing in visual perception.
NASA Technical Reports Server (NTRS)
Mattson, D. L.
1975-01-01
The effect of prolonged angular acceleration on choice reaction time to an accelerating visual stimulus was investigated, with 10 commercial airline pilots serving as subjects. The pattern of reaction times during and following acceleration was compared with the pattern of velocity estimates reported during identical trials. Both reaction times and velocity estimates increased at the onset of acceleration, declined prior to the termination of acceleration, and showed an aftereffect. These results are inconsistent with the torsion-pendulum theory of semicircular canal function and suggest that the vestibular adaptation is of central origin.
Testing sensory evidence against mnemonic templates
Myers, Nicholas E; Rohenkohl, Gustavo; Wyart, Valentin; Woolrich, Mark W; Nobre, Anna C; Stokes, Mark G
2015-01-01
Most perceptual decisions require comparisons between current input and an internal template. Classic studies propose that templates are encoded in sustained activity of sensory neurons. However, stimulus encoding is itself dynamic, tracing a complex trajectory through activity space. Which part of this trajectory is pre-activated to reflect the template? Here we recorded magneto- and electroencephalography during a visual target-detection task, and used pattern analyses to decode template, stimulus, and decision-variable representation. Our findings ran counter to the dominant model of sustained pre-activation. Instead, template information emerged transiently around stimulus onset and quickly subsided. Cross-generalization between stimulus and template coding, indicating a shared neural representation, occurred only briefly. Our results are compatible with the proposal that template representation relies on a matched filter, transforming input into task-appropriate output. This proposal was consistent with a signed difference response at the perceptual decision stage, which can be explained by a simple neural model. DOI: http://dx.doi.org/10.7554/eLife.09000.001 PMID:26653854
Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.
Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J
2007-06-01
The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.
Differential temporal dynamics during visual imagery and perception.
Dijkstra, Nadine; Mostert, Pim; Lange, Floris P de; Bosch, Sander; van Gerven, Marcel Aj
2018-05-29
Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery. © 2018, Dijkstra et al.
Two different mechanisms support selective attention at different phases of training.
Itthipuripat, Sirawaj; Cha, Kexin; Byers, Anna; Serences, John T
2017-06-01
Selective attention supports the prioritized processing of relevant sensory information to facilitate goal-directed behavior. Studies in human subjects demonstrate that attentional gain of cortical responses can sufficiently account for attention-related improvements in behavior. On the other hand, studies using highly trained nonhuman primates suggest that reductions in neural noise can better explain attentional facilitation of behavior. Given the importance of selective information processing in nearly all domains of cognition, we sought to reconcile these competing accounts by testing the hypothesis that extensive behavioral training alters the neural mechanisms that support selective attention. We tested this hypothesis using electroencephalography (EEG) to measure stimulus-evoked visual responses from human subjects while they performed a selective spatial attention task over the course of ~1 month. Early in training, spatial attention led to an increase in the gain of stimulus-evoked visual responses. Gain was apparent within ~100 ms of stimulus onset, and a quantitative model based on signal detection theory (SDT) successfully linked the magnitude of this gain modulation to attention-related improvements in behavior. However, after extensive training, this early attentional gain was eliminated even though there were still substantial attention-related improvements in behavior. Accordingly, the SDT-based model required noise reduction to account for the link between the stimulus-evoked visual responses and attentional modulations of behavior. These findings suggest that training can lead to fundamental changes in the way attention alters the early cortical responses that support selective information processing. Moreover, these data facilitate the translation of results across different species and across experimental procedures that employ different behavioral training regimes. PMID:28654635
Heart rate reactivity associated to positive and negative food and non-food visual stimuli.
Kuoppa, Pekka; Tarvainen, Mika P; Karhunen, Leila; Narvainen, Johanna
2016-08-01
Using food as a stimulus is known to cause multiple psychophysiological reactions. Heart rate variability (HRV) is a common tool for assessing physiological reactions in the autonomic nervous system. However, HRV findings related to food stimuli have not been consistent. In this paper, rapid changes in HRV related to positive and negative food and non-food visual stimuli are investigated. The electrocardiogram (ECG) was measured from 18 healthy females while they viewed the pictures. Subjects also filled in the Three-Factor Eating Questionnaire to determine their eating behavior. The inter-beat-interval time series and the HRV parameters were extracted from the ECG. Rapid changes in HRV parameters were studied by calculating the change from the baseline value (10 s window before the stimulus) to the value after stimulus onset (10 s window during the stimulus). A paired t-test showed a significant difference between positive and negative food pictures but not between positive and negative non-food pictures. All HRV parameters decreased for positive food pictures, while they stayed the same or increased slightly for negative food pictures. The eating-behavior characteristic cognitive restraint was negatively correlated with HRV parameters that reflect a decrease in heart rate.
NASA Astrophysics Data System (ADS)
Tan, Bingyao; Mason, Erik; MacLellan, Ben; Bizheva, Kostadinka
2017-02-01
Visually evoked changes of retinal blood flow can serve as an important research tool to investigate eye diseases such as glaucoma and diabetic retinopathy. In this study we used a combined, research-grade, high-resolution Doppler OCT+ERG system to study changes in retinal blood flow (RBF) and retinal neuronal activity in response to visual stimuli of different intensities, durations and types (flicker vs. single flash). Specifically, we used white-light stimuli: 10 ms and 200 ms single flashes, and 1 s and 2 s flicker stimuli with a 20% duty cycle. The study was conducted in vivo in pigmented rats. Both single-flash (SF) and flicker stimuli caused an increase in RBF. The 10 ms SF stimulus did not generate any consistent measurable response, while the 200 ms SF of the same intensity generated a 4% change in RBF, peaking 1.5 s after stimulus onset. Single-flash stimuli produced a 2x smaller change in RBF and a 30% earlier RBF peak response compared with flicker stimuli of the same intensity and duration. Doubling the intensity of SF or flicker stimuli increased the RBF peak magnitude by 1.5x. Shortening the flicker stimulus duration by 2x increased the RBF recovery rate by 2x but had no effect on the rate of RBF change from baseline to peak.
Figure-Ground Organization in Visual Cortex for Natural Scenes
2016-01-01
Figure-ground organization and border-ownership assignment are essential for understanding natural scenes. It has been shown that many neurons in the macaque visual cortex signal border-ownership in displays of simple geometric shapes such as squares, but how well these neurons resolve border-ownership in natural scenes is not known. We studied area V2 neurons in behaving macaques with static images of complex natural scenes. We found that about half of the neurons were border-ownership selective for contours in natural scenes, and this selectivity originated from the image context. The border-ownership signals emerged within 70 ms after stimulus onset, only ∼30 ms after response onset. A substantial fraction of neurons were highly consistent across scenes. Thus, the cortical mechanisms of figure-ground organization are fast and efficient even in images of complex natural scenes. Understanding how the brain performs this task so fast remains a challenge. PMID:28058269
Rapid feature-driven changes in the attentional window.
Leonard, Carly J; Lopez-Calderon, Javier; Kreither, Johanna; Luck, Steven J
2013-07-01
Spatial attention must adjust around an object of interest in a manner that reflects the object's size on the retina as well as the proximity of distracting objects, a process often guided by nonspatial features. This study used ERPs to investigate how quickly the size of this type of "attentional window" can adjust around a fixated target object defined by its color and whether this variety of attention influences the feedforward flow of subsequent information through the visual system. The task involved attending either to a circular region at fixation or to a surrounding annulus region, depending on which region contained an attended color. The region containing the attended color varied randomly from trial to trial, so the spatial distribution of attention had to be adjusted on each trial. We measured the initial sensory ERP response elicited by an irrelevant probe stimulus that appeared in one of the two regions at different times after task display onset. This allowed us to measure the amount of time required to adjust spatial attention on the basis of the location of the task-relevant feature. We found that the probe-elicited sensory response was larger when the probe occurred within the region of the attended dots, and this effect required a delay of approximately 175 msec between the onset of the task display and the onset of the probe. Thus, the window of attention is rapidly adjusted around the point of fixation in a manner that reflects the spatial extent of a task-relevant stimulus, leading to changes in the feedforward flow of subsequent information through the visual system.
Dong, Guangheng; Yang, Lizhu; Shen, Yue
2009-08-21
The present study investigated the course of visual search for a target in a fixed location, using an emotional flanker task. Event-related potentials (ERPs) were recorded while participants performed the task. Emotional facial expressions were used as emotion-eliciting triggers. The course of visual search was analyzed through the emotional effects arising from these emotion-eliciting stimuli. Flanker stimuli showed effects at about 150-250 ms after stimulus onset, while target stimuli showed effects at about 300-400 ms. The visual search sequence in an emotional flanker task moved from a whole overview to a specific target, even though the target always appeared at a known location. The processing sequence was "parallel" in this task. The results supported the feature integration theory of visual search.
Impaired Contingent Attentional Capture Predicts Reduced Working Memory Capacity in Schizophrenia
Mayer, Jutta S.; Fukuda, Keisuke; Vogel, Edward K.; Park, Sohee
2012-01-01
Although impairments in working memory (WM) are well documented in schizophrenia, the specific factors that cause these deficits are poorly understood. In this study, we hypothesized that a heightened susceptibility to attentional capture at an early stage of visual processing would result in working memory encoding problems. 30 patients with schizophrenia and 28 demographically matched healthy participants were presented with a search array and asked to report the orientation of the target stimulus. In some of the trials, a flanker stimulus preceded the search array that either matched the color of the target (relevant-flanker capture) or appeared in a different color (irrelevant-flanker capture). Working memory capacity was determined in each individual using the visual change detection paradigm. Patients needed considerably more time to find the target in the no-flanker condition. After adjusting the individual exposure time, both groups showed equivalent capture costs in the irrelevant-flanker condition. However, in the relevant-flanker condition, capture costs were increased in patients compared to controls when the stimulus onset asynchrony between the flanker and the search array was high. Moreover, the increase in relevant capture costs correlated negatively with working memory capacity. This study demonstrates preserved stimulus-driven attentional capture but impaired contingent attentional capture associated with low working memory capacity in schizophrenia. These findings suggest a selective impairment of top-down attentional control in schizophrenia, which may impair working memory encoding. PMID:23152783
Andersen, S K; Müller, M M
2010-08-03
A central question in the field of attention is whether visual processing is a strictly limited resource, which must be allocated by selective attention. If this were the case, attentional enhancement of one stimulus should invariably lead to suppression of unattended distracter stimuli. Here we examine voluntary cued shifts of feature-selective attention to either one of two superimposed red or blue random dot kinematograms (RDKs) to test whether such a reciprocal relationship between enhancement of an attended and suppression of an unattended stimulus can be observed. The steady-state visual evoked potential (SSVEP), an oscillatory brain response elicited by the flickering RDKs, was measured in human EEG. Supporting limited resources, we observed both an enhancement of the attended and a suppression of the unattended RDK, but this observed reciprocity did not occur concurrently: enhancement of the attended RDK started at 220 ms after cue onset and preceded suppression of the unattended RDK by about 130 ms. Furthermore, we found that behavior was significantly correlated with the SSVEP time course of a measure of selectivity (attended minus unattended) but not with a measure of total activity (attended plus unattended). The significant deviations from a temporally synchronized reciprocity between enhancement and suppression suggest that the enhancement of the attended stimulus may cause the suppression of the unattended stimulus in the present experiment.
Electrophysiological evidence for biased competition in V1 for fear expressions.
West, Greg L; Anderson, Adam A K; Ferber, Susanne; Pratt, Jay
2011-11-01
When multiple stimuli are concurrently displayed in the visual field, they must compete for neural representation at the processing expense of their contemporaries. This biased competition is thought to begin as early as primary visual cortex, and can be driven by salient low-level stimulus features. Stimuli important for an organism's survival, such as facial expressions signaling environmental threat, might be similarly prioritized at this early stage of visual processing. In the present study, we used ERP recordings from striate cortex to examine whether fear expressions can bias the competition for neural representation at the earliest stage of retinotopic visuo-cortical processing when in direct competition with concurrently presented visual information of neutral valence. We found that within 50 msec after stimulus onset, information processing in primary visual cortex is biased in favor of perceptual representations of fear at the expense of competing visual information (Experiment 1). Additional experiments confirmed that the facial display's emotional content rather than low-level features is responsible for this prioritization in V1 (Experiment 2), and that this competition is reliant on a face's upright canonical orientation (Experiment 3). These results suggest that complex stimuli important for an organism's survival can indeed be prioritized at the earliest stage of cortical processing at the expense of competing information, with competition possibly beginning before encoding in V1.
Estimating repetitive spatiotemporal patterns from resting-state brain activity data.
Takeda, Yusuke; Hiroe, Nobuo; Yamashita, Okito; Sato, Masa-Aki
2016-06-01
Repetitive spatiotemporal patterns in spontaneous brain activities have been widely examined in non-human studies. These studies have reported that such patterns reflect past experiences embedded in neural circuits. In human magnetoencephalography (MEG) and electroencephalography (EEG) studies, however, spatiotemporal patterns in resting-state brain activities have not been extensively examined. This is because estimating spatiotemporal patterns from resting-state MEG/EEG data is difficult due to their unknown onsets. Here, we propose a method to estimate repetitive spatiotemporal patterns from resting-state brain activity data, including MEG/EEG. Without the information of onsets, the proposed method can estimate several spatiotemporal patterns, even if they are overlapping. We verified the performance of the method by detailed simulation tests. Furthermore, we examined whether the proposed method could estimate the visual evoked magnetic fields (VEFs) without using stimulus onset information. The proposed method successfully detected the stimulus onsets and estimated the VEFs, implying the applicability of this method to real MEG data. The proposed method was applied to resting-state functional magnetic resonance imaging (fMRI) data and MEG data. The results revealed informative spatiotemporal patterns representing consecutive brain activities that dynamically change with time. Using this method, it is possible to reveal discrete events spontaneously occurring in our brains, such as memory retrieval. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Arrington, Catherine M; Weaver, Starla M
2015-01-01
Under conditions of volitional control in multitask environments, subjects may engage in a variety of strategies to guide task selection. The current research examines whether subjects may sometimes use a top-down control strategy of selecting a task-irrelevant stimulus dimension, such as location, to guide task selection. We term this approach a stimulus set selection strategy. Using a voluntary task switching procedure, subjects voluntarily switched between categorizing letter and number stimuli that appeared in two, four, or eight possible target locations. Effects of stimulus availability, manipulated by varying the stimulus onset asynchrony between the two target stimuli, and of location repetition were analysed to assess the use of a stimulus set selection strategy. Considered across position conditions, Experiment 1 showed effects of both stimulus availability and location repetition on task choice, suggesting that only in the 2-position condition, where selection based on location always results in a target at the selected location, may subjects have been using a stimulus set selection strategy on some trials. Experiment 2 replicated and extended these findings in a visually more cluttered environment. These results indicate that, contrary to current models of task selection in voluntary task switching, the top-down control of task selection may occur in the absence of the formation of an intention to perform a particular task.
Heiser, Laura M; Berman, Rebecca A; Saunders, Richard C; Colby, Carol L
2005-11-01
With each eye movement, a new image impinges on the retina, yet we do not notice any shift in visual perception. This perceptual stability indicates that the brain must be able to update visual representations to take our eye movements into account. Neurons in the lateral intraparietal area (LIP) update visual representations when the eyes move. The circuitry that supports these updated representations remains unknown, however. In this experiment, we asked whether the forebrain commissures are necessary for updating in area LIP when stimulus representations must be updated from one visual hemifield to the other. We addressed this question by recording from LIP neurons in split-brain monkeys during two conditions: stimulus traces were updated either across or within hemifields. Our expectation was that across-hemifield updating activity in LIP would be reduced or abolished after transection of the forebrain commissures. Our principal finding is that LIP neurons can update stimulus traces from one hemifield to the other even in the absence of the forebrain commissures. This finding provides the first evidence that representations in parietal cortex can be updated without the use of direct cortico-cortical links. The second main finding is that updating activity in LIP is modified in the split-brain monkey: across-hemifield signals are reduced in magnitude and delayed in onset compared with within-hemifield signals, which indicates that the pathways for across-hemifield updating are less effective in the absence of the forebrain commissures. Together these findings reveal a dynamic circuit that contributes to updating spatial representations.
Remote distractor effects and saccadic inhibition: spatial and temporal modulation.
Walker, Robin; Benson, Valerie
2013-09-12
The onset of a visual distractor remote from a saccade target is known to increase saccade latency (the remote distractor effect [RDE]). In addition, distractors may also selectively inhibit saccades that would be initiated about 90 ms after distractor onset (termed saccadic inhibition [SI]). Recently, it has been proposed that the transitory inhibition of saccades (SI) may underlie the increase in mean latency (RDE). In a first experiment, the distractor eccentricity was manipulated, and a robust RDE that was strongly modulated by distractor eccentricity was observed. However, the underlying latency distributions did not reveal clear evidence of SI. A second experiment manipulated distractor spatial location and the timing of the distractor onset in relation to the target. An RDE was again observed with remote distractors away from the target axis and under conditions with early-onset distractors that would be unlikely to produce SI, whereas later distractor onsets produced an RDE along with some evidence of an SI effect. A third experiment using a mixed block of target-distractor stimulus-onset asynchronies (SOAs) revealed an RDE that varied with both distractor eccentricity and SOA and changes to latency distributions consistent with the timing of SI. We argue that the notion that SI underpins the RDE is similar to the earlier argument that express saccades underlie the fixation offset (gap) effect and that changes in mean latency and to the shape of the underlying latency distributions following a visual onset may involve more than one inhibitory process.
Emotion Separation Is Completed Early and It Depends on Visual Field Presentation
Liu, Lichan; Ioannides, Andreas A.
2010-01-01
It is now apparent that the visual system reacts to stimuli very fast, with many brain areas activated within 100 ms. It is, however, unclear how much detail is extracted about stimulus properties in the early stages of visual processing. Here, using magnetoencephalography we show that the visual system separates different facial expressions of emotion well within 100 ms after image onset, and that this separation is processed differently depending on where in the visual field the stimulus is presented. Seven right-handed males participated in a face affect recognition experiment in which they viewed happy, fearful and neutral faces. Blocks of images were shown either at the center or in one of the four quadrants of the visual field. For centrally presented faces, the emotions were separated fast, first in the right superior temporal sulcus (STS; 35–48 ms), followed by the right amygdala (57–64 ms) and medial pre-frontal cortex (83–96 ms). For faces presented in the periphery, the emotions were separated first in the ipsilateral amygdala and contralateral STS. We conclude that amygdala and STS likely play a different role in early visual processing, recruiting distinct neural networks for action: the amygdala alerts sub-cortical centers for appropriate autonomic system response for fight or flight decisions, while the STS facilitates more cognitive appraisal of situations and links appropriate cortical sites together. It is then likely that different problems may arise when either network fails to initiate or function properly. PMID:20339549
Human perceptual decision making: disentangling task onset and stimulus onset.
Cardoso-Leite, Pedro; Waszak, Florian; Lepsien, Jöran
2014-07-01
The left dorsolateral prefrontal cortex (ldlPFC) has been highlighted as a key actor in human perceptual decision-making (PDM): It is theorized to support decision-formation independently of stimulus type or motor response. PDM studies, however, generally confound stimulus onset and task onset: when the to-be-recognized stimulus is presented, subjects know that a stimulus is shown and can set up processing resources, even when they do not know which stimulus is shown. We hypothesized that the ldlPFC might be involved in task preparation rather than decision-formation. To test this, we asked participants to report whether sequences of noisy images contained a face or a house within an experimental design that decorrelates stimulus and task onset. Decision-related processes should yield a sustained response during the task, whereas preparation-related areas should yield transient responses at its beginning. The results show that the brain activation pattern at task onset is strikingly similar to that observed in previous PDM studies. In particular, they contradict the idea that ldlPFC forms an abstract decision and suggest instead that its activation reflects preparation for the upcoming task. We further investigated the role of the fusiform face areas and parahippocampal place areas, which are thought to be face and house detectors, respectively, that feed their signals to higher-level decision areas. The response patterns within these areas suggest that this interpretation is unlikely and that the decisions about the presence of a face or a house in a noisy image might instead already be computed within these areas without requiring higher-order areas. Copyright © 2013 Wiley Periodicals, Inc.
Visual Presentation Effects on Identification of Multiple Environmental Sounds
Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio
2016-01-01
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478
Janczyk, Markus; Berryhill, Marian E
2014-04-01
The retro-cue effect (RCE) describes superior working memory performance for validly cued stimulus locations long after encoding has ended. Importantly, this happens with delays beyond the range of iconic memory. In general, the RCE is a stable phenomenon that emerges under varied stimulus configurations and timing parameters. We investigated its susceptibility to dual-task interference to determine the attentional requirements at the time point of cue onset and encoding. In Experiment 1, we compared single- with dual-task conditions. In Experiment 2, we borrowed from the psychological refractory period paradigm and compared conditions with high and low (dual-) task overlap. The secondary task was always binary tone discrimination requiring a manual response. Across both experiments, an RCE was found, but it was diminished in magnitude in the critical dual-task conditions. A previous study did not find evidence that sustained attention is required in the interval between cue offset and test. Our results apparently contradict these findings and point to a critical time period around cue onset and briefly thereafter during which attention is required. PMID:24452383
Taylor, J Eric T; Hilchey, Matthew D; Pratt, Jay
2018-01-24
Dominant methods of investigating exogenous orienting presume that attention is captured most effectively at locations containing new events. This is evidenced by the ubiquitous use of transient stimuli as cues in the literature on exogenous orienting. In the present study, we showed that attention can be oriented exogenously toward a location containing a completely unchanging stimulus by modifying Posner's landmark exogenous spatial-cueing paradigm. Observers searched a six-element array of placeholder stimuli for an onset target. The target was preceded by a decrement in luminance to five of the six placeholders, such that one location remained physically constant. This "nonset" stimulus (so named to distinguish it from a traditional onsetting transient) acted as an exogenous cue, eliciting patterns of facilitation and inhibition at the nonset location and demonstrating that exogenous orienting is not always evident at the location of a visual transient. This method eliminates the decades-long confounding of orienting to a location with the processing of new events at that location, permitting alternative considerations of the nature of attentional selection.
A novel function for the pineal organ in the control of swim depth in the Atlantic halibut larva
NASA Astrophysics Data System (ADS)
Novales Flamarique, Iñigo
2002-02-01
The pineal organ of vertebrates is a photosensitive structure that conveys photoperiod information to the brain. This information influences circadian rhythm and related metabolic processes such as thermoregulation, hatching time, body growth, and the timing of reproduction. This study demonstrates extra-ocular light responses that control swim depth in the larva of the Atlantic halibut, Hippoglossus hippoglossus. Young larvae without a functional eye (<29 days) swim upwards after an average delay of 5 s following the onset of a downwelling light stimulus, but sink downwards a few seconds later. Older larvae (>=29 days), which possess a functional eye, swim immediately downwards (microsecond delay) following the onset of the light stimulus, but proceed to swim upwards several seconds later. These two response patterns are thus opposite in polarity and have different time kinetics. Because the pineal organ of the Atlantic halibut develops during the embryonic stage, and because it is the only centre in the brain that expresses functional visual pigments (opsins) at early larval stages, it is the only photosensory organ capable of generating the extra-ocular responses observed.
Color-Change Detection Activity in the Primate Superior Colliculus.
Herman, James P; Krauzlis, Richard J
2017-01-01
The primate superior colliculus (SC) is a midbrain structure that participates in the control of spatial attention. Previous studies examining the role of the SC in attention have mostly used luminance-based visual features (e.g., motion, contrast) as the stimuli and saccadic eye movements as the behavioral response, both of which are known to modulate the activity of SC neurons. To explore the limits of the SC's involvement in the control of spatial attention, we recorded SC neuronal activity during a task using color, a visual feature dimension not traditionally associated with the SC, and required monkeys to detect threshold-level changes in the saturation of a cued stimulus by releasing a joystick during maintained fixation. Using this color-based spatial attention task, we found substantial cue-related modulation in all categories of visually responsive neurons in the intermediate layers of the SC. Notably, near-threshold changes in color saturation, both increases and decreases, evoked phasic bursts of activity with magnitudes as large as those evoked by stimulus onset. This change-detection activity had two distinctive features: activity for hits was larger than for misses, and the timing of change-detection activity accounted for 67% of joystick release latency, even though it preceded the release by at least 200 ms. We conclude that during attention tasks, SC activity denotes the behavioral relevance of the stimulus regardless of feature dimension and that phasic event-related SC activity is suitable to guide the selection of manual responses as well as saccadic eye movements.
Auditory and Visual Evoked Potentials as a Function of Sleep Deprivation and Irregular Sleep
1989-08-15
and a slow diminution in amplitude within a block of 35 targets. The gradual decrement between blocks is presumably due to the "waning of attention" ... scoring of the CNV. With the stimulus parameters used in the present study, the CNV is a slow negative wave which begins from 260-460 ms after tone onset ... attended (e.g., Hillyard, Hink, Schwent, & Picton, 1973; Schwent & Hillyard, 1975), little research has focused on the effects when a particular
Dynamic mapping of the human visual cortex by high-speed magnetic resonance imaging.
Blamire, A M; Ogawa, S; Ugurbil, K; Rothman, D; McCarthy, G; Ellermann, J M; Hyder, F; Rattner, Z; Shulman, R G
1992-01-01
We report the use of high-speed magnetic resonance imaging to follow the changes in image intensity in the human visual cortex during stimulation by a flashing checkerboard stimulus. Measurements were made in a 2.1-T, 1-m-diameter magnet, part of a Bruker Biospec spectrometer that we had programmed to do echo-planar imaging. A 15-cm-diameter surface coil was used to transmit and receive signals. Images were acquired during periods of stimulation from 2 s to 180 s. Images were acquired in 65.5 ms in a 10-mm slice with in-plane voxel size of 6 x 3 mm. Repetition time (TR) was generally 2 s, although for the long flashing periods, TR = 8 s was used. Voxels were located onto an inversion recovery image taken with 2 x 2 mm in-plane resolution. Image intensity increased after onset of the stimulus. The mean change in signal relative to the prestimulation level (delta S/S) was 9.7% (SD = 2.8%, n = 20) with an echo time of 70 ms. Irrespective of the period of stimulation, the increase in magnetic resonance signal intensity was delayed relative to the stimulus. The mean delay measured from the start of stimulation for each protocol was as follows: 2-s stimulation, delay = 3.5 s (SD = 0.5 s, n = 10) (the delay exceeds stimulus duration); 20- to 24-s stimulation, delay = 5 s (SD = 2 s, n = 20). PMID:1438317
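The reported percent signal change (delta S/S) reduces to a simple ratio of the stimulation-period intensity against the prestimulation baseline. A minimal sketch in Python; the voxel intensity values below are illustrative only, not data from the study:

```python
def percent_signal_change(prestim, stim):
    """Mean percent change of stimulation-period image intensity
    relative to the prestimulation baseline (delta S/S x 100)."""
    base = sum(prestim) / len(prestim)
    peak = sum(stim) / len(stim)
    return 100.0 * (peak - base) / base

# Illustrative numbers chosen to give a ~9.7% increase, as in the abstract
baseline = [1000.0, 1002.0, 998.0]
stimulation = [1097.0, 1098.0, 1096.0]
print(round(percent_signal_change(baseline, stimulation), 1))  # 9.7
```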
Modulation of V1 Spike Response by Temporal Interval of Spatiotemporal Stimulus Sequence
Kim, Taekjun; Kim, HyungGoo R.; Kim, Kayeon; Lee, Choongkil
2012-01-01
The spike activity of single neurons of the primary visual cortex (V1) becomes more selective and reliable in response to wide-field natural scenes compared to smaller stimuli confined to the classical receptive field (RF). However, it is largely unknown what aspects of natural scenes increase the selectivity of V1 neurons. One hypothesis is that modulation by surround interaction is highly sensitive to small changes in spatiotemporal aspects of RF surround. Such a fine-tuned modulation would enable single neurons to hold information about spatiotemporal sequences of oriented stimuli, which extends the role of V1 neurons as a simple spatiotemporal filter confined to the RF. In the current study, we examined the hypothesis in the V1 of awake behaving monkeys, by testing whether the spike response of single V1 neurons is modulated by temporal interval of spatiotemporal stimulus sequence encompassing inside and outside the RF. We used two identical Gabor stimuli that were sequentially presented with a variable stimulus onset asynchrony (SOA): the preceding one (S1) outside the RF and the following one (S2) in the RF. This stimulus configuration enabled us to examine the spatiotemporal selectivity of response modulation from a focal surround region. Although S1 alone did not evoke spike responses, visual response to S2 was modulated for SOA in the range of tens of milliseconds. These results suggest that V1 neurons participate in processing spatiotemporal sequences of oriented stimuli extending outside the RF. PMID:23091631
Internal state of monkey primary visual cortex (V1) predicts figure-ground perception.
Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A F
2003-04-15
When stimulus information enters the visual cortex, it is rapidly processed for identification. However, sometimes the processing of the stimulus is inadequate and the subject fails to notice the stimulus. Human psychophysical studies show that this occurs during states of inattention or absent-mindedness. At a neurophysiological level, it remains unclear what these states are. To study the role of cortical state in perception, we analyzed neural activity in the monkey primary visual cortex before the appearance of a stimulus. We show that, before the appearance of a reported stimulus, neural activity was stronger and more correlated than for a not-reported stimulus. This indicates that the strength of neural activity and the functional connectivity between neurons in the primary visual cortex participate in the perceptual processing of stimulus information. Thus, to detect a stimulus, the visual cortex needs to be in an appropriate state.
Seki, Yoshimasa; Okanoya, Kazuo
2008-02-01
Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.
Motion onset does not capture attention when subsequent motion is "smooth".
Sunny, Meera Mary; von Mühlenen, Adrian
2011-12-01
Previous research on the attentional effects of moving objects has shown that motion per se does not capture attention. However, in later studies it was argued that the onset of motion does capture attention. Here, we show that this motion-onset effect critically depends on motion jerkiness--that is, the rate at which the moving stimulus is refreshed. Experiment 1 used search displays with a static, a motion-onset, and an abrupt-onset stimulus, while systematically varying the refresh rate of the moving stimulus. The results showed that motion onset only captures attention when subsequent motion is jerky (8 and 17 Hz), not when it is smooth (33 and 100 Hz). Experiment 2 replaced motion onset with continuous motion, showing that motion jerkiness does not affect how continuous motion is processed. These findings do not support accounts that assume a special role for motion onset, but they are in line with the more general unique-event account.
Ostarek, Markus; Huettig, Falk
2017-03-01
The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" → picture of a bottle) versus incongruent ("bottle" → picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, that is, what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
A comparison of methods for teaching receptive labeling to children with autism spectrum disorders.
Grow, Laura L; Carr, James E; Kodak, Tiffany M; Jostad, Candice M; Kisamore, April N
2011-01-01
Many early intervention curricular manuals recommend teaching auditory-visual conditional discriminations (i.e., receptive labeling) using the simple-conditional method in which component simple discriminations are taught in isolation and in the presence of a distracter stimulus before the learner is required to respond conditionally. Some have argued that this procedure might be susceptible to faulty stimulus control such as stimulus overselectivity (Green, 2001). Consequently, there has been a call for the use of alternative teaching procedures such as the conditional-only method, which involves conditional discrimination training from the onset of intervention. The purpose of the present study was to compare the simple-conditional and conditional-only methods for teaching receptive labeling to 3 young children diagnosed with autism spectrum disorders. The data indicated that the conditional-only method was a more reliable and efficient teaching procedure. In addition, several error patterns emerged during training using the simple-conditional method. The implications of the results with respect to current teaching practices in early intervention programs are discussed.
Cortical oscillations related to processing congruent and incongruent grapheme-phoneme pairs.
Herdman, Anthony T; Fujioka, Takako; Chau, Wilkin; Ross, Bernhard; Pantev, Christo; Picton, Terence W
2006-05-15
In this study, we investigated changes in cortical oscillations following congruent and incongruent grapheme-phoneme stimuli. Hiragana graphemes and phonemes were simultaneously presented as congruent or incongruent audiovisual stimuli to native Japanese-speaking participants. The discriminative reaction time was 57 ms shorter for congruent than incongruent stimuli. Analysis of MEG responses using synthetic aperture magnetometry (SAM) revealed that congruent stimuli evoked larger 2-10 Hz activity in the left auditory cortex within the first 250 ms after stimulus onset, and smaller 2-16 Hz activity in bilateral visual cortices between 250 and 500 ms. These results indicate that congruent visual input can modify cortical activity in the left auditory cortex.
Binocular summation and peripheral visual response time
NASA Technical Reports Server (NTRS)
Gilliland, K.; Haines, R. F.
1975-01-01
Six males were administered a peripheral visual response time test to the onset of brief small stimuli imaged in 10-deg arc separation intervals across the dark adapted horizontal retinal meridian under both binocular and monocular viewing conditions. This was done in an attempt to verify the existence of peripheral binocular summation using a response time measure. The results indicated that from 50-deg arc right to 50-deg arc left of the line of sight binocular summation is a reasonable explanation for the significantly faster binocular data. The stimulus position by viewing eye interaction was also significant. A discussion of these and other analyses is presented along with a review of related literature.
The effect of changes in stimulus level on electrically evoked cortical auditory potentials.
Kim, Jae-Ryong; Brown, Carolyn J; Abbas, Paul J; Etler, Christine P; O'Brien, Sara
2009-06-01
The purpose of this study was to determine whether the electrically evoked acoustic change complex (EACC) could be used to assess sensitivity to changes in stimulus level in cochlear implant (CI) recipients and to investigate the relationship between EACC amplitude and rate of growth of the N1-P2 onset response with increases in stimulus level. Twelve postlingually deafened adults using Nucleus CI24 CIs participated in this study. Nucleus Implant Communicator (NIC) routines were used to bypass the speech processor and to control the stimulation of the implant directly. The stimulus consisted of an 800 msec burst of a 1000 pps biphasic pulse train. A change in the stimulus level was introduced 400 msec after stimulus onset. Band-pass filtering (1 to 100 Hz) was used to minimize stimulus artifact. Four to six recordings of 50 sweeps were obtained for each condition, and averaged responses were analyzed in the time domain using standard peak picking procedures. Cortical auditory change potentials were recorded from CI users in response to both increases and decreases in stimulation level. The amplitude of the EACC was found to be dependent on the magnitude of the stimulus change. Increases in stimulus level elicited more robust EACC responses than decreases in stimulus level. Also, EACC amplitudes were significantly correlated with the slope of the growth of the onset response. This work describes the effect of change in stimulus level on electrically evoked auditory change potentials in CI users. The amplitude of the EACC was found to be related both to the magnitude of the stimulus change introduced and to the rate of growth of the N1-P2 onset response. To the extent that the EACC reflects processing of stimulus change, it could potentially be a valuable tool for assessing neural processing of the kinds of stimulation patterns produced by a CI. 
Further studies are needed, however, to determine the relationships between the EACC and psychophysical measures of intensity discrimination in CI recipients.
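The averaging of sweeps and the peak-picking of the N1-P2 onset response described above are conventional evoked-potential procedures and can be sketched generically. A minimal Python sketch; the latency windows and function names are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np

def average_sweeps(sweeps):
    """Average equal-length recording sweeps (trials) to reduce
    noise, as in standard evoked-potential averaging."""
    return np.mean(np.asarray(sweeps, dtype=float), axis=0)

def n1_p2_amplitude(avg, times, n1_win=(0.05, 0.15), p2_win=(0.15, 0.30)):
    """Peak-to-peak N1-P2 amplitude via simple peak picking:
    N1 = most negative sample in its latency window,
    P2 = most positive sample in its window (windows in seconds;
    the bounds here are illustrative, not from the study)."""
    n1 = avg[(times >= n1_win[0]) & (times <= n1_win[1])].min()
    p2 = avg[(times >= p2_win[0]) & (times <= p2_win[1])].max()
    return p2 - n1
```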
Lateral orbitofrontal cortex anticipates choices and integrates prior with current information
Nogueira, Ramon; Abolafia, Juan M.; Drugowitsch, Jan; Balaguer-Ballester, Emili; Sanchez-Vives, Maria V.; Moreno-Bote, Rubén
2017-01-01
Adaptive behavior requires integrating prior with current information to anticipate upcoming events. Brain structures related to this computation should bring relevant signals from the recent past into the present. Here we report that rats can integrate the most recent prior information with sensory information, thereby improving behavior on a perceptual decision-making task with outcome-dependent past trial history. We find that anticipatory signals in the orbitofrontal cortex about upcoming choice increase over time and are even present before stimulus onset. These neuronal signals also represent the stimulus and relevant second-order combinations of past state variables. The encoding of choice, stimulus and second-order past state variables resides, up to movement onset, in overlapping populations. The neuronal representation of choice before stimulus onset and its build-up once the stimulus is presented suggest that orbitofrontal cortex plays a role in transforming immediate prior and stimulus information into choices using a compact state-space representation. PMID:28337990
The influence of expertise and of physical complexity on visual short-term memory consolidation.
Sun, Huiming; Zimmer, Hubert D; Fu, Xiaolan
2011-04-01
We investigated whether the expertise of a perceiver and the physical complexity of a stimulus influence consolidation of visual short-term memory (VSTM) in a S1-S2 (Stimulus 1-Stimulus 2) change detection task. Consolidation is assumed to make transient perceptual representations in VSTM more durable, and it is investigated by postexposure of a mask shortly after offset of the perceived stimulus (S1; 17 to 483 ms). We presented colours, Chinese characters, pseudocharacters, and novel symbols to novices (Germans) or experts of Chinese language (Chinese readers). Physical complexity was manipulated by the number of strokes. Unfamiliar material was remembered worse than familiar material (Experiments 1, 2, and 3). For novices the absolute VSTM performance was better for physically simple than for complex material, whereas for experts the complexity did not matter when Chinese readers memorized Chinese characters (Experiment 3). Articulatory suppression did not change these effects (Experiment 2). We always observed a strong effect of SOA, but this effect was influenced neither by physical complexity nor by expertise; only the length of the interstimulus interval between S1 and the mask was relevant. This was observed even with a short stimulus onset asynchrony (SOA) of 100 ms (Experiment 2) and in comparing colours and characters (Experiment 5). However, masks impaired memory if they were presented at the locations of the to-be-memorized items, but not beside them--that is, interference was location-based (Experiment 6). We explain the effect of SOA by the assumption that it takes time to stop encoding of information presented at item locations with the offset of S1. The increasing resistance against interference by irrelevant material appears as consolidation of S1.
Attractive faces temporally modulate visual attention
Nakamura, Koyo; Kawabata, Hideaki
2014-01-01
Facial attractiveness is an important biological and social signal in social interaction. Recent research has demonstrated that an attractive face captures greater spatial attention than an unattractive face does. Little is known, however, about the temporal characteristics of visual attention for facial attractiveness. In this study, we investigated the temporal modulation of visual attention induced by facial attractiveness by using a rapid serial visual presentation. Fourteen male faces and two female faces were presented in rapid succession for 160 ms each, and participants were asked to identify two female faces embedded among a series of multiple male distractor faces. Identification of a second female target (T2) was impaired when a first target (T1) was attractive compared to neutral or unattractive faces, at 320 ms stimulus onset asynchrony (SOA); identification was improved when T1 was attractive compared to unattractive faces at 640 ms SOA. These findings suggest that the spontaneous appraisal of facial attractiveness modulates temporal attention. PMID:24994994
Lieberman, Amy M; Borovsky, Arielle; Mayberry, Rachel I
2018-01-01
Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eyetracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimizing visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process, and theoretical implications are discussed.
Direct Relationship Between Perceptual and Motor Variability
NASA Technical Reports Server (NTRS)
Liston, Dorion B.; Stone, Leland S.
2010-01-01
The time that elapses between stimulus onset and the onset of a saccadic eye movement is longer and more variable than can be explained by neural transmission times and synaptic delays (Carpenter, 1981, in: Eye Movements: Cognition & Visual Perception, Erlbaum). In theory, noise underlying response-time (RT) variability could arise at any point along the sensorimotor cascade, from sensory noise arising within the early visual processing shared with perception to noise in the motor criterion or commands necessary to trigger movements. These two loci for internal noise can be distinguished empirically; sensory internal noise predicts that response time will correlate with perceived stimulus magnitude whereas motor internal noise predicts no such correlation. Methods. We used the data described by Liston and Stone (2008, JNS 28:13866-13875), in which subjects performed a 2AFC saccadic brightness discrimination task and the perceived brightness of the chosen stimulus was then quantified in a second 2AFC perceptual task. Results. We binned each subject's data into quartiles for both signal strength (from dimmest to brightest) and RT (from slowest to fastest) and analyzed the trends in perceived brightness. We found significant effects of both signal strength (as expected) and RT on normalized perceived brightness (both p less than 0.0001, 2-way ANOVA), without significant interaction (p = 0.95, 2-way ANOVA). A plot of normalized perceived brightness versus normalized RT shows that more than half of the variance was shared (r2 = 0.56, p less than 0.0001). To rule out any possibility that some signal-strength related artifact was generating this effect, we ran a control analysis on pairs of trials with repeated presentations of identical stimuli and found that stimuli are perceived to be brighter on trials with faster saccades (p less than 0.001, paired t-test across subjects). Conclusion. 
These data show that shared early visual internal noise jitters perceived brightness and the saccadic motor output in parallel. While the present correlation could theoretically result, either directly or indirectly, from some low-level brainstem or retinal mechanism (e.g., arousal, pupil size, photoreceptor noise) that influences both visual and oculomotor circuits, this is unlikely given the earlier finding that the variability in perceived motion direction and smooth-pursuit motor output is highly correlated (Stone and Krauzlis, 2003, JOV 3:725-736), suggesting that cortical circuits contribute to the shared internal noise.
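The per-subject quartile binning and the shared-variance figure reported above can be sketched generically in Python. This is an illustrative reconstruction of the analysis steps, not the authors' code:

```python
import numpy as np

def quartile_bins(values):
    """Assign each trial a quartile label 0-3 based on the
    subject's own distribution of `values` (e.g., RT or
    signal strength), mirroring the binning in the abstract."""
    cuts = np.quantile(values, [0.25, 0.5, 0.75])
    return np.searchsorted(cuts, values, side="right")

def shared_variance(x, y):
    """Fraction of shared variance between two measures:
    the squared Pearson correlation coefficient (r^2)."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r
```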
Summation of visual motion across eye movements reflects a nonspatial decision mechanism.
Morris, Adam P; Liu, Charles C; Cropper, Simon J; Forte, Jason D; Krekelberg, Bart; Mattingley, Jason B
2010-07-21
Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g., "spatiotopic" receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.
The effects of task difficulty on visual search strategy in virtual 3D displays.
Pomplun, Marc; Garaas, Tyler W; Carrasco, Marisa
2013-08-28
Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios.
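One of the new measures proposed here, saccadic step size, is simply the distance between successive fixation positions along the scan path. A minimal sketch with a hypothetical helper function (not the authors' code):

```python
import math

def saccadic_step_sizes(fixations):
    """Euclidean distance between each pair of successive
    fixation positions (x, y) along a scan path."""
    return [math.dist(a, b) for a, b in zip(fixations, fixations[1:])]

# Three fixations: one 5-unit saccade, then a refixation in place
print(saccadic_step_sizes([(0, 0), (3, 4), (3, 4)]))  # [5.0, 0.0]
```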
Rosenblatt, Steven David; Crane, Benjamin Thomas
2015-01-01
A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing a set of visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8-s viewing interval with the inertial stimulus occurring over the final 1 s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1-s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). 
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
Nakamura, S; Shimojo, S
1998-10-01
The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli when stimulus sizes were the same. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.
Visually cued motor synchronization: modulation of fMRI activation patterns by baseline condition.
Cerasa, Antonio; Hagberg, Gisela E; Bianciardi, Marta; Sabatini, Umberto
2005-01-03
A well-known issue in functional neuroimaging studies, regarding motor synchronization, is to design suitable control tasks able to discriminate between the brain structures involved in primary time-keeper functions and those related to other processes such as attentional effort. The aim of this work was to investigate how the predictability of stimulus onsets in the baseline condition modulates the activity in brain structures related to processes involved in time-keeper functions during the performance of a visually cued motor synchronization task (VM). The rationale behind this choice derives from the notion that using different stimulus predictability can vary the subject's attention and consequently the neural activity. For this purpose, baseline levels of BOLD activity were obtained from 12 subjects during a conventional-baseline condition: maintained fixation of the visual rhythmic stimuli presented in the VM task, and a random-baseline condition: maintained fixation of visual stimuli occurring randomly. fMRI analysis demonstrated that while brain areas with a documented role in basic time processing are detected independent of the baseline condition (right cerebellum, bilateral putamen, left thalamus, left superior temporal gyrus, left sensorimotor cortex, left dorsal premotor cortex and supplementary motor area), the ventral premotor cortex, caudate nucleus, insula and inferior frontal gyrus exhibited a baseline-dependent activation. We conclude that maintained fixation of unpredictable visual stimuli can be employed in order to reduce or eliminate neural activity related to attentional components present in the synchronization task.
Attention capture by abrupt onsets: re-visiting the priority tag model.
Sunny, Meera M; von Mühlenen, Adrian
2013-01-01
Abrupt onsets have been shown to strongly attract attention in a stimulus-driven, bottom-up manner. However, the precise mechanism that drives capture by onsets is still debated. According to the new object account, abrupt onsets capture attention because they signal the appearance of a new object. Yantis and Johnson (1990) used a visual search task and showed that up to four onsets can be automatically prioritized. However, in their study the number of onsets co-varied with the total number of items in the display, allowing for a possible confound between these two variables. In the present study, display size was fixed at eight items while the number of onsets was systematically varied between zero and eight. Experiment 1 showed a systematic increase in reaction times with increasing number of onsets. This increase was stronger when the target was an onset than when it was a no-onset item, a result that is best explained by a model according to which only one onset is automatically prioritized. Even when the onsets were marked in red (Experiment 2), nearly half of the participants continued to prioritize only one onset item. Only when onset and no-onset targets were blocked (Experiment 3) did participants start to search selectively through the set of only the relevant target type. These results further support the finding that only one onset captures attention. Many bottom-up models of attention capture, like masking or saliency accounts, can efficiently explain this finding.
Activity in early visual areas predicts interindividual differences in binocular rivalry dynamics
Yamashiro, Hiroyuki; Mano, Hiroaki; Umeda, Masahiro; Higuchi, Toshihiro; Saiki, Jun
2013-01-01
When dissimilar images are presented to the two eyes, binocular rivalry (BR) occurs, and perception alternates spontaneously between the images. Although neural correlates of the oscillating perception during BR have been found in multiple sites along the visual pathway, the source of BR dynamics is unclear. Psychophysical and modeling studies suggest that both low- and high-level cortical processes underlie BR dynamics. Previous neuroimaging studies have demonstrated the involvement of high-level regions by showing that frontal and parietal cortices responded time locked to spontaneous perceptual alternation in BR. However, a potential contribution of early visual areas to BR dynamics has been overlooked, because these areas also responded to the physical stimulus alternation mimicking BR. In the present study, instead of focusing on activity during perceptual switches, we highlighted brain activity during suppression periods to investigate a potential link between activity in human early visual areas and BR dynamics. We used a strong interocular suppression paradigm called continuous flash suppression to suppress and fluctuate the visibility of a probe stimulus and measured retinotopic responses to the onset of the invisible probe using functional MRI. There were ∼130-fold differences in the median suppression durations across 12 subjects. The individual differences in suppression durations could be predicted by the amplitudes of the retinotopic activity in extrastriate visual areas (V3 and V4v) evoked by the invisible probe. Weaker responses were associated with longer suppression durations. These results demonstrate that retinotopic representations in early visual areas play a role in the dynamics of perceptual alternations during BR. PMID:24353304
Distinct roles of the cortical layers of area V1 in figure-ground segregation.
Self, Matthew W; van Kerkoerle, Timo; Supèr, Hans; Roelfsema, Pieter R
2013-11-04
What roles do the different cortical layers play in visual processing? We recorded simultaneously from all layers of the primary visual cortex while monkeys performed a figure-ground segregation task. This task can be divided into different subprocesses that are thought to engage feedforward, horizontal, and feedback processes at different time points. These different connection types have different patterns of laminar terminations in V1 and can therefore be distinguished with laminar recordings. We found that the visual response started 40 ms after stimulus presentation in layers 4 and 6, which are targets of feedforward connections from the lateral geniculate nucleus and distribute activity to the other layers. Boundary detection started shortly after the visual response. In this phase, boundaries of the figure induced synaptic currents and stronger neuronal responses in upper layer 4 and the superficial layers ~70 ms after stimulus onset, consistent with the hypothesis that they are detected by horizontal connections. In the next phase, ~30 ms later, synaptic inputs arrived in layers 1, 2, and 5 that receive feedback from higher visual areas, which caused the filling in of the representation of the entire figure with enhanced neuronal activity. The present results reveal unique contributions of the different cortical layers to the formation of a visual percept. This new blueprint of laminar processing may generalize to other tasks and to other areas of the cerebral cortex, where the layers are likely to have roles similar to those in area V1. Copyright © 2013 Elsevier Ltd. All rights reserved.
Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.
Chen, Yi-Chuan; Spence, Charles
2011-10-01
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
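The sensitivity and response criterion measures mentioned above are the standard signal detection theory quantities d′ and c, computed from hit and false-alarm rates via the inverse normal CDF. The formulas are textbook SDT, not taken from this study, and the example rates below are invented for illustration:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Compute sensitivity (d') and response criterion (c) from
    hit and false-alarm rates using the inverse normal CDF."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical rates for one picture-detection block
d, c = sdt_measures(0.85, 0.20)  # d' near 1.88, c slightly liberal
```

A higher d′ indicates better discrimination of picture-present from picture-absent trials independently of response bias, which is why the authors report sensitivity rather than raw accuracy.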
de Graaf, Tom A; Cornelsen, Sonja; Jacobs, Christianne; Sack, Alexander T
2011-12-01
Transcranial magnetic stimulation (TMS) can be used to mask visual stimuli, disrupting visual task performance or preventing visual awareness. While TMS masking studies generally fix stimulation intensity, we hypothesized that varying the intensity of TMS pulses in a masking paradigm might inform several ongoing debates concerning TMS disruption of vision as measured subjectively versus objectively, and pre-stimulus (forward) versus post-stimulus (backward) TMS masking. We here show that both pre-stimulus TMS pulses and post-stimulus TMS pulses could strongly mask visual stimuli. We found no dissociations between TMS effects on the subjective and objective measures of vision for any masking window or intensity, ruling out the possibility that TMS intensity levels determine whether dissociations between subjective and objective vision are obtained. For the post-stimulus time window particularly, we suggest that these data provide new constraints for (e.g. recurrent) models of vision and visual awareness. Finally, our data are in line with the idea that pre-stimulus masking operates differently from conventional post-stimulus masking. Copyright © 2011 Elsevier Inc. All rights reserved.
Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I
2015-01-06
When social animals communicate, the onset of informative content in one modality varies considerably relative to the other, such as when visual orofacial movements precede a vocalization. These naturally occurring asynchronies do not disrupt intelligibility or perceptual coherence. However, they occur on time scales where they likely affect integrative neuronal activity in ways that have remained unclear, especially for hierarchically downstream regions in which neurons exhibit temporally imprecise but highly selective responses to communication signals. To address this, we exploited naturally occurring face- and voice-onset asynchronies in primate vocalizations. Using these as stimuli we recorded cortical oscillations and neuronal spiking responses from functional MRI (fMRI)-localized voice-sensitive cortex in the anterior temporal lobe of macaques. We show that the onset of the visual face stimulus resets the phase of low-frequency oscillations, and that the face-voice asynchrony affects the prominence of two key types of neuronal multisensory responses: enhancement or suppression. Our findings show a three-way association between temporal delays in audiovisual communication signals, phase-resetting of ongoing oscillations, and the sign of multisensory responses. The results reveal how natural onset asynchronies in cross-sensory inputs regulate network oscillations and neuronal excitability in the voice-sensitive cortex of macaques, a suggested animal model for human voice areas. These findings also advance predictions on the impact of multisensory input on neuronal processes in face areas and other brain regions.
Deficient multisensory integration in schizophrenia: an event-related potential study.
Stekelenburg, Jeroen J; Maes, Jan Pieter; Van Gool, Arthur R; Sitskoorn, Margriet; Vroomen, Jean
2013-07-01
In many natural audiovisual events (e.g., the sight of a face articulating the syllable /ba/), the visual signal precedes the sound and thus allows observers to predict the onset and the content of the sound. In healthy adults, the N1 component of the event-related brain potential (ERP), reflecting neural activity associated with basic sound processing, is suppressed if a sound is accompanied by a video that reliably predicts sound onset. If the sound does not match the content of the video (e.g., hearing /ba/ while lipreading /fu/), the later occurring P2 component is affected. Here, we examined whether these visual information sources affect auditory processing in patients with schizophrenia. The electroencephalography (EEG) was recorded in 18 patients with schizophrenia and compared with that of 18 healthy volunteers. As stimuli we used video recordings of natural actions in which visual information preceded and predicted the onset of the sound that was either congruent or incongruent with the video. For the healthy control group, visual information reduced the auditory-evoked N1 if compared to a sound-only condition, and stimulus-congruency affected the P2. This reduction in N1 was absent in patients with schizophrenia, and the congruency effect on the P2 was diminished. Distributed source estimations revealed deficits in the network subserving audiovisual integration in patients with schizophrenia. The results show a deficit in multisensory processing in patients with schizophrenia and suggest that multisensory integration dysfunction may be an important and, to date, under-researched aspect of schizophrenia. Copyright © 2013. Published by Elsevier B.V.
Is nevtral NEUTRAL? Visual similarity effects in the early phases of written-word recognition.
Marcet, Ana; Perea, Manuel
2017-08-01
For simplicity, contemporary models of written-word recognition and reading have unspecified feature/letter levels: they predict that the visually similar substituted-letter nonword PEQPLE is as effective at activating the word PEOPLE as the visually dissimilar substituted-letter nonword PEYPLE. Previous empirical evidence on the effects of visual similarity across letters during written-word recognition is scarce and inconclusive. To examine whether visual similarity across letters plays a role early in word processing, we conducted two masked priming lexical decision experiments (stimulus-onset asynchrony = 50 ms). The substituted-letter primes were visually very similar to the target letters (u/v in Experiment 1 and i/j in Experiment 2; e.g., nevtral-NEUTRAL). For comparison purposes, we included an identity prime condition (neutral-NEUTRAL) and a dissimilar-letter prime condition (neztral-NEUTRAL). Results showed that the similar-letter prime condition produced faster word identification times than the dissimilar-letter prime condition. We discuss how models of written-word recognition should be amended to capture visual similarity effects across letters.
Phasic alertness cues modulate visual processing speed in healthy aging.
Haupt, Marleen; Sorg, Christian; Napiórkowski, Natan; Finke, Kathrin
2018-05-31
Warning signals temporarily increase the rate of visual information uptake in younger participants and thus optimize perception in critical situations. It is unclear whether such important preparatory processes are preserved in healthy aging. We parametrically assessed the effects of auditory alertness cues on visual processing speed and their time course using a whole report paradigm based on the computational Theory of Visual Attention. We replicated prior findings of significant alerting benefits in younger adults. In conditions with short cue-target onset asynchronies, this effect was baseline-dependent. As younger participants with high baseline speed did not show a profit, an inverted U-shaped function of phasic alerting and visual processing speed was implied. Older adults also showed a significant cue-induced benefit. Bayesian analyses indicated that the cueing benefit on visual processing speed was comparably strong across age groups. Our results indicate that in aging individuals, comparable to younger ones, perception is active and increased expectancy of the appearance of a relevant stimulus can increase the rate of visual information uptake. Copyright © 2018 Elsevier Inc. All rights reserved.
Craske, Michelle G.; Wolitzky–Taylor, Kate B.; Mineka, Susan; Zinbarg, Richard; Waters, Allison M.; Vrshek–Schallhorn, Suzanne; Epstein, Alyssa; Naliboff, Bruce; Ornitz, Edward
2013-01-01
The current study evaluated the degree to which startle reflexes (SRs) in safe conditions versus danger conditions were predictive of the onset of anxiety disorders. Specificity of these effects to anxiety disorders was evaluated in comparison to unipolar depressive disorders and with consideration of level of neuroticism. A startle paradigm was administered at baseline to 132 nondisordered adolescents as part of a longitudinal study examining risk factors for emotional disorders. Participants underwent a repetition of eight safe-danger sequences and were told that delivery of an aversive stimulus leading to a muscle contraction of the arm would occur only in the late part of danger conditions. One aversive stimulus occurred midway in the safe-danger sequences. Participants were assessed for the onset of anxiety and unipolar depressive disorders annually over the next 3 to 4 years. Larger SR magnitude during safe conditions following delivery of the aversive stimulus predicted the subsequent first onset of anxiety disorders. Moreover, prediction of the onset of anxiety disorders remained significant above and beyond the effects of comorbid unipolar depression, neuroticism, and subjective ratings of intensity of the aversive stimulus. In sum, elevated responding to safe conditions following an aversive stimulus appears to be a specific, prospective risk factor for the first onset of anxiety disorders. PMID:21988452
The influence of spontaneous activity on stimulus processing in primary visual cortex.
Schölvinck, M L; Friston, K J; Rees, G
2012-02-01
Spontaneous activity in the resting human brain has been studied extensively; however, how such activity affects the local processing of a sensory stimulus is relatively unknown. Here, we examined the impact of spontaneous activity in primary visual cortex on neuronal and behavioural responses to a simple visual stimulus, using functional MRI. Stimulus-evoked responses remained essentially unchanged by spontaneous fluctuations, combining with them in a largely linear fashion (i.e., with little evidence for an interaction). However, interactions between spontaneous fluctuations and stimulus-evoked responses were evident behaviourally; high levels of spontaneous activity tended to be associated with increased stimulus detection at perceptual threshold. Our results extend those found in studies of spontaneous fluctuations in motor cortex and higher order visual areas, and suggest a fundamental role for spontaneous activity in stimulus processing. Copyright © 2011. Published by Elsevier Inc.
Variability and Correlations in Primary Visual Cortical Neurons Driven by Fixational Eye Movements
McFarland, James M.; Cumming, Bruce G.
2016-01-01
The ability to distinguish between elements of a sensory neuron's activity that are stimulus independent versus driven by the stimulus is critical for addressing many questions in systems neuroscience. This is typically accomplished by measuring neural responses to repeated presentations of identical stimuli and identifying the trial-variable components of the response as noise. In awake primates, however, small “fixational” eye movements (FEMs) introduce uncontrolled trial-to-trial differences in the visual stimulus itself, potentially confounding this distinction. Here, we describe novel analytical methods that directly quantify the stimulus-driven and stimulus-independent components of visual neuron responses in the presence of FEMs. We apply this approach, combined with precise model-based eye tracking, to recordings from primary visual cortex (V1), finding that standard approaches that ignore FEMs typically miss more than half of the stimulus-driven neural response variance, creating substantial biases in measures of response reliability. We show that these effects are likely not isolated to the particular experimental conditions used here, such as the choice of visual stimulus or spike measurement time window, and thus will be a more general problem for V1 recordings in awake primates. We also demonstrate that measurements of the stimulus-driven and stimulus-independent correlations among pairs of V1 neurons can be greatly biased by FEMs. These results thus illustrate the potentially dramatic impact of FEMs on measures of signal and noise in visual neuron activity and also demonstrate a novel approach for controlling for these eye-movement-induced effects. SIGNIFICANCE STATEMENT Distinguishing between the signal and noise in a sensory neuron's activity is typically accomplished by measuring neural responses to repeated presentations of an identical stimulus. 
For recordings from the visual cortex of awake animals, small “fixational” eye movements (FEMs) inevitably introduce trial-to-trial variability in the visual stimulus, potentially confounding such measures. Here, we show that FEMs often have a dramatic impact on several important measures of response variability for neurons in primary visual cortex. We also present an analytical approach for quantifying signal and noise in visual neuron activity in the presence of FEMs. These results thus highlight the importance of controlling for FEMs in studies of visual neuron function, and demonstrate novel methods for doing so. PMID:27277801
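The conventional repeat-based analysis that the authors argue is confounded by FEMs treats the trial-averaged response (the PSTH) as the stimulus-driven signal and the residual trial-to-trial deviation as noise. A toy sketch of that standard decomposition follows; the array shapes, simulated data, and function name are invented for illustration and are not the authors' FEM-corrected method:

```python
import numpy as np

def signal_noise_decomposition(responses):
    """responses: (n_trials, n_timepoints) array of responses to
    repeats of a nominally identical stimulus. Returns the variance
    of the trial-averaged response (treated as stimulus-driven
    signal) and the residual across-trial variance (treated as
    noise) under the conventional repeat-based assumption."""
    mean_response = responses.mean(axis=0)          # PSTH
    signal_var = mean_response.var()                # stimulus-driven
    noise_var = (responses - mean_response).var()   # trial-to-trial
    return signal_var, noise_var

# Simulated neuron: sinusoidal signal plus independent trial noise
rng = np.random.default_rng(0)
true_signal = np.sin(np.linspace(0, 2 * np.pi, 100))
trials = true_signal + rng.normal(0, 0.5, size=(200, 100))
sv, nv = signal_noise_decomposition(trials)
```

The point of the abstract is that FEMs break the "identical stimulus" assumption baked into this computation: eye-movement-induced stimulus differences are absorbed into `noise_var`, deflating the apparent signal and biasing reliability and noise-correlation estimates.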
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Comparison on driving fatigue related hemodynamics activated by auditory and visual stimulus
NASA Astrophysics Data System (ADS)
Deng, Zishan; Gao, Yuan; Li, Ting
2018-02-01
As one of the main causes of traffic accidents, driving fatigue deserves researchers' attention, and its detection and monitoring during long-term driving require new techniques. Since functional near-infrared spectroscopy (fNIRS) can detect cerebral hemodynamic responses, it is a promising candidate for fatigue-level detection. Here, we performed three different kinds of experiments on a driver and recorded his cerebral hemodynamic responses during long hours of driving using our fNIRS-based device. Each experiment lasted 7 hours, and one of three specific tests, probing the driver's response to sounds, traffic lights, and direction signs respectively, was administered every hour. The results showed that, in the first few hours, visual stimuli induced fatigue more readily than auditory stimuli, and visual stimuli from traffic-light scenes induced fatigue more readily than those from direction signs. We also found that fatigue-related hemodynamic responses increased fastest for auditory stimuli, then for traffic-light scenes, and slowest for direction-sign scenes. Our study compared auditory, visual color, and visual character stimuli in their sensitivity for inducing driving fatigue, which is meaningful for driving safety management.
A COMPARISON OF METHODS FOR TEACHING RECEPTIVE LABELING TO CHILDREN WITH AUTISM SPECTRUM DISORDERS
Grow, Laura L; Carr, James E; Kodak, Tiffany M; Jostad, Candice M; Kisamore, April N
2011-01-01
Many early intervention curricular manuals recommend teaching auditory-visual conditional discriminations (i.e., receptive labeling) using the simple-conditional method in which component simple discriminations are taught in isolation and in the presence of a distracter stimulus before the learner is required to respond conditionally. Some have argued that this procedure might be susceptible to faulty stimulus control such as stimulus overselectivity (Green, 2001). Consequently, there has been a call for the use of alternative teaching procedures such as the conditional-only method, which involves conditional discrimination training from the onset of intervention. The purpose of the present study was to compare the simple-conditional and conditional-only methods for teaching receptive labeling to 3 young children diagnosed with autism spectrum disorders. The data indicated that the conditional-only method was a more reliable and efficient teaching procedure. In addition, several error patterns emerged during training using the simple-conditional method. The implications of the results with respect to current teaching practices in early intervention programs are discussed. PMID:21941380
Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness
Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.
2014-01-01
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752
The question of simultaneity in multisensory integration
NASA Astrophysics Data System (ADS)
Leone, Lynnette; McCourt, Mark E.
2012-03-01
Early reports of audiovisual (AV) multisensory integration (MI) indicated that unisensory stimuli must evoke simultaneous physiological responses to produce decreases in reaction time (RT) such that for unisensory stimuli with unequal RTs the stimulus eliciting the faster RT had to be delayed relative to the stimulus eliciting the slower RT. The "temporal rule" states that MI depends on the temporal proximity of unisensory stimuli, the neural responses to which must fall within a window of integration. Ecological validity demands that MI should occur only for simultaneous events (which may give rise to non-simultaneous neural activations). However, spurious neural response simultaneities which are unrelated to singular environmental multisensory occurrences must somehow be rejected. Using an RT/race model paradigm we measured AV MI as a function of stimulus onset asynchrony (SOA: +/-200 ms, 50 ms intervals) under fully dark adapted conditions for visual (V) stimuli that were either weak (scotopic 525 nm flashes; 511 ms mean RT) or strong (photopic 630 nm flashes; 356 ms mean RT). Auditory (A) stimulus (1000 Hz pure tone) intensity was constant. Despite the 155 ms slower mean RT to the scotopic versus photopic stimulus, facilitative AV MI in both conditions nevertheless occurred exclusively at an SOA of 0 ms. Thus, facilitative MI demands both physical and physiological simultaneity. We consider the mechanisms by which the nervous system may take account of variations in response latency arising from changes in stimulus intensity in order to selectively integrate only those physiological simultaneities that arise from physical simultaneities.
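The "RT/race model paradigm" above is conventionally evaluated with Miller's race model inequality, which bounds the redundant-target CDF by the sum of the unisensory CDFs, F_AV(t) ≤ F_A(t) + F_V(t); violations indicate facilitation beyond statistical racing. The abstract does not spell out the test, so this is a hedged sketch of the standard procedure with invented RT samples:

```python
def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times at time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violations(rt_av, rt_a, rt_v, times):
    """Return the time points at which the audiovisual CDF exceeds
    the race-model bound F_A(t) + F_V(t), i.e., where redundant-
    target RTs are faster than any race of unisensory processes."""
    return [t for t in times
            if ecdf(rt_av, t) > ecdf(rt_a, t) + ecdf(rt_v, t)]

# Hypothetical RT samples in ms (auditory, visual, audiovisual)
rt_a = [350, 360, 370, 380, 390]
rt_v = [500, 510, 520, 530, 540]
rt_av = [300, 310, 320, 330, 340]
violations = race_model_violations(rt_av, rt_a, rt_v, range(280, 400, 10))
```

In practice the test is run on quantile-matched CDFs per participant, but the inequality itself is what separates genuine multisensory integration from mere probability summation.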
Vollrath-Smith, Fiori R.; Shin, Rick
2011-01-01
Rationale: Noncontingent administration of amphetamine into the ventral striatum or systemic nicotine increases responses rewarded by inconsequential visual stimuli. When these drugs are contingently administered, rats learn to self-administer them. We recently found that rats self-administer the GABAB receptor agonist baclofen into the median (MR) or dorsal (DR) raphe nuclei. Objectives: We examined whether noncontingent administration of baclofen into the MR or DR increases rats’ investigatory behavior rewarded by a flash of light. Results: Contingent presentations of a flash of light slightly increased lever presses. Whereas noncontingent administration of baclofen into the MR or DR did not reliably increase lever presses in the absence of visual stimulus reward, the same manipulation markedly increased lever presses rewarded by the visual stimulus. Heightened locomotor activity induced by intraperitoneal injections of amphetamine (3 mg/kg) failed to concur with increased lever pressing for the visual stimulus. These results indicate that the observed enhancement of visual stimulus seeking is distinct from an enhancement of general locomotor activity. Visual stimulus seeking decreased when baclofen was co-administered with the GABAB receptor antagonist, SCH 50911, confirming the involvement of local GABAB receptors. Seeking for visual stimulus also abated when baclofen administration was preceded by intraperitoneal injections of the dopamine antagonist, SCH 23390 (0.025 mg/kg), suggesting enhanced visual stimulus seeking depends on intact dopamine signals. Conclusions: Baclofen administration into the MR or DR increased investigatory behavior induced by visual stimuli. Stimulation of GABAB receptors in the MR and DR appears to disinhibit the motivational process involving stimulus–approach responses. PMID:21904820
Spatiotemporal dynamics of similarity-based neural representations of facial identity.
Vida, Mark D; Nestor, Adrian; Plaut, David C; Behrmann, Marlene
2017-01-10
Humans' remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level "image-based" and higher level "identity-based" model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise.
Visual short-term memory for sequential arrays.
Kumar, Arjun; Jiang, Yuhong
2005-04-01
The capacity of visual short-term memory (VSTM) for a single visual display has been investigated in past research, but VSTM for multiple sequential arrays has been explored only recently. In this study, we investigate the capacity of VSTM across two sequential arrays separated by a variable stimulus onset asynchrony (SOA). VSTM for spatial locations (Experiment 1), colors (Experiments 2-4), orientations (Experiments 3 and 4), and conjunction of color and orientation (Experiment 4) were tested, with the SOA across the two sequential arrays varying from 100 to 1,500 msec. We find that VSTM for the trailing array is much better than VSTM for the leading array, but when averaged across the two arrays VSTM has a constant capacity independent of the SOA. We suggest that multiple displays compete for retention in VSTM and that separating information into two temporally discrete groups does not enhance the overall capacity of VSTM.
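Capacity in change-detection VSTM studies of this kind is commonly summarized with Cowan's K, computed from hit and false-alarm rates. The sketch below illustrates that standard formula; the abstract does not state which estimator the authors used, and the rates shown are invented for illustration.

```python
# Standard Cowan's K capacity estimate for change detection:
# K = set size x (hit rate - false alarm rate).
# Offered as a generic illustration, not necessarily this study's estimator.

def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Estimated number of items held in visual short-term memory."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical example: 8 items per array, 70% hits, 20% false alarms.
print(round(cowans_k(8, 0.70, 0.20), 2))
```

With these made-up rates the observer is credited with holding about 4 of the 8 items; comparing K for leading vs. trailing arrays would quantify the retention asymmetry the abstract describes.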
Goldberg, Melissa C; Mostow, Allison J; Vecera, Shaun P; Larson, Jennifer C Gidley; Mostofsky, Stewart H; Mahone, E Mark; Denckla, Martha B
2008-09-01
We examined the ability to use static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism (HFA) compared to typically developing children (TD). The task was organized such that on valid trials, gaze cues were directed toward the same spatial location as the appearance of an upcoming target, while on invalid trials gaze cues were directed to an opposite location. Unlike TD children, children with HFA showed no advantage in reaction time (RT) on valid trials compared to invalid trials (i.e., no significant validity effect). The two stimulus onset asynchronies (200 ms, 700 ms) did not differentially affect these findings. The results suggest that children with HFA show impairments in utilizing static line drawings of gaze cues to orient visual-spatial attention.
Alertness and cognitive control: Testing the early onset hypothesis.
Schneider, Darryl W
2018-05-01
Previous research has revealed a peculiar interaction between alertness and cognitive control in selective-attention tasks: Congruency effects are larger on alert trials (on which an alerting cue is presented briefly in advance of the imperative stimulus) than on no-alert trials, despite shorter response times (RTs) on alert trials. One explanation for this finding is the early onset hypothesis, which is based on the assumptions that increased alertness shortens stimulus-encoding time and that cognitive control involves gradually focusing attention during a trial. The author tested the hypothesis in 3 experiments by manipulating alertness and stimulus quality (which were intended to shorten and lengthen stimulus-encoding time, respectively) in an arrow-based flanker task involving congruent and incongruent stimuli. Replicating past findings, the alerting manipulation led to shorter RTs but larger congruency effects on alert trials than on no-alert trials. The stimulus-quality manipulation led to longer RTs and larger congruency effects for degraded stimuli than for intact stimuli. These results provide mixed support for the early onset hypothesis, but the author discusses how data and theory might be reconciled if stimulus quality affects stimulus-encoding time and the rate of evidence accumulation in the decision process. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Using Single-trial EEG to Predict and Analyze Subsequent Memory
Noh, Eunho; Herzmann, Grit; Curran, Tim; de Sa, Virginia R.
2013-01-01
We show that it is possible to successfully predict subsequent memory performance based on single-trial EEG activity before and during item presentation in the study phase. Two-class classification was conducted to predict subsequently remembered vs. forgotten trials based on subjects’ responses in the recognition phase. The overall accuracy across 18 subjects was 59.6% when combining pre- and during-stimulus information. The single-trial classification analysis provides a dimensionality reduction method to project the high-dimensional EEG data onto a discriminative space. These projections revealed novel findings in the pre- and during-stimulus periods related to levels of encoding. Pre-stimulus information (specifically, oscillatory activity between 25–35 Hz from −300 to 0 ms before stimulus presentation) and during-stimulus alpha (7–12 Hz) information between 1000–1400 ms after stimulus onset distinguished between recollection and familiarity, while the during-stimulus alpha information and temporal information between 400–800 ms after stimulus onset mapped these two states to similar values. PMID:24064073
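The two-class, projection-based classification described above can be sketched with a minimal Fisher-style linear discriminant: project trials onto a weight vector that separates the class means, then threshold. This is a toy illustration under a diagonal-covariance assumption; the feature values and names are invented, not the authors' actual EEG pipeline.

```python
# Minimal sketch of two-class subsequent-memory classification from
# single-trial features via a Fisher-style projection (diagonal
# covariance assumed). Features and values are illustrative only.

def fit_projection(X0, X1):
    """Return a weight vector and threshold separating class 0 and class 1."""
    n_feat = len(X0[0])
    m0 = [sum(x[j] for x in X0) / len(X0) for j in range(n_feat)]
    m1 = [sum(x[j] for x in X1) / len(X1) for j in range(n_feat)]

    def var(X, m):
        return [sum((x[j] - m[j]) ** 2 for x in X) / len(X) for j in range(n_feat)]

    # Pooled per-feature variance; small epsilon avoids division by zero.
    v = [(a + b) / 2 + 1e-9 for a, b in zip(var(X0, m0), var(X1, m1))]
    w = [(b - a) / s for a, b, s in zip(m0, m1, v)]
    proj = lambda x: sum(wi * xi for wi, xi in zip(w, x))
    # Decision threshold at the midpoint of the projected class means.
    thr = (proj(m0) + proj(m1)) / 2
    return w, thr

def predict(x, w, thr):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > thr else 0

# Toy "pre-stimulus 25-35 Hz power" and "during-stimulus alpha power"
# features for forgotten (0) vs. remembered (1) trials.
forgotten  = [[0.2, 1.0], [0.3, 1.1], [0.25, 0.9]]
remembered = [[0.8, 0.4], [0.9, 0.5], [0.85, 0.45]]
w, thr = fit_projection(forgotten, remembered)
acc = sum(predict(x, w, thr) == 0 for x in forgotten) + \
      sum(predict(x, w, thr) == 1 for x in remembered)
print(acc / 6)  # training accuracy on this toy set
```

The projection values themselves (not just the labels) are what reveal graded structure such as the recollection/familiarity distinction mentioned in the abstract.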
Effects of age, gender, and stimulus presentation period on visual short-term memory.
Kunimi, Mitsunobu
2016-01-01
This study focused on age-related changes in visual short-term memory using visual stimuli that did not allow verbal encoding. Experiment 1 examined the effects of age and the length of the stimulus presentation period on visual short-term memory function. Experiment 2 examined the effects of age, gender, and the length of the stimulus presentation period on visual short-term memory function. The worst memory performance and the largest performance difference between the age groups were observed in the shortest stimulus presentation period conditions. The performance difference between the age groups became smaller as the stimulus presentation period became longer; however, it did not completely disappear. Although gender did not have a significant effect on d' regardless of the presentation period in the young group, a significant gender-based difference was observed for stimulus presentation periods of 500 ms and 1,000 ms in the older group. This study indicates that the decline in visual short-term memory observed in the older group is due to the interaction of several factors.
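The sensitivity measure d' reported above comes from signal detection theory: the difference of the inverse-normal-transformed hit and false-alarm rates. The sketch below shows the standard computation with a log-linear correction for extreme rates; the counts are invented, and the study's exact correction procedure is not specified in the abstract.

```python
# Hedged sketch of the d' (d-prime) sensitivity measure from signal
# detection theory, as used for the VSTM performance index above.
# Counts are illustrative, not data from the study.
from math import sqrt, erf

def norm_ppf(p, lo=-8.0, hi=8.0):
    """Inverse standard-normal CDF by bisection (stdlib-only)."""
    cdf = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
    for _ in range(80):
        mid = (lo + hi) / 2
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction guards against rates of exactly 0 or 1.
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm_ppf(h) - norm_ppf(f)

print(round(d_prime(40, 10, 5, 45), 2))
```

Comparing d' across presentation periods and age groups, as in the study, then reduces to computing this quantity per condition.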
A massively asynchronous, parallel brain.
Zeki, Semir
2015-05-19
Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously, with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain.
Unconscious and conscious processing of negative emotions examined through affective priming.
Okubo, Chisa; Ogawa, Toshiki
2013-04-01
This study investigated unconscious and conscious processes by which negative emotions arise. Participants (26 men, 47 women; M age = 20.3 yr.) evaluated target words that were primed with subliminally or supraliminally presented emotional pictures. Stimulus onset asynchrony was either 200 or 800 msec. With subliminal presentations, reaction times to negative targets were longer than reaction times to positive targets after negative primes for the 200-msec. stimulus onset asynchrony. Reaction times to positive targets after negative or positive primes were shorter when the stimulus onset asynchrony was 800 msec. For supraliminal presentations, reaction times were longer when evaluating targets that followed emotionally opposite primes. When emotional stimuli were consciously distinguished, the evoked emotional states might lead to emotional conflicts, whereas qualitatively different effects might arise when a subliminally presented emotion-evoking stimulus is appraised unconsciously; this possibility is discussed.
Effects of reward on the accuracy and dynamics of smooth pursuit eye movements.
Brielmann, Aenne A; Spering, Miriam
2015-08-01
Reward modulates behavioral choices and biases goal-oriented behavior, such as eye or hand movements, toward locations or stimuli associated with higher rewards. We investigated reward effects on the accuracy and timing of smooth pursuit eye movements in 4 experiments. Eye movements were recorded in participants tracking a moving visual target on a computer monitor. Before target motion onset, a monetary reward cue indicated whether participants could earn money by tracking accurately, or whether the trial was unrewarded (Experiments 1 and 2, n = 11 each). Reward significantly improved eye-movement accuracy across different levels of task difficulty. Improvements were seen even in the earliest phase of the eye movement, within 70 ms of tracking onset, indicating that reward impacts visual-motor processing at an early level. We obtained similar findings when reward was not precued but explicitly associated with the pursuit target (Experiment 3, n = 16); critically, these results were not driven by stimulus prevalence or other factors such as preparation or motivation. Numerical cues (Experiment 4, n = 9) were not effective. (c) 2015 APA, all rights reserved.
The effects of task difficulty on visual search strategy in virtual 3D displays
Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa
2013-01-01
Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x−y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539
Audiovisual Temporal Processing and Synchrony Perception in the Rat.
Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L
2016-01-01
Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats ( n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats ( n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. 
Ultimately, given that our behavioral and electrophysiological results were consistent with studies conducted on human participants and previous recordings made in multisensory brain regions of different species, we suggest that the rat represents an effective model for studying audiovisual temporal synchrony at both the neuronal and perceptual level.
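The just noticeable difference (JND) reported above is typically derived from the psychometric function relating stimulus onset asynchrony to the proportion of "visual first" reports. One common estimator, sketched below, interpolates the SOAs at the 25% and 75% points and takes half their difference; the SOA values and proportions here are invented, and the authors' actual fitting procedure may differ (e.g., a cumulative Gaussian fit).

```python
# Illustrative JND estimate from temporal order judgment data:
# half the SOA range between the 25% and 75% "visual first" points,
# found by linear interpolation. Data below are invented.

def interp_soa(soas, props, target):
    """SOA at which the psychometric function crosses `target`."""
    for (x0, y0), (x1, y1) in zip(zip(soas, props), zip(soas[1:], props[1:])):
        if y0 <= target <= y1:
            return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("target not bracketed by the data")

# Negative SOA = auditory leads; positive = visual leads (ms).
soas  = [-200, -100, -40, 0, 40, 100, 200]
props = [0.05, 0.15, 0.35, 0.52, 0.70, 0.88, 0.97]  # P("visual first")
jnd = (interp_soa(soas, props, 0.75) - interp_soa(soas, props, 0.25)) / 2
print(round(jnd, 1))  # ms
```

Note that props[3] = 0.52 at synchrony mirrors the ~52% "visual first" rate the rats showed for synchronous stimuli.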
The Effect of Visual Experience on Perceived Haptic Verticality When Tilted in the Roll Plane
Cuturi, Luigi F.; Gori, Monica
2017-01-01
The orientation of the body in space can influence perception of verticality leading sometimes to biases consistent with priors peaked at the most common head and body orientation, that is upright. In this study, we investigate haptic perception of verticality in sighted individuals and early and late blind adults when tilted counterclockwise in the roll plane. Participants were asked to perform a stimulus orientation discrimination task with their body tilted to their left ear side 90° relative to gravity. Stimuli were presented by using a motorized haptic bar. In order to test whether different reference frames relative to the head influenced perception of verticality, we varied the position of the stimulus on the body longitudinal axis. Depending on the stimulus position sighted participants tended to have biases away or toward their body tilt. Visually impaired individuals instead show a different pattern of verticality estimations. A bias toward head and body tilt (i.e., Aubert effect) was observed in late blind individuals. Interestingly, no strong biases were observed in early blind individuals. Overall, these results posit visual sensory information to be fundamental in influencing the haptic readout of proprioceptive and vestibular information about body orientation relative to gravity. The acquisition of an idiotropic vector signaling the upright might take place through vision during development. Regarding early blind individuals, independent spatial navigation experience likely enhanced by echolocation behavior might have a role in such acquisition. In the case of participants with late onset blindness, early experience of vision might lead them to anchor their visually acquired priors to the haptic modality with no disambiguation between head and body references as observed in sighted individuals (Fraser et al., 2015). 
With our study, we aim to investigate haptic perception of gravity direction at unusual body tilts when vision is absent due to visual impairment. In this respect, our findings shed light on the influence of proprioceptive/vestibular sensory information on haptically perceived verticality in blind individuals, showing how this phenomenon is affected by visual experience. PMID:29270109
Simultaneous recording of multifocal VEP responses to short-wavelength and achromatic stimuli
Wang, Min; Hood, Donald C.
2010-01-01
A paradigm is introduced that allows for simultaneous recording of the pattern-onset multifocal visual evoked potentials (mfVEP) to both short-wavelength (SW) and achromatic (A) stimuli. There were 5 sets of stimulus conditions, each of which is defined by two semi-concurrently presented stimuli, A64/SW (a 64% contrast achromatic stimulus and a short-wavelength stimulus), A64/A8 (64% achromatic/8% achromatic), A0/A8 (0% (gray) achromatic/8% achromatic), A64/A0 and A0/SW. When paired with A64 as part of A64/SW, the SW stimulus yielded mfVEP responses (SWmfVEP) with diminished amplitude in the fovea, consistent with the known sensitivity of the S-cone system. In addition, when A8, which is approximately equal to the L and M cone contribution of the SW stimulus, was recorded alone, the response to A8 was small, but significantly larger than noise. However, when A8 was paired with A64, the response to A8 was reduced to close to noise level, suggesting that the LM cone contribution of the SWmfVEP can be suppressed by A64. When A64 was recorded alone, its response was about 32% larger than when A64 was paired with the SW stimulus. Likewise, the presence of the A64 stimulus reduced the SWmfVEP response by 35%. Finally, an intense narrow-band yellow background prolonged the latency of the SW response for the A0/SW stimulus but not the latency of the SW response for the A64/SW stimulus. These results indicate that it is possible to simultaneously record an SWmfVEP with little LM cone contribution along with an achromatic mfVEP. PMID:20499134
Multisensory Integration in Non-Human Primates during a Sensory-Motor Task
Lanz, Florian; Moret, Véronique; Rouiller, Eric Michel; Loquet, Gérard
2013-01-01
Daily our central nervous system receives inputs via several sensory modalities, processes them and integrates information in order to produce a suitable behavior. The amazing part is that such a multisensory integration brings all information into a unified percept. An approach to start investigating this property is to show that perception is better and faster when multimodal stimuli are used as compared to unimodal stimuli. This forms the first part of the present study conducted in a non-human primate model (n = 2) engaged in a detection sensory-motor task where visual and auditory stimuli were displayed individually or simultaneously. The measured parameters were the reaction time (RT) between stimulus and onset of arm movement, successes and errors percentages, as well as the evolution as a function of time of these parameters with training. As expected, RTs were shorter when the subjects were exposed to combined stimuli. The gains for both subjects were around 20 and 40 ms, as compared with the auditory and visual stimulus alone, respectively. Moreover, the number of correct responses increased in response to bimodal stimuli. We interpreted this multisensory advantage through the redundant signal effect, which decreases perceptual ambiguity, increases the speed of stimulus detection, and improves performance accuracy. The second part of the study presents single-unit recordings derived from the premotor cortex (PM) of the same subjects during the sensory-motor task. Response patterns to sensory/multisensory stimulation are documented and specific type proportions are reported. Characterization of bimodal neurons indicates a mechanism of audio-visual integration, possibly through a decrease of inhibition. Nevertheless, the neural processing leading to faster motor responses from PM as a polysensory association cortical area remains unclear. PMID:24319421
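The redundant signal effect described above is often quantified as the redundancy gain: the speedup of bimodal responses relative to the faster unimodal condition. The sketch below shows that computation; the reaction times are invented, chosen so the gain lands near the abstract's ~20 ms auditory-relative figure.

```python
# Redundancy gain for redundant (bimodal) signals: how much faster the
# audiovisual mean RT is than the faster unimodal mean RT.
# RT values (ms) are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def redundancy_gain(rt_audio, rt_visual, rt_audiovisual):
    """Positive values indicate a bimodal speedup (ms)."""
    return min(mean(rt_audio), mean(rt_visual)) - mean(rt_audiovisual)

rt_a  = [310, 330, 325, 315]
rt_v  = [340, 355, 348, 337]
rt_av = [295, 305, 300, 292]
print(round(redundancy_gain(rt_a, rt_v, rt_av), 1))
```

A mean-RT gain alone cannot distinguish true neural integration from probability summation; that usually requires a race-model inequality test on the full RT distributions.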
Factors influencing the latency of simple reaction time
Woods, David L.; Wyma, John M.; Yund, E. William; Herron, Timothy J.; Reed, Bruce
2015-01-01
Simple reaction time (SRT), the minimal time needed to respond to a stimulus, is a basic measure of processing speed. SRTs were first measured by Francis Galton in the 19th century, who reported visual SRT latencies below 190 ms in young subjects. However, recent large-scale studies have reported substantially increased SRT latencies that differ markedly in different laboratories, in part due to timing delays introduced by the computer hardware and software used for SRT measurement. We developed a calibrated and temporally precise SRT test to analyze the factors that influence SRT latencies in a paradigm where visual stimuli were presented to the left or right hemifield at varying stimulus onset asynchronies (SOAs). Experiment 1 examined a community sample of 1469 subjects ranging in age from 18 to 65. Mean SRT latencies were short (231 ms, or 213 ms when corrected for hardware delays) and increased significantly with age (0.55 ms/year), but were unaffected by sex or education. As in previous studies, SRTs were prolonged at shorter SOAs and were slightly faster for stimuli presented in the visual field contralateral to the responding hand. Stimulus detection time (SDT) was estimated by subtracting movement initiation time, measured in a speeded finger tapping test, from SRTs. SDT latencies averaged 131 ms and were unaffected by age. Experiment 2 tested 189 subjects ranging in age from 18 to 82 years in a different laboratory using a larger range of SOAs. Both SRTs and SDTs were slightly prolonged (by 7 ms). SRT latencies increased with age while SDT latencies remained stable. Precise computer-based measurements of SRT latencies show that processing speed is as fast in contemporary populations as in the Victorian era, and that age-related increases in SRT latencies are due primarily to slowed motor output. PMID:25859198
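The subtraction logic above (SDT = delay-corrected SRT minus movement initiation time) is simple enough to state directly in code. The 18 ms hardware delay and 82 ms movement component below are inferred for illustration from the abstract's 231/213 ms SRT and 131 ms SDT figures, not quoted from the paper.

```python
# Sketch of the stimulus detection time (SDT) estimate described above:
# subtract movement initiation time (from a speeded tapping test) from
# the hardware-delay-corrected simple reaction time (SRT).
# The 18 ms delay and 82 ms movement time are illustrative assumptions.

def detection_time(srt_ms, movement_init_ms, hardware_delay_ms=0.0):
    """SDT = (SRT - hardware delay) - movement initiation time, in ms."""
    return (srt_ms - hardware_delay_ms) - movement_init_ms

# 231 ms raw SRT, 18 ms hardware delay (231 - 18 = 213 ms corrected),
# and a hypothetical 82 ms movement component:
print(detection_time(231, 82, hardware_delay_ms=18))
```

Separating the two components this way is what lets the authors attribute age-related slowing to the motor term rather than to detection speed.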
Stimulus-driven attentional capture by subliminal onset cues.
Schoeberl, Tobias; Fuchs, Isabella; Theeuwes, Jan; Ansorge, Ulrich
2015-04-01
In two experiments, we tested whether subliminal abrupt onset cues capture attention in a stimulus-driven way. An onset cue was presented 16 ms prior to the stimulus display that consisted of clearly visible color targets. The onset cue was presented either at the same side as the target (the valid cue condition) or on the opposite side of the target (the invalid cue condition). Because the onset cue was presented 16 ms before other placeholders were presented, the cue was subliminal to the participant. To ensure that this subliminal cue captured attention in a stimulus-driven way, the cue's features did not match the top-down attentional control settings of the participants: (1) The color of the cue was always different than the color of the non-singleton targets ensuring that a top-down set for a specific color or for a singleton would not match the cue, and (2) colored targets and distractors had the same objective luminance (measured by the colorimeter) and subjective lightness (measured by flicker photometry), preventing a match between the top-down set for target and cue contrast. Even though a match between the cues and top-down settings was prevented, in both experiments, the cues captured attention, with faster response times in valid than invalid cue conditions (Experiments 1 and 2) and faster response times in valid than the neutral conditions (Experiment 2). The results support the conclusion that subliminal cues capture attention in a stimulus-driven way.
Stubbs, D A; Cohen, S L
1972-11-01
Pigeons performed on a second-order schedule in which fixed-interval components were maintained under a variable-interval schedule. Completion of each fixed-interval component resulted in a brief-stimulus presentation and/or food. The relation of the brief stimulus and food was varied across conditions. Under some conditions, the brief stimulus was never paired with food. Under other conditions, the brief stimulus was paired with food; three different pairing procedures were used: (a) a response produced the simultaneous onset of the stimulus and food; (b) a response produced the stimulus before food with the stimulus remaining on during food presentation; (c) a response produced the stimulus and the offset of the stimulus was simultaneous with the onset of the food cycle. The various pairing and nonpairing operations all produced similar effects on performance. Under all conditions, response rates were positively accelerated within fixed-interval components. Total response rates and Index of Curvature measures were similar across conditions. In one condition, a blackout was paired with food; with this different stimulus in effect, less curvature resulted. The results suggest that pairing of a stimulus is not a necessary condition for within-component patterning under some second-order schedules.
First-Pass Processing of Value Cues in the Ventral Visual Pathway.
Sasikumar, Dennis; Emeric, Erik; Stuphorn, Veit; Connor, Charles E
2018-02-19
Real-world value often depends on subtle, continuously variable visual cues specific to particular object categories, like the tailoring of a suit, the condition of an automobile, or the construction of a house. Here, we used microelectrode recording in behaving monkeys to test two possible mechanisms for category-specific value-cue processing: (1) previous findings suggest that prefrontal cortex (PFC) identifies object categories, and based on category identity, PFC could use top-down attentional modulation to enhance visual processing of category-specific value cues, providing signals to PFC for calculating value, and (2) a faster mechanism would be first-pass visual processing of category-specific value cues, immediately providing the necessary visual information to PFC. This, however, would require learned mechanisms for processing the appropriate cues in a given object category. To test these hypotheses, we trained monkeys to discriminate value in four letter-like stimulus categories. Each category had a different, continuously variable shape cue that signified value (liquid reward amount) as well as other cues that were irrelevant. Monkeys chose between stimuli of different reward values. Consistent with the first-pass hypothesis, we found early signals for category-specific value cues in area TE (the final stage in monkey ventral visual pathway) beginning 81 ms after stimulus onset, essentially at the start of TE responses. Task-related activity emerged in lateral PFC approximately 40 ms later and consisted mainly of category-invariant value tuning. Our results show that, for familiar, behaviorally relevant object categories, high-level ventral pathway cortex can implement rapid, first-pass processing of category-specific value cues. Copyright © 2018 Elsevier Ltd. All rights reserved.
Stimulus specificity of a steady-state visual-evoked potential-based brain-computer interface.
Ng, Kian B; Bradley, Andrew P; Cunnington, Ross
2012-06-01
The mechanisms of neural excitation and inhibition when given a visual stimulus are well studied. It has been established that changing stimulus specificity such as luminance contrast or spatial frequency can alter the neuronal activity and thus modulate the visual-evoked response. In this paper, we study the effect that stimulus specificity has on the classification performance of a steady-state visual-evoked potential-based brain-computer interface (SSVEP-BCI). For example, we investigate how closely two visual stimuli can be placed before they compete for neural representation in the cortex and thus influence BCI classification accuracy. We characterize stimulus specificity using the four stimulus parameters commonly encountered in SSVEP-BCI design: temporal frequency, spatial size, number of simultaneously displayed stimuli and their spatial proximity. By varying these quantities and measuring the SSVEP-BCI classification accuracy, we are able to determine the parameters that provide optimal performance. Our results show that superior SSVEP-BCI accuracy is attained when stimuli are placed spatially more than 5° apart, with size that subtends at least 2° of visual angle, when using a tagging frequency of between high alpha and beta band. These findings may assist in deciding the stimulus parameters for optimal SSVEP-BCI design.
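A frequency-tagged SSVEP-BCI of the kind described above is often decoded by estimating signal power at each candidate tagging frequency and selecting the maximum. The sketch below uses the Goertzel algorithm on a synthetic signal; the sampling rate, frequencies, and signal are invented, and the study's actual classifier is not specified here (canonical correlation analysis is another common choice).

```python
# Hedged sketch of frequency-tagged SSVEP classification: compute power
# at each candidate tagging frequency (Goertzel algorithm) and pick the
# frequency with maximal power. Signal and parameters are illustrative.
import math

def goertzel_power(samples, fs, freq):
    """Power of `samples` at `freq` Hz, given sampling rate `fs` Hz."""
    n = len(samples)
    k = round(n * freq / fs)          # nearest DFT bin
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def classify_ssvep(samples, fs, candidate_freqs):
    """Return the tagging frequency with the largest estimated power."""
    return max(candidate_freqs, key=lambda f: goertzel_power(samples, fs, f))

fs = 250                              # Hz, hypothetical EEG sampling rate
t = [i / fs for i in range(fs)]       # 1 s of data
# Synthetic "EEG": strong 13 Hz response plus a weaker 17 Hz component.
signal = [math.sin(2 * math.pi * 13 * ti) + 0.3 * math.sin(2 * math.pi * 17 * ti)
          for ti in t]
print(classify_ssvep(signal, fs, [10, 13, 17, 21]))
```

The paper's finding that tagging frequencies between the high alpha and beta bands work best would translate here into the choice of `candidate_freqs`.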
Stekelenburg, Jeroen J; Keetels, Mirjam
2016-05-01
The Colavita effect refers to the phenomenon that, when confronted with an audiovisual stimulus, observers more often report having perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than synesthetically incongruent (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone) combinations. Participants had to identify stimulus type (visual, auditory or audiovisual). The study replicated the Colavita effect: participants reported the visual component of the audiovisual stimuli more often than the auditory one. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window and premotor cortex, inferior parietal lobule and the posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.
Development of adaptive sensorimotor control in infant sitting posture.
Chen, Li-Chiou; Jeka, John; Clark, Jane E
2016-03-01
A reliable and adaptive relationship between action and perception is necessary for postural control. Our understanding of how this adaptive sensorimotor control develops during infancy is very limited. This study examines the dynamic visual-postural relationship during early development. Twenty healthy infants were divided into 4 developmental groups (each n=5): sitting onset, standing alone, walking onset, and 1-year post-walking. During the experiment, the infant sat independently in a virtual moving-room in which anterior-posterior oscillations of visual motion were presented using a sum-of-sines technique with five input frequencies (from 0.12 to 1.24 Hz). Infants were tested in five conditions that varied in the amplitude of visual motion (from 0 to 8.64 cm). Gain and phase responses of infants' postural sway were analyzed. Our results showed that infants, from a few months post-sitting to 1 year post-walking, were able to control their sitting posture in response to various frequency and amplitude properties of the visual motion. Infants showed an adult-like inverted-U pattern for the frequency response to visual inputs with the highest gain at 0.52 and 0.76 Hz. As the visual motion amplitude increased, the gain response decreased. For the phase response, an adult-like frequency-dependent pattern was observed in all amplitude conditions for the experienced walkers. Newly sitting infants, however, showed variable postural behavior and did not respond systematically to the visual stimulus. Our results suggest that visual-postural entrainment and sensory re-weighting are fundamental processes that are present after a few months post sitting. Sensorimotor refinement during early postural development may result from the interactions of improved self-motion control and enhanced perceptual abilities. Copyright © 2016 Elsevier B.V. All rights reserved.
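The gain and phase analysis described above can be sketched by comparing the Fourier coefficients of sway and visual motion at the sum-of-sines input frequencies. A minimal sketch on a synthetic stimulus-response pair follows; the 0.52 Hz component matches a frequency named in the abstract, while the sampling rate, duration, and response parameters are illustrative assumptions.

```python
import numpy as np

def gain_phase(stimulus, response, fs, input_freqs):
    """Gain and phase of a sway response relative to the visual-motion
    input, evaluated only at the sum-of-sines input frequencies."""
    n = len(stimulus)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    S, R = np.fft.rfft(stimulus), np.fft.rfft(response)
    out = {}
    for f in input_freqs:
        k = int(np.argmin(np.abs(freqs - f)))
        ratio = R[k] / S[k]  # complex transfer value at this frequency
        out[f] = (np.abs(ratio), np.angle(ratio))  # gain, phase (rad)
    return out

# Synthetic check: response at half amplitude, lagging by a quarter cycle
fs = 50
t = np.arange(0, 25, 1.0 / fs)  # 25 s → 0.52 Hz falls on an exact FFT bin
stim = np.sin(2 * np.pi * 0.52 * t)
resp = 0.5 * np.sin(2 * np.pi * 0.52 * t - np.pi / 2)
g, p = gain_phase(stim, resp, fs, [0.52])[0.52]
print(g, p)  # g ≈ 0.5, p ≈ -π/2 (negative phase = response lags)
```

The duration is chosen so the probe frequency lands on an exact FFT bin, avoiding spectral leakage; with arbitrary record lengths a window or leakage correction would be needed.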
Gestalt perception modulates early visual processing.
Herrmann, C S; Bosch, V
2001-04-17
We examined whether early visual processing reflects perceptual properties of a stimulus in addition to physical features. We recorded event-related potentials (ERPs) of 13 subjects in a visual classification task. We used four different stimuli which were all composed of four identical elements. One of the stimuli constituted an illusory Kanizsa square, another was composed of the same number of collinear line segments but the elements did not form a Gestalt. In addition, a target and a control stimulus were used which were arranged differently. These stimuli allow us to differentiate the processing of collinear line elements (stimulus features) and illusory figures (perceptual properties). The visual N170 in response to the illusory figure was significantly larger than that to the other collinear stimulus. This is taken to indicate that the visual N170 reflects cognitive processes of Gestalt perception in addition to attentional processes and physical stimulus properties.
Manzone, Joseph; Heath, Matthew
2018-04-01
Reaching to a veridical target permits an egocentric spatial code (i.e., absolute limb and target position) to effect fast and effective online trajectory corrections supported via the visuomotor networks of the dorsal visual pathway. In contrast, a response entailing decoupled spatial relations between stimulus and response is thought to be primarily mediated via an allocentric code (i.e., the position of a target relative to another external cue) laid down by the visuoperceptual networks of the ventral visual pathway. Because the ventral stream renders a temporally durable percept, it is thought that an allocentric code does not support a primarily online mode of control, but instead supports a mode wherein a response is evoked largely in advance of movement onset via central planning mechanisms (i.e., offline control). Here, we examined whether reaches defined via ego- and allocentric visual coordinates are supported via distinct control modes (i.e., online versus offline). Participants performed target-directed and allocentric reaches in limb-visible and limb-occluded conditions. Notably, in the allocentric task, participants reached to a location that matched the position of a target stimulus relative to a reference stimulus, and to examine online trajectory amendments, we computed the proportion of variance explained (i.e., R² values) by the spatial position of the limb at 75% of movement time relative to a response's ultimate movement endpoint. Target-directed trials performed with limb vision showed more online corrections and greater endpoint precision than their limb-occluded counterparts, which in turn were associated with performance metrics comparable to allocentric trials performed with and without limb vision. Accordingly, we propose that the absence of ego-motion cues (i.e., limb vision) and/or the specification of a response via an allocentric code renders motor output served via the 'slow' visuoperceptual networks of the ventral visual pathway.
Age-related slowing of response selection and production in a visual choice reaction time task
Woods, David L.; Wyma, John M.; Yund, E. William; Herron, Timothy J.; Reed, Bruce
2015-01-01
Aging is associated with delayed processing in choice reaction time (CRT) tasks, but the processing stages most impacted by aging have not been clearly identified. Here, we analyzed CRT latencies in a computerized serial visual feature-conjunction task. Participants responded to a target letter (probability 40%) by pressing one mouse button, and responded to distractor letters differing either in color, shape, or both features from the target (probabilities 20% each) by pressing the other mouse button. Stimuli were presented randomly to the left and right visual fields and stimulus onset asynchronies (SOAs) were adaptively reduced following correct responses using a staircase procedure. In Experiment 1, we tested 1466 participants who ranged in age from 18 to 65 years. CRT latencies increased significantly with age (r = 0.47, 2.80 ms/year). Central processing time (CPT), isolated by subtracting simple reaction times (SRT) (obtained in a companion experiment performed on the same day) from CRT latencies, accounted for more than 80% of age-related CRT slowing, with most of the remaining increase in latency due to slowed motor responses. Participants were faster and more accurate when the stimulus location was spatially compatible with the mouse button used for responding, and this effect increased slightly with age. Participants took longer to respond to distractors with target color or shape than to distractors with no target features. However, the additional time needed to discriminate the more target-like distractors did not increase with age. In Experiment 2, we replicated the findings of Experiment 1 in a second population of 178 participants (ages 18–82 years). CRT latencies did not differ significantly in the two experiments, and similar effects of age, distractor similarity, and stimulus-response spatial compatibility were found. 
The results suggest that the age-related slowing in visual CRT latencies is largely due to delays in response selection and production. PMID:25954175
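The subtraction logic above (central processing time = choice RT minus simple RT) and the slope estimate can be illustrated directly. The numbers below are invented so that the slopes roughly mirror the reported values (~2.80 ms/year for CRT, with CPT carrying most of it); they are not the study's data.

```python
import numpy as np

def central_processing_time(crt_ms, srt_ms):
    """CPT: choice RT minus simple RT, isolating central processing."""
    return np.asarray(crt_ms) - np.asarray(srt_ms)

def age_slope(ages, latencies_ms):
    """Least-squares slope of latency on age, in ms per year."""
    slope, _ = np.polyfit(ages, latencies_ms, 1)
    return slope

# Hypothetical cross-sectional data (invented for illustration)
ages = np.array([20, 30, 40, 50, 60])
crt = np.array([480, 508, 536, 564, 592])  # CRT grows 2.8 ms/year
srt = np.array([250, 255, 260, 265, 270])  # SRT grows 0.5 ms/year
cpt = central_processing_time(crt, srt)    # CPT grows 2.3 ms/year
# Here CPT accounts for 2.3/2.8 ≈ 82% of the age-related CRT slowing,
# echoing the "more than 80%" figure reported above.
print(age_slope(ages, crt), age_slope(ages, cpt))
```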
Real-time detection and discrimination of visual perception using electrocorticographic signals
NASA Astrophysics Data System (ADS)
Kapeller, C.; Ogawa, H.; Schalk, G.; Kunii, N.; Coon, W. G.; Scharinger, J.; Guger, C.; Kamada, K.
2018-06-01
Objective. Several neuroimaging studies have demonstrated that the ventral temporal cortex contains specialized regions that process visual stimuli. This study investigated the spatial and temporal dynamics of electrocorticographic (ECoG) responses to different types and colors of visual stimulation that were presented to four human participants, and demonstrated a real-time decoder that detects and discriminates responses to untrained natural images. Approach. ECoG signals from the participants were recorded while they were shown colored and greyscale versions of seven types of visual stimuli (images of faces, objects, bodies, line drawings, digits, and kanji and hiragana characters), resulting in 14 classes for discrimination (experiment I). Additionally, a real-time system asynchronously classified ECoG responses to faces, kanji and black screens presented via a monitor (experiment II), or to natural scenes (i.e. the face of an experimenter, natural images of faces and kanji, and a mirror) (experiment III). Outcome measures in all experiments included the discrimination performance across types based on broadband γ activity. Main results. Experiment I demonstrated an offline classification accuracy of 72.9% when discriminating among the seven types (without color separation). Further discrimination of grey versus colored images reached an accuracy of 67.1%. Discriminating all colors and types (14 classes) yielded an accuracy of 52.1%. In experiments II and III, the real-time decoder correctly detected 73.7% of responses to face, kanji and black computer stimuli and 74.8% of responses to presented natural scenes. Significance. Seven different types and their color information (either grey or color) could be detected and discriminated using broadband γ activity. Discrimination performance maximized for combined spatial-temporal information.
The discrimination of stimulus color information provided the first ECoG-based evidence for color-related population-level cortical broadband γ responses in humans. Stimulus categories can be detected by their ECoG responses in real time within 500 ms of stimulus onset.
Tracking the first two seconds: three stages of visual information processing?
Jacob, Jane; Breitmeyer, Bruno G; Treviño, Melissa
2013-12-01
We compared visual priming and comparison tasks to assess information processing of a stimulus during the first 2 s after its onset. In both tasks, a 13-ms prime was followed at varying SOAs by a 40-ms probe. In the priming task, observers identified the probe as rapidly and accurately as possible; in the comparison task, observers determined as rapidly and accurately as possible whether or not the probe and prime were identical. Priming effects attained a maximum at an SOA of 133 ms and then declined monotonically to zero by 700 ms, indicating reliance on relatively brief visuosensory (iconic) memory. In contrast, the comparison effects yielded a multiphasic function, showing a maximum at 0 ms followed by a minimum at 133 ms, followed in turn by a maximum at 240 ms and another minimum at 720 ms, and finally a third maximum at 1,200 ms before declining thereafter. The results indicate three stages of prime processing that we take to correspond to iconic visible persistence, iconic informational persistence, and visual working memory, with the first two used in the priming task and all three in the comparison task. These stages are related to stages presumed to underlie stimulus processing in other tasks, such as those giving rise to the attentional blink.
Crossmodal binding rivalry: A "race" for integration between unequal sensory inputs.
Kostaki, Maria; Vatakis, Argiro
2016-10-01
Exposure to multiple but unequal (in number) sensory inputs often leads to illusory percepts, which may be the product of a conflict between those inputs. To test this conflict, we utilized the classic sound-induced visual fission and fusion illusions under various temporal configurations and timing presentations. This conflict between unequal numbers of sensory inputs (i.e., crossmodal binding rivalry) depends on the binding of the first audiovisual pair and its temporal proximity to the upcoming unisensory stimulus. We, therefore, expected that tight coupling of the first audiovisual pair would lead to higher rivalry with the upcoming unisensory stimulus and, thus, weaker illusory percepts. Loose coupling, on the other hand, would lead to lower rivalry and higher illusory percepts. Our data showed the emergence of two different participant groups, those with low discrimination performance and strong illusion reports (particularly for fusion) and those with the exact opposite pattern, thus extending previous findings on the effect of visual acuity in the strength of the illusion. Most importantly, our data revealed differential illusory strength across different temporal configurations for the fission illusion, while for the fusion illusion these effects were only noted for the largest stimulus onset asynchronies tested. These findings support that the optimal integration theory for the double flash illusion should be expanded so as to also take into account the multisensory temporal interactions of the stimuli presented (i.e., temporal sequence and configuration). Copyright © 2016 Elsevier Ltd. All rights reserved.
Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K.; Petkov, Christopher I.
2015-01-01
When social animals communicate, the onset of informative content in one modality varies considerably relative to the other, such as when visual orofacial movements precede a vocalization. These naturally occurring asynchronies do not disrupt intelligibility or perceptual coherence. However, they occur on time scales where they likely affect integrative neuronal activity in ways that have remained unclear, especially for hierarchically downstream regions in which neurons exhibit temporally imprecise but highly selective responses to communication signals. To address this, we exploited naturally occurring face- and voice-onset asynchronies in primate vocalizations. Using these as stimuli we recorded cortical oscillations and neuronal spiking responses from functional MRI (fMRI)-localized voice-sensitive cortex in the anterior temporal lobe of macaques. We show that the onset of the visual face stimulus resets the phase of low-frequency oscillations, and that the face–voice asynchrony affects the prominence of two key types of neuronal multisensory responses: enhancement or suppression. Our findings show a three-way association between temporal delays in audiovisual communication signals, phase-resetting of ongoing oscillations, and the sign of multisensory responses. The results reveal how natural onset asynchronies in cross-sensory inputs regulate network oscillations and neuronal excitability in the voice-sensitive cortex of macaques, a suggested animal model for human voice areas. These findings also advance predictions on the impact of multisensory input on neuronal processes in face areas and other brain regions. PMID:25535356
Basu, Anamitra; Mandal, Manas K
2004-07-01
The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a JAVA-based computer program with recognition accuracy as the dependent measure. Stimuli presented unilaterally, rather than bilaterally, were recognized significantly better. Words were significantly better recognized than faces in the right visual field; the difference was nonsignificant in the left visual field. Emotional content elicited a left visual-field advantage, and neutral content a right visual-field advantage. Copyright Taylor and Francis Inc.
Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.
2013-01-01
The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939
Task choice and semantic interference in picture naming.
Piai, Vitória; Roelofs, Ardi; Schriefers, Herbert
2015-05-01
Evidence from dual-task performance indicates that speakers prefer not to select simultaneous responses in picture naming and another unrelated task, suggesting a response selection bottleneck in naming. In particular, when participants respond to tones with a manual response and name pictures with superimposed semantically related or unrelated distractor words, semantic interference in naming tends to be constant across stimulus onset asynchronies (SOAs) between the tone stimulus and the picture-word stimulus. In the present study, we examine whether semantic interference in picture naming depends on SOA in the case of a task choice (naming the picture vs reading the word of a picture-word stimulus) based on tones. This situation requires concurrent processing of the tone stimulus and the picture-word stimulus, but not a manual response to the tones. On each trial, participants either named a picture or read aloud a word depending on the pitch of a tone, which was presented simultaneously with picture-word onset or 350 ms or 1000 ms before picture-word onset. Semantic interference was present with tone pre-exposure, but absent when tone and picture-word stimulus were presented simultaneously. Against the background of the available studies, these results support an account according to which speakers tend to avoid concurrent response selection, but can engage in other types of concurrent processing, such as task choices. Copyright © 2015 Elsevier B.V. All rights reserved.
The adequate stimulus for avian short latency vestibular responses to linear translation
NASA Technical Reports Server (NTRS)
Jones, T. A.; Jones, S. M.; Colbert, S.
1998-01-01
Transient linear acceleration stimuli have been shown to elicit eighth nerve vestibular compound action potentials in birds and mammals. The present study was undertaken to better define the nature of the adequate stimulus for neurons generating the response in the chicken (Gallus domesticus). In particular, the study evaluated the question of whether the neurons studied are most sensitive to the maximum level of linear acceleration achieved or to the rate of change in acceleration (da/dt, or jerk). To do this, vestibular response thresholds were measured as a function of stimulus onset slope. Traditional computer signal averaging was used to record responses to pulsed linear acceleration stimuli. Stimulus onset slope was systematically varied. Acceleration thresholds decreased with increasing stimulus onset slope (decreasing stimulus rise time). When stimuli were expressed in units of jerk (g/ms), thresholds were virtually constant for all stimulus rise times. Moreover, stimuli having identical jerk magnitudes but widely varying peak acceleration levels produced virtually identical responses. Vestibular response thresholds, latencies and amplitudes appear to be determined strictly by stimulus jerk magnitudes. Stimulus attributes such as peak acceleration or rise time alone do not provide sufficient information to predict response parameter quantities. Indeed, the major response parameters were shown to be virtually independent of peak acceleration levels or rise time when these stimulus features were isolated and considered separately. It is concluded that the neurons generating short latency vestibular evoked potentials do so as "jerk encoders" in the chicken. Primary afferents classified as "irregular", and which traditionally fall into the broad category of "dynamic" or "phasic" neurons, would seem to be the most likely candidates for the neural generators of short latency vestibular compound action potentials.
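The "jerk encoder" conclusion above reduces to a simple relation: the effective stimulus is the onset slope da/dt, not the peak acceleration. A minimal sketch follows, using the abstract's g/ms units; the threshold values are hypothetical, chosen only to illustrate the constancy the study reports.

```python
def jerk(peak_accel_g, rise_time_ms):
    """Onset jerk (g/ms): rate of change of acceleration over the ramp."""
    return peak_accel_g / rise_time_ms

# Per the findings above, a threshold measured as peak acceleration grows
# with rise time, but the corresponding jerk stays constant.
# (Values are hypothetical, for illustration only.)
rise_times_ms = [0.5, 1.0, 2.0, 4.0]
accel_thresholds_g = [0.05, 0.10, 0.20, 0.40]  # scales with rise time
jerk_thresholds = [jerk(a, r)
                   for a, r in zip(accel_thresholds_g, rise_times_ms)]
print(jerk_thresholds)  # every entry is 0.1 g/ms
```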
Neural Correlates of Individual Differences in Infant Visual Attention and Recognition Memory
Reynolds, Greg D.; Guy, Maggie W.; Zhang, Dantong
2010-01-01
Past studies have identified individual differences in infant visual attention based upon peak look duration during initial exposure to a stimulus. Colombo and colleagues (e.g., Colombo & Mitchell, 1990) found that infants that demonstrate brief visual fixations (i.e., short lookers) during familiarization are more likely to demonstrate evidence of recognition memory during subsequent stimulus exposure than infants that demonstrate long visual fixations (i.e., long lookers). The current study utilized event-related potentials to examine possible neural mechanisms associated with individual differences in visual attention and recognition memory for 6- and 7.5-month-old infants. Short- and long-looking infants viewed images of familiar and novel objects during ERP testing. There was a stimulus type by looker type interaction at temporal and frontal electrodes on the late slow wave (LSW). Short lookers demonstrated a LSW that was significantly greater in amplitude in response to novel stimulus presentations. No significant differences in LSW amplitude were found based on stimulus type for long lookers. These results indicate deeper processing and recognition memory of the familiar stimulus for short lookers. PMID:21666833
Lambert, Anthony J; Wootton, Adrienne
2017-08-01
Different patterns of high-density EEG activity were elicited by the same peripheral stimuli, in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parietal-occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task, compared to the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171-270 ms after stimulus onset) increased temporal-occipital negativity, and stronger recruitment of ITG and FFG were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams, to support rapid shifts of attention in response to contextual landmarks, and conscious discrimination, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cortical activity is more stable when sensory stimuli are consciously perceived
Schurger, Aaron; Sarigiannidis, Ioannis; Naccache, Lionel; Sitt, Jacobo D.; Dehaene, Stanislas
2015-01-01
According to recent evidence, stimulus-tuned neurons in the cerebral cortex exhibit reduced variability in firing rate across trials, after the onset of a stimulus. However, in order for a reduction in variability to be directly relevant to perception and behavior, it must be realized within trial—the pattern of activity must be relatively stable. Stability is characteristic of decision states in recurrent attractor networks, and its possible relevance to conscious perception has been suggested by theorists. However, it is difficult to measure on the within-trial time scales and broadly distributed spatial scales relevant to perception. We recorded simultaneous magneto- and electroencephalography (MEG and EEG) data while subjects observed threshold-level visual stimuli. Pattern-similarity analyses applied to the data from MEG gradiometers uncovered a pronounced decrease in variability across trials after stimulus onset, consistent with previous single-unit data. This was followed by a significant divergence in variability depending upon subjective report (seen/unseen), with seen trials exhibiting less variability. Applying the same analysis across time, within trial, we found that the latter effect coincided in time with a difference in the stability of the pattern of activity. Stability alone could be used to classify data from individual trials as “seen” or “unseen.” The same metric applied to EEG data from patients with disorders of consciousness exposed to auditory stimuli diverged parametrically according to clinically diagnosed level of consciousness. Differences in signal strength could not account for these results. Conscious perception may involve the transient stabilization of distributed cortical networks, corresponding to a global brain-scale decision. PMID:25847997
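The two quantities distinguished above, across-trial variability and within-trial stability, can be sketched as follows. This is an illustrative reconstruction, not the authors' pipeline; the array shapes and the lag-one pattern-correlation metric are assumptions.

```python
import numpy as np

def across_trial_variability(data):
    """Variance of the sensor pattern across trials, per time point.
    data: (n_trials, n_sensors, n_times); returns shape (n_times,)."""
    return data.var(axis=0).mean(axis=0)

def within_trial_stability(trial, lag=1):
    """Mean correlation of the sensor pattern with itself `lag` samples
    later; high values mean the spatial pattern is momentarily stable.
    trial: (n_sensors, n_times)."""
    rs = [np.corrcoef(trial[:, t], trial[:, t + lag])[0, 1]
          for t in range(trial.shape[1] - lag)]
    return float(np.mean(rs))

# A frozen spatial pattern is maximally stable; unstructured noise is not
rng = np.random.default_rng(1)
stable = np.tile(rng.standard_normal((32, 1)), (1, 50))
noisy = rng.standard_normal((32, 50))
print(within_trial_stability(stable) > within_trial_stability(noisy))  # → True
```

The key distinction is that variability is computed across trials at each time point, whereas stability compares patterns across time within a single trial.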
Porcu, Emanuele; Keitel, Christian; Müller, Matthias M
2013-11-27
We investigated effects of inter-modal attention on concurrent visual and tactile stimulus processing by means of stimulus-driven oscillatory brain responses, so-called steady-state evoked potentials (SSEPs). To this end, we frequency-tagged a visual (7.5 Hz) and a tactile stimulus (20 Hz) and participants were cued, on a trial-by-trial basis, to attend to either vision or touch to perform a detection task in the cued modality. SSEPs driven by the stimulation comprised stimulus frequency-following (i.e. fundamental frequency) as well as frequency-doubling (i.e. second harmonic) responses. We observed that inter-modal attention to vision increased amplitude and phase synchrony of the fundamental frequency component of the visual SSEP while the second harmonic component showed an increase in phase synchrony, only. In contrast, inter-modal attention to touch increased SSEP amplitude of the second harmonic but not of the fundamental frequency, while leaving phase synchrony unaffected in both responses. Our results show that inter-modal attention generally influences concurrent stimulus processing in vision and touch, thus, extending earlier audio-visual findings to a visuo-tactile stimulus situation. The pattern of results, however, suggests differences in the neural implementation of inter-modal attentional influences on visual vs. tactile stimulus processing. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
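Amplitude and inter-trial phase synchrony, the two SSEP measures contrasted above, can both be read off the Fourier coefficient at the tagging frequency. A minimal sketch on synthetic epochs follows; the phase-locking metric and all signal parameters are assumptions for illustration, not the study's analysis.

```python
import numpy as np

def ssep_measures(trials, fs, f):
    """Amplitude and inter-trial phase synchrony of an SSEP at frequency f.
    trials: (n_trials, n_samples) array of single-trial epochs."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f)))
    coefs = np.fft.rfft(trials, axis=1)[:, k]
    amplitude = 2.0 * np.abs(coefs).mean() / n           # mean amplitude
    phase_sync = np.abs(np.mean(coefs / np.abs(coefs)))  # 1 = fully locked
    return amplitude, phase_sync

# Phase-locked vs. phase-jittered 20 Hz responses (the tactile tag above)
fs, n_trials = 200, 30
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(2)
locked = np.array([np.sin(2 * np.pi * 20 * t)
                   + 0.3 * rng.standard_normal(len(t))
                   for _ in range(n_trials)])
jittered = np.array([np.sin(2 * np.pi * 20 * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(n_trials)])
_, sync_locked = ssep_measures(locked, fs, 20.0)
_, sync_jittered = ssep_measures(jittered, fs, 20.0)
print(sync_locked > sync_jittered)  # → True
```

Separating the two measures matters because, as reported above, attention can move amplitude and phase synchrony independently.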
Covic, Amra; Keitel, Christian; Porcu, Emanuele; Schröger, Erich; Müller, Matthias M
2017-11-01
The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. Gabor patches further "pulsed" (i.e. showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse-rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed tracking the neural processing of simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when respective stimulus positions were attended vs. unattended. Strikingly, unique patterns of effects on pulse- and flicker-driven SSRs indicated that spatial attention and audiovisual synchrony facilitated early visual processing in parallel and via different cortical processes. We found attention effects to resemble the classical top-down gain effect, facilitating both flicker- and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e. pulse-driven SSRs), possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.
Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi
2015-11-01
Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except for right hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, and increased activity around 100 msec that decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and increased only in response to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration.
Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in left hemisphere further increases between 170 and 260 msec.
Hemispheric differences in visual search of simple line arrays.
Polich, J; DeFrancesco, D P; Garon, J F; Cohen, W
1990-01-01
The effects of perceptual organization on hemispheric visual-information processing were assessed with stimulus arrays composed of short lines arranged in columns. A visual-search task was employed in which subjects judged whether all the lines were vertical (same) or whether a single horizontal line was present (different). Stimulus-display organization was manipulated in two experiments by variation of line density, linear organization, and array size. In general, left-visual-field/right-hemisphere presentations demonstrated more rapid and accurate responses when the display was perceived as a whole. Right-visual-field/left-hemisphere superiorities were observed when the display organization coerced assessment of individual array elements because the physical qualities of the stimulus did not effect a gestalt whole. Response times increased somewhat with increases in array size, although these effects interacted with other stimulus variables. Error rates tended to follow the reaction-time patterns. The results suggest that laterality differences in visual search are governed by stimulus properties which contribute to, or inhibit, the perception of a display as a gestalt. The implications of these findings for theoretical interpretations of hemispheric specialization are discussed.
Walla, Peter; Hufnagl, Bernd; Lehrner, Johann; Mayer, Dagmar; Lindinger, Gerald; Deecke, Lüder; Lang, Wilfried
2002-11-01
The present study was meant to distinguish between unconscious and conscious olfactory information processing and to investigate the influence of olfaction on word information processing. Magnetic field changes were recorded in healthy young participants during deep encoding of visually presented words, whereby some of the words were randomly associated with an odor. All recorded data were then split into two groups. One group consisted of participants who did not consciously perceive the odor during the whole experiment, whereas the other group did report continuous conscious odor perception. The magnetic field changes related to the condition 'words without odor' were subtracted from the magnetic field changes related to the condition 'words with odor' for both groups. First, an odor-induced effect occurred between about 200 and 500 ms after stimulus onset which was similar in both groups. It is interpreted to reflect an activity reduction during word encoding related to the additional olfactory stimulation. Second, a later effect occurred between about 600 and 900 ms after stimulus onset which differed between the two groups. This effect was due to higher brain activity related to the additional olfactory stimulation. It was more pronounced in the group consisting of participants who consciously perceived the odor during the whole experiment as compared to the other group. These results are interpreted as evidence that the later effect is related to conscious odor perception whereas the earlier effect reflects unconscious olfactory information processing. Furthermore, our study provides evidence that only the conscious perception of an odor presented simultaneously with the visual presentation of a word reduces the word's chance of being subsequently recognized.
Spatiotemporal dynamics of similarity-based neural representations of facial identity
Vida, Mark D.; Nestor, Adrian; Plaut, David C.; Behrmann, Marlene
2017-01-01
Humans’ remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level “image-based” and higher level “identity-based” model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise. PMID:28028220
Reimer, Christina B; Strobach, Tilo; Schubert, Torsten
2017-12-01
Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features, resulting in a serial search process around the items in the search display (i.e., set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP) component that reflects lateralized visual attention processes. If the response selection processes in Task 1 influenced the visual attention processes in Task 2, N2pc latency and amplitude would be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was deployed concurrently with response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.
A massively asynchronous, parallel brain
Zeki, Semir
2015-01-01
Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously—with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain. PMID:25823871
Defever, Emmy; Reynvoet, Bert; Gebuis, Titia
2013-10-01
Researchers investigating numerosity processing manipulate the visual stimulus properties (e.g., surface). This is done to control for the confound between numerosity and its visual properties and should allow the examination of pure number processes. Nevertheless, several studies have shown that, despite different visual controls, visual cues continued to exert their influence on numerosity judgments. This study, therefore, investigated whether the impact of the visual stimulus manipulations on numerosity judgments is dependent on the task at hand (comparison task vs. same-different task) and whether this impact changes throughout development. In addition, we examined whether the influence of visual stimulus manipulations on numerosity judgments plays a role in the relation between performance on numerosity tasks and mathematics achievement. Our findings confirmed that the visual stimulus manipulations affect numerosity judgments; more importantly, we found that these influences changed with increasing age and differed between the comparison and the same-different tasks. Consequently, direct comparisons between numerosity studies using different tasks and age groups are difficult. No meaningful relationship between the performance on the comparison and same-different tasks and mathematics achievement was found in typically developing children, nor did we find consistent differences between children with and without mathematical learning disability (MLD). Copyright © 2013 Elsevier Inc. All rights reserved.
Comparison of timing and force control of foot tapping between elderly and young subjects.
Takimoto, Koji; Takebayashi, Hideaki; Miyamoto, Kenzo; Takuma, Yutaka; Inoue, Yoshikazu; Miyamoto, Shoko; Okabe, Takao; Okuda, Takahiro; Kaba, Hideto
2016-06-01
[Purpose] To examine the ability of young and elderly individuals to control the timing and force of periodic sequential foot tapping. [Subjects and Methods] Participants were 10 young (age, 22.1 ± 4.3 years) and 10 elderly individuals (74.8 ± 6.7 years) who were healthy and active. The foot tapping task consisted of practice (stimulus-synchronized tapping with visual feedback) and recall trials (self-paced tapping without visual feedback), periodically performed in this order, at 500-, 1,000-, and 2,000-ms target interstimulus-onset intervals, with a target force of 20% maximum voluntary contraction of the ankle plantar-flexor muscle. [Results] The coefficients of variation of force and intertap interval, used for quantifying the steadiness of the trials, were significantly greater in the elderly than in the young individuals. At the 500-ms interstimulus-onset interval, age-related effects were observed on the normalized mean absolute error of force, which was used to quantify the accuracy of the trials. The coefficients of variation of intertap interval for elderly individuals were significantly greater in the practice than in the recall trials at the 500- and 1,000-ms interstimulus-onset intervals. [Conclusion] The elderly individuals exhibited greater force and timing variability than the young individuals and showed impaired visuomotor processing during foot tapping sequences.
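The steadiness measure used above, the coefficient of variation, is simply the standard deviation divided by the mean of the force values or intertap intervals. A minimal sketch with made-up tap times (the 500-ms target interval is taken from the study; the data are hypothetical):

```python
def coefficient_of_variation(values):
    """CV = standard deviation / mean (population SD); higher CV
    means less steady tapping."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return variance ** 0.5 / mean

# hypothetical tap times (seconds) around a 500-ms target interval
taps = [0.0, 0.52, 1.01, 1.49, 2.03]
intervals = [b - a for a, b in zip(taps, taps[1:])]
cv_intertap = coefficient_of_variation(intervals)
```

The same function applied to per-tap peak forces would give the force CV reported in the study.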
ERIC Educational Resources Information Center
Dambacher, Michael; Dimigen, Olaf; Braun, Mario; Wille, Kristin; Jacobs, Arthur M.; Kliegl, Reinhold
2012-01-01
Three ERP experiments examined the effect of word presentation rate (i.e., stimulus onset asynchrony, SOA) on the time course of word frequency and predictability effects in sentence reading. In Experiments 1 and 2, sentences were presented word-by-word in the screen center at an SOA of 700 and 490ms, respectively. While these rates are typical…
Synchronization in monkey visual cortex analyzed with an information-theoretic measure
NASA Astrophysics Data System (ADS)
Manyakov, Nikolay V.; Van Hulle, Marc M.
2008-09-01
We apply an information-theoretic measure for phase synchrony to local field potentials recorded with a multi-electrode array implanted in area V4 of the monkey visual cortex during a reinforcement pairing experiment. We show for the first time that (1) the phase synchrony is significantly higher for the rewarded stimulus than the unrewarded one, after training the monkey; (2) just after the stimuli reversal, the difference in phase synchronization is due to the stimuli, not the reward; (3) the difference between reward and no reward is most clear in two disconnected time intervals between stimuli onset and the expected delivery of the reward; and (4) synchronous activity appears in waves running over the array, and their timing correlates well with the time intervals where the difference between reward and no reward is most prominent.
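One common information-theoretic phase-synchrony index, in the spirit of the measure described (though not necessarily the authors' exact formulation), is one minus the normalized entropy of the cyclic phase-difference distribution between two signals:

```python
import math

def phase_sync_index(phase_a, phase_b, n_bins=16):
    """Entropy-based phase-synchrony index: 1 - H/Hmax of the cyclic
    phase-difference distribution. Returns 0 for uniformly distributed
    phase differences (no synchrony), 1 for perfect phase locking."""
    diffs = [(a - b) % (2 * math.pi) for a, b in zip(phase_a, phase_b)]
    counts = [0] * n_bins
    for d in diffs:
        counts[min(int(d / (2 * math.pi) * n_bins), n_bins - 1)] += 1
    n = len(diffs)
    entropy = -sum((c / n) * math.log(c / n) for c in counts if c)
    return 1.0 - entropy / math.log(n_bins)
```

In practice the instantaneous phases would come from a Hilbert or wavelet transform of the band-passed local field potentials; a reward contrast would then compare this index across trials and time windows.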
Emadi, Nazli; Rajimehr, Reza; Esteky, Hossein
2014-01-01
Spontaneous firing is a ubiquitous property of neural activity in the brain. Recent literature suggests that this baseline activity plays a key role in perception. However, it is not known how the baseline activity contributes to neural coding and behavior. Here, by recording from single neurons in the inferior temporal cortex of monkeys performing a visual categorization task, we thoroughly explored the relationship between baseline activity, the evoked response, and behavior. Specifically, we found that a low-frequency (<8 Hz) oscillation in the spike train, prior to and phase-locked to stimulus onset, was correlated with increased gamma power and neuronal baseline activity. This enhancement of the baseline activity was then followed by an increase in the neural selectivity and the response reliability and eventually a higher behavioral performance. PMID:25404900
A dual-task investigation of automaticity in visual word processing
NASA Technical Reports Server (NTRS)
McCann, R. S.; Remington, R. W.; Van Selst, M.
2000-01-01
An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.
Tanaka, Tomohiro; Nishida, Satoshi
2015-01-01
The neuronal processes that underlie visual searches can be divided into two stages: target discrimination and saccade preparation/generation. This division predicts that the duration of the prediscrimination stage varies with search difficulty across stimulus conditions, whereas the duration of the latter, postdiscrimination stage is stimulus invariant. However, recent studies have suggested that the duration of the postdiscrimination interval changes with different stimulus conditions. To address whether and how the visual stimulus affects determination of the postdiscrimination interval, we recorded single-neuron activity in the lateral intraparietal area (LIP) while monkeys (Macaca fuscata) performed a color-singleton search involving four stimulus conditions that differed in luminance (Bright vs. Dim) and target-distractor color similarity (Easy vs. Difficult). We specifically focused on comparing activities between the Bright-Difficult and Dim-Easy conditions, in which the visual stimuli were considerably different but the mean reaction times were indistinguishable. This allowed us to examine the neuronal activity when the difference in search speed between stimulus conditions was minimal. We found that not only prediscrimination but also postdiscrimination intervals varied across stimulus conditions: the postdiscrimination interval was longer in the Dim-Easy condition than in the Bright-Difficult condition. Further analysis revealed that the postdiscrimination interval might vary with stimulus luminance. A computer simulation using an accumulation-to-threshold model suggested that the luminance-related difference in visual response strength at discrimination time could be the cause of different postdiscrimination intervals. PMID:25995344
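An accumulation-to-threshold model of the kind used in that simulation can be sketched as a noisy integrator whose drift rate scales with visual response strength; all parameter values below are illustrative, not the authors' fitted values:

```python
import random

def accumulate_to_threshold(drift, threshold=1.0, noise=0.1,
                            dt=0.001, max_t=2.0, seed=0):
    """Simulate one noisy accumulator: evidence integrates at rate
    `drift` (proportional to visual response strength) plus Gaussian
    noise until it reaches `threshold`. Returns the crossing time in
    seconds, capped at max_t."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while x < threshold and t < max_t:
        x += drift * dt + noise * dt ** 0.5 * rng.gauss(0.0, 1.0)
        t += dt
    return t
```

A weaker visual response in the Dim condition maps onto a lower drift rate and hence a later threshold crossing, reproducing a longer postdiscrimination interval.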
Joshi, Anand C.; Riley, David E.; Mustari, Michael J.; Cohen, Mark L.; Leigh, R. John
2010-01-01
Smooth ocular tracking of a moving visual stimulus comprises a range of responses that encompass the ocular following response (OFR), a pre-attentive, short-latency mechanism, and smooth pursuit, which directs the retinal fovea at the moving stimulus. In order to determine how interdependent these two forms of ocular tracking are, we studied vertical OFR in progressive supranuclear palsy (PSP), a parkinsonian disorder in which vertical smooth pursuit is known to be impaired. We measured eye movements of 9 patients with PSP and 12 healthy control subjects. Subjects viewed vertically moving sine-wave gratings that had a temporal frequency of 16.7 Hz, contrast of 32%, and spatial frequencies of 0.17, 0.27 or 0.44 cycles/°. We measured OFR amplitude as change in eye position in the 70-150 ms, open-loop interval following stimulus onset. Vertical smooth pursuit was studied as subjects attempted to track a 0.27 cycles/° grating moving sinusoidally through several cycles at frequencies between 0.1-2.5 Hz. We found that OFR amplitude, and its dependence on spatial frequency, was similar in PSP patients (group mean 0.10°) and control subjects (0.11°), but the latency to onset of OFR was greater for PSP patients (group mean 99 ms) than control subjects (90 ms). When OFR amplitude was re-measured, taking into account the increased latency in PSP patients, there was still no difference from control subjects. We confirmed that smooth pursuit was consistently impaired in PSP; group mean tracking gain at 0.7 Hz was 0.29 for PSP patients and 0.63 for controls. Neither PSP patients nor control subjects showed any correlation between OFR amplitude and smooth-pursuit gain. We propose that OFR is spared because it is generated by low-level motion processing that is dependent on posterior cerebral cortex, which is less affected in PSP. Conversely, smooth pursuit depends more on projections from frontal cortex to the pontine nuclei, both of which are involved in PSP.
The accessory optic pathway, which is heavily involved in PSP, seems unlikely to contribute to the OFR in humans. PMID:20123108
Cortical interactions in vision and awareness: hierarchies in reverse.
Juan, Chi-Hung; Campana, Gianluca; Walsh, Vincent
2004-01-01
The anatomical connections between visual areas can be organized in 'feedforward', 'feedback' or 'horizontal' laminar patterns. We report here four experiments that test the function of some of the feedback projections in visual cortex. Projections from V5 to V1 have been suggested to be important in visual awareness, and in the first experiment we show this to be the case in the blindsight patient GY. This demonstration is replicated, in principle, in the second experiment and we also show the timing of the V5-V1 interaction to correspond to findings from single unit physiology. In the third experiment we show that V1 is important for stimulus detection in visual search arrays and that the timing of V1 interference with TMS is late (up to 240 ms after the onset of the visual array). Finally we report an experiment showing that the parietal cortex is not involved in visual motion priming, whereas V5 is, suggesting that the parietal cortex does not modulate V5 in this task. We interpret the data in terms of Bullier's recent physiological recordings and Ahissar and Hochstein's reverse hierarchy theory of vision.
Eccentricity effects in vision and attention.
Staugaard, Camilla Funch; Petersen, Anders; Vangkilde, Signe
2016-11-01
Stimulus eccentricity affects visual processing in multiple ways. Performance on a visual task is often better when target stimuli are presented near or at the fovea compared to the retinal periphery. For instance, reaction times and error rates are often reported to increase with increasing eccentricity. Such findings have been interpreted as purely visual, reflecting neurophysiological differences in central and peripheral vision, as well as attentional, reflecting a central bias in the allocation of attentional resources. Other findings indicate that in some cases, information from the periphery is preferentially processed. Specifically, it has been suggested that visual processing speed increases with increasing stimulus eccentricity, and that this positive correlation is reduced, but not eliminated, when the amount of cortex activated by a stimulus is kept constant by magnifying peripheral stimuli (Carrasco et al., 2003). In this study, we investigated effects of eccentricity on visual attentional capacity with and without magnification, using computational modeling based on Bundesen's (1990) theory of visual attention. Our results suggest a general decrease in attentional capacity with increasing stimulus eccentricity, irrespective of magnification. We discuss these results in relation to the physiology of the visual system, the use of different paradigms for investigating visual perception across the visual field, and the use of different stimulus materials (e.g. Gabor patches vs. letters). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Active visual search in non-stationary scenes: coping with temporal variability and uncertainty
NASA Astrophysics Data System (ADS)
Ušćumlić, Marija; Blankertz, Benjamin
2016-02-01
Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. 
Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and fixation duration) in an active search task. In addition, our method to improve single-trial detection performance in this adverse scenario is an important step in making brain-computer interfacing technology available for human-computer interaction applications.
Spatial updating in human parietal cortex
NASA Technical Reports Server (NTRS)
Merriam, Elisha P.; Genovese, Christopher R.; Colby, Carol L.
2003-01-01
Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.
Moving Stimuli Facilitate Synchronization But Not Temporal Perception
Silva, Susana; Castro, São Luís
2016-01-01
Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap. PMID:27909419
Multi-Variate EEG Analysis as a Novel Tool to Examine Brain Responses to Naturalistic Music Stimuli
Sturm, Irene; Dähne, Sven; Blankertz, Benjamin; Curio, Gabriel
2015-01-01
Note onsets in music are acoustic landmarks providing auditory cues that underlie the perception of more complex phenomena such as beat, rhythm, and meter. For naturalistic ongoing sounds a detailed view on the neural representation of onset structure is hard to obtain, since, typically, stimulus-related EEG signatures are derived by averaging a high number of identical stimulus presentations. Here, we propose a novel multivariate regression-based method extracting onset-related brain responses from the ongoing EEG. We analyse EEG recordings of nine subjects who passively listened to stimuli from various sound categories encompassing simple tone sequences, full-length romantic piano pieces and natural (non-music) soundscapes. The regression approach reduces the 61-channel EEG to one time course optimally reflecting note onsets. The neural signatures derived by this procedure indeed resemble canonical onset-related ERPs, such as the N1-P2 complex. This EEG projection was then utilized to determine the Cortico-Acoustic Correlation (CACor), a measure of synchronization between EEG signal and stimulus. We demonstrate that a significant CACor (i) can be detected in an individual listener's EEG of a single presentation of a full-length complex naturalistic music stimulus, and (ii) it co-varies with the stimuli’s average magnitudes of sharpness, spectral centroid, and rhythmic complexity. In particular, the subset of stimuli eliciting a strong CACor also produces strongly coordinated tension ratings obtained from an independent listener group in a separate behavioral experiment. Thus musical features that lead to a marked physiological reflection of tone onsets also contribute to perceived tension in music. PMID:26510120
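The Cortico-Acoustic Correlation itself reduces to a correlation between the one-dimensional EEG projection and a stimulus time course; a minimal sketch using a plain Pearson correlation and made-up series (the projection weights and envelope extraction of the actual method are omitted):

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length time courses,
    e.g. a projected EEG signal and a note-onset envelope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# made-up example: projected EEG time course vs. a note-onset envelope
eeg_proj = [0.1, 0.9, 0.2, 0.8, 0.1, 0.7]
onsets = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
cacor = pearson_r(eeg_proj, onsets)
```

In the actual method the EEG projection is learned by multivariate regression against the onset structure before this synchronization measure is computed.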
Ward, Ryan D; Gallistel, C R; Jensen, Greg; Richards, Vanessa L; Fairhurst, Stephen; Balsam, Peter D
2012-07-01
In a conditioning protocol, the onset of the conditioned stimulus ([CS]) provides information about when to expect reinforcement (unconditioned stimulus [US]). There are two sources of information from the CS in a delay conditioning paradigm in which the CS-US interval is fixed. The first depends on the informativeness, the degree to which CS onset reduces the average expected time to onset of the next US. The second depends only on how precisely a subject can represent a fixed-duration interval (the temporal Weber fraction). In three experiments with mice, we tested the differential impact of these two sources of information on rate of acquisition of conditioned responding (CS-US associability). In Experiment 1, we showed that associability (the inverse of trials to acquisition) increased in proportion to informativeness. In Experiment 2, we showed that fixing the duration of the US-US interval or the CS-US interval or both had no effect on associability. In Experiment 3, we equated the increase in information produced by varying the C/T ratio with the increase produced by fixing the duration of the CS-US interval. Associability increased with increased informativeness, but, as in Experiment 2, fixing the CS-US duration had no effect on associability. These results are consistent with the view that CS-US associability depends on the increased rate of reward signaled by CS onset. The results also provide further evidence that conditioned responding is temporally controlled when it emerges.
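The informativeness measure above is conventionally written as the C/T ratio: the average US-US interval (C, the contextual wait) divided by the CS-US interval (T). A minimal sketch of the quantities and the proportionality the abstract reports (all names and the constant k are illustrative, not the authors' values):

```python
# Informativeness: how much CS onset reduces the expected time to the next US.
# C = average US-US interval, T = CS-US interval.
def informativeness(us_us_interval: float, cs_us_interval: float) -> float:
    """C/T ratio: expected wait given context over expected wait given CS."""
    return us_us_interval / cs_us_interval

# The abstract's finding: associability (1 / trials-to-acquisition) increases
# in proportion to informativeness. k is a hypothetical scaling constant.
def predicted_trials_to_acquisition(c_over_t: float, k: float = 300.0) -> float:
    return k / c_over_t

ratio = informativeness(us_us_interval=120.0, cs_us_interval=10.0)
print(ratio, predicted_trials_to_acquisition(ratio))  # → 12.0 25.0
```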
Oculomotor Evidence for Top-Down Control following the Initial Saccade
Siebold, Alisha; van Zoest, Wieske; Donk, Mieke
2011-01-01
The goal of the current study was to investigate how salience-driven and goal-driven processes unfold during visual search over multiple eye movements. Eye movements were recorded while observers searched for a target, which was located on (Experiment 1) or defined as (Experiment 2) a specific orientation singleton. This singleton could either be the most, medium, or least salient element in the display. Results were analyzed as a function of response time separately for initial and second eye movements. Irrespective of the search task, initial saccades elicited shortly after the onset of the search display were primarily salience-driven whereas initial saccades elicited after approximately 250 ms were completely unaffected by salience. Initial saccades were increasingly guided in line with task requirements with increasing response times. Second saccades were completely unaffected by salience and were consistently goal-driven, irrespective of response time. These results suggest that stimulus-salience affects the visual system only briefly after a visual image enters the brain and has no effect thereafter. PMID:21931603
Dynamic reweighting of three modalities for sensor fusion.
Hwang, Sungjae; Agada, Peter; Kiemel, Tim; Jeka, John J
2014-01-01
We simultaneously perturbed visual, vestibular and proprioceptive modalities to understand how sensory feedback is re-weighted so that overall feedback remains suited to stabilizing upright stance. Ten healthy young subjects received an 80 Hz vibratory stimulus to their bilateral Achilles tendons (stimulus turns on-off at 0.28 Hz), a ± 1 mA binaural monopolar galvanic vestibular stimulus at 0.36 Hz, and a visual stimulus at 0.2 Hz during standing. The visual stimulus was presented at different amplitudes (0.2, 0.8 deg rotation about ankle axis) to measure: the change in gain (weighting) to vision, an intramodal effect; and a change in gain to vibration and galvanic vestibular stimulation, both intermodal effects. The results showed a clear intramodal visual effect, indicating a de-emphasis on vision when the amplitude of visual stimulus increased. At the same time, an intermodal visual-proprioceptive reweighting effect was observed with the addition of vibration, which is thought to change proprioceptive inputs at the ankles, forcing the nervous system to rely more on vision and vestibular modalities. Similar intermodal effects for visual-vestibular reweighting were observed, suggesting that vestibular information is not a "fixed" reference, but is dynamically adjusted in the sensor fusion process. This is the first time, to our knowledge, that the interplay between the three primary modalities for postural control has been clearly delineated, illustrating a central process that fuses these modalities for accurate estimates of self-motion.
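In frequency-tagged designs like this, the gain to each modality is typically estimated from the sway spectrum at each stimulus frequency. A minimal sketch with synthetic signals (the response amplitudes, trial length, and unit stimulus amplitudes are invented for illustration; the tag frequencies are those in the abstract):

```python
import numpy as np

fs = 100.0                       # sampling rate (Hz)
t = np.arange(0, 50.0, 1 / fs)   # 50 s trial: all tags land on exact FFT bins

# Each modality is driven at its own frequency.
f_visual, f_vestibular, f_proprio = 0.2, 0.36, 0.28

# Synthetic sway: responses at the three tag frequencies plus noise.
sway = (0.5 * np.sin(2 * np.pi * f_visual * t)
        + 0.3 * np.sin(2 * np.pi * f_vestibular * t)
        + 0.2 * np.sin(2 * np.pi * f_proprio * t)
        + 0.05 * np.random.default_rng(1).normal(size=t.size))

def amplitude_at(signal, freq, fs):
    """Amplitude of a sinusoidal component from the discrete Fourier transform."""
    spectrum = np.abs(np.fft.rfft(signal)) / signal.size * 2
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Gain = response amplitude / stimulus amplitude (stimulus amplitudes taken as 1).
gains = {f: amplitude_at(sway, f, fs) for f in (f_visual, f_vestibular, f_proprio)}
print({f: round(g, 2) for f, g in gains.items()})
```

Reweighting then shows up as these gains changing across conditions, e.g. the visual gain dropping as visual stimulus amplitude increases.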
Top-down deactivation of interference from irrelevant spatial or verbal stimulus features.
Frings, Christian; Wühr, Peter
2014-11-01
The selective-attention model of Houghton and Tipper (1994) assumes top-down deactivation of (conflicting) distractor representations as a mechanism of visual attention. Deactivation should produce an inverted-U-shaped activation function for distractor representations. In a recent study, Frings, Wentura, and Wühr (2012) tested this prediction in a variant of the flanker task in which a cue sometimes required participants to respond to the distractors rather than to the target. When reaction times and error rates were plotted as a function of the target-cue stimulus onset asynchrony, a quadratic trend emerged, consistent with the notion of distractor deactivation. However, in the flanker task, an alternative explanation for the quadratic trend in terms of attentional zooming is possible. The present experiments tested the deactivation account against the attentional-zooming account with the Stroop and the Simon task, in which attentional zooming should have minimal effects on distractor processing, because the target and distractor are presented at the same spatial location. Both experiments replicated the quadratic trend in the performance functions for responses to incongruent distractors, and additionally showed linear trends in the performance functions for responses to congruent distractors. These results provide additional support for the notion of top-down deactivation of distractor representations as a mechanism of visual selective attention.
Henry, Christopher A; Joshi, Siddhartha; Xing, Dajun; Shapley, Robert M; Hawken, Michael J
2013-04-03
Neurons in primary visual cortex, V1, very often have extraclassical receptive fields (eCRFs). The eCRF is defined as the region of visual space where stimuli cannot elicit a spiking response but can modulate the response of a stimulus in the classical receptive field (CRF). We investigated the dependence of the eCRF on stimulus contrast and orientation in macaque V1 cells for which the laminar location was determined. The eCRF was more sensitive to contrast than the CRF across the whole population of V1 cells with the greatest contrast differential in layer 2/3. We confirmed that many V1 cells experience stronger suppression for collinear than orthogonal stimuli in the eCRF. Laminar analysis revealed that the predominant bias for collinear suppression was found in layers 2/3 and 4b. The laminar pattern of contrast and orientation dependence suggests that eCRF suppression may derive from different neural circuits in different layers, and may be comprised of two distinct components: orientation-tuned and untuned suppression. On average tuned suppression was delayed by ∼25 ms compared with the onset of untuned suppression. Therefore, response modulation by the eCRF develops dynamically and rapidly in time.
Nishimura, Akio; Yokosawa, Kazuhiko
2012-01-01
Tlauka and McKenna (2000) reported a reversal of the traditional stimulus-response compatibility (SRC) effect (faster responding to a stimulus presented on the same side than to one on the opposite side) when the stimulus appearing on one side of a display is a member of a superordinate unit that is largely on the opposite side. We investigated the effects of a visual cue that explicitly shows a superordinate unit, and of assignment of multiple stimuli within each superordinate unit to one response, on the SRC effect based on superordinate unit position. Three experiments revealed that stimulus-response assignment is critical, while the visual cue plays a minor role, in eliciting the SRC effect based on the superordinate unit position. Findings suggest bidirectional interaction between perception and action and simultaneous spatial stimulus coding according to multiple frames of reference, with contribution of each coding to the SRC effect flexibly varying with task situations.
Subliminal perception of complex visual stimuli.
Ionescu, Mihai Radu
2016-01-01
Rationale: Unconscious perception across various sensory modalities is an active subject of research, though its function and effect on behavior remain uncertain. Objective: The present study assessed whether unconscious visual perception can occur with more complex visual stimuli than previously utilized. Methods and Results: Videos containing slideshows of indifferent complex images, with interspersed frames of interest of various durations, were presented to 24 healthy volunteers. Perception of the stimulus was evaluated with a forced-choice questionnaire, while awareness was quantified by self-assessment on a modified awareness scale with four categories of awareness, annexed to each question. At a stimulus duration of 16.66 ms, conscious awareness was not possible and answers regarding the stimulus were random. At 50 ms, nonrandom answers were coupled with no self-reported awareness, suggesting unconscious perception of the stimulus. At longer stimulus durations, significantly correct answers were coupled with some degree of conscious awareness. Discussion: At 50 ms, unconscious perception is possible even with complex visual stimuli. Further studies are recommended, focusing on stimulus durations between 16.66 and 50 ms.
Bertelson, Paul; Aschersleben, Gisa
2003-10-01
In the well-known visual bias of auditory location (alias the ventriloquist effect), auditory and visual events presented in separate locations appear closer together, provided the presentations are synchronized. Here, we consider the possibility of the converse phenomenon: crossmodal attraction on the time dimension conditional on spatial proximity. Participants judged the order of occurrence of sound bursts and light flashes, respectively, separated in time by varying stimulus onset asynchronies (SOAs) and delivered either in the same or in different locations. Presentation was organized using randomly mixed psychophysical staircases, by which the SOA was reduced progressively until a point of uncertainty was reached. This point was reached at longer SOAs with the sounds in the same frontal location as the flashes than in different places, showing that apparent temporal separation is effectively longer in the first condition. Together with a similar one obtained recently in a case of tactile-visual discrepancy, this result supports a view in which timing and spatial layout of the inputs play to some extent inter-changeable roles in the pairing operation at the base of crossmodal interaction.
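The staircase procedure above (progressively reducing the SOA until order judgments become unreliable) can be sketched with a hypothetical simulated observer; the accuracy function and all parameter values are assumptions for illustration, not the authors' procedure in detail.

```python
import math
import random

random.seed(7)

def correct_toj(soa_ms: float, threshold_ms: float = 40.0) -> bool:
    """Hypothetical observer for temporal-order judgments: accuracy rises
    from chance (0.5) at SOA = 0 toward 1.0 at long SOAs."""
    p = 0.5 + 0.5 * (1 - math.exp(-soa_ms / threshold_ms))
    return random.random() < p

# Descending staircase: shrink the SOA trial by trial and stop once
# responses become unreliable (two errors within the last four trials).
soa, step = 200.0, 10.0
recent = []
while soa > 0:
    recent.append(correct_toj(soa))
    if recent[-4:].count(False) >= 2:   # point of uncertainty reached
        break
    soa -= step

print(soa)
```

Interleaving several such staircases in random order, as in the study, prevents the observer from anticipating the current SOA.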
Aging and the rate of visual information processing.
Guest, Duncan; Howard, Christina J; Brown, Louise A; Gleeson, Harriet
2015-01-01
Multiple methods exist for measuring how age influences the rate of visual information processing. The most advanced methods model the processing dynamics in a task in order to estimate processing rates independently of other factors that might be influenced by age, such as overall performance level and the time at which processing onsets. However, such modeling techniques have produced mixed evidence for age effects. Using a time-accuracy function (TAF) analysis, Kliegl, Mayr, and Krampe (1994) showed clear evidence for age effects on processing rate. In contrast, using the diffusion model to examine the dynamics of decision processes, Ratcliff and colleagues (e.g., Ratcliff, Thapar, & McKoon, 2006) found no evidence for age effects on processing rate across a range of tasks. Examination of these studies suggests that the number of display stimuli might account for the different findings. In three experiments we measured the precision of younger and older adults' representations of target stimuli after different amounts of stimulus exposure. A TAF analysis found little evidence for age differences in processing rate when a single stimulus was presented (Experiment 1). However, adding three nontargets to the display resulted in age-related slowing of processing (Experiment 2). Similar slowing was observed when simply presenting two stimuli and using a post-cue to indicate the target (Experiment 3). Although there was some interference from distracting objects and from previous responses, these age-related effects on processing rate seem to reflect an age-related difficulty in processing multiple objects, particularly when encoding them into visual working memory.
Dynamic Integration of Reward and Stimulus Information in Perceptual Decision-Making
Gao, Juan; Tortell, Rebecca; McClelland, James L.
2011-01-01
In perceptual decision-making, ideal decision-makers should bias their choices toward alternatives associated with larger rewards, and the extent of the bias should decrease as stimulus sensitivity increases. When responses must be made at different times after stimulus onset, stimulus sensitivity grows with time from zero to a final asymptotic level. Are decision makers able to produce responses that are more biased if they are made soon after stimulus onset, but less biased if they are made after more evidence has been accumulated? If so, how close to optimal can they come in doing this, and how might their performance be achieved mechanistically? We report an experiment in which the payoff for each alternative is indicated before stimulus onset. Processing time is controlled by a “go” cue occurring at different times post stimulus onset, requiring a response within msec. Reward bias does start high when processing time is short and decreases as sensitivity increases, leveling off at a non-zero value. However, the degree of bias is sub-optimal for shorter processing times. We present a mechanistic account of participants' performance within the framework of the leaky competing accumulator model [1], in which accumulators for each alternative accumulate noisy information subject to leakage and mutual inhibition. The leveling off of accuracy is attributed to mutual inhibition between the accumulators, allowing the accumulator that gathers the most evidence early in a trial to suppress the alternative. Three ways reward might affect decision making in this framework are considered. One of the three, in which reward affects the starting point of the evidence accumulation process, is consistent with the qualitative pattern of the observed reward bias effect, while the other two are not. Incorporating this assumption into the leaky competing accumulator model, we are able to provide close quantitative fits to individual participant data. PMID:21390225
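The starting-point account of reward bias in the leaky competing accumulator framework can be sketched as follows. All parameter values are illustrative, not fits to the authors' data; reward for one alternative is modeled as a head start for its accumulator.

```python
import numpy as np

rng = np.random.default_rng(42)

def lca_trial(inputs, start, leak=0.2, inhibition=0.4,
              noise=0.1, dt=0.01, n_steps=200):
    """Leaky competing accumulator: two units accumulate noisy evidence
    subject to leak and mutual inhibition, with activations floored at zero.
    Reward bias enters via the starting point (`start`)."""
    inputs = np.asarray(inputs, dtype=float)
    x = np.asarray(start, dtype=float)
    for _ in range(n_steps):
        net = inputs - leak * x - inhibition * x[::-1]  # x[::-1]: the rival unit
        x = np.maximum(0.0, x + net * dt
                       + noise * np.sqrt(dt) * rng.normal(size=2))
    return int(np.argmax(x))   # alternative chosen at the go cue

# Equal stimulus evidence; alternative 0 carries the larger reward,
# modeled as a head start for its accumulator.
choices = [lca_trial(inputs=[1.0, 1.0], start=[0.3, 0.0]) for _ in range(500)]
bias = choices.count(0) / len(choices)
print(round(bias, 2))
```

Because inhibition exceeds leak here, the early head start is amplified over time, which is how the model lets the accumulator that leads early suppress its rival.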
Wendt, Mike; Kiesel, Andrea; Geringswald, Franziska; Purmann, Sascha; Fischer, Rico
2014-01-01
Current models of cognitive control assume gradual adjustment of processing selectivity to the strength of conflict evoked by distractor stimuli. Using a flanker task, we varied conflict strength by manipulating target and distractor onset. Replicating previous findings, flanker interference effects were larger on trials associated with advance presentation of the flankers compared to simultaneous presentation. Controlling for stimulus and response sequence effects by excluding trials with feature repetitions from stimulus administration (Experiment 1) or from the statistical analyses (Experiment 2), we found a reduction of the flanker interference effect after high-conflict predecessor trials (i.e., trials associated with advance presentation of the flankers) but not after low-conflict predecessor trials (i.e., trials associated with simultaneous presentation of target and flankers). This result supports the assumption of conflict-strength-dependent adjustment of visual attention. The selective adaptation effect after high-conflict trials was associated with an increase in prestimulus pupil diameter, possibly reflecting increased cognitive effort of focusing attention.
Eimer, Martin; Grubert, Anna
2015-09-01
Previous electrophysiological studies have shown that attentional selection processes are highly sensitive to the temporal order of task-relevant visual events. When two successively presented colour-defined target stimuli are separated by a stimulus onset asynchrony (SOA) of only 10 ms, the onset latencies of N2pc components to these stimuli (which reflect their attentional selection) precisely match their objective temporal separation. We tested whether such small onset differences are accessible to conscious awareness by instructing participants to report the category (letter or digit) of the first of two target-colour items that were separated by an SOA of 10, 20, or 30 ms. Performance was at chance level for the 10 ms SOA, demonstrating that temporal order information which is available to attentional control processes cannot be utilized for conscious temporal order judgments. These results provide new evidence that selective attention and conscious awareness are functionally separable, and support the hypothesis that attention and awareness operate at different stages of cognitive processing. Copyright © 2015 Elsevier Inc. All rights reserved.
Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond
2015-01-01
Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.
2013-01-01
Background Prior studies demonstrated that hesitation-prone persons with Parkinson’s disease (PDs) acutely improve step initiation using a novel self-triggered stimulus that enhances lateral weight shift prior to step onset. PDs showed reduced anticipatory postural adjustment (APA) durations, earlier step onsets, and faster 1st step speed immediately following stimulus exposure. Objective This study investigated the effects of long-term stimulus exposure. Methods Two groups of hesitation-prone subjects with Parkinson’s disease (PD) participated in a 6-week step-initiation training program involving one of two stimulus conditions: 1) Drop. The stance-side support surface was lowered quickly (1.5 cm); 2) Vibration. A short vibration (100 ms) was applied beneath the stance-side support surface. Stimuli were self-triggered by a 5% reduction in vertical force under the stance foot during the APA. Testing was at baseline, immediately post-training, and 6 weeks post-training. Measurements included timing and magnitude of ground reaction forces, and step speed and length. Results Both groups improved their APA force modulation after training. Contrary to previous results, neither group showed reduced APA durations or earlier step onset times. The vibration group showed 55% increase in step speed and a 39% increase in step length which were retained 6 weeks post-training. The drop group showed no stepping-performance improvements. Conclusions The acute sensitivity to the quickness-enhancing effects of stimulus exposure demonstrated in previous studies was supplanted by improved force modulation following prolonged stimulus exposure. The results suggest a potential approach to reduce the severity of start hesitation in PDs, but further study is needed to understand the relationship between short- and long-term effects of stimulus exposure. PMID:23363975
The Effect of Visual Threat on Spatial Attention to Touch
ERIC Educational Resources Information Center
Poliakoff, Ellen; Miles, Eleanor; Li, Xinying; Blanchette, Isabelle
2007-01-01
Viewing a threatening stimulus can bias visual attention toward that location. Such effects have typically been investigated only in the visual modality, despite the fact that many threatening stimuli are most dangerous when close to or in contact with the body. Recent multisensory research indicates that a neutral visual stimulus, such as a light…
Schoeberl, Tobias; Ansorge, Ulrich
2018-05-15
Prior research suggested that attentional capture by subliminal abrupt onset cues is stimulus driven. In these studies, reacting was faster when a searched-for target appeared at the location of a preceding abrupt onset cue compared to when the same target appeared at a location away from the cue (cueing effect), although the earlier onset of the cue was subliminal, because it appeared as one out of three horizontally aligned placeholders with a lead time that was too short to be noticed by the participants. Because the cueing effects seemed to be independent of top-down search settings for target features, the effect was attributed to stimulus-driven attentional capture. However, prior studies did not investigate if participants experienced the cues as useful temporal warning signals and, therefore, attended to the cues in a top-down way. Here, we tested to which extent search settings based on temporal contingencies between cue and target onset could be responsible for spatial cueing effects. Cueing effects were replicated, and we showed that removing temporal contingencies between cue and target onset did not diminish the cueing effects (Experiments 1 and 2). Neither presenting the cues in the majority of trials after target onset (Experiment 1) nor presenting cue and target unrelated to one another (Experiment 2) led to a significant reduction of the spatial cueing effects. Results thus support the hypothesis that the subliminal cues captured attention in a stimulus-driven way.
Ludwig, Karin; Sterzer, Philipp; Kathmann, Norbert; Hesselmann, Guido
2016-10-01
As a functional organization principle in cortical visual information processing, the influential 'two visual systems' hypothesis proposes a division of labor between a dorsal "vision-for-action" and a ventral "vision-for-perception" stream. A core assumption of this model is that the two visual streams are differentially involved in visual awareness: ventral stream processing is closely linked to awareness while dorsal stream processing is not. In this functional magnetic resonance imaging (fMRI) study with human observers, we directly probed the stimulus-related information encoded in fMRI response patterns in both visual streams as a function of stimulus visibility. We parametrically modulated the visibility of face and tool stimuli by varying the contrasts of the masks in a continuous flash suppression (CFS) paradigm. We found that visibility - operationalized by objective and subjective measures - decreased proportionally with increasing log CFS mask contrast. Neuronally, this relationship was closely matched by ventral visual areas, showing a linear decrease of stimulus-related information with increasing mask contrast. Stimulus-related information in dorsal areas also showed a dependency on mask contrast, but the decrease rather followed a step function instead of a linear function. Together, our results suggest that both the ventral and the dorsal visual stream are linked to visual awareness, but neural activity in ventral areas more closely reflects graded differences in awareness compared to dorsal areas. Copyright © 2016 Elsevier Ltd. All rights reserved.
Visual Masking During Pursuit Eye Movements
ERIC Educational Resources Information Center
White, Charles W.
1976-01-01
Visual masking occurs when one stimulus interferes with the perception of another stimulus. Investigates which matters more for visual masking--that the target and masking stimuli are flashed on the same part of the retina, or, that the target and mask appear in the same place. (Author/RK)
Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.
Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D
2013-10-01
Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range. Copyright © 2013 Elsevier Inc. All rights reserved.
Neural and cognitive face-selective markers: An integrative review.
Yovel, Galit
2016-03-01
Faces elicit robust and selective neural responses in the primate brain. These neural responses have been investigated with functional MRI and EEG in numerous studies, which have reported face-selective activations in the occipital-temporal cortex and an electrophysiological face-selective response that peaks 170 ms after stimulus onset at occipital-temporal sites. Evidence for face-selective processes has also been consistently reported in cognitive studies, which investigated the face inversion effect, the composite face effect and the left visual field (LVF) superiority. These cognitive effects indicate that the perceptual representation that we generate for faces differs from the representation that is generated for inverted faces or non-face objects. In this review, I will show that the fMRI and ERP face-selective responses are strongly associated with these three well-established behavioral face-selective measures. I will further review studies that examined the relationship between fMRI and EEG face-selective measures, suggesting that they are strongly linked. Taken together these studies imply that a holistic representation of a face is generated at 170 ms after stimulus onset over the right hemisphere. These findings, which reveal a strong link between the various and complementary cognitive and neural measures of face processing, make it possible to characterize where, when and how faces are represented during the first 200 ms of face processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Do People Take Stimulus Correlations into Account in Visual Search? (Open Source)
2016-03-10
Bhardwaj, Manisha; van den Berg, Ronald; Ma, Wei Ji
… In visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often … contribute to bridging the gap between artificial and natural visual search tasks. Introduction: Visual target detection in displays consisting of multiple …
Perceptual grouping enhances visual plasticity.
Mastropasqua, Tommaso; Turatto, Massimo
2013-01-01
Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity. PMID:23301100
Lateralized electrical brain activity reveals covert attention allocation during speaking.
Rommers, Joost; Meyer, Antje S; Praamstra, Peter
2017-01-27
Speakers usually begin to speak while only part of the utterance has been planned. Earlier work has shown that speech planning processes are reflected in speakers' eye movements as they describe visually presented objects. However, to-be-named objects can be processed to some extent before they have been fixated upon, presumably because attention can be allocated to objects covertly, without moving the eyes. The present study investigated whether EEG could track speakers' covert attention allocation as they produced short utterances to describe pairs of objects (e.g., "dog and chair"). The processing difficulty of each object was varied by presenting it in upright orientation (easy) or in upside-down orientation (difficult). Background squares flickered at different frequencies in order to elicit steady-state visual evoked potentials (SSVEPs). The N2pc component, associated with the focusing of attention on an item, was detectable not only prior to speech onset, but also during speaking. The time course of the N2pc showed that attention shifted to each object in the order of mention prior to speech onset. Furthermore, greater processing difficulty increased the time speakers spent attending to each object. This demonstrates that the N2pc can track covert attention allocation in a naming task. In addition, an effect of processing difficulty at around 200-350 ms after stimulus onset revealed early attention allocation to the second to-be-named object. The flickering backgrounds elicited SSVEPs, but SSVEP amplitude was not influenced by processing difficulty. These results help complete the picture of the coordination of visual information uptake and motor output during speaking. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Vause, Tricia; Martin, Garry L.; Yu, C.T.; Marion, Carole; Sakko, Gina
2005-01-01
The relationship between language, performance on the Assessment of Basic Learning Abilities (ABLA) test, and stimulus equivalence was examined. Five participants with minimal verbal repertoires were studied: 3 who passed up to ABLA Level 4, a visual quasi-identity discrimination, and 2 who passed ABLA Level 6, an auditory-visual nonidentity…
Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.
Williams, Jason A
2012-06-01
The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.
Effect of eye position during human visual-vestibular integration of heading perception.
Crane, Benjamin T
2017-09-01
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. 
The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.
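The Bayesian ideal-observer weighting referenced in the abstract above can be sketched in a few lines. This is a generic reliability-weighted cue-combination rule, not the authors' analysis code; the heading and noise values used below are purely illustrative.

```python
def reliability_weighted_heading(visual_deg, inertial_deg, sigma_v, sigma_i):
    """Combine two heading cues by their reliabilities (inverse variances).

    The Bayesian ideal observer weights each cue in proportion to its
    reliability; the combined estimate has lower variance than either
    cue alone.
    """
    r_v, r_i = 1.0 / sigma_v ** 2, 1.0 / sigma_i ** 2
    w_v = r_v / (r_v + r_i)  # weight on the visual cue
    combined = w_v * visual_deg + (1.0 - w_v) * inertial_deg
    combined_sigma = (1.0 / (r_v + r_i)) ** 0.5
    return combined, combined_sigma

# With a reliable (high-coherence) visual cue, the combined heading
# sits close to the visual estimate; the specific numbers are made up.
heading, sigma = reliability_weighted_heading(10.2, -4.6, sigma_v=2.0, sigma_i=6.0)
```

Lowering visual coherence corresponds to increasing sigma_v, which under the same rule pulls the estimate toward the inertial cue, matching the reliability-dependent PSE shifts described above.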
Russo, N; Mottron, L; Burack, J A; Jemel, B
2012-07-01
Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model; Mottron, Dawson, Soulières, Hubert, & Burack, 2006). Individuals with an ASD also integrate auditory-visual inputs over longer periods of time than matched typically developing (TD) peers (Kwakye, Foss-Feig, Cascio, Stone & Wallace, 2011). To tease apart this dichotomy of extended multisensory processing versus enhanced perceptual processing, we used behavioral and electrophysiological measurements of audio-visual integration among persons with ASD. Thirteen TD and 14 autistic participants matched on IQ completed a forced choice multisensory semantic congruence task requiring speeded responses regarding the congruence or incongruence of animal sounds and pictures. Stimuli were presented simultaneously or sequentially at various stimulus onset asynchronies in both auditory first and visual first presentations. No group differences were noted in reaction time (RT) or accuracy. The latency at which congruent and incongruent waveforms diverged was the component of interest. In simultaneous presentations, congruent and incongruent waveforms diverged earlier (circa 150 ms) among persons with ASD than among TD individuals (around 350 ms). In sequential presentations, asymmetries in the timing of neuronal processing were noted in ASD which depended on stimulus order, but these were consistent with the nature of specific perceptual strengths in this group. These findings extend the Enhanced Perceptual Functioning Model to the multisensory domain, and provide a more nuanced context for interpreting ERP findings of impaired semantic processing in ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.
Stimulus Dependence of Correlated Variability across Cortical Areas
Cohen, Marlene R.
2016-01-01
The way that correlated trial-to-trial variability between pairs of neurons in the same brain area (termed spike count or noise correlation, rSC) depends on stimulus or task conditions can constrain models of cortical circuits and of the computations performed by networks of neurons (Cohen and Kohn, 2011). In visual cortex, rSC tends not to depend on stimulus properties (Kohn and Smith, 2005; Huang and Lisberger, 2009) but does depend on cognitive factors like visual attention (Cohen and Maunsell, 2009; Mitchell et al., 2009). However, neurons across visual areas respond to any visual stimulus or contribute to any perceptual decision, and the way that information from multiple areas is combined to guide perception is unknown. To gain insight into these issues, we recorded simultaneously from neurons in two areas of visual cortex (primary visual cortex, V1, and the middle temporal area, MT) while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. Correlations across, but not within, areas depend on stimulus direction and the presence of a second stimulus, and attention has opposite effects on correlations within and across areas. This observed pattern of cross-area correlations is predicted by a normalization model where MT units sum V1 inputs that are passed through a divisive nonlinearity. Together, our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. SIGNIFICANCE STATEMENT Correlations in the responses of pairs of neurons within the same cortical area have been a subject of growing interest in systems neuroscience. However, correlated variability between different cortical areas is likely just as important. 
We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. The observed pattern of cross-area correlations was predicted by a simple normalization model. Our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. PMID:27413163
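The normalization model invoked in the abstract above (MT units summing V1 inputs passed through a divisive nonlinearity) can be sketched schematically. This is a minimal illustration of divisive normalization with made-up weights and rates, not the fitted model from the study.

```python
def mt_response(v1_rates, weights, sigma=1.0):
    """Schematic divisive normalization: an MT unit's driving input
    (a weighted sum of V1 firing rates) is divided by the pooled V1
    activity plus a semisaturation constant sigma.

    All parameter values are illustrative placeholders.
    """
    drive = sum(w * r for w, r in zip(weights, v1_rates))
    return drive / (sigma + sum(v1_rates))
```

One signature of the divisive stage is sublinearity: doubling all V1 inputs less than doubles the MT response. It is this shared, nonlinear pooling of inputs that lets the model produce cross-area correlation patterns different from within-area ones.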
Object form discontinuity facilitates displacement discrimination across saccades.
Demeyer, Maarten; De Graef, Peter; Wagemans, Johan; Verfaillie, Karl
2010-06-01
Stimulus displacements coinciding with a saccadic eye movement are poorly detected by human observers. In recent years, converging evidence has shown that this phenomenon does not result from poor transsaccadic retention of presaccadic stimulus position information, but from the visual system's efforts to spatially align presaccadic and postsaccadic perception on the basis of visual landmarks. It is known that this process can be disrupted, and transsaccadic displacement detection performance can be improved, by briefly blanking the stimulus display during and immediately after the saccade. In the present study, we investigated whether this improvement could also follow from a discontinuity in the task-irrelevant form of the displaced stimulus. We observed this to be the case: Subjects more accurately identified the direction of intrasaccadic displacements when the displaced stimulus simultaneously changed form, compared to conditions without a form change. However, larger improvements were still observed under blanking conditions. In a second experiment, we show that facilitation induced by form changes and blanks can combine. We conclude that a strong assumption of visual stability underlies the suppression of transsaccadic change detection performance, the rejection of which generalizes from stimulus form to stimulus position.
Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers
2013-09-01
Figure caption fragment: gaze horizontal position is plotted along the y-axis; the red bar indicates a visual nystagmus event detected by the filter. … Experimental conditions were chosen to simulate testing cognitively impaired observers. … developed a new stimulus for visual nystagmus to test visual motion processing in the presence of incoherent motion noise. The drifting equiluminant …
Tilt and Translation Motion Perception during Pitch Tilt with Visual Surround Translation
NASA Technical Reports Server (NTRS)
O'Sullivan, Brita M.; Harm, Deborah L.; Reschke, Millard F.; Wood, Scott J.
2006-01-01
The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Previous studies suggest that multisensory integration is critical for discriminating linear accelerations arising from tilt and translation head motion. Visual input is especially important at low frequencies where canal input is declining. The NASA Tilt Translation Device (TTD) was designed to recreate postflight orientation disturbances by exposing subjects to matching tilt self-motion with conflicting visual surround translation. Previous studies have demonstrated that brief exposures to pitch tilt with fore-aft visual surround translation produced changes in compensatory vertical eye movement responses, postural equilibrium, and motion sickness symptoms. Adaptation appeared greatest with visual scene motion leading (versus lagging) the tilt motion, and the adaptation time constant appeared to be approximately 30 min. The purpose of this study was to compare motion perception when the visual surround translation was in-phase versus out-of-phase with pitch tilt. The in-phase stimulus presented the visual surround motion one would experience if the linear acceleration was due to fore-aft self-translation within a stationary surround, while the out-of-phase stimulus had the visual scene motion leading the tilt by 90 deg as previously used. The tilt stimuli in these conditions were asymmetrical, ranging from an upright orientation to 10 deg pitch back. Another objective of the study was to compare motion perception with the in-phase stimulus when the tilts were asymmetrical relative to upright (0 to 10 deg back) versus symmetrical (10 deg forward to 10 deg back). Twelve subjects (6M, 6F, 22-55 yrs) were tested during 3 sessions separated by at least one week.
During each of the three sessions (out-of-phase asymmetrical, in-phase asymmetrical, in-phase symmetrical), subjects were exposed to visual surround translation synchronized with pitch tilt at 0.1 Hz for a total of 30 min. Tilt and translation motion perception was obtained from verbal reports and a joystick mounted on a linear stage. Horizontal vergence and vertical eye movements were obtained with a binocular video system. Responses were also obtained during darkness before and following 15 min and 30 min of visual surround translation. Each of the three stimulus conditions involving visual surround translation elicited a significantly reduced sense of perceived tilt and strong linear vection (perceived translation) compared to pre-exposure tilt stimuli in darkness. This increase in perceived translation with reduction in tilt perception was also present in darkness following 15 and 30 min exposures, provided the tilt stimuli were not interrupted. Although not significant, there was a trend for the in-phase asymmetrical stimulus to elicit a stronger sense of both translation and tilt than the out-of-phase asymmetrical stimulus. Surprisingly, the in-phase asymmetrical stimulus also tended to elicit a stronger sense of peak-to-peak translation than the in-phase symmetrical stimulus, even though the range of linear acceleration during the symmetrical stimulus was twice that of the asymmetrical stimulus. These results are consistent with the hypothesis that the central nervous system resolves the ambiguity of inertial motion sensory cues by integrating inputs from visual, vestibular, and somatosensory systems.
Lundqvist, Daniel; Bruce, Neil; Öhman, Arne
2015-01-01
In this article, we examine how emotional and perceptual stimulus factors influence visual search efficiency. In an initial task, we run a visual search task using a large number of target/distractor emotion combinations. In two subsequent tasks, we assess measures of perceptual (rated and computational distances) and emotional (rated valence, arousal and potency) stimulus properties. In a series of regression analyses, we then explore the degree to which target salience (the size of target/distractor dissimilarities) on these emotional and perceptual measures predicts the outcome on search efficiency measures (response times and accuracy) from the visual search task. The results show that both emotional and perceptual stimulus salience contribute to visual search efficiency. Among the emotional measures, salience on arousal was more influential than salience on valence. The importance of the arousal factor may help explain the contradictory history of results within this field.
Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan
2017-06-01
The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.
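The probabilistic model referenced above (cf. Magnotti & Beauchamp, 2015) separates stimulus disparity from perceiver-specific parameters. A minimal sketch of that noisy-encoding idea, stated here as an assumption about the model's general form rather than its exact published equations, is a cumulative Gaussian in (threshold − disparity):

```python
from math import erf, sqrt

def p_fusion(disparity, threshold, sigma):
    """Sketch of a noisy-encoding account of McGurk fusion: a trial is
    fused when the noisily encoded audiovisual disparity falls below
    the perceiver's disparity threshold. The cumulative-Gaussian form
    and all parameter values here are illustrative assumptions."""
    z = (threshold - disparity) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF at z
```

Because the stimulus contributes only its disparity while threshold and sigma belong to the perceiver, the perceiver parameters are stimulus-independent, which is what permits comparisons such as the CI-user prediction described above.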
Fluctuations of visual awareness: Combining motion-induced blindness with binocular rivalry
Jaworska, Katarzyna; Lages, Martin
2014-01-01
Binocular rivalry (BR) and motion-induced blindness (MIB) are two phenomena of visual awareness where perception alternates between multiple states despite constant retinal input. Both phenomena have been extensively studied, but the underlying processing remains unclear. It has been suggested that BR and MIB involve the same neural mechanism, but how the two phenomena compete for visual awareness in the same stimulus has not been systematically investigated. Here we introduce BR in a dichoptic stimulus display that can also elicit MIB and examine fluctuations of visual awareness over the course of each trial. Exploiting this paradigm we manipulated stimulus characteristics that are known to influence MIB and BR. In two experiments we found that effects on multistable percepts were incompatible with the idea of a common oscillator. The results suggest instead that local and global stimulus attributes can affect the dynamics of each percept differently. We conclude that the two phenomena of visual awareness share basic temporal characteristics but are most likely influenced by processing at different stages within the visual system. PMID:25240063
Cecere, Roberto; Gross, Joachim; Thut, Gregor
2016-06-01
The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. 
European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
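The asymmetric binding window described in the abstract above can be illustrated with a toy model: a Gaussian window whose width differs by leading sense. The widths below are hypothetical, chosen only to mimic the reported pattern of a narrower auditory-leading side; they are not fitted values from the study.

```python
from math import exp

def p_simultaneous(soa_ms, sigma_audio_lead=60.0, sigma_visual_lead=110.0):
    """Toy asymmetric temporal binding window. Negative SOAs denote
    auditory-leading pairs, positive SOAs visual-leading pairs; each
    side falls off as a Gaussian with its own (hypothetical) width."""
    sigma = sigma_audio_lead if soa_ms < 0 else sigma_visual_lead
    return exp(-0.5 * (soa_ms / sigma) ** 2)
```

Under this sketch, training that narrows sigma_visual_lead while leaving sigma_audio_lead unchanged reproduces the independence pattern reported: practice on one leading sense need not transfer to the other.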
Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans
2017-03-20
From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.
Marini, Francesco; Marzi, Carlo A.
2016-01-01
The visual system leverages organizational regularities of perceptual elements to create meaningful representations of the world. One clear example of this function, which has been formalized in the Gestalt psychology principles, is the perceptual grouping of simple visual elements (e.g., lines and arcs) into unitary objects (e.g., forms and shapes). The present study sought to characterize automatic attentional capture and related cognitive processing of Gestalt-like visual stimuli at the psychophysiological level by using event-related potentials (ERPs). We measured ERPs during a simple visual reaction time task with bilateral presentations of physically matched elements with or without a Gestalt organization. Results showed that Gestalt (vs. non-Gestalt) stimuli are characterized by a larger N2pc together with enhanced ERP amplitudes of non-lateralized components (N1, N2, P3) starting around 150 ms post-stimulus onset. Thus, we conclude that Gestalt stimuli capture attention automatically and entail characteristic psychophysiological signatures at both early and late processing stages. Highlights: We studied the neural signatures of the automatic processes of visual attention elicited by Gestalt stimuli. We found that a reliable early correlate of attentional capture is the N2pc component. Perceptual and cognitive processing of Gestalt stimuli is associated with larger N1, N2, and P3 components. PMID:27630555
Multisensory perceptual learning is dependent upon task difficulty.
De Niear, Matthew A; Koo, Bonhwang; Wallace, Mark T
2016-11-01
There has been a growing interest in developing behavioral tasks to enhance temporal acuity, as recent findings have demonstrated changes in temporal processing in a number of clinical conditions. Prior research has demonstrated that perceptual training can enhance temporal acuity both within and across different sensory modalities. Although certain forms of unisensory perceptual learning have been shown to be dependent upon task difficulty, this relationship has not been explored for multisensory learning. The present study sought to determine the effects of task difficulty on multisensory perceptual learning. Prior to and following a single training session, participants completed a simultaneity judgment (SJ) task, which required them to judge whether a visual stimulus (flash) and an auditory stimulus (beep), presented either in synchrony or at various stimulus onset asynchronies (SOAs), occurred synchronously or asynchronously. During the training session, participants completed the same SJ task but received feedback regarding the accuracy of their responses. Participants were randomly assigned to one of three levels of difficulty during training: easy, moderate, and hard, which were distinguished based on the SOAs used during training. We report that only the most difficult (i.e., hard) training protocol enhanced temporal acuity. We conclude that perceptual training protocols for enhancing multisensory temporal acuity may be optimized by employing audiovisual stimuli for which it is difficult to discriminate temporal synchrony from asynchrony.
Expectations about person identity modulate the face-sensitive N170.
Johnston, Patrick; Overell, Anne; Kaufman, Jordy; Robinson, Jonathan; Young, Andrew W
2016-12-01
Identifying familiar faces is a fundamentally important aspect of social perception that requires the ability to assign very different (ambient) images of a face to a common identity. The current consensus is that the brain processes face identity at approximately 250-300 msec following stimulus onset, as indexed by the N250 event related potential. However, using two experiments we show compelling evidence that where experimental paradigms induce expectations about person identity, changes in famous face identity are in fact detected at an earlier latency corresponding to the face-sensitive N170. In Experiment 1, using a rapid periodic stimulation paradigm presenting highly variable ambient images, we demonstrate robust effects of low frequency, periodic face-identity changes in N170 amplitude. In Experiment 2, we added infrequent aperiodic identity changes to show that the N170 was larger to both infrequent periodic and infrequent aperiodic identity changes than to high frequency identities. Our use of ambient stimulus images makes it unlikely that these effects are due to adaptation of low-level stimulus features. In line with current ideas about predictive coding, we therefore suggest that when expectations about the identity of a face exist, the visual system is capable of detecting identity mismatches at a latency consistent with the N170. Copyright © 2016 Elsevier Ltd. All rights reserved.
Burke, M R; Barnes, G R
2008-12-15
We used passive and active following of a predictable smooth pursuit stimulus in order to establish if predictive eye movement responses are equivalent under both passive and active conditions. The smooth pursuit stimulus was presented in pairs that were either 'predictable' in which both presentations were matched in timing and velocity, or 'randomized' in which each presentation in the pair was varied in both timing and velocity. A visual cue signaled the type of response required from the subject; a green cue indicated the subject should follow both the target presentations (Go-Go), a pink cue indicated that the subject should passively observe the 1st target and follow the 2nd target (NoGo-Go), and finally a green cue with a black cross revealed a randomized (Rnd) trial in which the subject should follow both presentations. The results revealed better prediction in the Go-Go trials than in the NoGo-Go trials, as indicated by higher anticipatory velocity and earlier eye movement onset (latency). We conclude that velocity and timing information stored from passive observation of a moving target is diminished when compared to active following of the target. This study has significant consequences for understanding how visuomotor memory is generated, stored and subsequently released from short-term memory.
ERIC Educational Resources Information Center
Reyer, Howard S.; Sturmey, Peter
2009-01-01
Three adults with intellectual disabilities participated to investigate the effects of reinforcer deprivation on choice responding. The experimenter identified the most preferred audio-visual (A-V) stimulus and the least preferred visual-only stimulus for each participant. Participants did not have access to the A-V stimulus for 5 min, 5 h, and 24 h.…
Lateralization of noise-burst trains based on onset and ongoing interaural delays.
Freyman, Richard L; Balakrishnan, Uma; Zurek, Patrick M
2010-07-01
The lateralization of 250-ms trains of brief noise bursts was measured using an acoustic pointing technique. Stimuli were designed to assess the contribution of the interaural time delay (ITD) of the onset binaural burst relative to that of the ITDs in the ongoing part of the train. Lateralization was measured by listeners' adjustments of the ITD of a pointer stimulus, a 50-ms burst of noise, to match the lateral position of the target train. Results confirmed previous reports of lateralization dominance by the onset burst under conditions in which the train is composed of frozen tokens and the ongoing part contains multiple ambiguous interaural delays. In contrast, lateralization of ongoing trains in which fresh noise tokens were used for each set of two alternating (left-leading/right-leading) binaural pairs followed the ITD of the first pair in each set, regardless of the ITD of the onset burst of the entire stimulus and even when the onset burst was removed by gradual gating. This clear lateralization of a long-duration stimulus with ambiguous interaural delay cues suggests precedence mechanisms that involve not only the interaural cues at the beginning of a sound, but also the pattern of cues within an ongoing sound.
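An interaural time delay of the kind used in the stimuli above can be imposed digitally by shifting one channel of a noise burst relative to the other. The sketch below is purely illustrative (not the authors' stimulus code): the sample rate is a hypothetical choice and shifts are rounded to whole samples.

```python
import numpy as np

def apply_itd(mono, itd_s, fs=44100):
    """Return a (2, n) stereo array in which the right channel lags the
    left by itd_s seconds (left-leading for positive itd_s).
    Illustrative only: whole-sample shifts, hypothetical sample rate."""
    left = np.asarray(mono, dtype=float)
    shift = int(round(itd_s * fs))
    right = np.zeros_like(left)
    if shift >= 0:
        right[shift:] = left[:len(left) - shift]
    else:
        right[:shift] = left[-shift:]
    return np.stack([left, right])
```

A fresh noise token per burst pair, as in the study's ongoing trains, would simply generate a new `mono` array before each call.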
Perceptual Grouping Enhances Visual Plasticity
Mastropasqua, Tommaso; Turatto, Massimo
2013-01-01
Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity. PMID:23301100
Edges, colour and awareness in blindsight.
Alexander, Iona; Cowey, Alan
2010-06-01
It remains unclear what is being processed in blindsight in response to faces, colours, shapes, and patterns. This was investigated in two hemianopes with chromatic and achromatic stimuli with sharp or shallow luminance or chromatic contrast boundaries or temporal onsets. Performance was excellent only when stimuli had sharp spatial boundaries. When discrimination between isoluminant coloured Gaussians was good, it declined to chance levels if stimulus onset was slow. The ability to discriminate between instantaneously presented colours in the hemianopic field depended on their luminance, indicating that wavelength discrimination totally independent of other stimulus qualities is absent. When presented with narrow-band colours the hemianopes detected a stimulus maximally effective for S-cones but invisible to M- and L-cones, indicating either that blindsight is not mediated solely by the mid-brain, which receives no S-cone input, or that the rods contribute to blindsight. The results show that only simple stimulus features are processed in blindsight. 2010 Elsevier Inc. All rights reserved.
Lateralization of Frequency-Specific Networks for Covert Spatial Attention to Auditory Stimuli
Thorpe, Samuel; D'Zmura, Michael
2011-01-01
We conducted a cued spatial attention experiment to investigate the time–frequency structure of human EEG induced by attentional orientation of an observer in external auditory space. Seven subjects participated in a task in which attention was cued to one of two spatial locations at left and right. Subjects were instructed to report the speech stimulus at the cued location and to ignore a simultaneous speech stream originating from the uncued location. EEG was recorded from the onset of the directional cue through the offset of the inter-stimulus interval (ISI), during which attention was directed toward the cued location. Each frequency band of the wavelet spectrum was then normalized by the mean power observed in the early part of the cue interval, yielding a measure of induced power related to the deployment of attention. Topographies of band-specific induced power during the cue and inter-stimulus intervals showed peaks over symmetric bilateral scalp areas. We used a bootstrap analysis of a lateralization measure defined for symmetric groups of channels in each band to identify specific lateralization events throughout the ISI. Our results suggest that the deployment and maintenance of spatially oriented attention throughout a period of 1,100 ms is marked by distinct episodes of reliable hemispheric lateralization ipsilateral to the direction in which attention is oriented. An early theta lateralization was evident over posterior parietal electrodes and was sustained throughout the ISI. In the alpha and mu bands, punctuated episodes of parietal power lateralization were observed roughly 500 ms after attentional deployment, consistent with previous studies of visual attention. In the beta band, these episodes show similar patterns of lateralization over frontal motor areas. These results indicate that spatial attention involves similar mechanisms in the auditory and visual modalities. PMID:21630112
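The normalization and lateralization steps this abstract describes can be sketched roughly as follows. The abstract does not give the exact formulas, so the baseline-ratio normalization and the (L − R)/(L + R) index below are standard conventions assumed here, not the authors' reported method.

```python
import numpy as np

def induced_power(power, baseline):
    """Normalize a (freqs x times) time-frequency power array by the
    mean power in a baseline time window (a slice), per frequency band."""
    base = power[:, baseline].mean(axis=1, keepdims=True)
    return power / base

def lateralization_index(left, right):
    """A common lateralization measure for symmetric channel groups:
    positive values mean more power over the left-hemisphere group."""
    return (left - right) / (left + right)
```

Bootstrapping this index across trials, as in the study, would then flag time points where the confidence interval excludes zero.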
Braun, J
1994-02-01
In more than one respect, visual search for the most salient item and search for the least salient item in a display are different kinds of visual tasks. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.
Harris, Joseph A.; McMahon, Alex R.; Woldorff, Marty G.
2015-01-01
Any information represented in the brain holds the potential to influence behavior. It is therefore of broad interest to determine the extent and quality of neural processing of stimulus input that occurs with and without awareness. The attentional blink is a useful tool for dissociating neural and behavioral measures of perceptual visual processing across conditions of awareness. The extent of higher-order visual information beyond basic sensory signaling that is processed during the attentional blink remains controversial. To determine what neural processing at the level of visual-object identification occurs in the absence of awareness, electrophysiological responses to images of faces and houses were recorded both within and outside of the attentional blink period during a rapid serial visual presentation (RSVP) stream. Electrophysiological results were sorted according to behavioral performance (correctly identified targets versus missed targets) within these blink and non-blink periods. An early index of face-specific processing (the N170, 140–220 ms post-stimulus) was observed regardless of whether the subject demonstrated awareness of the stimulus, whereas a later face-specific effect with the same topographic distribution (500–700 ms post-stimulus) was only seen for accurate behavioral discrimination of the stimulus content. The present findings suggest a multi-stage process of object-category processing, with only the later phase being associated with explicit visual awareness. PMID:23859644
The mechanisms of collinear integration.
Cass, John; Alais, David
2006-08-11
Low-contrast visual contour fragments are easier to detect when presented in the context of nearby collinear contour elements (U. Polat & D. Sagi, 1993). The spatial and temporal determinants of this collinear facilitation have been studied extensively (J. R. Cass & B. Spehar, 2005; Y. Tanaka & D. Sagi, 1998; C. B. Williams & R. F. Hess, 1998), although considerable debate surrounds the neural mechanisms underlying it. Our study examines this question using a novel stimulus, whereby the flanking "contour" elements are rotated around their own axis. By measuring contrast detection thresholds to a brief foveal target presented at various phases of flanker rotation, we find peak facilitation after flankers have rotated beyond their collinear phase. This optimal facilitative delay increases monotonically as a function of target-flanker separation, yielding estimates of cortical propagation of 0.1 m/s, a value highly consistent with the dynamics of long-range horizontal interactions observed within primary visual cortex (V1). A curious new finding is also observed: Facilitative peaks also occur when the target flash precedes flanker collinearity by 20-80 ms, a range consistent with contrast-dependent cortical onset latencies. Together, these data suggest that collinear facilitation involves two separate mechanisms, each possessing distinct dynamics: (i) slowly propagating horizontal interactions within V1 and (ii) a faster integrative mechanism, possibly driven by synchronous collinear cortical onset.
Processing of prosodic changes in natural speech stimuli in school-age children.
Lindström, R; Lepistö, T; Makkonen, T; Kujala, T
2012-12-01
Speech prosody conveys information about important aspects of communication: the meaning of the sentence and the emotional state or intention of the speaker. The present study addressed processing of emotional prosodic changes in natural speech stimuli in school-age children (mean age 10 years) by recording the electroencephalogram, facial electromyography, and behavioral responses. The stimulus was a semantically neutral Finnish word uttered with four different emotional connotations: neutral, commanding, sad, and scornful. In the behavioral sound-discrimination task the reaction times were fastest for the commanding stimulus and longest for the scornful stimulus, and faster for the neutral than for the sad stimulus. EEG and EMG responses were measured during a non-attentive oddball paradigm. Prosodic changes elicited a negative-going, fronto-centrally distributed neural response peaking at about 500 ms from the onset of the stimulus, followed by a fronto-central positive deflection, peaking at about 740 ms. For the commanding stimulus, a rapid negative deflection peaking at about 290 ms from stimulus onset was also elicited. No reliable stimulus-type-specific rapid facial reactions were found. The results show that prosodic changes in natural speech stimuli activate pre-attentive neural change-detection mechanisms in school-age children. However, the results do not support the suggestion of automaticity of emotion-specific facial muscle responses to non-attended emotional speech stimuli in children. Copyright © 2012 Elsevier B.V. All rights reserved.
Rosenberg, Monica; Noonan, Sarah; DeGutis, Joseph; Esterman, Michael
2013-04-01
Sustained attention is a fundamental aspect of human cognition and has been widely studied in applied and clinical contexts. Despite a growing understanding of how attention varies throughout task performance, moment-to-moment fluctuations are often difficult to assess. In order to better characterize fluctuations in sustained visual attention, in the present study we employed a novel continuous performance task (CPT), the gradual-onset CPT (gradCPT). In the gradCPT, a central face stimulus gradually transitions between individuals at a constant rate (1,200 ms), and participants are instructed to respond to each male face but not to a rare target female face. In the distractor-present version, the background distractors consist of scene images, and in the distractor-absent condition, of phase-scrambled scene images. The results confirmed that the gradCPT taxes sustained attention, as vigilance decrements were observed over the task's 12-min duration: Participants made more commission errors and showed increasingly variable response latencies (RTs) over time. Participants' attentional states also fluctuated from moment to moment, with periods of higher RT variability being associated with increased likelihood of errors and greater speed-accuracy trade-offs. In addition, task performance was related to self-reported mindfulness and the propensity for attention lapses in everyday life. The gradCPT is a useful tool for studying both low- and high-frequency fluctuations in sustained visual attention and is sensitive to individual differences in attentional ability.
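The moment-to-moment RT variability that the gradCPT analysis relies on is often computed as a moving standard deviation over the reaction-time series. The sketch below assumes that simple definition; the window length is an arbitrary illustrative choice, not a parameter taken from the study.

```python
import numpy as np

def rt_variability(rts, window=9):
    """Moving standard deviation of a reaction-time series: a simple
    proxy for moment-to-moment fluctuations in sustained attention.
    Edge windows are truncated rather than padded."""
    rts = np.asarray(rts, dtype=float)
    half = window // 2
    out = np.empty(len(rts))
    for i in range(len(rts)):
        lo, hi = max(0, i - half), min(len(rts), i + half + 1)
        out[i] = rts[lo:hi].std()
    return out
```

Epochs where this trace is high would then be compared against error rates to test the variability-error association the abstract reports.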
Purely temporal figure-ground segregation.
Kandil, F I; Fahle, M
2001-05-01
Visual figure-ground segregation is achieved by exploiting differences in features such as luminance, colour, motion or presentation time between a figure and its surround. Here we determine the shortest delay times required for figure-ground segregation based on purely temporal features. Previous studies usually employed stimulus onset asynchronies between figure and ground, which contain possible artefacts based on apparent motion cues or on luminance differences. Our stimuli systematically avoid these artefacts by constantly showing 20 x 20 'colons' that flip by 90 degrees around their midpoints at constant time intervals. Colons constituting the background flip in-phase whereas those constituting the target flip with a phase delay. We tested the impact of frequency modulation and phase reduction on target detection. Younger subjects performed well above chance even at temporal delays as short as 13 ms, whilst older subjects required up to three times longer delays in some conditions. Figure-ground segregation can rely on purely temporal delays down to around 10 ms even in the absence of luminance and motion artefacts, indicating a temporal precision of cortical information processing almost an order of magnitude lower than the one required for some models of feature binding in the visual cortex [e.g. Singer, W. (1999), Curr. Opin. Neurobiol., 9, 189-194]. Hence, in our experiment, observers are unable to use temporal stimulus features with the precision required for these models.
Chromatic VEP in children with congenital colour vision deficiency.
Tekavčič Pompe, Manca; Stirn Kranjc, Branka; Brecelj, Jelka
2010-09-01
Visual evoked potentials to chromatic stimulus (cVEP) are believed to selectively test the parvocellular visual pathway which is responsible for processing information about colour. The aim was to evaluate cVEP in children with red-green congenital colour vision deficiency. VEP responses of 15 colour deficient children were compared to 31 children with normal colour vision. An isoluminant red-green stimulus composed of horizontal gratings was presented in an onset-offset manner. The shape of the waveform was studied, as well as the latency and amplitude of positive (P) and negative (N) waves. cVEP response did not change much with increased age in colour deficient children, whereas normative data showed changes from a predominantly positive to a negative response with increased age. A P wave was present in 87% of colour deficient children (and in 100% of children with normal colour vision), whereas the N wave was absent in a great majority of colour deficient children and was present in 80% of children with normal colour vision. Therefore, the amplitude of the whole response (N-P) decreased linearly with age in colour deficient children, whereas in children with normal colour vision it increased linearly. P wave latency shortened with increased age in both groups. cVEP responses differ in children with congenital colour vision deficiency compared to children with normal colour vision. © 2010 The Authors, Ophthalmic and Physiological Optics © 2010 The College of Optometrists.
Harris, Anthony M; Dux, Paul E; Jones, Caelyn N; Mattingley, Jason B
2017-05-15
Mechanisms of attention assign priority to sensory inputs on the basis of current task goals. Previous studies have shown that lateralized neural oscillations within the alpha (8-14 Hz) range are associated with the voluntary allocation of attention to the contralateral visual field. It is currently unknown, however, whether similar oscillatory signatures instantiate the involuntary capture of spatial attention by goal-relevant stimulus properties. Here we investigated the roles of theta (4-8 Hz), alpha, and beta (14-30 Hz) oscillations in human goal-directed visual attention. Across two experiments, we had participants respond to a brief target of a particular color among heterogeneously colored distractors. Prior to target onset, we cued one location with a lateralized, non-predictive cue that was either target- or non-target-colored. During the behavioral task, we recorded brain activity using electroencephalography (EEG), with the aim of analyzing cue-elicited oscillatory activity. We found that theta oscillations lateralized in response to all cues, and this lateralization was stronger if the cue matched the target color. Alpha oscillations lateralized relatively later, and only in response to target-colored cues, consistent with the capture of spatial attention. Our findings suggest that stimulus-induced changes in theta and alpha amplitude reflect task-based modulation of signals by feature-based and spatial attention, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.
Selective representation of task-relevant objects and locations in the monkey prefrontal cortex.
Everling, Stefan; Tinsley, Chris J; Gaffan, David; Duncan, John
2006-04-01
In the monkey prefrontal cortex (PFC), task context exerts a strong influence on neural activity. We examined different aspects of task context in a temporal search task. On each trial, the monkey (Macaca mulatta) watched a stream of pictures presented to left or right of fixation. The task was to hold fixation until seeing a particular target, and then to make an immediate saccade to it. Sometimes (unilateral task), the attended pictures appeared alone, with a cue at trial onset indicating whether they would be presented to left or right. Sometimes (bilateral task), the attended picture stream (cued side) was accompanied by an irrelevant stream on the opposite side. In two macaques, we recorded responses from a total of 161 cells in the lateral PFC. Many cells (75/161) showed visual responses. Object-selective responses were strongly shaped by task relevance - with stronger responses to targets than to nontargets, failure to discriminate one nontarget from another, and filtering out of information from an irrelevant stimulus stream. Location selectivity occurred rather independently of object selectivity, and independently in visual responses and delay periods between one stimulus and the next. On error trials, PFC activity followed the correct rules of the task, rather than the incorrect overt behaviour. Together, these results suggest a highly programmable system, with responses strongly determined by the rules and requirements of the task performed.
Miller, Kai J.; Schalk, Gerwin; Hermes, Dora; Ojemann, Jeffrey G.; Rao, Rajesh P. N.
2016-01-01
The link between object perception and neural activity in visual cortical areas is a problem of fundamental importance in neuroscience. Here we show that electrical potentials from the ventral temporal cortical surface in humans contain sufficient information for spontaneous and near-instantaneous identification of a subject’s perceptual state. Electrocorticographic (ECoG) arrays were placed on the subtemporal cortical surface of seven epilepsy patients. Grayscale images of faces and houses were displayed rapidly in random sequence. We developed a template projection approach to decode the continuous ECoG data stream spontaneously, predicting the occurrence, timing and type of visual stimulus. In this setting, we evaluated the independent and joint use of two well-studied features of brain signals, broadband changes in the frequency power spectrum of the potential and deflections in the raw potential trace (event-related potential; ERP). Our ability to predict both the timing of stimulus onset and the type of image was best when we used a combination of both the broadband response and ERP, suggesting that they capture different and complementary aspects of the subject’s perceptual state. Specifically, we were able to predict the timing and type of 96% of all stimuli, with less than 5% false positive rate and a ~20ms error in timing. PMID:26820899
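The abstract does not spell out the template projection computation. A minimal correlation-based version, offered purely as an illustrative sketch of the general approach, would project each windowed segment of the continuous data stream onto per-class templates and threshold the resulting score trace:

```python
import numpy as np

def template_projection_score(segment, template):
    """Normalized projection (correlation-style) of a data segment onto
    a stimulus-class template. Sweeping this over a continuous stream
    and thresholding the scores yields predicted onsets and types."""
    seg = np.asarray(segment, dtype=float)
    tpl = np.asarray(template, dtype=float)
    seg = seg - seg.mean()
    tpl = tpl - tpl.mean()
    denom = np.linalg.norm(seg) * np.linalg.norm(tpl)
    return float(seg @ tpl / denom) if denom else 0.0
```

In the study, two feature streams (broadband power and the raw ERP trace) were scored and combined; here a single trace stands in for either one.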
Oscillatory encoding of visual stimulus familiarity.
Kissinger, Samuel T; Pak, Alexandr; Tang, Yu; Masmanidis, Sotiris C; Chubykin, Alexander A
2018-06-18
Familiarity of the environment changes the way we perceive and encode incoming information. However, the neural substrates underlying this phenomenon are poorly understood. Here we describe a new form of experience-dependent low frequency oscillations in the primary visual cortex (V1) of awake adult male mice. The oscillations emerged in visually evoked potentials (VEPs) and single-unit activity following repeated visual stimulation. The oscillations were sensitive to the spatial frequency content of a visual stimulus and required the muscarinic acetylcholine receptors (mAChRs) for their induction and expression. Finally, ongoing visually evoked theta (4-6 Hz) oscillations boost the VEP amplitude of incoming visual stimuli if the stimuli are presented at the high excitability phase of the oscillations. Our results demonstrate that an oscillatory code can be used to encode familiarity and serves as a gate for oncoming sensory inputs. Significance Statement. Previous experience can influence the processing of incoming sensory information by the brain and alter perception. However, the mechanistic understanding of how this process takes place is lacking. We have discovered that persistent low frequency oscillations in the primary visual cortex encode information about familiarity and the spatial frequency of the stimulus. These familiarity evoked oscillations influence neuronal responses to the oncoming stimuli in a way that depends on the oscillation phase. Our work demonstrates a new mechanism of visual stimulus feature detection and learning. Copyright © 2018 the authors.
Krick, Christoph M.; Argstatter, Heike; Grapp, Miriam; Plinkert, Peter K.; Reith, Wolfgang
2017-01-01
Background: Tinnitus is the perception of a phantom sound without external acoustic stimulation. Recent tinnitus research suggests a relationship between attention processes and tinnitus-related distress. It has been found that too much focus on tinnitus comes at the expense of the visual domain. The angular gyrus (AG) seems to play a crucial role in switching attention to the most salient stimulus. This study aims to evaluate the involvement of the AG during visual attention tasks in tinnitus sufferers treated with Heidelberg Neuro-Music Therapy (HNMT), an intervention that has been shown to reduce tinnitus-related distress. Methods: Thirty-three patients with chronic tinnitus, 45 patients with recent-onset tinnitus, and 35 healthy controls were tested. A fraction of these (21/21/22) were treated with the “compact” version of the HNMT lasting 1 week with intense treatments, while non-treated participants were included as passive controls. Visual attention was evaluated during functional Magnetic Resonance Imaging (fMRI) by a visual Continuous Performance Task (CPT) using letter-based alarm cues (“O” and “X”) appearing in a sequence of neutral letters, “A” through “H.” Participants were instructed to respond via button press only if the letter “O” was followed by the letter “X” (GO condition), but not to respond if a neutral letter appeared instead (NOGO condition). All participants underwent two fMRI sessions, before and after a 1-week study period. Results: The CPT results revealed a relationship between error rates and tinnitus duration at baseline, whereby the occurrence of erroneous “GO omissions” and the reaction time increased with tinnitus duration. Patients with chronic tinnitus who were treated with HNMT had decreasing error rates (fewer GO omissions) compared to treated recent-onset patients. fMRI analyses confirmed greater activation of the AG during CPT in chronic patients after HNMT treatment compared to treated recent-onset patients. 
Conclusions: Our findings suggest that HNMT treatment helps shift the attention from the auditory phantom percept toward visual cues in chronic tinnitus patients and that this shift in attention may involve the AG. PMID:28775679
Optical images of visible and invisible percepts in the primary visual cortex of primates
Macknik, Stephen L.; Haglund, Michael M.
1999-01-01
We optically imaged a visual masking illusion in primary visual cortex (area V-1) of rhesus monkeys to ask whether activity in the early visual system more closely reflects the physical stimulus or the generated percept. Visual illusions can be a powerful way to address this question because they have the benefit of dissociating the stimulus from perception. We used an illusion in which a flickering target (a bar oriented in visual space) is rendered invisible by two counter-phase flickering bars, called masks, which flank and abut the target. The target and masks, when shown separately, each generated correlated activity on the surface of the cortex. During the illusory condition, however, optical signals generated in the cortex by the target disappeared although the image of the masks persisted. The optical image thus was correlated with perception but not with the physical stimulus. PMID:10611363
Task-set inertia and memory-consolidation bottleneck in dual tasks.
Koch, Iring; Rumiati, Raffaella I
2006-11-01
Three dual-task experiments examined the influence of processing a briefly presented visual object for deferred verbal report on performance in an unrelated auditory-manual reaction time (RT) task. RT was increased at short stimulus-onset asynchronies (SOAs) relative to long SOAs, showing that memory consolidation processes can produce a functional processing bottleneck in dual-task performance. In addition, the experiments manipulated the spatial compatibility of the orientation of the visual object and the side of the speeded manual response. This cross-task compatibility produced relative RT benefits only when the instruction for the visual task emphasized overlap at the level of response codes across the task sets (Experiment 1). However, once the effective task set was in place, it continued to produce cross-task compatibility effects even in single-task situations ("ignore" trials in Experiment 2) and when instructions for the visual task did not explicitly require spatial coding of object orientation (Experiment 3). Taken together, the data suggest a considerable degree of task-set inertia in dual-task performance, which is also reinforced by finding costs of switching task sequences (e.g., AC --> BC vs. BC --> BC) in Experiment 3.
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
2015-11-01
Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160 ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200 ms, 220-280 ms, and 350-500 ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
Size matters: large objects capture attention in visual search.
Proulx, Michael J
2010-12-23
Can objects or events ever capture one's attention in a purely stimulus-driven manner? A recent review of the literature set out the criteria required to find stimulus-driven attentional capture independent of goal-directed influences, and concluded that no published study has satisfied those criteria. Here visual search experiments assessed whether an irrelevantly large object can capture attention. Capture of attention by this static visual feature was found. The results suggest that a large object can indeed capture attention in a stimulus-driven manner, independent of displaywide features of the task that might encourage a goal-directed bias for large items. It is concluded that these results are either consistent with the stimulus-driven criteria published previously or, alternatively, consistent with a flexible, goal-directed mechanism of saliency detection.
Stimulus change as a factor in response maintenance with free food available.
Osborne, S R; Shelby, M
1975-01-01
Rats bar pressed for food on a reinforcement schedule in which every response was reinforced, even though a dish of pellets was present. Initially, auditory and visual stimuli accompanied response-produced food presentation. With stimulus feedback as an added consequence of bar pressing, responding was maintained in the presence of free food; without stimulus feedback, responding decreased to a low level. Auditory feedback maintained slightly more responding than did visual feedback, and both together maintained more responding than did either separately. Almost no responding occurred when the only consequence of bar pressing was stimulus feedback. The data indicated conditioned and sensory reinforcement effects of response-produced stimulus feedback. PMID:1202121
Wheat, Katherine L; Cornelissen, Piers L; Sack, Alexander T; Schuhmann, Teresa; Goebel, Rainer; Blomert, Leo
2013-05-01
Magnetoencephalography (MEG) has shown pseudohomophone priming effects at Broca's area (specifically pars opercularis of left inferior frontal gyrus and precentral gyrus; LIFGpo/PCG) within ∼100ms of viewing a word. This is consistent with Broca's area involvement in fast phonological access during visual word recognition. Here we used online transcranial magnetic stimulation (TMS) to investigate whether LIFGpo/PCG is necessary for (not just correlated with) visual word recognition by ∼100ms. Pulses were delivered to individually fMRI-defined LIFGpo/PCG in Dutch speakers 75-500ms after stimulus onset during reading and picture naming. Reading and picture naming reaction times were significantly slower following pulses at 225-300ms. Contrary to predictions, there was no disruption to reading for pulses before 225ms. This does not provide evidence in favour of a functional role for LIFGpo/PCG in reading before 225ms in this case, but does extend previous findings with picture stimuli to written Dutch words. Copyright © 2012 Elsevier Inc. All rights reserved.
Prestimulus alpha-band power biases visual discrimination confidence, but not accuracy.
Samaha, Jason; Iemi, Luca; Postle, Bradley R
2017-09-01
The magnitude of power in the alpha-band (8-13Hz) of the electroencephalogram (EEG) prior to the onset of a near threshold visual stimulus predicts performance. Together with other findings, this has been interpreted as evidence that alpha-band dynamics reflect cortical excitability. We reasoned, however, that non-specific changes in excitability would be expected to influence signal and noise in the same way, leaving actual discriminability unchanged. Indeed, using a two-choice orientation discrimination task, we found that discrimination accuracy was unaffected by fluctuations in prestimulus alpha power. Decision confidence, on the other hand, was strongly negatively correlated with prestimulus alpha power. This finding constitutes a clear dissociation between objective and subjective measures of visual perception as a function of prestimulus cortical excitability. This dissociation is predicted by a model where the balance of evidence supporting each choice drives objective performance but only the magnitude of evidence supporting the selected choice drives subjective reports, suggesting that human perceptual confidence can be suboptimal with respect to tracking objective accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
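The predictor in this study is the power in the 8-13 Hz band of the prestimulus EEG. A minimal sketch of estimating band power from a single epoch with a plain DFT, using a synthetic signal and an assumed 250 Hz sampling rate (not the study's recordings or analysis pipeline):

```python
# Sketch: estimating prestimulus alpha-band (8-13 Hz) power from one EEG
# epoch with a plain DFT. The signal and sampling rate are synthetic
# stand-ins, not the study's data.
import math

FS = 250   # sampling rate (Hz), assumed
N = 250    # one-second prestimulus window

# Synthetic epoch: a 10 Hz "alpha" component plus a weaker 4 Hz component
epoch = [math.sin(2 * math.pi * 10 * n / FS) +
         0.3 * math.sin(2 * math.pi * 4 * n / FS) for n in range(N)]

def band_power(x, fs, f_lo, f_hi):
    """Sum of DFT power over bins whose frequency lies in [f_lo, f_hi]."""
    n = len(x)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n ** 2
    return power

alpha = band_power(epoch, FS, 8, 13)   # dominated by the 10 Hz component
theta = band_power(epoch, FS, 3, 7)    # picks up only the weaker 4 Hz part
```

In practice such estimates use windowed FFTs (e.g., Welch's method) over many epochs, but the band-summation idea is the same.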
ERIC Educational Resources Information Center
Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vierhaus, Marc; Spangler, Sibylle; Borchert, Sonja; Freitag, Claudia; Goertz, Claudia; Graf, Frauke; Gudi, Helene; Kolling, Thorsten; Lamm, Bettina; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun
2012-01-01
This longitudinal study examined the influence of stimulus material on attention and expectation learning in the visual expectation paradigm. Female faces were used as attention-attracting stimuli, and non-meaningful visual stimuli of comparable complexity (Greebles) were used as low attention-attracting stimuli. Expectation learning performance…
Distractor inhibition: Evidence from lateralized readiness potentials.
Pramme, Lisa; Dierolf, Angelika M; Naumann, Ewald; Frings, Christian
2015-08-01
The present study investigated distractor inhibition at the level of stimulus representation. In a sequential distractor-to-distractor priming task, participants had to respond to target letters flanked by distractor digits. Reaction times and stimulus-locked lateralized readiness potentials (S-LRPs) of probe responses were measured, and distractor-target onset asynchrony was varied. For RTs, responses to probe targets were faster for prime-distractor repetitions than for distractor changes, indicating distractor inhibition. Benefits in RTs and the latency of S-LRP onsets for distractor repetition were also modulated by distractor-target onset asynchrony. For S-LRPs, distractor inhibition was only present with a simultaneous onset of distractors and target. The results confirm previous findings indicating inhibitory mechanisms of object-based selective attention at the level of distractor representations. Copyright © 2015 Elsevier Inc. All rights reserved.
Perceptual organization and visual attention.
Kimchi, Ruth
2009-01-01
Perceptual organization--the processes structuring visual information into coherent units--and visual attention--the processes by which some visual information in a scene is selected--are crucial for the perception of our visual environment and to visuomotor behavior. Recent research points to important relations between attentional and organizational processes. Several studies demonstrated that perceptual organization constrains attentional selectivity, and other studies suggest that attention can also constrain perceptual organization. In this chapter I focus on two aspects of the relationship between perceptual organization and attention. The first addresses the question of whether or not perceptual organization can take place without attention. I present findings demonstrating that some forms of grouping and figure-ground segmentation can occur without attention, whereas others require controlled attentional processing, depending on the processes involved and the conditions prevailing for each process. These findings challenge the traditional view, which assumes that perceptual organization is a unitary entity that operates preattentively. The second issue addresses the question of whether perceptual organization can affect the automatic deployment of attention. I present findings showing that the mere organization of some elements in the visual field by Gestalt factors into a coherent perceptual unit (an "object"), with no abrupt onset or any other unique transient, can capture attention automatically in a stimulus-driven manner. Taken together, the findings discussed in this chapter demonstrate the multifaceted, interactive relations between perceptual organization and visual attention.
Caruso, Valeria C; Pages, Daniel S; Sommer, Marc A; Groh, Jennifer M
2016-06-01
Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75 and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals undergo tailoring to match roughly the strength of visual signals present in the FEF, facilitating accessing of a common motor output pathway. Copyright © 2016 the American Physiological Society.
Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S
2017-09-06
The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. 
Congruent audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors.
Neural processing of visual information under interocular suppression: a critical review
Sterzer, Philipp; Stein, Timo; Ludwig, Karin; Rothkirch, Marcus; Hesselmann, Guido
2014-01-01
When dissimilar stimuli are presented to the two eyes, only one stimulus dominates at a time while the other stimulus is invisible due to interocular suppression. When both stimuli are equally potent in competing for awareness, perception alternates spontaneously between the two stimuli, a phenomenon called binocular rivalry. However, when one stimulus is much stronger, e.g., due to higher contrast, the weaker stimulus can be suppressed for prolonged periods of time. A technique that has recently become very popular for the investigation of unconscious visual processing is continuous flash suppression (CFS): High-contrast dynamic patterns shown to one eye can render a low-contrast stimulus shown to the other eye invisible for up to minutes. Studies using CFS have produced new insights but also controversies regarding the types of visual information that can be processed unconsciously as well as the neural sites and the relevance of such unconscious processing. Here, we review the current state of knowledge in regard to neural processing of interocularly suppressed information. Focusing on recent neuroimaging findings, we discuss whether and to what degree such suppressed visual information is processed at early and more advanced levels of the visual processing hierarchy. We review controversial findings related to the influence of attention on early visual processing under interocular suppression, the putative differential roles of dorsal and ventral areas in unconscious object processing, and evidence suggesting privileged unconscious processing of emotional and other socially relevant information. On a more general note, we discuss methodological and conceptual issues, from practical issues of how unawareness of a stimulus is assessed to the overarching question of what constitutes an adequate operational definition of unawareness. Finally, we propose approaches for future research to resolve current controversies in this exciting research area. PMID:24904469
Harrison, Neil R.; Ziessler, Michael
2016-01-01
The anticipation of action effects is a basic process that can be observed even for key-pressing responses in a stimulus-response paradigm. In Ziessler et al.’s (2012) experiments participants first learned arbitrary effects of key-pressing responses. In the test phase an imperative stimulus determined the response, but participants withheld the response until a Go-stimulus appeared. Reaction times (RTs) were shorter if the Go-stimulus was compatible with the learned response effect. This is strong evidence that effect representations were activated during response planning. Here, we repeated the experiment using event-related potentials (ERPs), and we found that Go-stimulus locked ERPs depended on the compatibility relationship between the Go-stimulus and the response effect. In general, this supports the interpretation of the behavioral data. More specifically, differences in the ERPs between compatible and incompatible Go-stimuli were found for the early perceptual P1 component and the later frontal P2 component. P1 differences were found only in the second half of the experiment and for long stimulus onset asynchronies (SOAs) between imperative stimulus and Go-stimulus, i.e., when the effect was fully anticipated and the perceptual system was prepared for the effect-compatible Go-stimulus. P2 amplitudes, likely associated with evaluation and conflict detection, were larger when Go-stimulus and effect were incompatible; presumably, incompatibility increased the difficulty of effect anticipation. Onset of response-locked lateralized readiness potentials (R-LRPs) occurred earlier under incompatible conditions indicating extended motor processing. Together, these results strongly suggest that effect anticipation affects all (i.e., perceptual, cognitive, and motor) phases of response preparation. PMID:26858621
Molecular mechanisms of memory in imprinting.
Solomonia, Revaz O; McCabe, Brian J
2015-03-01
Converging evidence implicates the intermediate and medial mesopallium (IMM) of the domestic chick forebrain in memory for a visual imprinting stimulus. During and after imprinting training, neuronal responsiveness in the IMM to the familiar stimulus exhibits a distinct temporal profile, suggesting several memory phases. We discuss the temporal progression of learning-related biochemical changes in the IMM, relative to the start of this electrophysiological profile. c-fos gene expression increases <15 min after training onset, followed by a learning-related increase in Fos expression, in neurons immunopositive for GABA, taurine and parvalbumin (not calbindin). Approximately simultaneously or shortly after, there are increases in phosphorylation level of glutamate (AMPA) receptor subunits and in releasable neurotransmitter pools of GABA and taurine. Later, the mean area of spine synapse post-synaptic densities, N-methyl-D-aspartate receptor number and phosphorylation level of further synaptic proteins are elevated. After ∼ 15 h, learning-related changes in amounts of several synaptic proteins are observed. The results indicate progression from transient/labile to trophic synaptic modification, culminating in stable recognition memory. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Temporally flexible feedback signal to foveal cortex for peripheral object recognition
Fan, Xiaoxu; Wang, Lan; Shao, Hanyu; Kersten, Daniel; He, Sheng
2016-01-01
Recent studies have shown that information from peripherally presented images is present in the human foveal retinotopic cortex, presumably because of feedback signals. We investigated this potential feedback signal by presenting noise at the fovea at different object–noise stimulus onset asynchronies (SOAs) while subjects performed a discrimination task on peripheral objects. Results revealed a selective impairment of performance when foveal noise was presented at 250-ms SOA, but only for tasks that required comparing objects' spatial details, suggesting a task- and stimulus-dependent foveal processing mechanism. Critically, the temporal window of foveal processing was shifted when mental rotation was required for the peripheral objects, indicating that foveal retinotopic processing is not automatically engaged at a fixed time following peripheral stimulation; rather, it occurs at a stage when detailed information is required. Moreover, fMRI measurements using multivoxel pattern analysis showed that both image- and object-category-relevant information about peripheral objects was represented in the foveal cortex. Taken together, our results support the hypothesis of a temporally flexible feedback signal to the foveal retinotopic cortex when discriminating objects in the visual periphery. PMID:27671651
Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel
2012-01-01
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479
Electrophysiological Evidence for Ventral Stream Deficits in Schizophrenia Patients
Plomp, Gijs; Roinishvili, Maya; Chkonia, Eka; Kapanadze, George; Kereselidze, Maia; Brand, Andreas; Herzog, Michael H.
2013-01-01
Schizophrenic patients suffer from many deficits including visual, attentional, and cognitive ones. Visual deficits are of particular interest because they are at the fore-end of information processing and can provide clear examples of interactions between sensory, perceptual, and higher cognitive functions. Visual deficits in schizophrenic patients are often attributed to impairments in the dorsal (where) rather than the ventral (what) stream of visual processing. We used a visual-masking paradigm in which patients and matched controls discriminated small vernier offsets. We analyzed the evoked electroencephalography (EEG) responses and applied distributed electrical source imaging techniques to estimate activity differences between conditions and groups throughout the brain. Compared with controls, patients showed strongly reduced discrimination accuracy, confirming previous work. The behavioral deficits corresponded to pronounced decreases in the evoked EEG response at around 200 ms after stimulus onset. At this latency, patients showed decreased activity for targets in left parietal cortex (dorsal stream), but the decrease was most pronounced in lateral occipital cortex (in the ventral stream). These deficiencies occurred at latencies that reflect object processing and fine shape discriminations. We relate the reduced ventral stream activity to deficient top-down processing of target stimuli and provide a framework for relating the commonly observed dorsal stream deficiencies with the currently observed ventral stream deficiencies. PMID:22258884
Bressler, David W.; Fortenbaugh, Francesca C.; Robertson, Lynn C.; Silver, Michael A.
2013-01-01
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. PMID:23562388
Synchronization to auditory and visual rhythms in hearing and deaf individuals
Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen
2014-01-01
A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395
Square or sine: finding a waveform with high success rate of eliciting SSVEP.
Teng, Fei; Chen, Yixin; Choong, Aik Min; Gustafson, Scott; Reichley, Christopher; Lawhead, Pamela; Waddell, Dwight
2011-01-01
Steady state visual evoked potential (SSVEP) is the brain's natural electrical response to visual stimuli at specific frequencies. A visual stimulus flashing at a given frequency entrains the SSVEP at the same frequency, thereby allowing determination of the subject's visual focus. The faster an SSVEP is identified, the higher the information transmission rate the system achieves. Thus, an effective stimulus, defined as one with a high success rate of eliciting SSVEP and a high signal-to-noise ratio, is desired. Researchers have also observed that harmonic frequencies often appear in the SSVEP at reduced magnitude. Are the harmonics in the SSVEP elicited by the fundamental stimulating frequency, or by artifacts of the stimuli? In this paper, we compare the SSVEP responses to three periodic stimuli: square wave (with different duty cycles), triangle wave, and sine wave, to find an effective stimulus. We also demonstrate the connection between the strength of the harmonics in the SSVEP and the type of stimulus.
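The harmonics question raised above hinges on the fact that a square-wave stimulus itself contains odd harmonics of the fundamental, whereas a pure sine does not. A minimal sketch illustrating this with sampled waveforms and a DFT; the frequencies and sampling rate are illustrative assumptions, not the paper's parameters:

```python
# Sketch: harmonic content of square- vs sine-wave stimulus time courses,
# relevant to whether harmonics seen in the SSVEP could originate in the
# stimulus itself. FS and F0 are illustrative assumptions.
import cmath
import math

FS = 240               # samples per second (assumed)
F0 = 10                # stimulus fundamental frequency (Hz)
N = 240                # one second of samples
PERIOD = FS // F0      # 24 samples per stimulus cycle

# 50% duty-cycle square wave and a pure sine at the same fundamental
square = [1.0 if n % PERIOD < PERIOD // 2 else -1.0 for n in range(N)]
sine = [math.sin(2 * math.pi * F0 * n / FS) for n in range(N)]

def amplitude_at(x, freq):
    """Amplitude of the DFT bin nearest `freq` Hz (exact bins here)."""
    n = len(x)
    k = round(freq * n / FS)
    X = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
    return 2 * abs(X) / n

# A square wave carries odd harmonics (3*F0, 5*F0, ...); a pure sine does not.
sq_h3 = amplitude_at(square, 3 * F0)   # roughly 4/(3*pi) for a unit square
sn_h3 = amplitude_at(sine, 3 * F0)     # essentially zero
```

Because a sine stimulus has no harmonic content of its own, any harmonics in a sine-driven SSVEP would have to arise neurally rather than from the stimulus waveform.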
Stimulus-dependent modulation of spontaneous low-frequency oscillations in the rat visual cortex.
Huang, Liangming; Liu, Yadong; Gui, Jianjun; Li, Ming; Hu, Dewen
2014-08-06
Research on spontaneous low-frequency oscillations is important to reveal underlying regulatory mechanisms in the brain. The mechanism for the stimulus modulation of low-frequency oscillations is not known. Here, we used the intrinsic optical imaging technique to examine stimulus-modulated low-frequency oscillation signals in the rat visual cortex. The stimulation was presented monocularly as a flashing light with different frequencies and intensities. The phases of low-frequency oscillations in different regions tended to be synchronized and the rhythms typically accelerated within a 30-s period after stimulation. These phenomena were confined to visual stimuli with specific flashing frequencies (12.5-17.5 Hz) and intensities (5-10 mA). The acceleration and synchronization induced by the flashing frequency were more marked than those induced by the intensity. These results show that spontaneous low-frequency oscillations can be modulated by parameter-dependent flashing lights and indicate the potential utility of the visual stimulus paradigm in exploring the origin and function of low-frequency oscillations.
Perceptual grouping across eccentricity.
Tannazzo, Teresa; Kurylo, Daniel D; Bukhari, Farhan
2014-10-01
Across the visual field, progressive differences exist in neural processing as well as perceptual abilities. Expansion of stimulus scale across eccentricity compensates for some basic visual capacities, but not for high-order functions. It was hypothesized that, as with many higher-order functions, perceptual grouping ability should decline across eccentricity. To test this prediction, psychophysical measurements of grouping were made across eccentricity. Participants indicated the dominant grouping of dot grids in which grouping was based upon luminance, motion, orientation, or proximity. Across trials, the organization of stimuli was systematically decreased until perceived grouping became ambiguous. For all stimulus features, grouping ability remained relatively stable until 40°, beyond which thresholds were significantly elevated. The pattern of change across eccentricity varied with stimulus feature, in which stimulus scale, dot size, or stimulus size interacted with eccentricity effects. These results demonstrate that perceptual grouping of such stimuli is not reliant upon foveal viewing, and suggest that selection of dominant grouping patterns from ambiguous displays operates similarly across much of the visual field. Copyright © 2014 Elsevier Ltd. All rights reserved.
Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka
2016-08-04
Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution.
A Unifying Motif for Spatial and Directional Surround Suppression.
Liu, Liu D; Miller, Kenneth D; Pack, Christopher C
2018-01-24
In the visual system, the response to a stimulus in a neuron's receptive field can be modulated by stimulus context, and the strength of these contextual influences varies with stimulus intensity. Recent work has shown how a theoretical model, the stabilized supralinear network (SSN), can account for such modulatory influences, using a small set of computational mechanisms. Although the predictions of the SSN have been confirmed in primary visual cortex (V1), its computational principles apply with equal validity to any cortical structure. We have therefore tested the generality of the SSN by examining modulatory influences in the middle temporal area (MT) of the macaque visual cortex, using electrophysiological recordings and pharmacological manipulations. We developed a novel stimulus that can be adjusted parametrically to be larger or smaller in the space of all possible motion directions. We found, as predicted by the SSN, that MT neurons integrate across motion directions for low-contrast stimuli, but that they exhibit suppression by the same stimuli when they are high in contrast. These results are analogous to those found in visual cortex when stimulus size is varied in the space domain. We further tested the mechanisms of inhibition using pharmacological manipulations of inhibitory efficacy. As predicted by the SSN, local manipulation of inhibitory strength altered firing rates, but did not change the strength of surround suppression. These results are consistent with the idea that the SSN can account for modulatory influences along different stimulus dimensions and in different cortical areas. SIGNIFICANCE STATEMENT Visual neurons are selective for specific stimulus features in a region of visual space known as the receptive field, but can be modulated by stimuli outside of the receptive field. The SSN model has been proposed to account for these and other modulatory influences, and tested in V1. 
As this model is not specific to any particular stimulus feature or brain region, we wondered whether similar modulatory influences might be observed for other stimulus dimensions and other regions. We tested for specific patterns of modulatory influences in the domain of motion direction, using electrophysiological recordings from MT. Our data confirm the predictions of the SSN in MT, suggesting that the SSN computations might be a generic feature of sensory cortex. Copyright © 2018 the authors 0270-6474/18/380989-11$15.00/0.
Sudo, R T; Nelson, T E
1997-09-01
Elective diagnosis of malignant hyperthermia depends on halothane and caffeine contracture testing of biopsied skeletal muscle. Ryanodine-induced contractures may provide greater sensitivity and specificity for malignant hyperthermia (MH) diagnosis. This study investigated whether ryanodine concentration and stimulus frequency could be used to distinguish between MH susceptible (MHS) and MH non-susceptible (MHN) dogs. Increasing ryanodine concentrations (1, 2.5 and 5 microM) increased peak isometric contracture tension, but similar responses in MHS and MHN muscle precluded use for diagnosis. Time to tension onset and to peak tension decreased with increasing ryanodine concentration, and these times were shorter in MH skeletal muscle. Increasing stimulus frequency (0.1, 0.5 and 1 Hz) decreased the time to tension onset and to peak tension, but the effect was greater in MHN muscle, which decreased the difference between MHN and MHS muscle responses. When ryanodine contracture tension onset time was selected to detect MHS muscle, combinations of either 0.1 Hz and 1 microM ryanodine or 0.5 Hz and 1 microM ryanodine reduced the probability of a false diagnosis to less than 1%. Similar studies performed on human muscle might identify optimal stimulus frequency and ryanodine concentration for detecting MH in patients.
Lundström, Johan N.; Gordon, Amy R.; Alden, Eva C.; Boesveldt, Sanne; Albrecht, Jessica
2010-01-01
Many human olfactory experiments call for fast and stable stimulus-rise times as well as exact and stable stimulus-onset times. Due to these temporal demands, an olfactometer is often needed. However, an olfactometer is a piece of equipment that either comes with a high price tag or requires a high degree of technical expertise to build and/or to run. Here, we detail the construction of an olfactometer that is built almost exclusively with “off-the-shelf” parts, requires little technical knowledge to build, has a relatively low price tag, and is controlled by E-Prime, a turnkey-ready and easily-programmable software commonly used in psychological experiments. The olfactometer can present either solid or liquid odor sources, and it exhibits a fast stimulus-rise time and a fast and stable stimulus-onset time. We provide a detailed description of the olfactometer construction, a list of its individual parts and prices, as well as potential modifications to the design. In addition, we present odor onset and concentration curves as measured with a photoionization detector, together with corresponding GC/MS analyses of signal-intensity drop (5.9%) over a longer period of use. Finally, we present data from behavioral and psychophysiological recordings demonstrating that the olfactometer is suitable for use during event-related EEG experiments. PMID:20688109
Nagai, Takehiro; Matsushima, Toshiki; Koida, Kowa; Tani, Yusuke; Kitazaki, Michiteru; Nakauchi, Shigeki
2015-10-01
Humans can visually recognize material categories of objects, such as glass, stone, and plastic, easily. However, little is known about the kinds of surface quality features that contribute to such material class recognition. In this paper, we examine the relationship between perceptual surface features and material category discrimination performance for pictures of materials, focusing on temporal aspects, including reaction time and effects of stimulus duration. The stimuli were pictures of objects with an identical shape but made of different materials that could be categorized into seven classes (glass, plastic, metal, stone, wood, leather, and fabric). In a pre-experiment, observers rated the pictures on nine surface features, including visual (e.g., glossiness and transparency) and non-visual features (e.g., heaviness and warmness), on a 7-point scale. In the main experiments, observers judged whether two simultaneously presented pictures were classified as the same or different material category. Reaction times and effects of stimulus duration were measured. The results showed that visual feature ratings were correlated with material discrimination performance for short reaction times or short stimulus durations, while non-visual feature ratings were correlated only with performance for long reaction times or long stimulus durations. These results suggest that the mechanisms underlying visual and non-visual feature processing may differ in terms of processing time, although the cause is unclear. Visual surface features may mainly contribute to material recognition in daily life, while non-visual features may contribute only weakly, if at all. Copyright © 2014 Elsevier Ltd. All rights reserved.
Effects of Temporal Features and Order on the Apparent duration of a Visual Stimulus
Bruno, Aurelio; Ayhan, Inci; Johnston, Alan
2012-01-01
The apparent duration of a visual stimulus has been shown to be influenced by its speed. For low speeds, apparent duration increases linearly with stimulus speed. This effect has been ascribed to the number of changes that occur within a visual interval. Accordingly, a higher number of changes should produce an increase in apparent duration. In order to test this prediction, we asked subjects to compare the relative duration of a 10-Hz drifting comparison stimulus with a standard stimulus that contained a different number of changes in different conditions. The standard could be static, drifting at 10 Hz, or mixed (a combination of variable duration static and drifting intervals). In this last condition the number of changes was intermediate between the static and the continuously drifting stimulus. For all standard durations, the mixed stimulus looked significantly compressed (∼20% reduction) relative to the drifting stimulus. However, no difference emerged between the static (that contained no changes) and the mixed stimuli (which contained an intermediate number of changes). We also observed that when the standard was displayed first, it appeared compressed relative to when it was displayed second with a magnitude that depended on standard duration. These results are at odds with a model of time perception that simply reflects the number of temporal features within an interval in determining the perceived passing of time. PMID:22461778
Comparing different stimulus configurations for population receptive field mapping in human fMRI
Alvarez, Ivan; de Haas, Benjamin; Clark, Chris A.; Rees, Geraint; Schwarzkopf, D. Samuel
2015-01-01
Population receptive field (pRF) mapping is a widely used approach to measuring aggregate human visual receptive field properties by recording non-invasive signals using functional MRI. Despite growing interest, no study to date has systematically investigated the effects of different stimulus configurations on pRF estimates from human visual cortex. Here we compared the effects of three different stimulus configurations on a model-based approach to pRF estimation: size-invariant bars and eccentricity-scaled bars defined in Cartesian coordinates and traveling along the cardinal axes, and a novel simultaneous “wedge and ring” stimulus defined in polar coordinates, systematically covering polar and eccentricity axes. We found that the presence or absence of eccentricity scaling had a significant effect on goodness of fit and pRF size estimates. Further, variability in pRF size estimates was directly influenced by stimulus configuration, particularly for higher visual areas including V5/MT+. Finally, we compared eccentricity estimation between phase-encoded and model-based pRF approaches. We observed a tendency for more peripheral eccentricity estimates using phase-encoded methods, independent of stimulus size. We conclude that both eccentricity scaling and polar rather than Cartesian stimulus configuration are important considerations for optimal experimental design in pRF mapping. While all stimulus configurations produce adequate estimates, simultaneous wedge and ring stimulation produced higher fit reliability, with a significant advantage in reduced acquisition time. PMID:25750620
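The pRF estimates compared in this abstract rest on a simple forward model: each voxel's response is predicted from the overlap between a 2D Gaussian receptive field and the binary stimulus aperture at each time point. A minimal sketch of that overlap computation, with all sizes and positions chosen arbitrarily for illustration (a full analysis would also convolve the prediction with a hemodynamic response function and fit x0, y0, and sigma per voxel):

```python
import numpy as np

def prf_gaussian(x0, y0, sigma, grid):
    """Isotropic 2D Gaussian pRF on a visual-field grid (degrees)."""
    X, Y = grid
    return np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))

# Binary stimulus apertures (time x height x width): a 2-deg bar
# sweeping left to right across a +/-10 deg field, illustrative values.
n, t = 64, 20
xs = np.linspace(-10, 10, n)
grid = np.meshgrid(xs, xs)
apertures = np.zeros((t, n, n))
for i in range(t):
    lo = -10 + 20 * i / t
    apertures[i] = (grid[0] >= lo) & (grid[0] < lo + 2)

# Predicted neural response: aperture-pRF overlap per time point.
prf = prf_gaussian(2.0, 0.0, 1.5, grid)
pred = apertures.reshape(t, -1) @ prf.ravel()
```

The predicted time course peaks when the bar passes the pRF center; fitting inverts this, searching the pRF parameters that best explain the measured BOLD series.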
[Microcomputer control of a LED stimulus display device].
Ohmoto, S; Kikuchi, T; Kumada, T
1987-02-01
A visual stimulus display system controlled by a microcomputer was constructed at low cost. The system consists of a LED stimulus display device, a microcomputer, two interface boards, a pointing device (a "mouse") and two kinds of software. The first software package is written in BASIC. Its functions are: to construct stimulus patterns using the mouse, to construct letter patterns (alphabet, digit, symbols and Japanese letters--kanji, hiragana, katakana), to modify the patterns, to store the patterns on a floppy disc, to translate the patterns into integer data which are used to display the patterns in the second software. The second software package, written in BASIC and machine language, controls display of a sequence of stimulus patterns in predetermined time schedules in visual experiments.
Babiloni, Claudio; Marzano, Nicola; Soricelli, Andrea; Cordone, Susanna; Millán-Calenti, José Carlos; Del Percio, Claudio; Buján, Ana
2016-01-01
This article reviews three experiments on event-related potentials (ERPs) testing the hypothesis that primary visual consciousness (stimulus self-report) is related to enhanced cortical neural synchronization as a function of stimulus features. ERP peak latency and sources were compared between “seen” trials and “not seen” trials, respectively related and unrelated to the primary visual consciousness. Three salient features of visual stimuli were considered (visuospatial, emotional face expression, and written words). Results showed the typical visual ERP components in both “seen” and “not seen” trials. There was no statistical difference in the ERP peak latencies between the “seen” and “not seen” trials, suggesting a similar timing of the cortical neural synchronization regardless of the primary visual consciousness. In contrast, ERP sources showed differences between “seen” and “not seen” trials. For the visuospatial stimuli, the primary consciousness was related to higher activity in dorsal occipital and parietal sources at about 400 ms post-stimulus. For the emotional face expressions, there was greater activity in parietal and frontal sources at about 180 ms post-stimulus. For the written letters, there was higher activity in occipital, parietal and temporal sources at about 230 ms post-stimulus. These results hint that primary visual consciousness is associated with an enhanced cortical neural synchronization having entirely different spatiotemporal characteristics as a function of the features of the visual stimuli and, possibly, the relative qualia (i.e., visuospatial, face expression, and words). In this framework, the dorsal visual stream may be synchronized in association with the primary consciousness of visuospatial and emotional face contents. Analogously, both dorsal and ventral visual streams may be synchronized in association with the primary consciousness of linguistic contents. 
In this line of reasoning, the ensemble of the cortical neural networks underpinning the single visual features would constitute a sort of multi-dimensional palette of colors, shapes, regions of the visual field, movements, emotional face expressions, and words. The synchronization of one or more of these cortical neural networks, each with its peculiar timing, would produce the primary consciousness of one or more of the visual features of the scene. PMID:27445750
Response-specifying cue for action interferes with perception of feature-sharing stimuli.
Nishimura, Akio; Yokosawa, Kazuhiko
2010-06-01
Perceiving a visual stimulus is more difficult when a to-be-executed action is compatible with that stimulus, which is known as blindness to response-compatible stimuli. The present study explored how the factors constituting the action event (i.e., response-specifying cue, response intention, and response feature) affect the occurrence of this blindness effect. The response-specifying cue varied along the horizontal and vertical dimensions, while the response buttons were arranged diagonally. Participants responded based on one dimension randomly determined in a trial-by-trial manner. The response intention varied along a single dimension, whereas the response location and the response-specifying cue varied within both vertical and horizontal dimensions simultaneously. Moreover, the compatibility between the visual stimulus and the response location and the compatibility between that stimulus and the response-specifying cue were determined separately. The blindness effect emerged exclusively based on the feature correspondence between the response-specifying cue of the action task and the visual target of the perceptual task. The size of this stimulus-stimulus (S-S) blindness effect did not differ significantly across conditions, showing no effect of response intention or response location. This finding emphasizes the effect of stimulus factors, rather than response factors, of the action event as a source of the blindness to response-compatible stimuli.
Neural Pathways Conveying Novisual Information to the Visual Cortex
2013-01-01
The visual cortex has traditionally been considered a stimulus-driven, unimodal system with a hierarchical organization. However, recent animal and human studies have shown that the visual cortex responds to non-visual stimuli, especially in individuals with congenital visual deprivation, indicating the supramodal nature of its functional representation. To understand the neural substrates of the cross-modal processing of non-visual signals in the visual cortex, we first show the supramodal nature of the visual cortex. We then review how non-visual signals reach the visual cortex, and discuss whether these non-visual pathways are reshaped by early visual deprivation. Finally, the open question of the nature (stimulus-driven or top-down) of non-visual signals is also discussed. PMID:23840972
A neural correlate of working memory in the monkey primary visual cortex.
Supèr, H; Spekreijse, H; Lamme, V A
2001-07-06
The brain frequently needs to store information for short periods. In vision, this means that the perceptual correlate of a stimulus has to be maintained temporarily once the stimulus has been removed from the visual scene. However, it is not known how the visual system transfers sensory information into a memory component. Here, we identify a neural correlate of working memory in the monkey primary visual cortex (V1). We propose that this component may link sensory activity with memory activity.
Brainstem auditory evoked responses in man. 1: Effect of stimulus rise-fall time and duration
NASA Technical Reports Server (NTRS)
Hecox, K.; Squires, N.; Galambos, R.
1975-01-01
Short latency (under 10 msec) responses elicited by bursts of white noise were recorded from the scalps of human subjects. Response alterations produced by changes in the noise burst duration (on-time), inter-burst interval (off-time), and onset and offset shapes were analyzed. The latency of the most prominent response component, wave V, was markedly delayed with increases in stimulus rise time but was unaffected by changes in fall time. Increases in stimulus duration, and therefore in loudness, resulted in a systematic increase in latency. This was probably due to response recovery processes, since the effect was eliminated with increases in stimulus off-time. The amplitude of wave V was insensitive to changes in signal rise and fall times, while increasing signal on-time produced smaller amplitude responses only for sufficiently short off-times. It was concluded that wave V of the human auditory brainstem evoked response is solely an onset response.
Identification of a novel dynamic red blindness in human by event-related brain potentials.
Zhang, Jiahua; Kong, Weijia; Yang, Zhongle
2010-12-01
Dynamic color is an important carrier of information in some special occupations. However, no objective tests are currently available to evaluate dynamic color processing. To investigate the characteristics of dynamic color processing, we adopted two visual stimulus patterns to evoke event-related brain potentials (ERPs): "onset-offset," reflecting static color stimuli, and "sustained moving" (without abrupt onset), reflecting dynamic color stimuli, in primary color amblyopia patients (abnormal group) and subjects with normal color recognition ability (normal group). ERPs were recorded by a Neuroscan system. The results showed that in the normal group, ERPs in response to the dynamic red stimulus showed frontal positive amplitudes with a latency of about 180 ms, a negative peak at about 240 ms and a peak latency of the late positive potential (LPP) in a time window between 290 and 580 ms. In the abnormal group, ERPs in response to the dynamic red stimulus were fully lost, characterized by vanished amplitudes between 0 and 800 ms. No significant difference was noted in ERPs in response to the dynamic green and blue stimuli between the two groups (P>0.05). ERPs of the two groups in response to the static red, green and blue stimuli were not much different, showing a transient negative peak at about 170 ms and a peak latency of LPP in a time window between 350 and 650 ms. Our results first revealed that some subjects who were not identified as color blind under static color recognition could not completely perceive a certain dynamic red stimulus as measured by ERPs, which we call "dynamic red blindness". Furthermore, these results also indicated that low-frequency ERPs induced by "sustained moving" stimuli may be a good new method to test dynamic color perception competence.
Barack Obama Blindness (BOB): Absence of Visual Awareness to a Single Object
Persuh, Marjan; Melara, Robert D.
2016-01-01
In two experiments, we evaluated whether a perceiver’s prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision. PMID:27047362
Desantis, Andrea; Haggard, Patrick
2016-01-01
To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063
Griffeth, Valerie E M; Simon, Aaron B; Buxton, Richard B
2015-01-01
Quantitative functional MRI (fMRI) experiments to measure blood flow and oxygen metabolism coupling in the brain typically rely on simple repetitive stimuli. Here we compared such stimuli with a more naturalistic stimulus. Previous work on the primary visual cortex showed that direct attentional modulation evokes a blood flow (CBF) response with a relatively large oxygen metabolism (CMRO2) response in comparison to an unattended stimulus, which evokes a much smaller metabolic response relative to the flow response. We hypothesized that a similar effect would be associated with a more engaging stimulus, and tested this by measuring the primary human visual cortex response to two contrast levels of a radial flickering checkerboard in comparison to the response to free viewing of brief movie clips. We did not find a significant difference in the blood flow-metabolism coupling (n=%ΔCBF/%ΔCMRO2) between the movie stimulus and the flickering checkerboards employing two different analysis methods: a standard analysis using the Davis model and a new analysis using a heuristic model dependent only on measured quantities. This finding suggests that, in the primary visual cortex, either a naturalistic stimulus (in comparison to a simple repetitive stimulus) is not sufficient to provoke a change in flow-metabolism coupling through attentional modulation as hypothesized, or the experimental design disrupted the cognitive processes underlying the response to a more natural stimulus, or the technique used is not sensitive enough to detect a small difference. Copyright © 2014 Elsevier Inc. All rights reserved.
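The Davis model used in the standard analysis above relates the fractional BOLD signal change to normalized CBF and CMRO2 changes, so the coupling ratio n = %ΔCBF/%ΔCMRO2 directly shapes the predicted response. A minimal sketch, with illustrative (assumed) values for the scaling parameter M and the exponents alpha and beta:

```python
def davis_bold(f, r, M=0.08, alpha=0.38, beta=1.5):
    """Davis model: fractional BOLD change for normalized CBF (f = CBF/CBF0)
    and CMRO2 (r = CMRO2/CMRO2_0).

    M, alpha, beta are assumed illustrative values (calibration scaling
    parameter, Grubb flow-volume exponent, field-dependent exponent)."""
    return M * (1.0 - f ** (alpha - beta) * r ** beta)

def bold_from_coupling(dcbf_pct, n, **kw):
    """BOLD change given a %CBF change and coupling n = %dCBF / %dCMRO2."""
    f = 1.0 + dcbf_pct / 100.0
    r = 1.0 + (dcbf_pct / n) / 100.0
    return davis_bold(f, r, **kw)

# For a fixed 40% CBF increase, vary the flow-metabolism coupling ratio n.
for n in (2.0, 3.0, 4.0):
    print(f"n={n:.1f}: dBOLD = {100 * bold_from_coupling(40.0, n):.2f}%")
```

For a fixed flow response, tighter coupling (smaller n, i.e., a relatively larger CMRO2 increase) predicts a smaller BOLD response, which is why an attention-driven change in n would in principle be detectable in the BOLD signal.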
Orme, Elizabeth; Brown, Louise A.; Riby, Leigh M.
2017-01-01
In this study, we examined electrophysiological indices of episodic remembering whilst participants recalled novel shapes, with and without semantic content, within a visual working memory paradigm. The components of interest were the parietal episodic (PE; 400–800 ms) and late posterior negativity (LPN; 500–900 ms), as these have previously been identified as reliable markers of recollection and post-retrieval monitoring, respectively. Fifteen young adults completed a visual matrix patterns task, assessing memory for low and high semantic visual representations. Matrices with either low semantic or high semantic content (containing familiar visual forms) were briefly presented to participants for study (1500 ms), followed by a retention interval (6000 ms) and finally a same/different recognition phase. The event-related potentials of interest were tracked from the onset of the recognition test stimuli. Analyses revealed equivalent amplitude for the earlier PE effect for the processing of both low and high semantic stimulus types. However, the LPN was more negative-going for the processing of the low semantic stimuli. These data are discussed in terms of relatively ‘pure’ and complete retrieval of high semantic items, where support can readily be recruited from semantic memory. However, for the low semantic items additional executive resources, as indexed by the LPN, are recruited when memory monitoring and uncertainty exist in order to recall previously studied items more effectively. PMID:28725203
Role of somatosensory and vestibular cues in attenuating visually induced human postural sway
NASA Technical Reports Server (NTRS)
Peterka, Robert J.; Benolken, Martha S.
1993-01-01
The purpose was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal subjects and subjects with bilateral vestibular loss in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomenon was observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal and vestibular loss subjects were nearly identical, implying that (1) normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) vestibular loss subjects did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system 'gain' was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost three times greater than the amplitude of the visual stimulus in normals and vestibular loss subjects. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about four in tests where somatosensory cues provided accurate versus inaccurate orientation information. 
This implied that (1) the vestibular loss subjects did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system 'gain' were not used to compensate for a vestibular deficit, and (2) the threshold for the use of vestibular cues in normals was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.
Influence of cognitive control and mismatch on the N2 component of the ERP: A review
Folstein, Jonathan R.; Van Petten, Cyma
2008-01-01
Recent years have seen an explosion of research on the N2 component of the event-related potential, a negative wave peaking between 200 and 350 ms after stimulus onset. This research has focused on the influence of “cognitive control,” a concept that covers strategic monitoring and control of motor responses. However, rich research traditions focus on attention and novelty or mismatch as determinants of N2 amplitude. We focus on paradigms that elicit N2 components with an anterior scalp distribution, namely, cognitive control, novelty, and sequential matching, and argue that the anterior N2 should be divided into separate control- and mismatch-related subcomponents. We also argue that the oddball N2 belongs in the family of attention-related N2 components that, in the visual modality, have a posterior scalp distribution. We focus on the visual modality for which components with frontocentral and more posterior scalp distributions can be readily distinguished. PMID:17850238
Motor selection dynamics in FEF explain the reaction time variance of saccades to single targets
Hauser, Christopher K; Zhu, Dantong; Stanford, Terrence R
2018-01-01
In studies of voluntary movement, a most elemental quantity is the reaction time (RT) between the onset of a visual stimulus and a saccade toward it. However, this RT demonstrates extremely high variability which, in spite of extensive research, remains unexplained. It is well established that, when a visual target appears, oculomotor activity gradually builds up until a critical level is reached, at which point a saccade is triggered. Here, based on computational work and single-neuron recordings from monkey frontal eye field (FEF), we show that this rise-to-threshold process starts from a dynamic initial state that already contains other incipient, internally driven motor plans, which compete with the target-driven activity to varying degrees. The ensuing conflict resolution process, which manifests in subtle covariations between baseline activity, build-up rate, and threshold, consists of fundamentally deterministic interactions, and explains the observed RT distributions while invoking only a small amount of intrinsic randomness. PMID:29652247
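The rise-to-threshold account summarized in this abstract can be illustrated with a minimal simulation. Note one simplification: the study argues that RT variance largely reflects deterministic interactions with incipient motor plans, whereas the sketch below simply folds that variability into random draws over the baseline and build-up rate. All parameter values are hypothetical, not fitted to FEF data.

```python
import random

random.seed(1)

def simulate_rt(n_trials=1000):
    """Draw RTs from a linear rise-to-threshold process.

    On each trial, motor-related activity starts at a variable
    baseline (incipient motor plans raise the start point) and climbs
    at a variable build-up rate until it hits a fixed threshold; the
    crossing time plus a motor delay is the RT, in milliseconds.
    """
    threshold = 1.0
    motor_delay_ms = 60.0
    rts = []
    for _ in range(n_trials):
        baseline = random.uniform(0.0, 0.4)   # trial-to-trial initial state
        rate = max(random.gauss(0.005, 0.001), 1e-4)  # rise per ms, kept positive
        rts.append((threshold - baseline) / rate + motor_delay_ms)
    return rts
```

Even this crude version reproduces the qualitative signature in the abstract: trials with a higher baseline or steeper build-up rate cross threshold sooner, producing a right-skewed RT distribution.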
Phonological processing of ignored distractor pictures, an fMRI investigation.
Bles, Mart; Jansma, Bernadette M
2008-02-11
Neuroimaging studies of attention often focus on interactions between stimulus representations and top-down selection mechanisms in visual cortex. Less is known about the neural representation of distractor stimuli beyond visual areas, and the interactions between stimuli in linguistic processing areas. In the present study, participants viewed simultaneously presented line drawings at peripheral locations, while in the MRI scanner. The names of the objects depicted in these pictures were either phonologically related (i.e. shared the same consonant-vowel onset construction), or unrelated. Attention was directed either at the linguistic properties of one of these pictures, or at the fixation point (i.e. away from the pictures). Phonological representations of unattended pictures could be detected in the posterior superior temporal gyrus, the inferior frontal gyrus, and the insula. Under some circumstances, the name of ignored distractor pictures is retrieved by linguistic areas. This implies that selective attention to a specific location does not completely filter out the representations of distractor stimuli at early perceptual stages.
Tu, Joanna H; Foote, Katharina G; Lujan, Brandon J; Ratnam, Kavitha; Qin, Jia; Gorin, Michael B; Cunningham, Emmett T; Tuten, William S; Duncan, Jacque L; Roorda, Austin
2017-09-01
Confocal adaptive optics scanning laser ophthalmoscope (AOSLO) images provide a sensitive measure of cone structure. However, the relationship between structural findings of diminished cone reflectivity and visual function is unclear. We used fundus-referenced testing to evaluate visual function in regions of apparent cone loss identified using confocal AOSLO images. A patient diagnosed with acute bilateral foveolitis had spectral-domain optical coherence tomography (SD-OCT) (Spectralis HRA + OCT system [Heidelberg Engineering, Vista, CA, USA]) images indicating focal loss of the inner segment-outer segment junction band with an intact, but hyper-reflective, external limiting membrane. Five years after symptom onset, visual acuity had improved from 20/80 to 20/25, but the retinal appearance remained unchanged compared to 3 months after symptoms began. We performed structural assessments using SD-OCT, directional OCT (non-standard use of a prototype on loan from Carl Zeiss Meditec) and AOSLO (custom-built system). We also administered fundus-referenced functional tests in the region of apparent cone loss, including analysis of preferred retinal locus (PRL), AOSLO acuity, and microperimetry with tracking SLO (TSLO) (prototype system). To determine AOSLO-corrected visual acuity, the scanning laser was modulated with a tumbling E consistent with 20/30 visual acuity. Visual sensitivity was assessed in and around the lesion using TSLO microperimetry. Complete eye examination, including standard measures of best-corrected visual acuity, visual field tests, color fundus photos, and fundus auto-fluorescence were also performed. Despite a lack of visible cone profiles in the foveal lesion, fundus-referenced vision testing demonstrated visual function within the lesion consistent with cone function. The PRL was within the lesion of apparent cone loss at the fovea. 
AOSLO visual acuity tests were abnormal, but measurable: for trials in which the stimulus remained completely within the lesion, the subject got 48% correct, compared to 78% correct when the stimulus was outside the lesion. TSLO microperimetry revealed reduced, but detectable, sensitivity thresholds within the lesion. Fundus-referenced visual testing proved useful to identify functional cones despite apparent photoreceptor loss identified using AOSLO and SD-OCT. While AOSLO and SD-OCT appear to be sensitive for the detection of abnormal or absent photoreceptors, changes in photoreceptors that are identified with these imaging tools do not correlate completely with visual function in every patient. Fundus-referenced vision testing is a useful tool to indicate the presence of cones that may be amenable to recovery or response to experimental therapies despite not being visible on confocal AOSLO or SD-OCT images.
Affective Priming with Auditory Speech Stimuli
ERIC Educational Resources Information Center
Degner, Juliane
2011-01-01
Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In Experiment 2, stimulus onset asynchrony (SOA) was…
On the role of covarying functions in stimulus class formation and transfer of function.
Markham, Rebecca G; Markham, Michael R
2002-01-01
This experiment investigated whether directly trained covarying functions are necessary for stimulus class formation and transfer of function in humans. Initial class training was designed to establish two respondent-based stimulus classes by pairing two visual stimuli with shock and two other visual stimuli with no shock. Next, two operant discrimination functions were trained to one stimulus of each putative class. The no-shock group received the same training and testing in all phases, except no stimuli were ever paired with shock. The data indicated that skin conductance response conditioning did not occur for the shock groups or for the no-shock group. Tests showed transfer of the established discriminative functions, however, only for the shock groups, indicating the formation of two stimulus classes only for those participants who received respondent class training. The results suggest that transfer of function does not depend on first covarying the stimulus class functions. PMID:12507017
Bressler, David W; Fortenbaugh, Francesca C; Robertson, Lynn C; Silver, Michael A
2013-06-07
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. Copyright © 2013 Elsevier Ltd. All rights reserved.
Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry
2017-07-01
Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated whether, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, to facilitate the processing of visual stimuli occurring on the same side of space as the limb on which the nociceptive stimulus was applied. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to either hand in either side of space, had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally on one single hand, or bilaterally, on both hands simultaneously. Results show that, as compared to the bilateral condition, participants' judgments were biased to the advantage of the visual stimuli that occurred on the same side of space as the hand on which a unilateral, nociceptive stimulus was applied. This effect was present in a time window ranging from 200 to 600 ms, but importantly, biases increased with decreasing time interval. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.
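Biases in a temporal order judgment task of this kind are conventionally quantified as a shift in the point of subjective simultaneity (PSS), the SOA at which both orders are reported equally often. A minimal sketch of that analysis, with a fixed slope and entirely hypothetical numbers (the study's adaptive procedure is not reproduced here), might look like:

```python
import math

def fit_pss(soas_ms, p_first, slope=30.0):
    """Estimate the point of subjective simultaneity (PSS) by grid search.

    Fits a fixed-slope logistic curve, p = 1/(1 + exp(-(soa - pss)/slope)),
    to the proportion of trials on which the probed side was judged first,
    where positive SOA means that side's stimulus actually led. The PSS is
    the SOA at which p = 0.5; a nonzero PSS indicates an attentional bias.
    Least-squares over a 0.1 ms grid from -100 to +100 ms, for simplicity.
    """
    def sse(pss):
        return sum((1.0 / (1.0 + math.exp(-(s - pss) / slope)) - p) ** 2
                   for s, p in zip(soas_ms, p_first))
    return min((g / 10.0 for g in range(-1000, 1001)), key=sse)
```

With synthetic judgment proportions generated from a curve whose true PSS is 25 ms, the grid search recovers 25.0, i.e., the probed side would need a 25 ms head start to be perceived as simultaneous.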
Cook, Stephanie; Kokmotou, Katerina; Soto, Vicente; Wright, Hazel; Fallon, Nicholas; Thomas, Anna; Giesbrecht, Timo; Field, Matt; Stancak, Andrej
2018-04-13
Odours alter evaluations of concurrently presented visual stimuli, such as faces. Stimulus onset asynchrony (SOA) is known to affect evaluative priming in various sensory modalities. However, effects of SOA on odour priming of visual stimuli are not known. The present study aimed to analyse whether subjective and cortical activation changes during odour priming would vary as a function of SOA between odours and faces. Twenty-eight participants rated faces under pleasant, unpleasant, and no-odour conditions using visual analogue scales. In half of the trials, faces appeared one second after odour offset (SOA 1). In the other half of the trials, faces appeared during the odour pulse (SOA 2). EEG was recorded continuously using a 128-channel system, and event-related potentials (ERPs) to face stimuli were evaluated using statistical parametric mapping (SPM). Faces presented during unpleasant-odour stimulation were rated significantly less pleasant than the same faces presented one second after offset of the unpleasant odour. Scalp-time clusters in the late-positive-potential (LPP) time-range showed an interaction between odour and SOA effects, whereby activation was stronger for faces presented simultaneously with the unpleasant odour, compared to the same faces presented after odour offset. Our results highlight stronger unpleasant odour priming with simultaneous, compared to delayed, odour-face presentation. Such effects were represented in both behavioural and neural data. A greater cortical and subjective response during simultaneous presentation of faces and unpleasant odour may have an adaptive role, allowing a prompt and focused behavioural reaction to a concurrent stimulus when an aversive odour signals danger or unwanted social interaction. Copyright © 2018 Elsevier B.V. All rights reserved.
Temporally evolving gain mechanisms of attention in macaque area V4.
Sani, Ilaria; Santandrea, Elisa; Morrone, Maria Concetta; Chelazzi, Leonardo
2017-08-01
Cognitive attention and perceptual saliency jointly govern our interaction with the environment. Yet, we still lack a universally accepted account of the interplay between attention and luminance contrast, a fundamental dimension of saliency. We measured the attentional modulation of V4 neurons' contrast response functions (CRFs) in awake, behaving macaque monkeys and applied a new approach that emphasizes the temporal dynamics of cell responses. We found that attention modulates CRFs via different gain mechanisms during subsequent epochs of visually driven activity: an early contrast-gain, strongly dependent on prestimulus activity changes (baseline shift); a time-limited stimulus-dependent multiplicative modulation, reaching its maximal expression around 150 ms after stimulus onset; and a late resurgence of contrast-gain modulation. Attention produced comparable time-dependent attentional gain changes on cells heterogeneously coding contrast, supporting the notion that the same circuits mediate attention mechanisms in V4 regardless of the form of contrast selectivity expressed by the given neuron. Surprisingly, attention was also sometimes capable of inducing radical transformations in the shape of CRFs. These findings offer important insights into the mechanisms that underlie contrast coding and attention in primate visual cortex and a new perspective on their interplay, one in which time becomes a fundamental factor. NEW & NOTEWORTHY We offer an innovative perspective on the interplay between attention and luminance contrast in macaque area V4, one in which time becomes a fundamental factor. We place emphasis on the temporal dynamics of attentional effects, pioneering the notion that attention modulates contrast response functions of V4 neurons via the sequential engagement of distinct gain mechanisms. These findings advance understanding of attentional influences on visual processing and help reconcile divergent results in the literature. 
Copyright © 2017 the American Physiological Society.
Perceived duration decreases with increasing eccentricity.
Kliegl, Katrin M; Huckauf, Anke
2014-07-01
Previous studies examining the influence of stimulus location on temporal perception yield inhomogeneous and contradicting results. Therefore, the aim of the present study is to soundly examine the effect of stimulus eccentricity. In a series of five experiments, subjects compared the duration of foveal disks to disks presented at different retinal eccentricities on the horizontal meridian. The results show that the perceived duration of a visual stimulus declines with increasing eccentricity. The effect was replicated with various stimulus orders (Experiments 1-3), as well as with cortically magnified stimuli (Experiments 4-5), ruling out that the effect was merely caused by different cortical representation sizes. The apparent decreasing duration of stimuli with increasing eccentricity is discussed with respect to current models of time perception, the possible influence of visual attention and respective underlying physiological characteristics of the visual system. Copyright © 2014 Elsevier B.V. All rights reserved.
Response of anterior parietal cortex to cutaneous flutter versus vibration.
Tommerdahl, M; Delemos, K A; Whitsel, B L; Favorov, O V; Metz, C B
1999-07-01
The response of anesthetized squirrel monkey anterior parietal (SI) cortex to 25 or 200 Hz sinusoidal vertical skin displacement stimulation was studied using the method of optical intrinsic signal (OIS) imaging. Twenty-five-Hertz ("flutter") stimulation of a discrete skin site on either the hindlimb or forelimb for 3-30 s evoked a prominent increase in absorbance within cytoarchitectonic areas 3b and 1 in the contralateral hemisphere. This response was confined to those area 3b/1 regions occupied by neurons with a receptive field (RF) that includes the stimulated skin site. In contrast, same-site 200-Hz stimulation ("vibration") for 3-30 s evoked a decrease in absorbance in a much larger territory (most frequently involving areas 3b, 1, and area 3a, but in some subjects area 2 as well) than the region that undergoes an increase in absorbance during 25-Hz flutter stimulation. The increase in absorbance evoked by 25-Hz flutter developed quickly and remained relatively constant for as long as stimulation continued (stimulus duration never exceeded 30 s). At 1-3 s after stimulus onset, the response to 200-Hz stimulation, like the response to 25-Hz flutter, consisted of a localized increase in absorbance limited to the topographically appropriate region of area 3b and/or area 1. With continuing 200-Hz stimulation, however, the early response declined, and by 4-6 s after stimulus onset, it was replaced by a prominent and spatially extensive decrease in absorbance. The spike train responses of single quickly adapting (QA) neurons were recorded extracellularly during microelectrode penetrations that traverse the optically responding regions of areas 3b and 1. Onset of either 25- or 200-Hz stimulation at a site within the cutaneous RF of a QA neuron was accompanied by a substantial increase in mean spike firing rate. 
With continued 200-Hz stimulation, however, QA neuron mean firing rate declined rapidly (typically within 0.5-1.0 s) to a level below that recorded at the same time after onset of same-site 25-Hz stimulation. For some neurons, the mean firing rate after the initial 0.5-1 s of an exposure to 200-Hz stimulation of the RF decreased to a level below the level of background ("spontaneous") activity. The decline in both the stimulus-evoked increases in absorbance in areas 3b/1 and spike discharge activity of area 3b/1 neurons within only a few seconds of the onset of 200-Hz skin stimulation raised the possibility that the predominant effect of continuous 200-Hz stimulation for >3 s is inhibition of area 3b/1 QA neurons. This possibility was evaluated at the neuronal population level by comparing the intrinsic signal evoked in areas 3b/1 by 25-Hz skin stimulation to the intrinsic signal evoked by a same-site skin stimulus containing both 25- and 200-Hz sinusoidal components (a "complex waveform stimulus"). Such experiments revealed that the increase in absorbance evoked in areas 3b/1 by a stimulus having both 25- and 200-Hz components was substantially smaller (especially at times >3 s after stimulus onset) than the increase in absorbance evoked by "pure" 25-Hz stimulation of the same skin site. It is concluded that within a brief time (within 1-3 s) after stimulus onset, 200-Hz skin stimulation elicits a powerful inhibitory action on area 3b/1 QA neurons. The findings appear generally consistent with the suggestion that the activity of neurons in cortical regions other than areas 3b and 1 play the leading role in the processing of high-frequency (>/=200 Hz) vibrotactile stimuli.
Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury
NASA Astrophysics Data System (ADS)
Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.
2008-02-01
Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases like AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects were evaluated on spatial contrast sensitivity and a task of stimulus detection and aiming. The modeled occluding scotomas, of different size, were superimposed on the stimuli presented on the computer display, and were stabilized on the retina using a mono Purkinje Eye-Tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple-conjunction visual search display combining size (in visual angle), contrast, and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as reaction time (R.T.) measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect was dependent on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection time was slowed more in the complex-background than in the simple-background search situation. Detection speed was dependent on scotoma size and size of stimulus. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple-background than in the complex-background search situation. Both stimulus aiming R.T. and accuracy (precision targeting) were impaired, as a function of scotoma size and size of stimulus. The data can be explained by models distinguishing between saliency-based, parallel and serial search processes, guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
O'Connor, Constance M; Reddon, Adam R; Odetunde, Aderinsola; Jindal, Shagun; Balshine, Sigal
2015-12-01
Predation is one of the primary drivers of fitness for prey species. Therefore, there should be strong selection for accurate assessment of predation risk, and whenever possible, individuals should use all available information to fine-tune their response to the current threat of predation. Here, we used a controlled laboratory experiment to assess the responses of individual Neolamprologus pulcher, a social cichlid fish, to a live predator stimulus, to the odour of damaged conspecifics, or to both indicators of predation risk combined. We found that fish in the presence of the visual predator stimulus showed typical antipredator behaviour. Namely, these fish decreased activity and exploration, spent more time seeking shelter, and more time near conspecifics. Surprisingly, there was no effect of the chemical cue alone, and fish showed a reduced response to the combination of the visual predator stimulus and the odour of damaged conspecifics relative to the visual predator stimulus alone. These results demonstrate that N. pulcher adjust their anti-predator behaviour to the information available about current predation risk, and we suggest a possible role for the use of social information in the assessment of predation risk in a cooperatively breeding fish. Copyright © 2015. Published by Elsevier B.V.
Hales, J. B.; Brewer, J. B.
2018-01-01
Given the diversity of stimuli encountered in daily life, a variety of strategies must be used for learning new information. Relating and encoding visual and verbal stimuli into memory has been probed using various tasks and stimulus-types. Engagement of specific subsequent memory and cortical processing regions depends on the stimulus modality of studied material; however, it remains unclear whether different encoding strategies similarly influence regional activity when stimulus-type is held constant. In this study, subjects encoded object pairs using a visual or verbal associative strategy during functional magnetic resonance imaging (fMRI), and subsequent memory was assessed for pairs encoded under each strategy. Each strategy elicited distinct regional processing and subsequent memory effects: middle/superior frontal, lateral parietal, and lateral occipital for visually-associated pairs and inferior frontal, medial frontal, and medial occipital for verbally-associated pairs. This regional selectivity mimics the effects of stimulus modality, suggesting that cortical involvement in associative encoding is driven by strategy, and not simply by stimulus-type. The clinical relevance of these findings, probed in two patients with recent aphasic strokes, suggests that training with strategies utilizing unaffected cortical regions might improve memory ability in patients with brain damage. PMID:22390467
Memorable Audiovisual Narratives Synchronize Sensory and Supramodal Neural Responses
2016-01-01
Abstract Our brains integrate information across sensory modalities to generate perceptual experiences and form memories. However, it is difficult to determine the conditions under which multisensory stimulation will benefit or hinder the retrieval of everyday experiences. We hypothesized that the determining factor is the reliability of information processing during stimulus presentation, which can be measured through intersubject correlation of stimulus-evoked activity. We therefore presented biographical auditory narratives and visual animations to 72 human subjects visually, auditorily, or combined, while neural activity was recorded using electroencephalography. Memory for the narrated information, contained in the auditory stream, was tested 3 weeks later. While the visual stimulus alone led to no meaningful retrieval, this related stimulus improved memory when it was combined with the story, even when it was temporally incongruent with the audio. Further, individuals with better subsequent memory elicited neural responses during encoding that were more correlated with their peers. Surprisingly, portions of this predictive synchronized activity were present regardless of the sensory modality of the stimulus. These data suggest that the strength of sensory and supramodal activity is predictive of memory performance after 3 weeks, and that neural synchrony may explain the mnemonic benefit of the functionally uninformative visual context observed for these real-world stimuli. PMID:27844062
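The intersubject correlation measure used in this abstract can be sketched in a simplified one-against-average form: correlate each subject's response time course with the mean time course of the remaining subjects. This is only an illustration of the general idea; the study's actual EEG analysis (e.g., any component extraction applied before correlation) may differ.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def intersubject_correlation(responses):
    """One-against-average ISC.

    For each subject, correlate that subject's response time course
    with the average time course of all other subjects; higher values
    index more reliable (more synchronized) stimulus-evoked activity.
    `responses` is a list of equal-length numeric sequences, one per subject.
    """
    n = len(responses)
    iscs = []
    for i, resp in enumerate(responses):
        others = [responses[j] for j in range(n) if j != i]
        mean_other = [sum(col) / (n - 1) for col in zip(*others)]
        iscs.append(pearson(resp, mean_other))
    return iscs
```

Subjects whose time courses track the group closely get ISC values near 1; idiosyncratic responders score lower, which is the per-subject quantity the study relates to later memory.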
Leon-Carrion, Jose; Martín-Rodríguez, Juan Francisco; Damas-López, Jesús; Pourrezai, Kambiz; Izzetoglu, Kurtulus; Barroso Y Martin, Juan Manuel; Dominguez-Morales, M Rosario
2007-04-06
A fundamental question in human sexuality regards the neural substrate underlying sexually-arousing representations. Lesion and neuroimaging studies suggest that the dorsolateral prefrontal cortex (DLPFC) plays an important role in regulating the processing of visual sexual stimulation. The aim of this Functional Near-Infrared Spectroscopy (fNIRS) study was to explore DLPFC structures involved in the processing of erotic and non-sexual films. fNIRS was used to image the evoked-cerebral blood oxygenation (CBO) response in 15 male and 15 female subjects. Our hypothesis was that a sexual stimulus would produce DLPFC activation during the period of direct stimulus perception ("on" period), and that this activation would continue after stimulus cessation ("off" period). A new paradigm was used to measure the relative oxygenated hemoglobin (oxyHb) concentrations in DLPFC while subjects viewed the two selected stimuli (Roman orgy and a non-sexual film clip), and also immediately following stimulus cessation. Viewing of the non-sexual stimulus produced no overshoot in DLPFC, whereas exposure to the erotic stimulus produced a rapidly ascending overshoot, which became even more pronounced following stimulus cessation. We also report on gender differences in the timing and intensity of DLPFC activation in response to a sexually explicit visual stimulus. We found evidence indicating that men experience greater and more rapid sexual arousal when exposed to erotic stimuli than do women. Our results indicate that self-regulation of DLPFC activation is modulated by subjective arousal and that cognitive appraisal of the sexual stimulus (valence) plays a secondary role in this regulation.
Fischmeister, Florian Ph.S.; Leodolter, Ulrich; Windischberger, Christian; Kasess, Christian H.; Schöpf, Veronika; Moser, Ewald; Bauer, Herbert
2010-01-01
Throughout recent years there has been an increasing interest in studying unconscious visual processes. Such conditions of unawareness are typically achieved by either a sufficient reduction of the stimulus presentation time or visual masking. However, there are growing concerns about the reliability of the presentation devices used. As all these devices show great variability in presentation parameters, the processing of visual stimuli becomes dependent on the display-device, e.g. minimal changes in the physical stimulus properties may have an enormous impact on stimulus processing by the sensory system and on the actual experience of the stimulus. Here we present a custom-built three-way LC-shutter-tachistoscope which allows experimental setups with both, precise and reliable stimulus delivery, and millisecond resolution. This tachistoscope consists of three LCD-projectors equipped with zoom lenses to enable stimulus presentation via a built-in mirror-system onto a back projection screen from an adjacent room. Two high-speed liquid crystal shutters are mounted serially in front of each projector to control the stimulus duration. To verify the intended properties empirically, different sequences of presentation times were performed while changes in optical power were measured using a photoreceiver. The obtained results demonstrate that interfering variabilities in stimulus parameters and stimulus rendering are markedly reduced. Together with the possibility to collect external signals and to send trigger-signals to other devices, this tachistoscope represents a highly flexible and easy to set up research tool not only for the study of unconscious processing in the brain but for vision research in general. PMID:20122963
Overgaard, Morten; Lindeløv, Jonas; Svejstrup, Stinna; Døssing, Marianne; Hvid, Tanja; Kauffmann, Oliver; Mouridsen, Kim
2013-01-01
This paper reports an experiment intended to test a particular hypothesis derived from blindsight research, which we name the “source misidentification hypothesis.” According to this hypothesis, a subject may be correct about a stimulus without being correct about how she had access to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness Scale (PAS). We suggest that knowledge about perceptual modality may be a necessary precondition in order to issue correct reports of which stimulus was presented. Furthermore, we find that PAS ratings correlate with correctness, and that subjects are at chance level when reporting no conscious experience of the stimulus. To demonstrate that particular levels of reporting accuracy are obtained, we employ a statistical strategy, which operationally tests the hypothesis of non-equality, such that the usual rejection of the null-hypothesis admits the conclusion of equivalence. PMID:23508677
Nakajima, S
2000-03-14
Pigeons were trained with the A+, AB-, ABC+, AD- and ADE+ task where each of stimulus A and stimulus compounds ABC and ADE signalled food (positive trials), and each of stimulus compounds AB and AD signalled no food (negative trials). Stimuli A, B, C and E were small visual figures localised on a response key, and stimulus D was a white noise. Stimulus B was more effective than D as an inhibitor of responding to A during the training. After the birds learned to respond exclusively on the positive trials, effects of B and D on responding to C and E, respectively, were tested by comparing C, BC, E and DE trials. Stimulus B continuously facilitated responding to C on the BC test trials, but D's facilitative effect was observed only on the first DE test trial. Stimulus B also facilitated responding to E on BE test trials. Implications for the Rescorla-Wagner elemental model and the Pearce configural model of Pavlovian conditioning were discussed.
Shades of yellow: interactive effects of visual and odour cues in a pest beetle
Stevenson, Philip C.; Belmain, Steven R.
2016-01-01
Background: The visual ecology of pest insects is poorly studied compared to the role of odour cues in determining their behaviour. Furthermore, the combined effects of both odour and vision on insect orientation are frequently ignored, but could impact behavioural responses. Methods: A locomotion compensator was used to evaluate use of different visual stimuli by a major coleopteran pest of stored grains (Sitophilus zeamais), with and without the presence of host odours (known to be attractive to this species), in an open-loop setup. Results: Some visual stimuli—in particular, one shade of yellow, solid black and high-contrast black-against-white stimuli—elicited positive orientation behaviour from the beetles in the absence of odour stimuli. When host odours were also present, at 90° to the source of the visual stimulus, the beetles presented with yellow and vertical black-on-white grating patterns changed their walking course and typically adopted a path intermediate between the two stimuli. The beetles presented with a solid black-on-white target continued to orient more strongly towards the visual than the odour stimulus. Discussion: Visual stimuli can strongly influence orientation behaviour, even in species where use of visual cues is sometimes assumed to be unimportant, while the outcomes from exposure to multimodal stimuli are unpredictable and need to be determined under differing conditions. The importance of the two modalities of stimulus (visual and olfactory) in food location is likely to depend upon relative stimulus intensity and motivational state of the insect. PMID:27478707
Top-Down Beta Enhances Bottom-Up Gamma
Thompson, William H.
2017-01-01
Several recent studies have demonstrated that the bottom-up signaling of a visual stimulus is subserved by interareal gamma-band synchronization, whereas top-down influences are mediated by alpha-beta band synchronization. These processes may implement top-down control of stimulus processing if top-down and bottom-up mediating rhythms are coupled via cross-frequency interaction. To test this possibility, we investigated Granger-causal influences among awake macaque primary visual area V1, higher visual area V4, and parietal control area 7a during attentional task performance. Top-down 7a-to-V1 beta-band influences enhanced visually driven V1-to-V4 gamma-band influences. This enhancement was spatially specific and largest when beta-band activity preceded gamma-band activity by ∼0.1 s, suggesting a causal effect of top-down processes on bottom-up processes. We propose that this cross-frequency interaction mechanistically subserves the attentional control of stimulus selection. SIGNIFICANCE STATEMENT Contemporary research indicates that the alpha-beta frequency band underlies top-down control, whereas the gamma band mediates bottom-up stimulus processing. This arrangement inspires an attractive hypothesis, which posits that top-down beta-band influences directly modulate bottom-up gamma-band influences via cross-frequency interaction. We evaluate this hypothesis by determining that beta-band top-down influences from parietal area 7a to visual area V1 are correlated with bottom-up gamma-band influences from V1 to area V4, in a spatially specific manner, and that this correlation is maximal when top-down activity precedes bottom-up activity. These results show that for top-down processes such as spatial attention, elevated top-down beta-band influences directly enhance feedforward stimulus-induced gamma-band processing, leading to enhancement of the selected stimulus. PMID:28592697
Preattentive binding of auditory and visual stimulus features.
Winkler, István; Czigler, István; Sussman, Elyse; Horváth, János; Balázs, Lászlo
2005-02-01
We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended or task-irrelevant and ignored. In the latter case, the primary task was a continuous demanding within-modality task. The test sequences were composed of two frequently occurring stimuli, which differed from each other in two stimulus features (standard stimuli), and two infrequently occurring stimuli (deviants), which combined one feature from one standard stimulus with the other feature of the other standard stimulus. Deviant stimuli elicited MMN responses of similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities, (2) there exist preattentive processes of feature binding; however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.
Physical Features of Visual Images Affect Macaque Monkey’s Preference for These Images
Funahashi, Shintaro
2016-01-01
Animals exhibit different degrees of preference toward various visual stimuli. In addition, it has been shown that strongly preferred stimuli can often act as a reward. The aim of the present study was to determine what features determine the strength of the preference for visual stimuli in order to examine neural mechanisms of preference judgment. We used 50 color photographs obtained from the Flickr Material Database (FMD) as original stimuli. Four macaque monkeys performed a simple choice task, in which two stimuli selected randomly from among the 50 stimuli were simultaneously presented on a monitor and monkeys were required to choose either stimulus by eye movements. We considered that the monkeys preferred the chosen stimulus if it continued to look at the stimulus for an additional 6 s and calculated a choice ratio for each stimulus. Each monkey exhibited a different choice ratio for each of the original 50 stimuli. They tended to select clear, colorful and in-focus stimuli. Complexity and clarity were stronger determinants of preference than colorfulness. Images that included greater amounts of spatial frequency components were selected more frequently. These results indicate that particular physical features of the stimulus can affect the strength of a monkey’s preference and that the complexity, clarity and colorfulness of the stimulus are important determinants of this preference. Neurophysiological studies would be needed to examine whether these features of visual stimuli produce more activation in neurons that participate in this preference judgment. PMID:27853424
ERIC Educational Resources Information Center
Mullen, Stuart; Dixon, Mark R.; Belisle, Jordan; Stanley, Caleb
2017-01-01
The current study sought to evaluate the efficacy of a stimulus equivalence training procedure in establishing auditory-tactile-visual stimulus classes with 2 children with autism and developmental delays. Participants were exposed to vocal-tactile (A-B) and tactile-picture (B-C) conditional discrimination training and were tested for the…
Components of Attention Modulated by Temporal Expectation
ERIC Educational Resources Information Center
Sørensen, Thomas Alrik; Vangkilde, Signe; Bundesen, Claus
2015-01-01
By varying the probabilities that a stimulus would appear at particular times after the presentation of a cue and modeling the data by the theory of visual attention (Bundesen, 1990), Vangkilde, Coull, and Bundesen (2012) provided evidence that the speed of encoding a singly presented stimulus letter into visual short-term memory (VSTM) is…
Stimulus information contaminates summation tests of independent neural representations of features
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2002-01-01
Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli were chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not the combination across independent features.
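The equal-information logic behind this design can be illustrated with a toy ideal-observer calculation (an illustrative sketch only; it does not reproduce the study's actual stimuli or observer model). For statistically independent feature cues, an optimal observer's sensitivities combine in quadrature, so adding features normally adds stimulus information unless each component is rescaled to hold the combined d' constant:

```python
import math

def combined_dprime(feature_dprimes):
    # Ideal-observer sensitivity for independent cues: d' values add in quadrature
    return math.sqrt(sum(d ** 2 for d in feature_dprimes))

# One feature at d' = 1 vs. two features at d' = 1 each: the optimal observer
# improves simply because the stimulus carries more information.
one_feature = combined_dprime([1.0])
two_features = combined_dprime([1.0, 1.0])

# Equating stimulus information across feature counts means scaling each
# component so the combined d' matches the single-feature case.
matched = combined_dprime([1.0 / math.sqrt(2), 1.0 / math.sqrt(2)])
```

Under such matching, any residual improvement with more features could be attributed to combination across independent neural codes rather than to extra stimulus information, which is the confound the study controls for.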
Blur adaptation: contrast sensitivity changes and stimulus extent.
Venkataraman, Abinaya Priya; Winter, Simon; Unsbo, Peter; Lundström, Linda
2015-05-01
A prolonged exposure to foveal defocus is well known to affect visual functions in the fovea. However, the effects of peripheral blur adaptation on foveal vision, or vice versa, are still unclear. In this study, we therefore examined the changes in the contrast sensitivity function from baseline, following blur adaptation to small as well as laterally extended stimuli in four subjects. The small-field stimulus (7.5° visual field) was a 30-min video of forest scenery projected on a screen, and the large-field stimulus consisted of 7 tiles of the 7.5° stimulus stacked horizontally. Both stimuli were used for adaptation with optical blur (+2.00 D trial lens) as well as for clear control conditions. After small-field blur adaptation, foveal contrast sensitivity improved in the mid spatial frequency region. However, these changes neither spread to the periphery nor occurred for the large-field blur adaptation. To conclude, visual performance after adaptation is dependent on the lateral extent of the adaptation stimulus. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Pooresmaeili, Arezoo; Arrighi, Roberto; Biagi, Laura; Morrone, Maria Concetta
2016-01-01
In natural scenes, objects rarely occur in isolation but appear within a spatiotemporal context. Here, we show that the perceived size of a stimulus is significantly affected by the context of the scene: brief previous presentation of larger or smaller adapting stimuli at the same region of space changes the perceived size of a test stimulus, with larger adapting stimuli causing the test to appear smaller than veridical and vice versa. In a human fMRI study, we measured the blood oxygen level-dependent activation (BOLD) responses of the primary visual cortex (V1) to the contours of large-diameter stimuli and found that activation closely matched the perceptual rather than the retinal stimulus size: the activated area of V1 increased or decreased, depending on the size of the preceding stimulus. A model based on local inhibitory V1 mechanisms simulated the inward or outward shifts of the stimulus contours and hence the perceptual effects. Our findings suggest that area V1 is actively involved in reshaping our perception to match the short-term statistics of the visual scene. PMID:24089504
Response properties of ON-OFF retinal ganglion cells to high-order stimulus statistics.
Xiao, Lei; Gong, Han-Yan; Gong, Hai-Qing; Liang, Pei-Ji; Zhang, Pu-Ming
2014-10-17
Visual stimulus statistics are fundamental parameters that provide a reference for studying visual coding rules. In this study, multi-electrode extracellular recording experiments were designed and implemented on bullfrog retinal ganglion cells to explore how neural responses change with stimulus statistics. Changes in low-order stimulus statistics, such as intensity and contrast, were clearly reflected in the neuronal firing rate. However, it was difficult to distinguish changes in high-order statistics, such as skewness and kurtosis, based on the neuronal firing rate alone. The neuronal temporal filtering and sensitivity characteristics were therefore analyzed further. We observed that the peak-to-peak amplitude of the temporal filter and the neuronal sensitivity, obtained from either neuronal ON spikes or OFF spikes, could exhibit significant changes when the high-order stimulus statistics were changed. These results indicate that in the retina, the neuronal response properties may be reliable and powerful in carrying some complex and subtle visual information. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
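To make the low-order vs. high-order distinction concrete, here is a minimal sketch of how the relevant moments of a stimulus intensity sequence are computed (an illustration only; the sequences below are hypothetical, not the study's stimuli):

```python
import math

def stimulus_moments(xs):
    # Mean, variance, skewness, and excess kurtosis of an intensity sequence
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = math.sqrt(var)
    skew = sum(((x - mean) / sd) ** 3 for x in xs) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in xs) / n - 3.0  # excess kurtosis
    return mean, var, skew, kurt

# Two sequences can share mean and variance (low-order statistics)
# while differing in skewness (a high-order statistic).
symmetric = [-1.0, 0.0, 1.0, 0.0, -1.0, 1.0]  # symmetric: skewness is zero
skewed = [0.0, 0.0, 0.0, 3.0]                 # rare large values: positive skewness
```

Firing rate tracks the first two moments well; the study's point is that changes confined to the third and fourth moments show up instead in filter shape and sensitivity.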
Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka
2016-01-01
Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution. DOI: http://dx.doi.org/10.7554/eLife.13764.001 PMID:27490481
The contribution of perceptual factors and training on varying audiovisual integration capacity.
Wilbiks, Jonathan M P; Dyson, Benjamin J
2018-06-01
The suggestion that the capacity of audiovisual integration has an upper limit of 1 was challenged in 4 experiments using perceptual factors and training to enhance the binding of auditory and visual information. Participants were required to note a number of specific visual dot locations that changed in polarity when a critical auditory stimulus was presented, under relatively fast (200-ms stimulus onset asynchrony [SOA]) and slow (700-ms SOA) rates of presentation. In Experiment 1, transient cross-modal congruency between the brightness of the polarity change and the pitch of the auditory tone was manipulated. In Experiment 2, sustained chunking was enabled on certain trials by connecting varying dot locations with vertices. In Experiment 3, training was employed to determine whether capacity would increase through repeated experience with an intermediate presentation rate (450 ms). Estimates of audiovisual integration capacity (K) were larger than 1 during cross-modal congruency at slow presentation rates (Experiment 1), during perceptual chunking at slow and fast presentation rates (Experiment 2), and during an intermediate presentation rate posttraining (Experiment 3). Finally, Experiment 4 showed a linear increase in K using SOAs ranging from 100 to 600 ms, suggestive of quantitative rather than qualitative changes in the mechanisms of audiovisual integration as a function of presentation rate. The data compromise the suggestion that the capacity of audiovisual integration is limited to 1 and suggest that the ability to bind sounds to sights is contingent on individual and environmental factors. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
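The abstract does not spell out the estimator behind K, so as a hedged illustration only: a standard whole-report capacity estimate in this literature is Cowan's K, computed from hit and correct-rejection rates at a given set size. The function name and example values below are hypothetical, not taken from the paper:

```python
def cowan_k(set_size, hit_rate, correct_rejection_rate):
    # Cowan's K: estimated number of monitored items contributing to performance
    return set_size * (hit_rate + correct_rejection_rate - 1.0)

# Hypothetical example: 4 monitored dot locations, 75% hits, 75% correct rejections
k_estimate = cowan_k(4, 0.75, 0.75)  # 2.0 "items" of binding capacity
```

An estimate reliably above 1 under congruency, chunking, or training is the form of evidence the experiments report against a fixed audiovisual integration capacity of 1.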
Balz, Johanna; Roa Romero, Yadira; Keil, Julian; Krebber, Martin; Niedeggen, Michael; Gallinat, Jürgen; Senkowski, Daniel
2016-01-01
Recent behavioral and neuroimaging studies have suggested multisensory processing deficits in patients with schizophrenia (SCZ). Thus far, the neural mechanisms underlying these deficits are not well understood. Previous studies with unisensory stimulation have shown altered neural oscillations in SCZ. As such, altered oscillations could contribute to aberrant multisensory processing in this patient group. To test this assumption, we conducted an electroencephalography (EEG) study in 15 SCZ and 15 control participants in whom we examined neural oscillations and event-related potentials (ERPs) in the sound-induced flash illusion (SIFI). In the SIFI multiple auditory stimuli that are presented alongside a single visual stimulus can induce the illusory percept of multiple visual stimuli. In SCZ and control participants we compared ERPs and neural oscillations between trials that induced an illusion and trials that did not induce an illusion. On the behavioral level, SCZ (55.7%) and control participants (55.4%) did not significantly differ in illusion rates. The analysis of ERPs revealed diminished amplitudes and altered multisensory processing in SCZ compared to controls around 135 ms after stimulus onset. Moreover, the analysis of neural oscillations revealed altered 25–35 Hz power after 100 to 150 ms over occipital scalp for SCZ compared to controls. Our findings extend previous observations of aberrant neural oscillations in unisensory perception paradigms. They suggest that altered ERPs and altered occipital beta/gamma band power reflect aberrant multisensory processing in SCZ. PMID:27999553
A versatile stereoscopic visual display system for vestibular and oculomotor research.
Kramer, P D; Roberts, D C; Shelhamer, M; Zee, D S
1998-01-01
Testing of the vestibular system requires a vestibular stimulus (motion) and/or a visual stimulus. We have developed a versatile, low cost, stereoscopic visual display system, using "virtual reality" (VR) technology. The display system can produce images for each eye that correspond to targets at any virtual distance relative to the subject, and so require the appropriate ocular vergence. We elicited smooth pursuit, "stare" optokinetic nystagmus (OKN) and after-nystagmus (OKAN), vergence for targets at various distances, and short-term adaptation of the vestibulo-ocular reflex (VOR), using both conventional methods and the stereoscopic display. Pursuit, OKN, and OKAN were comparable with both methods. When used with a vestibular stimulus, VR induced appropriate adaptive changes of the phase and gain of the angular VOR. In addition, using the VR display system and a human linear acceleration sled, we adapted the phase of the linear VOR. The VR-based stimulus system not only offers an alternative to more cumbersome means of stimulating the visual system in vestibular experiments, it also can produce visual stimuli that would otherwise be impractical or impossible. Our techniques provide images without the latencies encountered in most VR systems. Its inherent versatility allows it to be useful in several different types of experiments, and because it is software driven it can be quickly adapted to provide a new stimulus. These two factors allow VR to provide considerable savings in time and money, as well as flexibility in developing experimental paradigms.
Attention Determines Contextual Enhancement versus Suppression in Human Primary Visual Cortex.
Flevaris, Anastasia V; Murray, Scott O
2015-09-02
Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus the exact same contextual stimulus arrangement can have completely opposite effects on neural responses: in some cases leading to orientation-tuned suppression and in other cases leading to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depends on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in the primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and underlying neural mechanisms are debated. Here we show that the response in V1 to a stimulus in the same context can be either suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well-established properties of visual cortical responses: response normalization and feature-based enhancement. Copyright © 2015 the authors.
Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex
Singer, Wolf; Maass, Wolfgang
2009-01-01
It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (≤∼20 ms), and persisted for several 100 ms beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs. PMID:20027205
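The finding that a simple linear readout recovered most of the extractable information can be illustrated with a nearest-centroid decoder over population rate vectors (a toy sketch on assumed data; the study itself applied support vector machines and linear classifiers to ensembles of around 100 neurons):

```python
import math

def train_centroids(labeled_rates):
    # labeled_rates: {stimulus_label: [rate_vector, ...]} -> mean vector per label
    centroids = {}
    for label, vectors in labeled_rates.items():
        n = len(vectors)
        centroids[label] = [sum(v[i] for v in vectors) / n
                            for i in range(len(vectors[0]))]
    return centroids

def decode(centroids, rates):
    # Assign a rate vector to the nearest centroid (a linear decision rule
    # for two classes with Euclidean distance)
    return min(centroids, key=lambda lab: math.dist(centroids[lab], rates))

# Hypothetical 3-neuron "ensemble" responding to two letter stimuli
training = {
    "A": [[10.0, 2.0, 1.0], [12.0, 1.0, 2.0]],
    "B": [[2.0, 9.0, 8.0], [1.0, 11.0, 9.0]],
}
centroids = train_centroids(training)
```

The study's point is that decoders of this simplicity, which any single downstream neuron could implement, extracted nearly as much stimulus information as nonlinear-kernel SVMs, including information about the preceding stimulus.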
The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.
Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng
2017-01-01
Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and have potential applications for brain-computer interface (BCI) systems. However, the mVEP waveform is heavily masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features to perform task recognition for BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information to improve mVEP BCI performance. The deep learning and compressed sensing approach can generate multi-modality features which effectively improve BCI performance, with approximately 3.5% accuracy improvement across all 11 subjects, and is more effective for those subjects with relatively poor performance when using the conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach has a higher classification accuracy and is more effective for subjects with relatively poor performance. According to the results, the deep learning and compressed sensing approach is more effective for extracting mVEP features to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP) and P300. Copyright © 2016 Elsevier B.V. All rights reserved.
Hinde, Stephen J; Smith, Tim J; Gilchrist, Iain D
2017-05-01
In the laboratory, the abrupt onset of a visual distractor can generate an involuntary orienting response: this robust oculomotor capture effect has been reported in a large number of studies (e.g. Ludwig & Gilchrist, 2002; Theeuwes, Kramer, Hahn, & Irwin, 1998), suggesting it may be a ubiquitous part of more natural visual behaviour. However, the visual stimuli used in these experiments have tended to be static and had none of the complexity and dynamism of more natural visual environments. In addition, the primary task in the laboratory (typically visual search) can be tedious, with participants losing interest, becoming stimulus-driven and more easily distracted. Both of these factors may have led to an overestimation of the extent to which oculomotor capture occurs and of the importance of this phenomenon in everyday visual behaviour. To address this issue, in the current series of studies we presented abrupt and highly salient visual distractors away from fixation while participants watched a film. No evidence of oculomotor capture was found. However, the distractor does affect fixation duration: we find an increase in fixation duration analogous to the remote distractor effect (Walker, Deubel, Schneider, & Findlay, 1997). These results suggest that during dynamic scene perception, the oculomotor system may be under far more top-down control than traditional laboratory-based tasks have previously suggested. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Temporal Dynamics of Visual Attention Measured with Event-Related Potentials
Kashiwase, Yoshiyuki; Matsumiya, Kazumichi; Kuriki, Ichiro; Shioiri, Satoshi
2013-01-01
How attentional modulation of brain activity determines behavioral performance has been one of the most important issues in cognitive neuroscience. This issue has been addressed by comparing the temporal relationship between attentional modulations of neural activity and of behavior. Our previous study measured the time course of attention with the amplitude and phase coherence of steady-state visual evoked potentials (SSVEPs) and found that the modulation latency of phase coherence, rather than that of amplitude, was consistent with the latency of behavioral performance. In this study, as a complementary report, we compared the time course of visual attention shifts measured by event-related potentials (ERPs) with that measured by a target detection task. We developed a novel technique to compare ERPs with behavioral results and analyzed the EEG data from our previous study. Two flickering stimuli at different frequencies were presented in the left and right visual hemifields, and a target or distracter pattern was presented randomly at various moments after an attention-cue presentation. The observers were asked to detect targets on the attended stimulus after the cue. We found that two ERP components, P300 and N2pc, were elicited by targets presented at the attended location. Time-course analyses revealed that attentional modulation of the P300 and N2pc amplitudes increased gradually until reaching a maximum and lasted at least 1.5 s after cue onset, similar to the temporal dynamics of behavioral performance. However, attentional modulation of these ERP components started later than that of behavioral performance. Rather, the time course of attentional modulation of behavioral performance was more closely associated with that of the concurrently recorded SSVEPs. These results suggest that the neural activities reflected by the SSVEPs, rather than by the P300 or N2pc, are the source of attentional modulation of behavioral performance. PMID:23976966
Intracellular responses of onset chopper neurons in the ventral cochlear nucleus to tones: evidence for dual-component processing.
Paolini, A G; Clark, G M
1999-05-01
The ventral cochlear nucleus (VCN) contains a heterogeneous collection of cell types reflecting the multiple processing tasks undertaken by this nucleus. This in vivo study in the rat used intracellular recordings and dye filling to examine membrane potential changes and firing characteristics of onset chopper (OC) neurons to acoustic stimulation (50 ms pure tones, 5 ms r/f time). Stable impalements were made from 15 OC neurons, 7 identified as multipolar cells. Neurons responded to characteristic frequency (CF) tones with sustained depolarization below spike threshold. With increasing stimulus intensity, the depolarization during the initial 10 ms of the response became peaked, and with further increases in intensity the peak became narrower. Onset spikes were generated during this initial depolarization. Tones presented below CF resulted in a broadening of this initial depolarizing component, with high stimulus intensities required to initiate onset spikes. This initial component was followed by a sustained depolarizing component lasting until stimulus cessation. The amplitude of the sustained depolarizing component was greatest when frequencies were presented at high intensities below CF, resulting in increased action potential firing during this period when compared with comparable high intensities at CF. During the presentation of tones at or above the high-frequency edge of a cell's response area, hyperpolarization was evident during the sustained component. The presence of hyperpolarization and the differences seen in the level of sustained depolarization during CF and off-CF tones suggest that changes in membrane responsiveness between the initial and sustained components may be attributed to polysynaptic inhibitory mechanisms.
The dual-component processing resulting from convergent auditory nerve excitation and polysynaptic inhibition enables OC neurons to respond in a unique fashion to intensity and frequency features contained within an acoustic stimulus.
Short-term memory for event duration: modality specificity and goal dependency.
Takahashi, Kohske; Watanabe, Katsumi
2012-11-01
Time perception is involved in various cognitive functions. This study investigated the characteristics of short-term memory for event duration by examining how the length of the retention period affects inter- and intramodal duration judgment. On each trial, a sample stimulus was followed by a comparison stimulus, after a variable delay period (0.5-5 s). The sample and comparison stimuli were presented in the visual or auditory modality. The participants determined whether the comparison stimulus was longer or shorter than the sample stimulus. The distortion pattern of subjective duration during the delay period depended on the sensory modality of the comparison stimulus but was not affected by that of the sample stimulus. When the comparison stimulus was visually presented, the retained duration of the sample stimulus was shortened as the delay period increased. In contrast, when the comparison stimulus was presented in the auditory modality, the delay period had little to no effect on the retained duration. Furthermore, when the participants did not know the sensory modality of the comparison stimulus beforehand, the effect of the delay period disappeared. These results suggest that the memory process for event duration is specific to sensory modality and that its performance is determined by the sensory modality in which the retained duration will subsequently be used.
An investigation of the spatial selectivity of the duration after-effect.
Maarseveen, Jim; Hogendoorn, Hinze; Verstraten, Frans A J; Paffen, Chris L E
2017-01-01
Adaptation to the duration of a visual stimulus causes the perceived duration of a subsequently presented stimulus with a slightly different duration to be skewed away from the adapted duration. This pattern of repulsion following adaptation is similar to that observed for other visual properties, such as orientation, and is considered evidence for the involvement of duration-selective mechanisms in duration encoding. Here, we investigated whether the encoding of duration - by duration-selective mechanisms - occurs early on in the visual processing hierarchy. To this end, we investigated the spatial specificity of the duration after-effect in two experiments. We measured the duration after-effect at adapter-test distances ranging between 0 and 15° of visual angle and for within- and between-hemifield presentations. We replicated the duration after-effect: the test stimulus was perceived to have a longer duration following adaptation to a shorter duration, and a shorter duration following adaptation to a longer duration. Importantly, this duration after-effect occurred at all measured distances, with no evidence for a decrease in the magnitude of the after-effect at larger distances or across hemifields. This shows that adaptation to duration does not result from adaptation occurring early on in the visual processing hierarchy. Instead, it seems likely that duration information is a high-level stimulus property that is encoded later on in the visual processing hierarchy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Early differential processing of material images: Evidence from ERP classification.
Wiebel, Christiane B; Valsecchi, Matteo; Gegenfurtner, Karl R
2014-06-24
Investigating the temporal dynamics of natural image processing using event-related potentials (ERPs) has a long tradition in object recognition research. In a classical Go-NoGo task two characteristic effects have been emphasized: an early task-independent category effect and a later task-dependent target effect. Here, we set out to use this well-established Go-NoGo paradigm to study the time course of material categorization. Material perception has attracted increasing interest in recent years, after its importance in natural viewing conditions was long ignored. In addition to analyzing standard ERPs, we conducted a single-trial ERP pattern analysis. To validate this procedure, we also measured ERPs in two object categories (people and animals). Our linear classification procedure was able to largely capture the overall pattern of results from the canonical analysis of the ERPs and even extend it. We replicate the known target effect (differential Go-NoGo potential at frontal sites) for the material images. Furthermore, we observe task-independent differential activity between the two material categories as early as 140 ms after stimulus onset. Using our linear classification approach, we show that material categories can be differentiated consistently based on the ERP pattern in single trials around 100 ms after stimulus onset, independent of the target-related status. This strengthens the idea of early differential visual processing of material categories independent of the task, probably due to differences in low-level image properties, and suggests pattern classification of ERP topographies as a strong instrument for investigating electrophysiological brain activity. © 2014 ARVO.
Simon Effect with and without Awareness of the Accessory Stimulus
ERIC Educational Resources Information Center
Treccani, Barbara; Umilta, Carlo; Tagliabue, Mariaelena
2006-01-01
The authors investigated whether a Simon effect could be observed in an accessory-stimulus Simon task when participants were unaware of the task-irrelevant accessory cue. In Experiment 1A a central visual target was accompanied by a suprathreshold visual lateral cue. A regular Simon effect (i.e., faster cue-response corresponding reaction times…
Nelson, D E; Takahashi, J S
1991-01-01
1. Light-induced phase shifts of the circadian rhythm of wheel-running activity were used to measure the photic sensitivity of a circadian pacemaker and the visual pathway that conveys light information to it in the golden hamster (Mesocricetus auratus). The sensitivity to stimulus irradiance and duration was assessed by measuring the magnitude of phase-shift responses to photic stimuli of different irradiance and duration. The visual sensitivity was also measured at three different phases of the circadian rhythm. 2. The stimulus-response curves measured at different circadian phases suggest that the maximum phase-shift is the only aspect of visual responsivity to change as a function of the circadian day. The half-saturation constants (sigma) for the stimulus-response curves are not significantly different over the three circadian phases tested. The photic sensitivity to irradiance (1/sigma) appears to remain constant over the circadian day. 3. The hamster circadian pacemaker and the photoreceptive system that subserves it are more sensitive to the irradiance of longer-duration stimuli than to irradiance of briefer stimuli. The system is maximally sensitive to the irradiance of stimuli of 300 s and longer in duration. A quantitative model is presented to explain the changes that occur in the stimulus-response curves as a function of photic stimulus duration. 4. The threshold for photic stimulation of the hamster circadian pacemaker is also quite high. The threshold irradiance (the minimum irradiance necessary to induce statistically significant responses) is approximately 10(11) photons cm-2 s-1 for optimal stimulus durations. This threshold is equivalent to a luminance at the cornea of 0.1 cd m-2. 5. We also measured the sensitivity of this visual pathway to the total number of photons in a stimulus. This system is maximally sensitive to photons in stimuli between 30 and 3600 s in duration. 
The maximum quantum efficiency of photic integration occurs in 300 s stimuli. 6. These results suggest that the visual pathways that convey light information to the mammalian circadian pacemaker possess several unique characteristics. These pathways are relatively insensitive to light irradiance and also integrate light inputs over relatively long durations. This visual system, therefore, possesses an optimal sensitivity of 'tuning' to total photons delivered in stimuli of several minutes in duration. Together these characteristics may make this visual system unresponsive to environmental 'noise' that would interfere with the entrainment of circadian rhythms to light-dark cycles. PMID:1895235
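The stimulus-response measurements above are naturally summarized by a saturating function of irradiance. The abstract reports only the half-saturation constant $\sigma$ and the maximum phase shift, so the Michaelis-Menten-type form below is an assumed illustration, not the authors' stated model:

```latex
\Delta\phi(I) \;=\; \frac{\Delta\phi_{\max}(\varphi)\; I}{I + \sigma(D)}
```

Here $\Delta\phi$ is the phase shift of the wheel-running rhythm, $I$ the stimulus irradiance, $\varphi$ the circadian phase, and $D$ the stimulus duration. Consistent with the abstract, only $\Delta\phi_{\max}$ varies with circadian phase; $\sigma$ is constant across phases, so photic sensitivity $1/\sigma$ does not change over the circadian day, while $\sigma$ does depend on stimulus duration, reaching its minimum (maximal sensitivity) for durations of about 300 s and longer.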
Tachistoscopic exposure and masking of real three-dimensional scenes
Pothier, Stephen; Philbeck, John; Chichka, David; Gajewski, Daniel A.
2010-01-01
Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking. PMID:19182129
NASA Astrophysics Data System (ADS)
Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin
2016-05-01
One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of the visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to the ‘complex’ visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain as the main part of the nervous system engaged in eye movements, we analyzed the recorded electroencephalogram (EEG) signal during fixation. We found a coupling between the fractality of the image, the EEG and the fixational eye movements. The capability observed in this research can be further investigated and applied to the treatment of different vision disorders.
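The coupling described above depends on how temporal fractality is quantified; the abstract does not name the estimator used. As an illustration, one standard choice for physiological time series is the Higuchi fractal dimension, sketched here in Python (a minimal version, assuming evenly sampled 1-D signals):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi (1988) fractal-dimension estimate of a 1-D signal.

    The mean curve length L(k) at time scale k follows L(k) ~ k**(-D);
    D is recovered as the slope of log L(k) versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                      # k interleaved subsequences
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            d = np.abs(np.diff(x[idx])).sum()
            # Higuchi normalisation of the curve length at scale k
            lengths.append(d * (n - 1) / (len(idx) - 1) / k / k)
        lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
    return slope

rng = np.random.default_rng(1)
fd_noise = higuchi_fd(rng.standard_normal(2000))                  # rough signal
fd_smooth = higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 2000)))   # smooth signal
```

White noise yields a dimension near 2 and a smooth sine near 1, so rougher (more complex) traces score higher, matching the sense in which the study compares the fractality of images, EEG and eye movements.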
Impaired temporal, not just spatial, resolution in amblyopia.
Spang, Karoline; Fahle, Manfred
2009-11-01
In amblyopia, neuronal deficits deteriorate spatial vision including visual acuity, possibly because of a lack of use-dependent fine-tuning of afferents to the visual cortex during infancy; but temporal processing may deteriorate as well. Temporal, rather than spatial, resolution was investigated in patients with amblyopia by means of a task based on time-defined figure-ground segregation. Patients had to indicate the quadrant of the visual field where a purely time-defined square appeared. The results showed a clear decrease in temporal resolution of patients' amblyopic eyes compared with the dominant eyes in this task. The extent of this decrease in figure-ground segregation based on time of motion onset only loosely correlated with the decrease in spatial resolution and spanned a smaller range than did the spatial loss. Control experiments with artificially induced blur in normal observers confirmed that the decrease in temporal resolution was not simply due to the acuity loss. Amblyopia thus not only decreases spatial resolution but also impairs temporal processing, such as time-based figure-ground segregation, even at high stimulus contrasts. This finding suggests that the realm of neuronal processes that may be disturbed in amblyopia is larger than originally thought.
Modulation of human extrastriate visual processing by selective attention to colours and words.
Nobre, A C; Allison, T; McCarthy, G
1998-07-01
The present study investigated the effect of visual selective attention upon neural processing within functionally specialized regions of the human extrastriate visual cortex. Field potentials were recorded directly from the inferior surface of the temporal lobes in subjects with epilepsy. The experimental task required subjects to focus attention on words from one of two competing texts. Words were presented individually and foveally. Texts were interleaved randomly and were distinguishable on the basis of word colour. Focal field potentials were evoked by words in the posterior part of the fusiform gyrus. Selective attention strongly modulated long-latency potentials evoked by words. The attention effect co-localized with word-related potentials in the posterior fusiform gyrus, and was independent of stimulus colour. The results demonstrated that stimuli receive differential processing within specialized regions of the extrastriate cortex as a function of attention. The late onset of the attention effect and its co-localization with letter string-related potentials but not with colour-related potentials recorded from nearby regions of the fusiform gyrus suggest that the attention effect is due to top-down influences from downstream regions involved in word processing.
Automatic facial mimicry in response to dynamic emotional stimuli in five-month-old infants.
Isomura, Tomoko; Nakano, Tamami
2016-12-14
Human adults automatically mimic others' emotional expressions, which is believed to contribute to sharing emotions with others. Although this behaviour appears fundamental to social reciprocity, little is known about its developmental process. Therefore, we examined whether infants show automatic facial mimicry in response to others' emotional expressions. Facial electromyographic activity over the corrugator supercilii (brow) and zygomaticus major (cheek) of four- to five-month-old infants was measured while they viewed dynamic clips presenting audiovisual, visual and auditory emotions. The audiovisual bimodal emotion stimuli were a display of a laughing/crying facial expression with an emotionally congruent vocalization, whereas the visual/auditory unimodal emotion stimuli displayed those emotional faces/vocalizations paired with a neutral vocalization/face, respectively. Increased activation of the corrugator supercilii muscle in response to audiovisual cries and the zygomaticus major in response to audiovisual laughter were observed between 500 and 1000 ms after stimulus onset, which clearly suggests rapid facial mimicry. By contrast, both visual and auditory unimodal emotion stimuli did not activate the infants' corresponding muscles. These results revealed that automatic facial mimicry is present as early as five months of age, when multimodal emotional information is present. © 2016 The Author(s).
Increased visual gamma power in schizoaffective bipolar disorder.
Brealy, J A; Shaw, A; Richardson, H; Singh, K D; Muthukumaraswamy, S D; Keedwell, P A
2015-03-01
Electroencephalography and magnetoencephalography (MEG) studies have identified alterations in gamma-band (30-80 Hz) cortical activity in schizophrenia and mood disorders, consistent with neural models of disturbed glutamate (and GABA) neuron influence over cortical pyramidal cells. Genetic evidence suggests specific deficits in GABA-A receptor function in schizoaffective bipolar disorder (SABP), a clinical syndrome with features of both bipolar disorder and schizophrenia. This study investigated gamma oscillations in this under-researched disorder. MEG was used to measure induced gamma and evoked responses to a visual grating stimulus, known to be a potent inducer of primary visual gamma oscillations, in 15 individuals with remitted SABP, defined using Research Diagnostic Criteria, and 22 age- and sex-matched healthy controls. Individuals with SABP demonstrated increased sustained visual cortical power in the gamma band (t(35) = -2.56, p = 0.015) compared to controls. There were no group differences in baseline gamma power, transient or sustained gamma frequency, alpha band responses or pattern onset visual-evoked responses. Gamma power is increased in remitted SABP, which reflects an abnormality in the cortical inhibitory-excitatory balance. Although an interaction between gamma power and medication cannot be ruled out, there were no group differences in evoked responses or baseline measures. Further work is needed in other clinical populations and at-risk relatives. Pharmaco-magnetoencephalography studies will help to elucidate the specific GABA and glutamate pathways affected.
Retinotopic maps and foveal suppression in the visual cortex of amblyopic adults.
Conner, Ian P; Odom, J Vernon; Schwartz, Terry L; Mendola, Janine D
2007-08-15
Amblyopia is a developmental visual disorder associated with loss of monocular acuity and sensitivity as well as profound alterations in binocular integration. Abnormal connections in visual cortex are known to underlie this loss, but the extent to which these abnormalities are regionally or retinotopically specific has not been fully determined. This functional magnetic resonance imaging (fMRI) study compared the retinotopic maps in visual cortex produced by each individual eye in 19 adults (7 esotropic strabismics, 6 anisometropes and 6 controls). In our standard viewing condition, the non-tested eye viewed a dichoptic homogeneous mid-level grey stimulus, thereby permitting some degree of binocular interaction. Regions-of-interest analysis was performed for extrafoveal V1, extrafoveal V2 and the foveal representation at the occipital pole. In general, the blood oxygenation level-dependent (BOLD) signal was reduced for the amblyopic eye. At the occipital pole, population receptive fields were shifted to represent more parafoveal locations for the amblyopic eye, compared with the fellow eye, in some subjects. Interestingly, occluding the fellow eye caused an expanded foveal representation for the amblyopic eye in one early-onset strabismic subject with binocular suppression, indicating real-time cortical remapping. In addition, a few subjects actually showed increased activity in parietal and temporal cortex when viewing with the amblyopic eye. We conclude that, even in a heterogeneous population, abnormal early visual experience commonly leads to regionally specific cortical adaptations.
Pfleger, B; Bonds, A B
1995-01-01
The influence of GABAA receptors on orientation selectivity of cat complex cells was tested by iontophoresis of the GABAA receptor blockers bicuculline and N-methyl-bicuculline while stimulating with drifting sinusoidal gratings. Reduction of orientation tuning was markedly less than reported in previous studies that used drifting bars as visual stimuli. Only 3/31 cells lost orientation selectivity, with an average increase in bandwidth of 33%, as opposed to half the cells losing selectivity and a bandwidth increase for the remainder of 47% as reported previously. Infusion of GABAA blockers revealed a prominent stimulus onset transient response, lasting about 120 ms, that showed a broadening of orientation selectivity comparable to that found using drifting bars under similar circumstances. We believe that drifting gratings emphasize a steady-state response component that retains, in the presence of GABAA blockers, significant orientation selectivity. Because the onset transient is initially unselective for orientation, we suggest that the steady-state, orientation-selective response component develops from an alternate inhibitory mechanism, possibly mediated by GABAB receptors.
Evaluation of an organic light-emitting diode display for precise visual stimulation.
Ito, Hiroyuki; Ogawa, Masaki; Sunaga, Shoji
2013-06-11
A new type of visual display for presentation of a visual stimulus with high quality was assessed. The characteristics of an organic light-emitting diode (OLED) display (Sony PVM-2541, 24.5 in.; Sony Corporation, Tokyo, Japan) were measured in detail from the viewpoint of its applicability to visual psychophysics. We found the new display to be superior to other display types in terms of spatial uniformity, color gamut, and contrast ratio. Changes in the intensity of luminance were sharper on the OLED display than those on a liquid crystal display. Therefore, such OLED displays could replace conventional cathode ray tube displays in vision research for high quality stimulus presentation. Benefits of using OLED displays in vision research were especially apparent in the fields of low-level vision, where precise control and description of the stimulus are needed, e.g., in mesopic or scotopic vision, color vision, and motion perception.
Emotional facilitation of sensory processing in the visual cortex.
Schupp, Harald T; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2003-01-01
A key function of emotion is the preparation for action. However, organization of successful behavioral strategies depends on efficient stimulus encoding. The present study tested the hypothesis that perceptual encoding in the visual cortex is modulated by the emotional significance of visual stimuli. Event-related brain potentials were measured while subjects viewed pleasant, neutral, and unpleasant pictures. Early selective encoding of pleasant and unpleasant images was associated with a posterior negativity, indicating primary sources of activation in the visual cortex. The study also replicated previous findings in that affective cues also elicited enlarged late positive potentials, indexing increased stimulus relevance at higher-order stages of stimulus processing. These results support the hypothesis that sensory encoding of affective stimuli is facilitated implicitly by natural selective attention. Thus, the affect system not only modulates motor output (i.e., favoring approach or avoidance dispositions), but already operates at an early level of sensory encoding.
High-resolution eye tracking using V1 neuron activity
McFarland, James M.; Bondy, Adrian G.; Cumming, Bruce G.; Butts, Daniel A.
2014-01-01
Studies of high-acuity visual cortical processing have been limited by the inability to track eye position with sufficient accuracy to precisely reconstruct the visual stimulus on the retina. As a result, studies on primary visual cortex (V1) have been performed almost entirely on neurons outside the high-resolution central portion of the visual field (the fovea). Here we describe a procedure for inferring eye position using multi-electrode array recordings from V1 coupled with nonlinear stimulus processing models. We show that this method can be used to infer eye position with one arc-minute accuracy – significantly better than conventional techniques. This allows for analysis of foveal stimulus processing, and provides a means to correct for eye-movement induced biases present even outside the fovea. This method could thus reveal critical insights into the role of eye movements in cortical coding, as well as their contribution to measures of cortical variability. PMID:25197783
Mendoza-Halliday, Diego; Martinez-Trujillo, Julio C.
2017-01-01
The primate lateral prefrontal cortex (LPFC) encodes visual stimulus features while they are perceived and while they are maintained in working memory. However, it remains unclear whether perceived and memorized features are encoded by the same or different neurons and population activity patterns. Here we record LPFC neuronal activity while monkeys perceive the motion direction of a stimulus that remains visually available, or memorize the direction if the stimulus disappears. We find neurons with a wide variety of combinations of coding strength for perceived and memorized directions: some neurons encode both to similar degrees while others preferentially or exclusively encode either one. Reading out the combined activity of all neurons, a machine-learning algorithm reliably decodes the motion direction and determines whether it is perceived or memorized. Our results indicate that a functionally diverse population of LPFC neurons provides a substrate for discriminating between perceptual and mnemonic representations of visual features. PMID:28569756
The stimulus-evoked population response in visual cortex of awake monkey is a propagating wave
Muller, Lyle; Reynaud, Alexandre; Chavane, Frédéric; Destexhe, Alain
2014-01-01
Propagating waves occur in many excitable media and were recently found in neural systems from retina to neocortex. While propagating waves are clearly present under anaesthesia, whether they also appear during awake and conscious states remains unclear. One possibility is that these waves are systematically missed in trial-averaged data, due to variability. Here we present a method for detecting propagating waves in noisy multichannel recordings. Applying this method to single-trial voltage-sensitive dye imaging data, we show that the stimulus-evoked population response in primary visual cortex of the awake monkey propagates as a travelling wave, with consistent dynamics across trials. A network model suggests that this reliability is the hallmark of the horizontal fibre network of superficial cortical layers. Propagating waves with similar properties occur independently in secondary visual cortex, but maintain precise phase relations with the waves in primary visual cortex. These results show that, in response to a visual stimulus, propagating waves are systematically evoked in several visual areas, generating a consistent spatiotemporal frame for further neuronal interactions. PMID:24770473
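The abstract does not spell out the detection method, which the paper develops for noisy single trials. A much cruder but related idea — regressing the per-channel response latency against cortical position, so that a consistent latency gradient indicates a propagating wave and its inverse slope gives the speed — can be sketched in Python (all variable names and the synthetic data are hypothetical, not the authors' algorithm):

```python
import numpy as np

def detect_wave(lfp, positions, fs):
    """Toy single-trial wave check: find the response-peak time on each
    channel and regress latency against cortical position.  A high
    position-latency correlation indicates propagation; the inverse of
    the regression slope is the wave speed."""
    peak_t = lfp.argmax(axis=1) / fs                 # per-channel peak latency (s)
    slope, intercept = np.polyfit(positions, peak_t, 1)
    pred = slope * positions + intercept
    r = np.corrcoef(pred, peak_t)[0, 1]              # goodness of linear fit
    speed = np.inf if slope == 0 else 1.0 / slope    # mm per second
    return speed, r

# Synthetic evoked response: a Gaussian bump sweeping across a linear
# 16-electrode array at 300 mm/s, plus recording noise.
fs, n_ch = 1000.0, 16
positions = np.arange(n_ch) * 0.4                    # electrode spacing, mm
t = np.arange(0, 0.3, 1 / fs)
rng = np.random.default_rng(2)
lfp = np.stack([np.exp(-((t - 0.05 - p / 300.0) ** 2) / (2 * 0.005 ** 2))
                for p in positions]) + 0.05 * rng.standard_normal((n_ch, len(t)))

speed, r = detect_wave(lfp, positions, fs)
```

On this synthetic trial the estimated speed recovers the simulated 300 mm/s (about 0.3 m/s, within the range typically reported for horizontal-fibre waves in superficial cortex), with a near-perfect position-latency fit; real voltage-sensitive dye data would require the phase-based, noise-robust machinery the paper introduces.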
ERIC Educational Resources Information Center
Mossbridge, Julia A.; Scissors, Beth N.; Wright, Beverly A.
2008-01-01
Normal auditory perception relies on accurate judgments about the temporal relationships between sounds. Previously, we used a perceptual-learning paradigm to investigate the neural substrates of two such relative-timing judgments made at sound onset: detecting stimulus asynchrony and discriminating stimulus order. Here, we conducted parallel…
Griffis, Joseph C.; Elkhetali, Abdurahman S.; Burge, Wesley K.; Chen, Richard H.; Visscher, Kristina M.
2015-01-01
Attention facilitates the processing of task-relevant visual information and suppresses interference from task-irrelevant information. Modulations of neural activity in visual cortex depend on attention, and likely result from signals originating in fronto-parietal and cingulo-opercular regions of cortex. Here, we tested the hypothesis that attentional facilitation of visual processing is accomplished in part by changes in how brain networks involved in attentional control interact with sectors of V1 that represent different retinal eccentricities. We measured the strength of background connectivity between fronto-parietal and cingulo-opercular regions and different eccentricity sectors in V1 using functional MRI data that were collected while participants performed tasks involving attention to either a centrally presented visual stimulus or a simultaneously presented auditory stimulus. We found that when the visual stimulus was attended, background connectivity between V1 and the left frontal eye fields (FEF), left intraparietal sulcus (IPS), and right IPS varied strongly across different eccentricity sectors in V1, so that foveal sectors were more strongly connected than peripheral sectors. This retinotopic gradient was weaker when the visual stimulus was ignored, indicating that it was driven by attentional effects. Greater task-driven differences between foveal and peripheral sectors in background connectivity to these regions were associated with better performance on the visual task and faster response times on correct trials. These findings are consistent with the notion that attention drives the configuration of task-specific functional pathways that enable the prioritized processing of task-relevant visual information, and show that the prioritization of visual information by attentional processes may be encoded in the retinotopic gradient of connectivity between V1 and fronto-parietal regions. PMID:26106320
Visual motion perception predicts driving hazard perception ability.
Lacherez, Philippe; Au, Sandra; Wood, Joanne M
2014-02-01
To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception including sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving, a measure that has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
Can responses to basic non-numerical visual features explain neural numerosity responses?
Harvey, Ben M; Dumoulin, Serge O
2017-04-01
Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs. Copyright © 2017 Elsevier Inc. All rights reserved.
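The model-comparison logic described in this abstract, predicting responses from numerosity versus a co-varying low-level feature and asking which better predicts observed activity, can be illustrated with a toy sketch. All numbers, the log-Gaussian tuning parameters, and the "dot area" feature here are hypothetical stand-ins, not the study's actual stimuli or fits:

```python
import math
import random

def corr(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def tuned_response(n, preferred=3.0, width=0.4):
    """Log-Gaussian numerosity tuning, as in pRF-style response models."""
    return math.exp(-((math.log(n) - math.log(preferred)) ** 2) / (2 * width ** 2))

numerosities = [1, 2, 3, 4, 5, 6, 7]
# A low-level feature (e.g., total dot area) that co-varies with numerosity.
dot_area = [0.5 * n for n in numerosities]

# Simulated "observed" responses from a numerosity-tuned population, plus noise.
rng = random.Random(0)
observed = [tuned_response(n) + rng.gauss(0, 0.02) for n in numerosities]

# Predictions of the two rival models.
pred_numerosity = [tuned_response(n) for n in numerosities]
pred_area = dot_area  # a monotonic response to the low-level feature

print(corr(observed, pred_numerosity))  # high: the tuned model fits
print(corr(observed, pred_area))        # low: the feature model does not
```

Because the simulated responses are tuned (peaked) rather than monotonic in numerosity, the numerosity model predicts them far better than the co-varying feature model, which mirrors the comparison the abstract describes.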
Exogenous attention can be counter-selective: onset cues disrupt sensitivity to color changes.
Müller-Plath, Gisela; Klöckner, Nils
2014-03-01
In peripheral spatial cueing paradigms, exogenous attentional capture is commonly observed after salient onset cues or with cues contingent on target characteristics. We proposed that exogenously captured attention disrupts selectivity to target features. We tested this by experimentally emulating a common everyday situation: while an observer monitors a stationary display for a change to occur, the onset of a salient stimulus (onset cue) or a change in a stationary stimulus similar to the expected one (contingent cue) has a distracting effect. As predicted, we found that both types of cues reduced target detection sensitivity but enhanced the bias to respond in a go/no-go paradigm. With the onset cue, the sensitivity loss was more pronounced on the side of the cue, whereas the contingent cue affected both sides alike. Moreover, the effects of the onset cue interacted with task difficulty: the more selectivity a task required, the more immune it was to disruption, but the more likely a response became. We concluded that onset capture disrupts selective attention by adding noise to the processing of the target location. The effects of contingent capture could be explained by cue-target confounding. Finally, we suggest a new model of attentional capture in which exogenous and endogenous components interact dynamically.
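The pattern this abstract reports, lower detection sensitivity together with a stronger bias to respond, corresponds to the standard signal-detection decomposition into d′ and criterion. A minimal sketch with made-up hit and false-alarm rates (not the study's data):

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Compute sensitivity (d') and response bias (criterion c)
    from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative rates: a distracting cue leaves hits high but raises
# false alarms, lowering d' while shifting the criterion liberally.
no_cue = sdt_measures(hit_rate=0.90, fa_rate=0.10)
cued = sdt_measures(hit_rate=0.90, fa_rate=0.30)
print(no_cue)  # higher d', criterion near zero
print(cued)    # lower d', negative (liberal) criterion
```

A drop in d′ combined with a more negative criterion is exactly the "reduced sensitivity, enhanced bias to respond" signature described above.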
Does bimodal stimulus presentation increase ERP components usable in BCIs?
NASA Astrophysics Data System (ADS)
Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Blankertz, Benjamin; Werkhoven, Peter J.
2012-08-01
Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, BCI performance (classification accuracies and bitrates) and participants’ task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile or visual-tactile stimuli. We observe that attending to visual-tactile (compared to either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared to visual stimuli, which is consistent with the nonsignificant negative trend of participants’ task performance. We discuss these findings in the light of affected spatial attention at high-level compared to low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.
Stimulus induced reset of 40-Hz auditory steady-state responses.
Ross, B; Herdman, A T; Pantev, C
2004-11-30
Auditory steady-state responses (ASSR) were evoked with 40-Hz amplitude-modulated 500-Hz tones. An additional impulse-like noise stimulus (2,000 +/- 500 Hz), with a spectrum clearly distinct from that of the AM sound, induced pronounced perturbations in the ASSR. The effect of the interfering noise was interpreted as (1) a reset of the ASSR because of a sudden loss in phase coherence, (2) a decrease in signal power immediately after presentation of the noise impulse, and (3) a modulation of ASSR amplitude and phase resembling the time course of the ASSR onset. The time course of the ASSR onset was interpreted as reflecting temporal integration over several hundred milliseconds. The reset of the ASSR was discussed as a powerful mechanism that allows for a fast reaction to a short stimulus change, overcoming the disadvantage of the ASSR's long integration time constant.
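Instantaneous amplitude and phase of a steady-state response like this are commonly quantified by complex demodulation: shift the 40-Hz component to DC and smooth. A minimal sketch on a synthetic signal (the sampling rate and one-cycle window are illustrative assumptions, not the study's analysis pipeline):

```python
import cmath
import math

def demodulate(signal, freq, fs):
    """Complex demodulation: mix the target frequency down to DC, then
    average over one cycle to estimate instantaneous (amplitude, phase)."""
    mixed = [s * cmath.exp(-2j * math.pi * freq * i / fs)
             for i, s in enumerate(signal)]
    w = int(fs / freq)  # one-cycle moving-average window
    out = []
    for i in range(len(mixed) - w):
        m = sum(mixed[i:i + w]) / w
        out.append((2 * abs(m), cmath.phase(m)))
    return out

fs = 1000.0  # assumed sampling rate, Hz
sig = [math.sin(2 * math.pi * 40 * t / fs) for t in range(500)]
amp, phase = demodulate(sig, 40.0, fs)[100]
print(round(amp, 2))  # recovers the 40-Hz amplitude of 1.0
```

A sudden jump in the phase trace (with amplitude briefly dipping) is the kind of perturbation the abstract interprets as a reset of the steady-state response.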
Role of somatosensory and vestibular cues in attenuating visually induced human postural sway
NASA Technical Reports Server (NTRS)
Peterka, R. J.; Benolken, M. S.
1995-01-01
The purpose of this study was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal subjects and subjects experiencing vestibular loss were nearly identical, implying (1) that normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) that subjects with vestibular loss did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system "gain" was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost 3 times greater than the amplitude of the visual stimulus in normal subjects and subjects with vestibular loss. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. 
For both subject groups, the amplitude of visually induced sway was smaller by a factor of about 4 in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied (1) that the subjects experiencing vestibular loss did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system "gain" were not used to compensate for a vestibular deficit, and (2) that the threshold for the use of vestibular cues in normal subjects was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.
De Loof, Esther; Van Opstal, Filip; Verguts, Tom
2016-04-01
Theories on visual awareness claim that predicted stimuli reach awareness faster than unpredicted ones. In the current study, we disentangle whether prior information about the upcoming stimulus affects visual awareness of stimulus location (i.e., individuation) by modulating processing efficiency or threshold setting. Analogous research on stimulus identification revealed that prior information modulates threshold setting. However, as identification and individuation are two functionally and neurally distinct processes, the mechanisms underlying identification cannot simply be extrapolated directly to individuation. The goal of this study was therefore to investigate how individuation is influenced by prior information about the upcoming stimulus. To do so, a drift diffusion model was fitted to estimate the processing efficiency and threshold setting for predicted versus unpredicted stimuli in a cued individuation paradigm. Participants were asked to locate a picture, following a cue that was congruent, incongruent or neutral with respect to the picture's identity. Pictures were individuated faster in the congruent and neutral condition compared to the incongruent condition. In the diffusion model analysis, the processing efficiency was not significantly different across conditions. However, the threshold setting was significantly higher following an incongruent cue compared to both congruent and neutral cues. Our results indicate that predictive information about the upcoming stimulus influences visual awareness by shifting the threshold for individuation rather than by enhancing processing efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
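The drift-diffusion account in this abstract, unchanged processing efficiency (drift rate) but a raised decision threshold after an incongruent cue, can be illustrated with a simple simulation. All parameter values are arbitrary and for illustration only:

```python
import random

def simulate_ddm(drift, threshold, noise=1.0, dt=0.001, t0=0.3, rng=None):
    """Simulate one trial of a drift-diffusion process: evidence starts
    at 0 and accumulates toward +/- threshold; returns (RT, correct)."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return t + t0, x > 0  # t0 = non-decision time (encoding + motor)

def mean_rt(drift, threshold, n=500, seed=1):
    """Mean simulated response time over n trials."""
    rng = random.Random(seed)
    return sum(simulate_ddm(drift, threshold, rng=rng)[0] for _ in range(n)) / n

# Raising the threshold (as after an incongruent cue) slows responses
# even when processing efficiency (the drift rate) is unchanged.
congruent = mean_rt(drift=1.5, threshold=1.0)
incongruent = mean_rt(drift=1.5, threshold=1.5)
print(congruent, incongruent)  # the higher-threshold condition is slower
```

This reproduces, in miniature, the qualitative result the abstract reports: slower individuation after incongruent cues explained by threshold setting rather than by drift rate.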
Valente, Andrea; Bürki, Audrey; Laganaro, Marina
2014-01-01
A major effort in the cognitive neuroscience of language is to define the temporal and spatial characteristics of the core cognitive processes involved in word production. One approach consists in studying the effects of linguistic and pre-linguistic variables in picture naming tasks. So far, studies have analyzed event-related potentials (ERPs) during word production by examining one or two variables with factorial designs. Here we extended this approach by simultaneously investigating the effects of multiple theoretically relevant predictors in a picture naming task. High-density EEG was recorded from 31 participants during overt naming of 100 pictures. ERPs were extracted on a trial-by-trial basis from picture onset to 100 ms before the onset of articulation. Mixed-effects regression models were fitted to examine which variables affected production latencies and the duration of periods of stable electrophysiological patterns (topographic maps). Results revealed an effect of a pre-linguistic variable, visual complexity, on an early period of stable electric field at the scalp, from 140 to 180 ms after picture presentation, a result consistent with the proposal that this time period is associated with visual object recognition processes. Three other variables, word Age of Acquisition, Name Agreement, and Image Agreement, influenced response latencies and modulated ERPs from ~380 ms to the end of the analyzed period. These results demonstrate that a topographic analysis fitted to single-trial ERPs and covering the entire processing period allows one to associate the cost generated by psycholinguistic variables with the duration of specific stable electrophysiological processes and to pinpoint the precise time course of multiple word production predictors at once.
Fortier-Gauthier, Ulysse; Moffat, Nicolas; Dell'Acqua, Roberto; McDonald, John J; Jolicœur, Pierre
2012-07-01
We studied brain activity during the retention and retrieval phases of two visual short-term memory (VSTM) experiments. Experiment 1 used a balanced memory array, with one color stimulus in each hemifield, followed by a retention interval and a central probe at the fixation point that designated the target stimulus in memory whose orientation was to be judged. Retrieval of information from VSTM was associated with an event-related lateralization (ERL) with a contralateral negativity relative to the visual field from which the probed stimulus was originally encoded, suggesting a lateralized organization of VSTM. The scalp distribution of the retrieval ERL was more anterior than what is usually associated with simple maintenance activity, which is consistent with the involvement of different brain structures for these distinct visual memory mechanisms. Experiment 2 was like Experiment 1, but used an unbalanced memory array consisting of one lateral color stimulus in a hemifield and one color stimulus on the vertical midline. This design enabled us to separate lateralized activity related to target retrieval from distractor processing. Target retrieval was found to generate a negative-going ERL at the electrode sites found in Experiment 1, suggesting representations were retrieved from anterior cortical structures. Distractor processing elicited a positive-going ERL at posterior electrode sites, which could be indicative of a return to baseline of retention activity for the discarded memory of the now-irrelevant stimulus, or of an active inhibition mechanism mediating distractor suppression. Copyright © 2012 Elsevier Ltd. All rights reserved.
Henriksson, Linda; Karvonen, Juha; Salminen-Vaparanta, Niina; Railo, Henry; Vanni, Simo
2012-01-01
The localization of visual areas in the human cortex is typically based on mapping the retinotopic organization with functional magnetic resonance imaging (fMRI). The most common approach is to encode the response phase for a slowly moving visual stimulus and to present the result on an individual's reconstructed cortical surface. The main aims of this study were to develop complementary general linear model (GLM)-based retinotopic mapping methods and to characterize the inter-individual variability of the visual area positions on the cortical surface. We studied 15 subjects with two methods: a 24-region multifocal checkerboard stimulus and a blocked presentation of object stimuli at different visual field locations. The retinotopic maps were based on weighted averaging of the GLM parameter estimates for the stimulus regions. In addition to localizing visual areas, both methods could be used to localize multiple retinotopic regions-of-interest. The two methods yielded consistent retinotopic maps in the visual areas V1, V2, V3, hV4, and V3AB. In the higher-level areas IPS0, VO1, LO1, LO2, TO1, and TO2, retinotopy could only be mapped with the blocked stimulus presentation. The gradual widening of spatial tuning and an increase in the responses to stimuli in the ipsilateral visual field along the hierarchy of visual areas likely reflected the increase in the average receptive field size. Finally, after registration to Freesurfer's surface-based atlas of the human cerebral cortex, we calculated the mean and variability of the visual area positions in the spherical surface-based coordinate system and generated probability maps of the visual areas on the average cortical surface. The inter-individual variability in the area locations decreased when the midpoints were calculated along the spherical cortical surface compared with volumetric coordinates. 
These results can facilitate both analysis of individual functional anatomy and comparisons of visual cortex topology across studies. PMID:22590626
The Characteristics and Limits of Rapid Visual Categorization
Fabre-Thorpe, Michèle
2011-01-01
Visual categorization appears both effortless and virtually instantaneous. The study by Thorpe et al. (1996) was the first to estimate the processing time necessary to perform fast visual categorization of animals in briefly flashed (20 ms) natural photographs. They observed a large differential EEG activity between target and distracter correct trials that developed from 150 ms after stimulus onset, a value that was later shown to be even shorter in monkeys! With such strong processing time constraints, it was difficult to escape the conclusion that rapid visual categorization relied on massively parallel, essentially feed-forward processing of visual information. Since 1996, we have conducted a large number of studies to determine the characteristics and limits of fast visual categorization. The present chapter reviews some of the main results obtained. I will argue that rapid object categorizations in natural scenes can be done without focused attention and are most likely based on coarse and unconscious visual representations activated with the first available (magnocellular) visual information. Fast visual processing proved efficient for the categorization of large superordinate object or scene categories, but shows its limits when more detailed basic representations are required. The representations for basic objects (dogs, cars) or scenes (mountain or sea landscapes) need additional processing time to be activated. This finding is at odds with the widely accepted idea that such basic representations are at the entry level of the system. Interestingly, focused attention is still not required to perform these time-consuming basic categorizations. Finally, we will show that object and context processing can interact very early in an ascending wave of visual information processing. We will discuss how such data could result from our experience with a highly structured and predictable surrounding world that shaped neuronal visual selectivity. PMID:22007180
Wilbiks, Jonathan M. P.; Dyson, Benjamin J.
2016-01-01
Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus. PMID:27977790
The effect of visual salience on memory-based choices.
Pooresmaeili, Arezoo; Bach, Dominik R; Dolan, Raymond J
2014-02-01
Deciding whether a stimulus is the "same" as or "different" from a previously presented one involves integrating incoming sensory information, working memory, and perceptual decision making. Visual selective attention plays a crucial role in selecting the relevant information that informs a subsequent course of action. Previous studies have mainly investigated the role of visual attention during the encoding phase of working memory tasks. In this study, we investigate whether manipulation of bottom-up attention by changing stimulus visual salience impacts on later stages of memory-based decisions. In two experiments, we asked subjects to identify whether a stimulus had the same or a different feature relative to a memorized sample. We manipulated the visual salience of the test stimuli by varying a task-irrelevant feature contrast. Subjects chose a visually salient item more often when they looked for matching features and less often when they looked for a nonmatch. This pattern of results indicates that salient items are more likely to be identified as a match. We interpret the findings in terms of capacity limitations at a comparison stage, where a visually salient item is more likely to exhaust resources, leading it to be prematurely parsed as a match.
Ellingson, Roger M; Oken, Barry
2010-01-01
This report contains the design overview and key performance measurements demonstrating the feasibility of generating and recording ambulatory visual stimulus evoked potentials using the previously reported custom Complementary and Alternative Medicine physiologic data collection and monitoring system, CAMAS. The methods used to generate visual stimuli on a PDA device and the design of an optical coupling device that converts the display to an electrical waveform recorded by the CAMAS base unit are presented. The optical sensor signal, synchronized to the visual stimulus, emulates the brain's synchronized EEG signal input to CAMAS, normally reviewed for the evoked potential response. Most importantly, the PDA also sends a marker message over the wireless Bluetooth connection to the CAMAS base unit, synchronized to the visual stimulus, which is the critical averaging reference component needed to obtain VEP results. Results show that the variance in the latency of the wireless marker messaging link is consistent enough to support the generation and recording of visual evoked potentials. The averaged sensor waveforms at multiple CPU speeds are presented and demonstrate the suitability of the Bluetooth interface for portable ambulatory visual evoked potential implementation on our CAMAS platform.
Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias M.; Petro, Nathan M.; Bradley, Margaret M.; Keil, Andreas
2015-01-01
Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene’s physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. PMID:25640949
Todd, J Jay; Fougnie, Daryl; Marois, René
2005-12-01
The right temporo-parietal junction (TPJ) is critical for stimulus-driven attention and visual awareness. Here we show that as the visual short-term memory (VSTM) load of a task increases, activity in this region is increasingly suppressed. Correspondingly, increasing VSTM load impairs the ability of subjects to consciously detect the presence of a novel, unexpected object in the visual field. These results not only demonstrate that VSTM load suppresses TPJ activity and induces inattentional blindness, but also offer a plausible neural mechanism for this perceptual deficit: suppression of the stimulus-driven attentional network.
Metzger, Brian A; Mathewson, Kyle E; Tapia, Evelina; Fabiani, Monica; Gratton, Gabriele; Beck, Diane M
2017-06-01
Research on the neural correlates of consciousness (NCC) has implicated an assortment of brain regions, ERP components, and network properties associated with visual awareness. Recently, the P3b ERP component has emerged as a leading NCC candidate. However, typical P3b paradigms depend on the detection of some stimulus change, making it difficult to separate brain processes elicited by the stimulus itself from those associated with updates or changes in visual awareness. Here we used binocular rivalry to ask whether the P3b is associated with changes in awareness even in the absence of changes in the object of awareness. We recorded ERPs during a probe-mediated binocular rivalry paradigm in which brief probes were presented over the image in either the suppressed or dominant eye to determine whether the elicited P3b activity is probe or reversal related. We found that the timing of P3b (but not its amplitude) was closely related to the timing of the report of a perceptual change rather than to the onset of the probe. This is consistent with the proposal that P3b indexes updates in conscious awareness, rather than being related to stimulus processing per se. Conversely, the probe-related P1 amplitude (but not its latency) was associated with reversal latency, suggesting that the degree to which the probe is processed increases the likelihood of a fast perceptual reversal. Finally, the response-locked P3b amplitude (but not its latency) was associated with the duration of an intermediate stage between reversals in which parts of both percepts coexist (piecemeal period). Together, the data suggest that the P3b reflects an update in consciousness and that the intensity of that process (as indexed by P3b amplitude) predicts how immediate that update is.
Donohue, Sarah E; Todisco, Alexandra E; Woldorff, Marty G
2013-04-01
Neuroimaging work on multisensory conflict suggests that the relevant modality receives enhanced processing in the face of incongruency. However, the degree of stimulus processing in the irrelevant modality and the temporal cascade of the attentional modulations in either the relevant or irrelevant modalities are unknown. Here, we employed an audiovisual conflict paradigm with a sensory probe in the task-irrelevant modality (vision) to gauge the attentional allocation to that modality. ERPs were recorded as participants attended to and discriminated spoken auditory letters while ignoring simultaneous bilateral visual letter stimuli that were either fully congruent, fully incongruent, or partially incongruent (one side incongruent, one congruent) with the auditory stimulation. Half of the audiovisual letter stimuli were followed 500-700 msec later by a bilateral visual probe stimulus. As expected, ERPs to the audiovisual stimuli showed an incongruency ERP effect (fully incongruent versus fully congruent) of an enhanced, centrally distributed, negative-polarity wave starting ∼250 msec. More critically here, the sensory ERP components to the visual probes were larger when they followed fully incongruent versus fully congruent multisensory stimuli, with these enhancements greatest on fully incongruent trials with the slowest RTs. In addition, on the slowest-response partially incongruent trials, the P2 sensory component to the visual probes was larger contralateral to the preceding incongruent visual stimulus. These data suggest that, in response to conflicting multisensory stimulus input, the initial cognitive effect is a capture of attention by the incongruent irrelevant-modality input, pulling neural processing resources toward that modality, resulting in rapid enhancement, rather than rapid suppression, of that input.
Expectation and Surprise Determine Neural Population Responses in the Ventral Visual Stream
Egner, Tobias; Monti, Jim M.; Summerfield, Christopher
2014-01-01
Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogeneous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se. PMID:21147999
Swalve, Natashia; Barrett, Scott T.; Bevins, Rick A.; Li, Ming
2015-01-01
Nicotine is a widely-abused drug, yet its primary reinforcing effect does not seem as potent as that of other stimulants such as cocaine. Recent research on the contributing factors toward chronic use of nicotine-containing products has implicated the role of reinforcement-enhancing effects of nicotine. The present study investigates whether phencyclidine (PCP) may also possess a reinforcement-enhancement effect and how this may interact with the reinforcement-enhancement effect of nicotine. PCP was tested for two reasons: 1) it produces discrepant results on overall reward, similar to those seen with nicotine, and 2) it may elucidate how other compounds may interact with the reinforcement-enhancement of nicotine. Adult male Sprague-Dawley rats were trained to lever press for brief visual stimulus presentations under fixed-ratio (FR) schedules of reinforcement and then were tested with nicotine (0.2 or 0.4 mg/kg) and/or PCP (2.0 mg/kg) over six increasing FR values. A selective increase in active lever-pressing for the visual stimulus with drug treatment was considered evidence of a reinforcement-enhancement effect. PCP and nicotine separately increased active lever pressing for a visual stimulus in a dose-dependent manner and across the different FR schedules. The addition of PCP to nicotine did not increase lever-pressing for the visual stimulus, possibly due to a ceiling effect. The effect of PCP may be driven largely by its locomotor stimulant effects, whereas the effect of nicotine was independent of locomotor stimulation. This dissociation emphasizes that distinct pharmacological properties contribute to the reinforcement-enhancement effects of substances. PMID:26026783
Conversion of Phase Information into a Spike-Count Code by Bursting Neurons
Samengo, Inés; Montemurro, Marcelo A.
2010-01-01
Single neurons in the cerebral cortex are immersed in a fluctuating electric field, the local field potential (LFP), which mainly originates from synchronous synaptic input into the local neural neighborhood. As shown by recent studies in visual and auditory cortices, the angular phase of the LFP at the time of spike generation adds significant extra information about the external world, beyond the one contained in the firing rate alone. However, no biologically plausible mechanism has yet been suggested that allows downstream neurons to infer the phase of the LFP at the soma of their pre-synaptic afferents. Therefore, so far there is no evidence that the nervous system can process phase information. Here we study a model of a bursting pyramidal neuron, driven by a time-dependent stimulus. We show that the number of spikes per burst varies systematically with the phase of the fluctuating input at the time of burst onset. The mapping between input phase and number of spikes per burst is a robust response feature for a broad range of stimulus statistics. Our results suggest that cortical bursting neurons could play a crucial role in translating LFP phase information into an easily decodable spike count code. PMID:20300632
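The phase-to-spike-count code described above can be caricatured in a few lines. The sketch below is purely illustrative and not the authors' biophysical model: it assumes a hypothetical monotonic mapping from LFP phase at burst onset to a 1-5 spike count, and shows how a downstream reader could recover a coarse phase bin from the spike count alone, without access to the LFP.

```python
import numpy as np

N_BINS = 5  # hypothetical resolution: 1..5 spikes per burst

def spikes_per_burst(phase):
    # Toy encoder: map the LFP phase at burst onset (in [-pi, pi)) to a
    # spike count, monotonically, in N_BINS steps. Illustrative only.
    frac = (phase + np.pi) / (2 * np.pi)
    return 1 + int(N_BINS * frac)

def phase_bin(count):
    # Toy decoder: from the spike count alone, recover the phase bin the
    # encoder must have seen -- the "easily decodable spike count code".
    lo = -np.pi + (count - 1) * 2 * np.pi / N_BINS
    return lo, lo + 2 * np.pi / N_BINS

lo, hi = phase_bin(spikes_per_burst(1.0))  # a 4-spike burst -> bin (0.63, 1.89)
```

The real mapping reported in the paper is of course probabilistic and stimulus-dependent; the point of the sketch is only that a count code can carry phase information downstream.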
Coffman, Marika C.; Trubanova, Andrea; Richey, J. Anthony; White, Susan W.; Kim-Spoon, Jungmeen; Ollendick, Thomas H.; Pine, Daniel S.
2016-01-01
Attention to faces is a fundamental psychological process in humans, with atypical attention to faces noted across several clinical disorders. Although many clinical disorders onset in adolescence, there is a lack of well-validated stimulus sets containing adolescent faces available for experimental use. Further, the images comprising most available sets are not controlled for high- and low-level visual properties. Here, we present a cross-site validation of the National Institute of Mental Health Child Emotional Faces Picture Set (NIMH-ChEFS), comprised of 257 photographs of adolescent faces displaying angry, fearful, happy, sad, and neutral expressions. All of the direct facial images from the NIMH-ChEFS set were adjusted in terms of location of facial features and standardized for luminance, size, and smoothness. Although overall agreement between raters in this study and the original development-site raters was high (89.52%), this differed by group such that agreement was lower for adolescents relative to mental health professionals in the current study. These results suggest that future research using this face set or others of adolescent/child faces should base comparisons on similarly-aged validation data. PMID:26359940
Reward priming eliminates color-driven affect in perception.
Hu, Kesong
2018-01-03
Brain and behavior evidence suggests that colors have distinct affective properties. Here, we investigated how reward influences color-driven affect in perception. In Experiment 1, we assessed competition between blue and red patches during a temporal-order judgment (TOJ) across a range of stimulus onset asynchronies (SOAs). During value reinforcement, reward was linked to either blue (version 1) or red (version 2). The same stimuli then served as test stimuli in a subsequent unrewarded, unspeeded TOJ task. Our analysis showed that blue patches were consistently seen as occurring first, even when they objectively appeared second at short SOAs. This accelerated perception of blue over red was disrupted by prior reward-related (vs. neutral) primes but not by perceptual (blue vs. red) primes. Experiment 2 replicated the findings of Experiment 1 while uncoupling action and stimulus values. These results are consistent with the blue-approach and red-avoidance motivation hypothesis and highlight the active nature of the association between reward priming and color processing. Together, the present study implies a link between reward and color affect and contributes to the understanding of how reward influences color affect in visual processing.
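TOJ data of the kind collected here are commonly summarized by fitting a psychometric function across SOAs and reading off the point of subjective simultaneity (PSS); a PSS shifted toward one stimulus quantifies the "seen first" advantage. The data values and grid-search fit below are a hypothetical sketch, not the study's analysis.

```python
import numpy as np

def logistic(soa, pss, slope):
    # P("blue first") as a function of SOA (ms); pss = point of subjective
    # simultaneity, slope = temporal sensitivity. Hypothetical model form.
    return 1.0 / (1.0 + np.exp(-(soa - pss) / slope))

# Made-up responses: soa > 0 means blue physically led red by that many ms.
soas = np.array([-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0])
p_blue_first = np.array([0.05, 0.12, 0.30, 0.62, 0.82, 0.93, 0.98])

# Least-squares grid search over (pss, slope); avoids an optimizer dependency.
sse, pss, slope = min(
    (float(np.sum((logistic(soas, p, s) - p_blue_first) ** 2)), p, s)
    for p in np.linspace(-60, 60, 241)
    for s in np.linspace(5, 60, 111))

# A negative PSS means blue is judged "first" even when it did not lead,
# i.e., the kind of perceptual advantage for blue the abstract reports.
```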
Haut, Kristen M.; van Erp, Theo G. M.; Knowlton, Barbara; Bearden, Carrie E.; Subotnik, Kenneth; Ventura, Joseph; Nuechterlein, Keith H.; Cannon, Tyrone D.
2014-01-01
Patients with and at risk for psychosis may have difficulty using associative strategies to facilitate episodic memory encoding and recall. In parallel studies, patients with first-episode schizophrenia (n = 27) and high psychosis risk (n = 28) compared with control participants (n = 22 and n = 20, respectively) underwent functional MRI during a remember-know memory task. Psychophysiological interaction analyses, using medial temporal lobe (MTL) structures as regions of interest, were conducted to measure functional connectivity patterns supporting successful episodic memory. During encoding, patients with first-episode schizophrenia demonstrated reduced functional coupling between MTL regions and regions involved in stimulus representations, stimulus selection, and cognitive control. Relative to control participants and patients with high psychosis risk who did not convert to psychosis, patients with high psychosis risk who later converted to psychosis also demonstrated reduced connectivity between MTL regions and auditory-verbal and visual-association regions. These results suggest that episodic memory deficits in schizophrenia are related to inefficient recruitment of cortical connections involved in associative memory formation; such deficits precede the onset of psychosis among those individuals at high clinical risk. PMID:25750836
The phase of prestimulus alpha oscillations affects tactile perception.
Ai, Lei; Ro, Tony
2014-03-01
Previous studies have shown that neural oscillations in the 8- to 12-Hz range influence sensory perception. In the current study, we examined whether both the power and phase of these mu/alpha oscillations predict successful conscious tactile perception. Near-threshold tactile stimuli were applied to the left hand while electroencephalographic (EEG) activity was recorded over the contralateral right somatosensory cortex. We found a significant inverted U-shaped relationship between prestimulus mu/alpha power and detection rate, suggesting that there is an intermediate level of alpha power that is optimal for tactile perception. We also found a significant difference in phase angle concentration at stimulus onset that predicted whether the upcoming tactile stimulus was perceived or missed. As has been shown in the visual system, these findings suggest that these mu/alpha oscillations measured over somatosensory areas exert a strong inhibitory control on tactile perception and that pulsed inhibition by these oscillations shapes the state of brain activity necessary for conscious perception. They further suggest that these common phasic processing mechanisms across different sensory modalities and brain regions may reflect a common underlying encoding principle in perceptual processing that leads to momentary windows of perceptual awareness.
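Prestimulus phase analyses of this kind typically extract the analytic signal of the EEG and test whether the phases at stimulus onset are concentrated across trials (a phase-locking value, PLV). The sketch below runs on simulated 10 Hz data; the sampling rate, trial counts, and noise level are assumptions, not the study's pipeline.

```python
import numpy as np

def analytic_signal(x):
    # FFT-based analytic signal (a discrete Hilbert transform):
    # keep DC, double positive frequencies, zero out negative ones.
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

def phase_locking(trials, onset_idx):
    # Inter-trial phase concentration (PLV) at the stimulus-onset sample:
    # 1 = identical phase on every trial, near 0 = scattered phases.
    phases = np.array([np.angle(analytic_signal(t)[onset_idx]) for t in trials])
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(1)
t = np.arange(250) / 250.0                        # 1 s of prestimulus data at 250 Hz
locked = [np.sin(2 * np.pi * 10 * t) + 0.05 * rng.standard_normal(250)
          for _ in range(30)]                     # same 10 Hz phase on every trial
scattered = [np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
             for _ in range(40)]                  # random phase on every trial

plv_locked = phase_locking(locked, onset_idx=249)
plv_scattered = phase_locking(scattered, onset_idx=249)
```

Comparing PLV between hit and miss trials, as in the study, would then amount to computing `phase_locking` separately for the two trial sets.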
10-Month-Olds Visually Anticipate an Outcome Contingent on Their Own Action
ERIC Educational Resources Information Center
Kenward, Ben
2010-01-01
It is known that young infants can learn to perform an action that elicits a reinforcer, and that they can visually anticipate a predictable stimulus by looking at its location before it begins. Here, in an investigation of the display of these abilities in tandem, I report that 10-month-olds anticipate a reward stimulus that they generate through…
Wang, Dong-Yuan Debbie; Richard, F Dan; Ray, Brittany
2016-01-01
The stimulus-response correspondence (SRC) effect refers to advantages in performance when stimulus and response correspond in dimensions or features, even if the common features are irrelevant to the task. Previous research indicated that the SRC effect depends on the temporal course of stimulus information processing. The current study investigated how the temporal overlap between relevant and irrelevant stimulus processing influences the SRC effect. In this experiment, the irrelevant stimulus (a previously associated tone) preceded the relevant stimulus (a coloured rectangle). The onset asynchrony between the irrelevant and relevant stimuli was varied to manipulate the temporal overlap between irrelevant and relevant stimulus processing. Results indicated that the SRC effect size varied as a quadratic function of the temporal overlap between the relevant and irrelevant stimuli. This finding extends previous observations that the SRC effect size varies as an increasing or decreasing function of reaction time. The current study demonstrated a quadratic relationship between effect size and temporal overlap.
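A quadratic dependence of effect size on temporal overlap can be summarized with an ordinary second-order polynomial fit, whose vertex gives the overlap at which the effect peaks. The effect sizes below are made-up illustrative numbers, not the study's data.

```python
import numpy as np

# Hypothetical SRC effect sizes (ms) at six irrelevant-relevant onset
# asynchronies (ms); values chosen only to illustrate the fit.
soa = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
effect = np.array([12.0, 28.0, 36.0, 34.0, 24.0, 8.0])

a, b, c = np.polyfit(soa, effect, deg=2)  # effect ~ a*soa**2 + b*soa + c
peak_soa = -b / (2.0 * a)                 # overlap at which the effect peaks
```

A negative leading coefficient `a` confirms the inverted-U shape; `peak_soa` locates its maximum.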
A method for real-time visual stimulus selection in the study of cortical object perception.
Leeds, Daniel D; Tarr, Michael J
2016-06-01
The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm3 brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; 3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond.
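The real-time search is described only at a high level in the abstract. As a rough illustration of the general idea (adaptively choosing the next stimulus to maximize a noisy measured response), the sketch below runs an epsilon-greedy search over a simulated 1-D stimulus space. The tuning curve, noise level, and search rule are all assumptions, not the authors' protocol.

```python
import numpy as np

rng = np.random.default_rng(7)

def bold_response(x):
    # Simulated regional response: a tuning peak at x = 0.7 in a 1-D
    # stimulus space, plus noise standing in for scan-to-scan variability.
    # Both the tuning curve and the noise level are hypothetical.
    return float(np.exp(-((x - 0.7) ** 2) / 0.02) + 0.03 * rng.standard_normal())

best_x, best_r = 0.5, -np.inf
for trial in range(100):
    if rng.uniform() < 0.5:
        cand = float(rng.uniform(0.0, 1.0))       # explore the space broadly
    else:                                          # refine near the current best
        cand = float(np.clip(best_x + 0.15 * rng.uniform(-1.0, 1.0), 0.0, 1.0))
    r = bold_response(cand)                        # "acquire" one measurement
    if r > best_r:                                 # keep the best stimulus so far
        best_x, best_r = cand, r
```

In the actual protocol the "space" is a high-dimensional feature embedding of object images and each measurement is a BOLD estimate, but the explore/refine trade-off is the same.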
Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J
2013-12-01
Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual only and combined conditions, time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio and visual only conditions but did not differ from controls in the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification.
Aural, visual, and pictorial stimulus formats in false recall.
Beauchamp, Heather M
2002-12-01
The present investigation is an initial simultaneous examination of the influence of three stimulus formats on false memories. Several pilot tests were conducted to develop new category associate stimulus lists. 73 women and 26 men (M age=21.1 yr.) were in one of three conditions: they either heard words, were shown words, or were shown pictures highly related to critical nonpresented items. As expected, recall of critical nonpresented stimuli was significantly greater for aural lists than for visually presented words and pictorial images. These findings demonstrate that the accuracy of memory is influenced by the format of the information encoded.
Visual and proprioceptive interaction in patients with bilateral vestibular loss☆
Cutfield, Nicholas J.; Scott, Gregory; Waldman, Adam D.; Sharp, David J.; Bronstein, Adolfo M.
2014-01-01
Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. 
The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients. PMID:25061564
Norman, J Farley; Phillips, Flip; Holmin, Jessica S; Norman, Hideko F; Beers, Amanda M; Boswell, Alexandria M; Cheeseman, Jacob R; Stethen, Angela G; Ronning, Cecilia
2012-10-01
A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.
Rademaker, Rosanne L; van de Ven, Vincent G; Tong, Frank; Sack, Alexander T
2017-01-01
Neuroimaging studies have demonstrated that activity patterns in early visual areas predict stimulus properties actively maintained in visual working memory. Yet, the mechanisms by which such information is represented remain largely unknown. In this study, observers remembered the orientations of 4 briefly presented gratings, one in each quadrant of the visual field. A 10Hz Transcranial Magnetic Stimulation (TMS) triplet was applied directly at stimulus offset, or midway through a 2-second delay, targeting early visual cortex corresponding retinotopically to a sample item in the lower hemifield. Memory for one of the four gratings was probed at random, and participants reported this orientation via method of adjustment. Recall errors were smaller when the visual field location targeted by TMS overlapped with that of the cued memory item, compared to errors for stimuli probed diagonally to TMS. This implied topographic storage of orientation information, and a memory-enhancing effect at the targeted location. Furthermore, early pulses impaired performance at all four locations, compared to late pulses. Next, response errors were fit empirically using a mixture model to characterize memory precision and guess rates. Memory was more precise for items proximal to the pulse location, irrespective of pulse timing. Guesses were more probable with early TMS pulses, regardless of stimulus location. Thus, while TMS administered at the offset of the stimulus array might disrupt early-phase consolidation in a non-topographic manner, TMS also boosts the precise representation of an item at its targeted retinotopic location, possibly by increasing attentional resources or by injecting a beneficial amount of noise.
Van Ombergen, Angelique; Lubeck, Astrid J; Van Rompaey, Vincent; Maes, Leen K; Stins, John F; Van de Heyning, Paul H; Wuyts, Floris L; Bos, Jelte E
2016-01-01
Vestibular patients occasionally report aggravation or triggering of their symptoms by visual stimuli, which is called visual vestibular mismatch (VVM). These patients therefore experience discomfort, disorientation, dizziness and postural unsteadiness. Firstly, we aimed to gain better insight into the underlying mechanism of VVM by examining perceptual and postural symptoms. Secondly, we wanted to investigate whether roll-motion is a necessary trait to evoke these symptoms or whether a complex but stationary visual pattern provokes them equally. Nine VVM patients and a matched healthy control group were examined by exposing both groups to a stationary stimulus as well as an optokinetic stimulus rotating around the naso-occipital axis for a prolonged period of time. Subjective visual vertical (SVV) measurements, posturography and relevant questionnaires were assessed. No significant differences between the groups were found for SVV measurements. Patients always swayed more and reported more symptoms than healthy controls. In both patients and controls, prolonged exposure to roll-motion increased postural sway and symptoms. However, only VVM patients reported significantly more symptoms after prolonged exposure to the optokinetic stimulus than after exposure to a stationary stimulus. VVM patients differ from healthy controls in postural and subjective symptoms, and motion is a crucial factor in provoking these symptoms. A possible explanation could be a central visual-vestibular integration deficit, which has implications for diagnostics and clinical rehabilitation. Future research should focus on the underlying central mechanism of VVM and the effectiveness of optokinetic stimulation in resolving it.
Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.
Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale
2015-10-01
Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched NH listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.
Modulation of visual physiology by behavioral state in monkeys, mice, and flies.
Maimon, Gaby
2011-08-01
When a monkey attends to a visual stimulus, neurons in visual cortex respond differently to that stimulus than when the monkey attends elsewhere. In the 25 years since the initial discovery, the study of attention in primates has been central to understanding flexible visual processing. Recent experiments demonstrate that visual neurons in mice and fruit flies are modulated by locomotor behaviors, like running and flying, in a manner that resembles attention-based modulations in primates. The similar findings across species argue for a more generalized view of state-dependent sensory processing and for a renewed dialogue among vertebrate and invertebrate research communities. Copyright © 2011 Elsevier Ltd. All rights reserved.
Bressler, David W.; Silver, Michael A.
2010-01-01
Spatial attention improves visual perception and increases the amplitude of neural responses in visual cortex. In addition, spatial attention tasks and fMRI have been used to discover topographic visual field representations in regions outside visual cortex. We therefore hypothesized that requiring subjects to attend to a retinotopic mapping stimulus would facilitate the characterization of visual field representations in a number of cortical areas. In our study, subjects attended either a central fixation point or a wedge-shaped stimulus that rotated about the fixation point. Response reliability was assessed by computing coherence between the fMRI time series and a sinusoid with the same frequency as the rotating wedge stimulus. When subjects attended to the rotating wedge instead of ignoring it, the reliability of retinotopic mapping signals increased by approximately 50% in early visual cortical areas (V1, V2, V3, V3A/B, V4) and ventral occipital cortex (VO1) and by approximately 75% in lateral occipital (LO1, LO2) and posterior parietal (IPS0, IPS1 and IPS2) cortical areas. Additionally, one 5-minute run of retinotopic mapping in the attention-to-wedge condition produced responses as reliable as the average of three to five (early visual cortex) or more than five (lateral occipital, ventral occipital, and posterior parietal cortex) attention-to-fixation runs. These results demonstrate that allocating attention to the retinotopic mapping stimulus substantially reduces the amount of scanning time needed to determine the visual field representations in occipital and parietal topographic cortical areas. Attention significantly increased response reliability in every cortical area we examined and may therefore be a general mechanism for improving the fidelity of neural representations of sensory stimuli at multiple levels of the cortical processing hierarchy. PMID:20600961
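The reliability measure used in the retinotopic mapping study above is the standard coherence metric: the amplitude of the Fourier component at the stimulus (wedge-rotation) frequency, divided by the root of the summed power across all non-DC frequencies. The following sketch assumes simulated data of our own making; the run length, cycle count, and noise level are illustrative, not the study's.

```python
import numpy as np

def coherence(ts, stim_bin):
    """Amplitude at the stimulus frequency bin over root total (non-DC) power."""
    amp = np.abs(np.fft.rfft(ts - ts.mean()))
    return amp[stim_bin] / np.sqrt(np.sum(amp[1:] ** 2))

n_trs, cycles = 120, 6          # e.g. a wedge completing 6 rotations per run
t = np.arange(n_trs)
signal = np.sin(2 * np.pi * cycles * t / n_trs)   # idealized voxel response
rng = np.random.default_rng(0)
noisy = signal + rng.normal(0.0, 1.0, n_trs)      # response buried in noise

c_clean = coherence(signal, cycles)   # near 1 for a pure sinusoid
c_noisy = coherence(noisy, cycles)    # lower, but still well above chance
```

A coherence near 1 means the time series is dominated by modulation at the wedge frequency; attention increasing coherence by ~50-75%, as reported, means fewer runs are needed to reach a given reliability.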
Qin, Pengmin; Duncan, Niall W; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J; Northoff, Georg
2012-01-01
Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies used discrete visual stimuli; however, the change from closed to open eyes also represents a simple visual stimulus, and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABA(A) receptors, in the changes in brain activity between the eyes-closed (EC) and eyes-open (EO) states, in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modeling of the haemodynamic response, followed by longer periods of EC and EO to allow the measurement of functional connectivity. The same subjects also underwent [(18)F]Flumazenil PET to measure GABA(A) receptor binding potentials. The local-to-global ratio of GABA(A) receptor binding potential in the visual cortex predicted the degree of change in neural activity from EC to EO. The same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABA(A) receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.
Friedrich, Andrea M; Clement, Tricia S; Zentall, Thomas R
2005-08-01
Clement, Feltus, Kaiser, and Zentall (2000) found that when pigeons have to work to obtain a discriminative stimulus that is followed by reinforcement, they prefer a discriminative stimulus that requires greater effort over one that requires less effort. The authors suggested that such a preference results from the greater change in hedonic value that occurs between the more aversive event and the onset of the stimulus that signals reinforcement, a contrast effect. It was hypothesized that any stimulus that follows a relatively more aversive event would be preferred over a stimulus that follows a relatively less aversive event. In the present experiment, the authors tested the counterintuitive prediction of that theory, that pigeons should prefer a discriminative stimulus that follows the absence of reinforcement over a discriminative stimulus that follows reinforcement. Results supported the theory.
Opposite Influence of Perceptual Memory on Initial and Prolonged Perception of Sensory Ambiguity
de Jong, Maartje Cathelijne; Knapen, Tomas; van Ee, Raymond
2012-01-01
Observers continually make unconscious inferences about the state of the world based on ambiguous sensory information. This process of perceptual decision-making may be optimized by learning from experience. We investigated the influence of previous perceptual experience on the interpretation of ambiguous visual information. Observers were pre-exposed to a perceptually stabilized sequence of an ambiguous structure-from-motion stimulus by means of intermittent presentation. At the subsequent re-appearance of the same ambiguous stimulus, perception was initially biased toward the previously stabilized perceptual interpretation. However, prolonged viewing revealed a bias toward the alternative perceptual interpretation. The prevalence of the alternative percept during ongoing viewing was largely due to increased durations of this percept, as there was no reliable decrease in the durations of the pre-exposed percept. Moreover, the duration of the alternative percept was modulated by the specific characteristics of the pre-exposure, whereas the durations of the pre-exposed percept were not. The increase in duration of the alternative percept was larger when the pre-exposure had lasted longer, and was larger after ambiguous pre-exposure than after unambiguous pre-exposure. Using a binocular rivalry stimulus we found analogous perceptual biases, while pre-exposure did not affect eye-bias. We conclude that previously perceived interpretations dominate at the onset of ambiguous sensory information, whereas alternative interpretations dominate during prolonged viewing. Thus, at first, ambiguous information seems to be judged using familiar percepts, while later re-evaluation allows for alternative interpretations. PMID:22295095
ERIC Educational Resources Information Center
Nokia, Miriam S.; Waselius, Tomi; Mikkonen, Jarno E.; Wikgren, Jan; Penttonen, Markku
2015-01-01
Hippocampal θ (3-12 Hz) oscillations are implicated in learning and memory, but their functional role remains unclear. We studied the effect of the phase of the local θ oscillation on hippocampal responses to a neutral conditioned stimulus (CS) and subsequent learning of classical trace eyeblink conditioning in adult rabbits. High-amplitude, regular…
Gap Detection in School-Age Children and Adults: Center Frequency and Ramp Duration
ERIC Educational Resources Information Center
Buss, Emily; Porter, Heather L.; Hall, Joseph W., III; Grose, John H.
2017-01-01
Purpose: The age at which gap detection becomes adultlike differs, depending on the stimulus characteristics. The present study evaluated whether the developmental trajectory differs as a function of stimulus frequency region or duration of the onset and offset ramps bounding the gap. Method: Thresholds were obtained for wideband noise (500-4500…
A pilot study of a novel smartphone application for the estimation of sleep onset.
Scott, Hannah; Lack, Leon; Lovato, Nicole
2018-02-01
The aim of the study was to investigate the accuracy of Sleep On Cue, a novel iPhone application that uses behavioural responses to auditory stimuli to estimate sleep onset. Twelve young adults underwent polysomnography recording while simultaneously using Sleep On Cue. Participants completed as many sleep-onset trials as possible within a 2-h period following their normal bedtime. On each trial, participants were awoken by the app following behavioural sleep onset. Then, after a short period of wakefulness, they commenced the next trial. There was a high degree of correspondence between polysomnography-determined sleep onset and Sleep On Cue behavioural sleep onset, r = 0.79, P < 0.001. On average, Sleep On Cue overestimated sleep-onset latency by 3.17 min (SD = 3.04). When polysomnography sleep onset was defined as the beginning of N2 sleep, the discrepancy was reduced considerably (M = 0.81, SD = 1.96). The discrepancy between polysomnography and Sleep On Cue varied between individuals, potentially due to variations in auditory stimulus intensity. Further research is required to determine whether modifications to the stimulus intensity and behavioural response could improve the accuracy of the app. Nonetheless, Sleep On Cue is a viable option for estimating sleep onset and may be used to administer Intensive Sleep Retraining or facilitate power naps in the home environment. © 2017 European Sleep Research Society.
Toward FRP-Based Brain-Machine Interfaces—Single-Trial Classification of Fixation-Related Potentials
Finke, Andrea; Essig, Kai; Marchioro, Giuseppe; Ritter, Helge
2016-01-01
The co-registration of eye tracking and electroencephalography provides a holistic measure of ongoing cognitive processes. Recently, fixation-related potentials have been introduced to quantify the neural activity in such bi-modal recordings. Fixation-related potentials are time-locked to fixation onsets, just like event-related potentials are locked to stimulus onsets. Compared to existing electroencephalography-based brain-machine interfaces that depend on visual stimuli, fixation-related potentials have the advantages that they can be used in free, unconstrained viewing conditions and can also be classified on a single-trial level. Thus, fixation-related potentials have the potential to allow for conceptually different brain-machine interfaces that directly interpret cortical activity related to the visual processing of specific objects. However, existing research has investigated fixation-related potentials only with very restricted and highly unnatural stimuli in simple search tasks while participants' body movements were restricted. We present a study where we relieved many of these restrictions while retaining some control by using a gaze-contingent visual search task. In our study, participants had to find a target object out of 12 complex and everyday objects presented on a screen while the electrical activity of the brain and eye movements were recorded simultaneously. Our results show that our proposed method for the classification of fixation-related potentials can clearly discriminate between fixations on relevant, non-relevant and background areas. Furthermore, we show that our classification approach generalizes not only to different test sets from the same participant, but also across participants. These results promise to open novel avenues for exploiting fixation-related potentials in electroencephalography-based brain-machine interfaces and thus providing a novel means for intuitive human-machine interaction. PMID:26812487
Hoffmann, Michael B; Wolynski, Barbara; Meltendorf, Synke; Behrens-Baumann, Wolfgang; Käsmann-Kellner, Barbara
2008-06-01
In albinism, part of the temporal retina projects abnormally to the contralateral hemisphere. A residual misprojection is also evident in feline carriers that are heterozygous for tyrosinase-related albinism. This study was conducted to test whether such residual abnormalities can also be identified in human carriers of oculocutaneous tyrosinase-related albinism (OCA1a). In eight carriers heterozygous for OCA1a and in eight age- and sex-matched control subjects, monocular pattern-reversal and -onset multifocal visual evoked potentials (mfVEPs) were recorded at 60 locations comprising a visual field of 44 degrees diameter (VERIS 5.01; EDI, San Mateo, CA). For each eye and each stimulus location, interhemispheric difference potentials were calculated and correlated with each other, to assess the lateralization of the responses: positive and negative correlations indicate lateralizations on the same or opposite hemispheres, respectively. Misrouted optic nerves are expected to yield negative interocular correlations. The analysis also allowed for the assessment of the sensitivity and specificity of the detection of projection abnormalities. No significant differences were obtained for the distributions of the interocular correlation coefficients of controls and carriers. Consequently, no local representation abnormalities were observed in the group of OCA1a carriers. For pattern-reversal and -onset stimulation, an assessment of the control data yielded similar specificity (97.9% and 94.6%) and sensitivity (74.4% and 74.8%) estimates for the detection of projection abnormalities. The absence of evidence for projection abnormalities in human OCA1a carriers contrasts with the previously reported evidence for abnormalities in cat-carriers of tyrosinase-related albinism. This discrepancy suggests that animal models of albinism may not provide a match to human albinism.
ERIC Educational Resources Information Center
Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus
2012-01-01
The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…
ERIC Educational Resources Information Center
Fortier-Gauthier, Ulysse; Moffat, Nicolas; Dell'Acqua, Robert; McDonald, John J.; Jolicoeur, Pierre
2012-01-01
We studied brain activity during retention and retrieval phases of two visual short-term memory (VSTM) experiments. Experiment 1 used a balanced memory array, with one color stimulus in each hemifield, followed by a retention interval and a central probe, at the fixation point that designated the target stimulus in memory about which to make a…
Seno, Takeharu; Fukuda, Haruaki
2012-01-01
Over the last 100 years, numerous studies have examined the effective visual stimulus properties for inducing illusory self-motion (known as vection). This vection is often experienced more strongly in daily life than under controlled experimental conditions. One well-known example of vection in real life is the so-called 'train illusion'. In the present study, we showed that this train illusion can also be generated in the laboratory using virtual computer graphics-based motion stimuli. We also demonstrated that this vection can be modified by altering the meaning of the visual stimuli (i.e., top down effects). Importantly, we show that the semantic meaning of a stimulus can inhibit or facilitate vection, even when there is no physical change to the stimulus.
Bottlenecks of Motion Processing during a Visual Glance: The Leaky Flask Model
Öğmen, Haluk; Ekiz, Onur; Huynh, Duong; Bedell, Harold E.; Tripathy, Srimant P.
2013-01-01
Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing. PMID:24391806
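The demarcation of sensory (iconic) memory from VSTM described above rests on fitting an exponential decay to accuracy as a function of cue delay, with the asymptote representing the VSTM stage and the time constant marking the handoff. A minimal sketch follows; the delays and accuracies are simulated from the model itself for illustration and are not the authors' data.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, tau, b):
    # accuracy = transient (iconic) component decaying with time constant tau,
    # plus an asymptote b representing the durable VSTM contribution
    return a * np.exp(-t / tau) + b

# Simulated cue delays (ms) and accuracies generated from the model (ours)
delays = np.array([0.0, 50, 100, 200, 400, 800, 1600, 3000])
acc = decay(delays, 0.5, 250.0, 0.4)

popt, _ = curve_fit(decay, delays, acc, p0=[0.4, 150.0, 0.3])
a_hat, tau_hat, b_hat = popt
```

Performance measured at delays much shorter than the fitted time constant reflects mostly sensory memory; at delays several time constants out, only the asymptotic VSTM component remains.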
Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.
Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein
2012-10-15
Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
Visual adaptation enhances action sound discrimination.
Barraclough, Nick E; Page, Steve A; Keefe, Bruce D
2017-01-01
Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.
Facilitation of listening comprehension by visual information under noisy listening condition
NASA Astrophysics Data System (ADS)
Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi
2009-02-01
Comprehension of a sentence under a wide range of delay conditions between auditory and visual stimuli was measured in an environment with low auditory clarity, at levels of -10 dB and -15 dB pink noise. Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (132 ms) or less; the image was not helpful for comprehension when the delay was 8 frames (264 ms) or more; and in some cases of the largest delay (32 frames), the video image interfered with comprehension.
Place avoidance learning and memory in a jumping spider.
Peckmezian, Tina; Taylor, Phillip W
2017-03-01
Using a conditioned passive place avoidance paradigm, we investigated the relative importance of three experimental parameters on learning and memory in a salticid, Servaea incana. Spiders encountered an aversive electric shock stimulus paired with one side of a two-sided arena. Our three parameters were the ecological relevance of the visual stimulus, the time interval between trials and the time interval before test. We paired electric shock with either a black or white visual stimulus, as prior studies in our laboratory have demonstrated that S. incana prefer dark 'safe' regions to light ones. We additionally evaluated the influence of two temporal features (time interval between trials and time interval before test) on learning and memory. Spiders exposed to the shock stimulus learned to associate shock with the visual background cue, but the extent to which they did so was dependent on which visual stimulus was present and the time interval between trials. Spiders trained with a long interval between trials (24 h) maintained performance throughout training, whereas spiders trained with a short interval (10 min) maintained performance only when the safe side was black. When the safe side was white, performance worsened steadily over time. There was no difference between spiders tested after a short (10 min) or long (24 h) interval before test. These results suggest that the ecological relevance of the stimuli used and the duration of the interval between trials can influence learning and memory in jumping spiders.
Reduced Perceptual Exclusivity during Object and Grating Rivalry in Autism
Freyberg, J.; Robertson, C.E.; Baron-Cohen, S.
2015-01-01
Background: The dynamics of binocular rivalry may be a behavioural footprint of excitatory and inhibitory neural transmission in visual cortex. Given the presence of atypical visual features in Autism Spectrum Conditions (ASC), and evidence in support of the idea of an imbalance in excitatory/inhibitory neural transmission in ASC, we hypothesized that binocular rivalry might prove a simple behavioural marker of such a transmission imbalance in the autistic brain. In support of this hypothesis, we previously reported a slower rate of rivalry in ASC, driven by reduced perceptual exclusivity. Methods: We tested whether atypical dynamics of binocular rivalry in ASC are specific to certain stimulus features. 53 participants (26 with ASC, matched for age, sex and IQ) participated in binocular rivalry experiments in which the dynamics of rivalry were measured at two levels of stimulus complexity, low (grayscale gratings) and high (coloured objects). Results: Individuals with ASC experienced a slower rate of rivalry, driven by longer transitional states between dominant percepts. These exaggerated transitional states were present at both low and high levels of stimulus complexity, suggesting that atypical rivalry dynamics in autism are robust with respect to stimulus choice. Interactions between stimulus properties and rivalry dynamics in autism indicate that achromatic grating stimuli produce stronger group differences. Conclusion: These results confirm the finding of atypical dynamics of binocular rivalry in ASC. These dynamics were present for stimuli of both low and high levels of visual complexity, suggesting an imbalance in competitive interactions throughout the visual system of individuals with ASC. PMID:26382002
Decoding and reconstructing color from responses in human visual cortex.
Brouwer, Gijs Joost; Heeger, David J
2009-11-04
How is color represented by spatially distributed patterns of activity in visual cortex? Functional magnetic resonance imaging responses to several stimulus colors were analyzed with multivariate techniques: conventional pattern classification, a forward model of idealized color tuning, and principal component analysis (PCA). Stimulus color was accurately decoded from activity in V1, V2, V3, V4, and VO1 but not LO1, LO2, V3A/B, or MT+. The conventional classifier and forward model yielded similar accuracies, but the forward model (unlike the classifier) also reliably reconstructed novel stimulus colors not used to train (specify parameters of) the model. The mean responses, averaged across voxels in each visual area, were not reliably distinguishable for the different stimulus colors. Hence, each stimulus color was associated with a unique spatially distributed pattern of activity, presumably reflecting the color selectivity of cortical neurons. Using PCA, a color space was derived from the covariation, across voxels, in the responses to different colors. In V4 and VO1, the first two principal component scores (main source of variation) of the responses revealed a progression through perceptual color space, with perceptually similar colors evoking the most similar responses. This was not the case for any of the other visual cortical areas, including V1, although decoding was most accurate in V1. This dissociation implies a transformation from the color representation in V1 to reflect perceptual color space in V4 and VO1.
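The PCA step in the color-decoding study above operates on a colors-by-voxels response matrix: the principal component scores place each color in a low-dimensional space, and a "progression through perceptual color space" appears as a smooth trajectory in the first two components. The sketch below uses simulated voxels with circular hue tuning of our own invention, purely to illustrate the analysis.

```python
import numpy as np

# Simulated data (ours): each voxel responds maximally to a preferred hue,
# so the colors-by-voxels matrix carries a two-dimensional circular structure.
rng = np.random.default_rng(0)
n_colors, n_voxels = 8, 100
hues = np.linspace(0.0, 2 * np.pi, n_colors, endpoint=False)
pref = rng.uniform(0.0, 2 * np.pi, n_voxels)
resp = np.cos(hues[:, None] - pref[None, :]) \
       + 0.05 * rng.normal(size=(n_colors, n_voxels))

# PCA via SVD of the mean-centered response matrix
centered = resp - resp.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = u[:, :2] * s[:2]       # first two PC scores, one row per color
var_top2 = (s[0] ** 2 + s[1] ** 2) / np.sum(s ** 2)
```

With tuning like this, the first two components capture nearly all response variance and the per-color scores trace a circle, so perceptually neighboring hues land next to each other, which is the signature reported for V4 and VO1.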
Lee, Junghee; Cohen, Mark S; Engel, Stephen A; Glahn, David; Nuechterlein, Keith H; Wynn, Jonathan K; Green, Michael F
2010-07-01
Visual masking paradigms assess the early part of visual information processing, which may reflect vulnerability measures for schizophrenia. We examined the neural substrates of visual backward performance in unaffected sibling of schizophrenia patients using functional magnetic resonance imaging (fMRI). Twenty-one unaffected siblings of schizophrenia patients and 19 healthy controls performed a backward masking task and three functional localizer tasks to identify three visual processing regions of interest (ROI): lateral occipital complex (LO), the motion-sensitive area, and retinotopic areas. In the masking task, we systematically manipulated stimulus onset asynchronies (SOAs). We analyzed fMRI data in two complementary ways: 1) an ROI approach for three visual areas, and 2) a whole-brain analysis. The groups did not differ in behavioral performance. For ROI analysis, both groups increased activation as SOAs increased in LO. Groups did not differ in activation levels of the three ROIs. For whole-brain analysis, controls increased activation as a function of SOAs, compared with siblings in several regions (i.e., anterior cingulate cortex, posterior cingulate cortex, inferior prefrontal cortex, inferior parietal lobule). The study found: 1) area LO showed sensitivity to the masking effect in both groups; 2) siblings did not differ from controls in activation of LO; and 3) groups differed significantly in several brain regions outside visual processing areas that have been related to attentional or re-entrant processes. These findings suggest that LO dysfunction may be a disease indicator rather than a risk indicator for schizophrenia. Copyright 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Simple and powerful visual stimulus generator.
Kremlácek, J; Kuba, M; Kubová, Z; Vít, F
1999-02-01
We describe a cheap, simple, portable and efficient approach to visual stimulation for neurophysiology which does not need any special hardware equipment. The method, based on an animation technique, uses the Autodesk Animator FLI format. The animation is replayed by a dedicated program ('player') that provides synchronisation pulses to the recording system via the parallel port. The player runs on an IBM-compatible personal computer under the MS-DOS operating system, and the stimulus is displayed on a VGA computer monitor. Various stimuli created with this technique for visual evoked potentials (VEPs) are presented.
The auditory P50 component to onset and offset of sound
Pratt, Hillel; Starr, Arnold; Michalewski, Henry J.; Bleich, Naomi; Mittelman, Nomi
2008-01-01
Objective: The auditory Event-Related Potential (ERP) component P50 has been reported to be similar for sound onset and offset, but its magnetic homologue has been reported absent to sound offset. We compared the spatio-temporal distribution of cortical activity during P50 to sound onset and offset, without confounds of spectral change. Methods: ERPs were recorded in response to onsets and offsets of 0.5-s silent intervals (gaps) appearing randomly in otherwise continuous white noise, and compared to ERPs to randomly distributed click pairs with half-second separation presented in silence. Subjects were awake and distracted from the stimuli by reading a complicated text. Measures of P50 included peak latency and amplitude, as well as source current density estimates for the clicks and the sound onsets and offsets. Results: P50 occurred in response to noise onsets and to clicks, while to noise offsets it was absent. P50 latency was similar for noise onsets (56 msec) and clicks (53 msec). Sources of P50 to noise onsets and clicks included bilateral superior parietal areas. In contrast, noise offsets activated left inferior temporal and occipital areas at the time of P50. Source current density was significantly higher to noise onset than offset in the vicinity of the temporo-parietal junction. Conclusions: In contrast to the distinct P50 to sound onset and to clicks, P50 to sound offset is absent, and onsets and offsets engage different intracranial sources. P50 to stimulus onset and to clicks appears to reflect preattentive arousal by a new sound in the scene. Sound offset does not introduce a new sound, hence the absent P50. Significance: Stimulus onset activates distinct early cortical processes that are absent at offset. PMID:18055255
Swalve, Natashia; Barrett, Scott T; Bevins, Rick A; Li, Ming
2015-09-15
Nicotine is a widely-abused drug, yet its primary reinforcing effect does not seem as potent as that of other stimulants such as cocaine. Recent research on the factors contributing to chronic use of nicotine-containing products has implicated the reinforcement-enhancing effects of nicotine. The present study investigates whether phencyclidine (PCP) may also possess a reinforcement-enhancement effect and how this may interact with the reinforcement-enhancement effect of nicotine. PCP was tested for two reasons: (1) it produces discrepant results on overall reward, similar to those seen with nicotine, and (2) it may elucidate how other compounds interact with the reinforcement-enhancement of nicotine. Adult male Sprague-Dawley rats were trained to lever press for brief visual stimulus presentations under fixed-ratio (FR) schedules of reinforcement and then were tested with nicotine (0.2 or 0.4 mg/kg) and/or PCP (2.0 mg/kg) over six increasing FR values. A selective increase in active lever-pressing for the visual stimulus with drug treatment was considered evidence of a reinforcement-enhancement effect. PCP and nicotine separately increased active lever pressing for the visual stimulus in a dose-dependent manner and across the different FR schedules. The addition of PCP to nicotine did not increase lever-pressing for the visual stimulus, possibly due to a ceiling effect. The effect of PCP may be driven largely by its locomotor stimulant effects, whereas the effect of nicotine was independent of locomotor stimulation. This dissociation emphasizes that distinct pharmacological properties contribute to the reinforcement-enhancement effects of substances. Copyright © 2015 Elsevier B.V. All rights reserved.
The interrelations between verbal working memory and visual selection of emotional faces.
Grecucci, Alessandro; Soto, David; Rumiati, Raffaella Ida; Humphreys, Glyn W; Rotshtein, Pia
2010-06-01
Working memory (WM) and visual selection processes interact in a reciprocal fashion based on overlapping representations abstracted from the physical characteristics of stimuli. Here, we assessed the neural basis of this interaction using facial expressions that conveyed emotion information. Participants memorized an emotional word for a later recognition test and then searched for a face of a particular gender presented in a display with two faces that differed in gender and expression. The relation between the emotional word and the expressions of the target and distractor faces was varied. RTs for the memory test were faster when the target face matched the emotional word held in WM (on valid trials) relative to when the emotional word matched the expression of the distractor (on invalid trials). There was also enhanced activation on valid compared with invalid trials in the lateral orbital gyrus, superior frontal polar (BA 10), lateral occipital sulcus, and pulvinar. Re-presentation of the WM stimulus in the search display led to an earlier onset of activity in the superior and inferior frontal gyri and the anterior hippocampus irrespective of the search validity of the re-presented stimulus. The data indicate that the middle temporal and prefrontal cortices are sensitive to the reappearance of stimuli that are held in WM, whereas a fronto-thalamic occipital network is sensitive to the behavioral significance of the match between WM and targets for selection. We conclude that these networks are modulated by high-level matches between the contents of WM, behavioral goals, and current sensory input.
Schallmo, Michael-Paul; Grant, Andrea N; Burton, Philip C; Olman, Cheryl A
2016-08-01
Although V1 responses are driven primarily by elements within a neuron's receptive field, which subtends about 1° visual angle in parafoveal regions, previous work has shown that localized fMRI responses to visual elements reflect not only local feature encoding but also long-range pattern attributes. However, separating the response to an image feature from the response to the surrounding stimulus and studying the interactions between these two responses demands both spatial precision and signal independence, which may be challenging to attain with fMRI. The present study used 7 Tesla fMRI with 1.2-mm resolution to measure the interactions between small sinusoidal grating patches (targets) at 3° eccentricity and surrounds of various sizes and orientations to test the conditions under which localized, context-dependent fMRI responses could be predicted from either psychophysical or electrophysiological data. Targets were presented at 8%, 16%, and 32% contrast while manipulating (a) spatial extent of parallel (strongly suppressive) or orthogonal (weakly suppressive) surrounds, (b) locus of attention, (c) stimulus onset asynchrony between target and surround, and (d) blocked versus event-related design. In all experiments, the V1 fMRI signal was lower when target stimuli were flanked by parallel versus orthogonal context. Attention amplified fMRI responses to all stimuli but did not show a selective effect on central target responses or a measurable effect on orientation-dependent surround suppression. Suppression of the V1 fMRI response by parallel surrounds was stronger than predicted from psychophysics but showed a better match to previous electrophysiological reports.
On the use of continuous flash suppression for the study of visual processing outside of awareness
Yang, Eunice; Brascamp, Jan; Kang, Min-Suk; Blake, Randolph
2014-01-01
The interocular suppression technique termed continuous flash suppression (CFS) has become an immensely popular tool for investigating visual processing outside of awareness. The emerging picture from studies using CFS is that extensive processing of a visual stimulus, including its semantic and affective content, occurs despite suppression from awareness of that stimulus by CFS. However, the current implementation of CFS in many studies examining processing outside of awareness has several drawbacks that may be improved upon for future studies using CFS. In this paper, we address some of those shortcomings, particularly ones that affect the assessment of unawareness during CFS, and ones to do with the use of “visible” conditions that are often included as a comparison to a CFS condition. We also discuss potential biases in stimulus processing as a result of spatial attention and feature-selective suppression. We suggest practical guidelines that minimize the effects of those limitations in using CFS to study visual processing outside of awareness. PMID:25071685
Tyndall, Ian; Ragless, Liam; O'Hora, Denis
2018-04-01
The present study examined whether increasing visual perceptual load differentially affected both Socially Meaningful and Non-socially Meaningful auditory stimulus awareness in neurotypical (NT, n = 59) adults and Autism Spectrum Disorder (ASD, n = 57) adults. On a target trial, an unexpected critical auditory stimulus (CAS), either a Non-socially Meaningful ('beep' sound) or Socially Meaningful ('hi') stimulus, was played concurrently with the presentation of the visual task. Under conditions of low visual perceptual load both NT and ASD samples reliably noticed the CAS at similar rates (77-81%), whether the CAS was Socially Meaningful or Non-socially Meaningful. However, during high visual perceptual load NT and ASD participants reliably noticed the meaningful CAS (NT = 71%, ASD = 67%), but NT participants were unlikely to notice the Non-meaningful CAS (20%), whereas ASD participants reliably noticed it (80%), suggesting an inability to engage selective attention to ignore non-salient irrelevant distractor stimuli in ASD. Copyright © 2018 Elsevier Inc. All rights reserved.
Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E
2016-01-01
Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.
ERIC Educational Resources Information Center
Seibert, Warren F.; Reid, Christopher J.
Learning and retention may be influenced by subtle instructional stimulus characteristics and certain visual memory aptitudes. Ten stimulus characteristics were chosen for study; 50 sequences of programed instructional material were specially written to conform to sampled values of each stimulus characteristic. Seventy-three freshman subjects…
Alerting Attention and Time Perception in Children.
ERIC Educational Resources Information Center
Droit-Volet, Sylvie
2003-01-01
Examined effects of a click signaling arrival of a visual stimulus to be timed on temporal discrimination in 3-, 5-, and 8-year-olds. Found that in all groups, the proportion of long responses increased with the stimulus duration, although the steepness of functions increased with age. Stimulus duration was judged longer with than without the…
Order of Stimulus Presentation Influences Children's Acquisition in Receptive Identification Tasks
ERIC Educational Resources Information Center
Petursdottir, Anna Ingeborg; Aguilar, Gabriella
2016-01-01
Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition…
Stimulus Intensity and the Perception of Duration
ERIC Educational Resources Information Center
Matthews, William J.; Stewart, Neil; Wearden, John H.
2011-01-01
This article explores the widely reported finding that the subjective duration of a stimulus is positively related to its magnitude. In Experiments 1 and 2 we show that, for both auditory and visual stimuli, the effect of stimulus magnitude on the perception of duration depends upon the background: Against a high intensity background, weak stimuli…
Harvey, Ben M; Dumoulin, Serge O
2016-02-15
Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine-wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
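The pRF approach summarized above fits each voxel's response with a receptive-field model of position and size. A minimal sketch, assuming an isotropic 2D Gaussian pRF, binary bar apertures, and a brute-force grid search; real pRF fitting also convolves the prediction with a hemodynamic response function, omitted here:

```python
import numpy as np

# Coarse visual-field grid in degrees of visual angle.
xs, ys = np.meshgrid(np.linspace(-8, 8, 33), np.linspace(-8, 8, 33))

def prf(x0, y0, sigma):
    """pRF modeled as a normalized isotropic 2D Gaussian."""
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

# Stimulus: a bar aperture swept horizontally, then vertically (one frame
# per time point), as in standard pRF mapping runs.
positions = np.linspace(-7, 7, 15)
frames = ([(np.abs(xs - p) < 1.0).astype(float) for p in positions] +
          [(np.abs(ys - p) < 1.0).astype(float) for p in positions])

def predict(x0, y0, sigma):
    """Predicted time course: overlap of the pRF with each aperture frame."""
    g = prf(x0, y0, sigma)
    return np.array([(g * f).sum() for f in frames])

# Simulate a noiseless voxel with a pRF at (3, -2) deg, sigma = 1.5 deg,
# then recover its parameters by grid search over candidate pRFs.
data = predict(3.0, -2.0, 1.5)
grid = [(x0, y0, s)
        for x0 in np.linspace(-6, 6, 13)
        for y0 in np.linspace(-6, 6, 13)
        for s in (1.0, 1.5, 2.0)]
best = min(grid, key=lambda p: ((predict(*p) - data) ** 2).sum())
# best -> (3.0, -2.0, 1.5): position and size recovered exactly.
```

Measuring such fits separately for each motion condition is what allows motion-induced shifts in preferred position and size to be quantified per voxel.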
Effects of temporal integration on the shape of visual backward masking functions.
Francis, Gregory; Cho, Yang Seok
2008-10-01
Many studies of cognition and perception use a visual mask to explore the dynamics of information processing of a target. Especially important in these applications is the time between the target and mask stimuli. A plot of some measure of target visibility against stimulus onset asynchrony is called a masking function, which can sometimes be monotonically increasing but at other times is U-shaped. Theories of backward masking have long hypothesized that temporal integration of the target and mask influences properties of masking but have not connected the influence of integration with the shape of the masking function. With two experiments that vary the spatial properties of the target and mask, the authors provide evidence that temporal integration of the stimuli plays a critical role in determining the shape of the masking function. The resulting data both challenge current theories of backward masking and indicate what changes to the theories are needed to account for the new data. The authors further discuss the implications of the findings for uses of backward masking to explore other aspects of cognition.
Choi, Jong Moon; Cho, Yang Seok; Proctor, Robert W
2009-09-01
A Stroop task with separate color bar and color word stimuli was combined with an inhibition-of-return procedure to examine whether visual attention modulates color word processing. In Experiment 1, the color bar was presented at the cued location and the color word at the uncued location, or vice versa, with a 100- or 1,050-msec stimulus onset asynchrony (SOA) between cue and Stroop stimuli. In Experiment 2, on Stroop trials, the color bar was presented at a central fixated location and the color word at a cued or uncued location above or below the color bar. In both experiments, with a 100-msec SOA, the Stroop effect was numerically larger when the color word was displayed at the cued location than when it was displayed at the uncued location, but with the 1,050-msec SOA, this relation between Stroop effect magnitude and location was reversed. These results provide evidence that processing of the color word in the Stroop task is modulated by the location to which visual attention is directed.
Spatiotemporal dynamics in human visual cortex rapidly encode the emotional content of faces.
Dima, Diana C; Perry, Gavin; Messaritaki, Eirini; Zhang, Jiaxiang; Singh, Krish D
2018-06-08
Recognizing emotion in faces is important in human interaction and survival, yet existing studies do not paint a consistent picture of the neural representation supporting this task. To address this, we collected magnetoencephalography (MEG) data while participants passively viewed happy, angry and neutral faces. Using time-resolved decoding of sensor-level data, we show that responses to angry faces can be discriminated from happy and neutral faces as early as 90 ms after stimulus onset and only 10 ms later than faces can be discriminated from scrambled stimuli, even in the absence of differences in evoked responses. Time-resolved relevance patterns in source space track expression-related information from the visual cortex (100 ms) to higher-level temporal and frontal areas (200-500 ms). Together, our results point to a system optimised for rapid processing of emotional faces and preferentially tuned to threat, consistent with the important evolutionary role that such a system must have played in the development of human social interactions. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
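Time-resolved decoding of the kind described above trains and tests a classifier independently at each time point and asks when accuracy first rises above chance. A toy sketch on simulated sensor data, using a nearest-centroid classifier and a split-half scheme; the trial counts, sensor count, and the 90 ms effect onset are simulated assumptions, not the study's data or pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 80, 30, 60
times_ms = np.arange(n_times) * 10 - 100          # -100 ... 490 ms, 10 ms steps
labels = np.repeat([0, 1], n_trials // 2)         # e.g., happy vs. angry faces

# Simulated sensor data: a class-specific spatial pattern appears at 90 ms.
X = rng.standard_normal((n_trials, n_sensors, n_times))
pattern = rng.standard_normal(n_sensors)
for t in np.where(times_ms >= 90)[0]:
    X[labels == 1, :, t] += pattern

def decode_at(t):
    """Split-half nearest-centroid decoding at a single time point."""
    Xtr, ytr = X[::2, :, t], labels[::2]          # even trials: train
    Xte, yte = X[1::2, :, t], labels[1::2]        # odd trials: test
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1) <
            np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

acc = np.array([decode_at(t) for t in range(n_times)])
onset = times_ms[np.argmax(acc > 0.8)]            # first clearly above-chance time
```

Accuracy should hover near chance before the pattern appears and saturate afterwards, so the estimated onset tracks the time at which class information was injected; the study's actual pipeline used different classifiers and cross-validation.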
Response Activation in Overlapping Tasks and the Response-Selection Bottleneck
ERIC Educational Resources Information Center
Schubert, Torsten; Fischer, Rico; Stelzel, Christine
2008-01-01
The authors investigated the impact of response activation on dual-task performance by presenting a subliminal prime before the stimulus in Task 2 (S2) of a psychological refractory period (PRP) task. Congruence between prime and S2 modulated the reaction times in Task 2 at short stimulus onset asynchrony despite a PRP effect. This Task 2…
Onset and Offset of Aversive Events Establish Distinct Memories Requiring Fear and Reward Networks
ERIC Educational Resources Information Center
Andreatta, Marta; Fendt, Markus; Muhlberger, Andreas; Wieser, Matthias J.; Imobersteg, Stefan; Yarali, Ayse; Gerber, Bertram; Pauli, Paul
2012-01-01
Two things are worth remembering about an aversive event: What made it happen? What made it cease? If a stimulus precedes an aversive event, it becomes a signal for threat and will later elicit behavior indicating conditioned fear. However, if the stimulus is presented upon cessation of the aversive event, it elicits behavior indicating…
NASA Astrophysics Data System (ADS)
Massof, Robert W.; Schmidt, Karen M.; Laby, Daniel M.; Kirschen, David; Meadows, David
2013-09-01
Visual acuity, a forced-choice psychophysical measure of visual spatial resolution, is the sine qua non of clinical visual impairment testing in ophthalmology and optometry patients with visual system disorders ranging from refractive error to retinal, optic nerve, or central visual system pathology. Visual acuity measures are standardized against a norm, but it is well known that visual acuity depends on a variety of stimulus parameters, including contrast and exposure duration. This paper asks whether a single global visual state measure, representing the patient's overall visual health with one variable, can be estimated from visual acuity measured as a function of stimulus parameters. Psychophysical theory (at the sensory level) and psychometric theory (at the decision level) are merged to identify the conditions that must be satisfied to derive a global visual state measure from parameterised visual acuity measures. A global visual state measurement model is developed and tested with forced-choice visual acuity measures from 116 subjects with no visual impairments and 560 subjects with uncorrected refractive error. The results are in agreement with the expectations of the model.
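The idea of recovering a latent visual-state variable from forced-choice acuity data can be illustrated with a standard psychometric-function fit. The sketch below is a generic maximum-likelihood threshold estimate, not the authors' merged psychophysical-psychometric model; the logistic form, the 10% guessing floor (as if responses were chosen from a 10-letter set), and the slope are assumptions:

```python
import numpy as np

def p_correct(logmar, theta, slope=10.0, guess=0.1):
    """Forced-choice psychometric function: a guessing floor plus a logistic
    rise in proportion correct as letter size (logMAR) exceeds threshold."""
    return guess + (1.0 - guess) / (1.0 + np.exp(-slope * (logmar - theta)))

rng = np.random.default_rng(2)
sizes = np.linspace(-0.3, 1.0, 14)      # letter sizes presented, in logMAR
n_per_size = 20
true_theta = 0.2                        # simulated observer's acuity threshold

# Simulate forced-choice trials: k correct out of 20 at each letter size.
k = rng.binomial(n_per_size, p_correct(sizes, true_theta))

# Maximum-likelihood threshold estimate by grid search over candidate thetas.
def loglik(theta):
    p = np.clip(p_correct(sizes, theta), 1e-9, 1 - 1e-9)
    return float((k * np.log(p) + (n_per_size - k) * np.log(1.0 - p)).sum())

thetas = np.linspace(-0.3, 1.0, 261)
theta_hat = thetas[np.argmax([loglik(t) for t in thetas])]
```

Repeating such fits under different contrasts or exposure durations yields the parameterised acuity measures from which a single global visual-state variable would then be derived.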