Sample records for visual conditioned stimulus

  1. Different target-discrimination times can be followed by the same saccade-initiation timing in different stimulus conditions during visual searches

    PubMed Central

    Tanaka, Tomohiro; Nishida, Satoshi

    2015-01-01

    The neuronal processes that underlie visual searches can be divided into two stages: target discrimination and saccade preparation/generation. This two-stage view predicts that the length of the former, prediscrimination stage varies with search difficulty across different stimulus conditions, whereas the length of the latter, postdiscrimination stage is stimulus invariant. However, recent studies have suggested that the length of the postdiscrimination interval changes with different stimulus conditions. To address whether and how the visual stimulus affects determination of the postdiscrimination interval, we recorded single-neuron activity in the lateral intraparietal area (LIP) when monkeys (Macaca fuscata) performed a color-singleton search involving four stimulus conditions that differed regarding luminance (Bright vs. Dim) and target-distractor color similarity (Easy vs. Difficult). We specifically focused on comparing activities between the Bright-Difficult and Dim-Easy conditions, in which the visual stimuli were considerably different, but the mean reaction times were indistinguishable. This allowed us to examine the neuronal activity when the difference in the degree of search speed between different stimulus conditions was minimal. We found that not only prediscrimination but also postdiscrimination intervals varied across stimulus conditions: the postdiscrimination interval was longer in the Dim-Easy condition than in the Bright-Difficult condition. Further analysis revealed that the postdiscrimination interval might vary with stimulus luminance. A computer simulation using an accumulation-to-threshold model suggested that the luminance-related difference in visual response strength at discrimination time could be the cause of different postdiscrimination intervals. PMID:25995344
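    A minimal sketch (not the authors' code; all parameter values are illustrative assumptions) of how an accumulation-to-threshold account can turn a stronger visual response at discrimination time into a shorter postdiscrimination interval, in Python:

      import numpy as np

      def postdiscrimination_time(start_level, drift=1.0, threshold=10.0, dt=1.0, noise_sd=0.2, seed=0):
          # Accumulate from the visual response level present at discrimination time
          # until the saccade-initiation threshold is reached; return the elapsed time.
          rng = np.random.default_rng(seed)
          level, t = start_level, 0.0
          while level < threshold:
              level += drift * dt + rng.normal(0.0, noise_sd)
              t += dt
          return t

      # A brighter stimulus yields a stronger visual response at discrimination time,
      # so the accumulator starts closer to threshold and the interval is shorter.
      print(postdiscrimination_time(start_level=6.0))  # Bright-like condition
      print(postdiscrimination_time(start_level=3.0))  # Dim-like condition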

  2. The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.

    PubMed

    Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano

    2017-12-01

    Recent findings have shown that sounds improve visual detection in low vision individuals when the auditory and visual stimuli are presented simultaneously and from the same spatial position. The present study aims to investigate the temporal aspects of the audiovisual enhancement effect previously reported. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind the visual stimulus (i.e., SOAs 0, ± 250, ± 400 ms). The visual detection enhancement was reduced in magnitude and limited to the synchronous condition and to the condition in which the sound stimulus was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study seems to suggest that audiovisual interaction in low vision individuals is highly modulated by top-down mechanisms.

  3. Modulation of auditory stimulus processing by visual spatial or temporal cue: an event-related potentials study.

    PubMed

    Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong

    2013-10-11

    Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following the VSC were faster and more accurate than those following the VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial and temporal orienting of attention were different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. Stimulus onset predictability modulates proactive action control in a Go/No-go task

    PubMed Central

    Berchicci, Marika; Lucci, Giuliana; Spinelli, Donatella; Di Russo, Francesco

    2015-01-01

    The aim of the study was to evaluate whether the presence/absence of visual cues specifying the onset of an upcoming, action-related stimulus modulates pre-stimulus brain activity associated with the proactive control of goal-directed actions. To this end, we asked 12 subjects to perform an equal probability Go/No-go task with four stimulus configurations in two conditions: (1) uncued, i.e., without any external information about the timing of stimulus onset; and (2) cued, i.e., with external visual cues providing precise information about the timing of stimulus onset. During the task, both behavioral performance and event-related potentials (ERPs) were recorded. Behavioral results showed faster response times in the cued than uncued condition, confirming existing literature. ERPs showed novel results in the proactive control stage, which started about 1 s before the motor response. We observed a slowly rising prefrontal positive activity, more pronounced in the cued than the uncued condition. Pre-stimulus activity of premotor areas was also larger in the cued than in the uncued condition. In the post-stimulus period, the P3 amplitude was enhanced when the time of stimulus onset was externally driven, confirming that external cueing enhances processing of stimulus evaluation and response monitoring. Our results suggest that different pre-stimulus processes come into play in the two conditions. We hypothesize that the large prefrontal and premotor activities recorded with external visual cues index the monitoring of the external stimuli in order to finely regulate the action. PMID:25964751

  5. Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.

    PubMed

    Williams, Jason A

    2012-06-01

    The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.

  6. Stimulus Dependence of Correlated Variability across Cortical Areas

    PubMed Central

    Cohen, Marlene R.

    2016-01-01

    The way that correlated trial-to-trial variability between pairs of neurons in the same brain area (termed spike count or noise correlation, rSC) depends on stimulus or task conditions can constrain models of cortical circuits and of the computations performed by networks of neurons (Cohen and Kohn, 2011). In visual cortex, rSC tends not to depend on stimulus properties (Kohn and Smith, 2005; Huang and Lisberger, 2009) but does depend on cognitive factors like visual attention (Cohen and Maunsell, 2009; Mitchell et al., 2009). However, neurons across visual areas respond to any visual stimulus or contribute to any perceptual decision, and the way that information from multiple areas is combined to guide perception is unknown. To gain insight into these issues, we recorded simultaneously from neurons in two areas of visual cortex (primary visual cortex, V1, and the middle temporal area, MT) while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. Correlations across, but not within, areas depend on stimulus direction and the presence of a second stimulus, and attention has opposite effects on correlations within and across areas. This observed pattern of cross-area correlations is predicted by a normalization model where MT units sum V1 inputs that are passed through a divisive nonlinearity. Together, our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. SIGNIFICANCE STATEMENT Correlations in the responses of pairs of neurons within the same cortical area have been a subject of growing interest in systems neuroscience. However, correlated variability between different cortical areas is likely just as important. We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. The observed pattern of cross-area correlations was predicted by a simple normalization model. Our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. PMID:27413163
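    A minimal sketch (an assumed, simplified form of the normalization model described above, not the fitted model from the paper) of an MT unit that sums V1 inputs and passes them through a divisive nonlinearity, in Python:

      import numpy as np

      def mt_response(v1_rates, weights, sigma=1.0):
          # Weighted sum of V1 inputs, divided by the pooled V1 activity plus a constant.
          v1_rates = np.asarray(v1_rates, dtype=float)
          weights = np.asarray(weights, dtype=float)
          return np.dot(weights, v1_rates) / (sigma + v1_rates.sum())

      # Adding a second stimulus enlarges the normalization pool, which changes how
      # shared fluctuations in the V1 inputs are passed on to the MT output.
      print(mt_response([20.0, 2.0], [1.0, 0.2]))    # single preferred stimulus
      print(mt_response([20.0, 18.0], [1.0, 0.2]))   # two stimuli in the pool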

  7. Emotion recognition abilities across stimulus modalities in schizophrenia and the role of visual attention.

    PubMed

    Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J

    2013-12-01

    Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual only and combined conditions, the time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio and visual only conditions but did not differ from controls on the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.

  8. Neurons in the pigeon caudolateral nidopallium differentiate Pavlovian conditioned stimuli but not their associated reward value in a sign-tracking paradigm

    PubMed Central

    Kasties, Nils; Starosta, Sarah; Güntürkün, Onur; Stüttgen, Maik C.

    2016-01-01

    Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects’ strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells’ firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters that differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations, e.g., those requiring value-based choices. PMID:27762287

  9. Object form discontinuity facilitates displacement discrimination across saccades.

    PubMed

    Demeyer, Maarten; De Graef, Peter; Wagemans, Johan; Verfaillie, Karl

    2010-06-01

    Stimulus displacements coinciding with a saccadic eye movement are poorly detected by human observers. In recent years, converging evidence has shown that this phenomenon does not result from poor transsaccadic retention of presaccadic stimulus position information, but from the visual system's efforts to spatially align presaccadic and postsaccadic perception on the basis of visual landmarks. It is known that this process can be disrupted, and transsaccadic displacement detection performance can be improved, by briefly blanking the stimulus display during and immediately after the saccade. In the present study, we investigated whether this improvement could also follow from a discontinuity in the task-irrelevant form of the displaced stimulus. We observed this to be the case: Subjects more accurately identified the direction of intrasaccadic displacements when the displaced stimulus simultaneously changed form, compared to conditions without a form change. However, larger improvements were still observed under blanking conditions. In a second experiment, we show that facilitation induced by form changes and blanks can combine. We conclude that a strong assumption of visual stability underlies the suppression of transsaccadic change detection performance, the rejection of which generalizes from stimulus form to stimulus position.

  10. Visual Presentation Effects on Identification of Multiple Environmental Sounds

    PubMed Central

    Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio

    2016-01-01

    This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478

  11. Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions

    PubMed Central

    Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.

    2014-01-01

    Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography from pectoralis and triceps muscles were recorded. The SS condition induced an increase in the RFD and peak velocity and a reduction in the movement onset and duration, in comparison with the VS and AS conditions. The onset activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967

  12. Effects of age, gender, and stimulus presentation period on visual short-term memory.

    PubMed

    Kunimi, Mitsunobu

    2016-01-01

    This study focused on age-related changes in visual short-term memory using visual stimuli that did not allow verbal encoding. Experiment 1 examined the effects of age and the length of the stimulus presentation period on visual short-term memory function. Experiment 2 examined the effects of age, gender, and the length of the stimulus presentation period on visual short-term memory function. The worst memory performance and the largest performance difference between the age groups were observed in the shortest stimulus presentation period conditions. The performance difference between the age groups became smaller as the stimulus presentation period became longer; however, it did not completely disappear. Although gender did not have a significant effect on d' regardless of the presentation period in the young group, a significant gender-based difference was observed for stimulus presentation periods of 500 ms and 1,000 ms in the older group. This study indicates that the decline in visual short-term memory observed in the older group is due to the interaction of several factors.

  13. Effects of nonspatial selective and divided visual attention on fMRI BOLD responses.

    PubMed

    Weerda, Riklef; Vallines, Ignacio; Thomas, James P; Rutschmann, Roland M; Greenlee, Mark W

    2006-09-01

    Using an uncertainty paradigm and functional magnetic resonance imaging (fMRI), we studied the effect of nonspatial selective and divided visual attention on the activity of specific areas of human extrastriate visual cortex. The stimuli were single ovals that differed from an implicit standard oval in either colour or width. The subjects' task was to classify the current stimulus as one of two possible alternatives per stimulus dimension. Three different experimental conditions were conducted: "colour-certainty", "shape-certainty" and "uncertainty". In all experimental conditions, the stimulus differed in only one stimulus dimension per trial. In the two certainty conditions, the subjects knew in advance which dimension this would be. During the uncertainty condition they had no such previous knowledge and had to monitor both dimensions simultaneously. Statistical analysis of the fMRI data (with SPM2) revealed a modest effect of the attended stimulus dimension on the neural activity in colour-sensitive area V4 (more activity during attention to colour) and in shape-sensitive area LOC (more activity during attention to shape). Furthermore, cortical areas known to be related to attention and working memory processes (e.g., lateral prefrontal and posterior parietal cortex) exhibited higher activity during the condition of divided attention ("uncertainty") than during that of selective attention ("certainty").

  14. Stimulus Processing and Associative Learning in Wistar and WKHA Rats

    PubMed Central

    Chess, Amy C.; Keene, Christopher S.; Wyzik, Elizabeth C.; Bucci, David J.

    2007-01-01

    This study assessed basic learning and attention abilities in WKHA (Wistar-Kyoto Hyperactive) rats using appetitive conditioning preparations. Two measures of conditioned responding to a visual stimulus were recorded: orienting behavior (rearing on the hind legs) and food cup behavior (placing the head inside the recessed food cup). In Experiment 1, simple conditioning but not extinction was impaired in WKHA rats compared to Wistar rats. In Experiment 2, non-reinforced presentations of the visual cue preceded the conditioning sessions. WKHA rats displayed less orienting behavior than Wistar rats, but comparable levels of food cup behavior. These data suggest that WKHA rats exhibit specific abnormalities in attentional processing as well as in learning stimulus-reward relationships. PMID:15998198

  15. Visual perceptual learning by operant conditioning training follows rules of contingency.

    PubMed

    Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo

    2015-01-01

    Visual perceptual learning (VPL) can occur as a result of a repetitive stimulus-reward pairing in the absence of any task. This suggests that rules that guide Conditioning, such as stimulus-reward contingency (e.g. that stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects with an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL only occurred for positive contingencies, but not for neutral or negative contingencies. These results suggest that the formation of VPL is influenced by similar rules that guide the process of Conditioning.

  16. Visual perceptual learning by operant conditioning training follows rules of contingency

    PubMed Central

    Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo

    2015-01-01

    Visual perceptual learning (VPL) can occur as a result of a repetitive stimulus-reward pairing in the absence of any task. This suggests that rules that guide Conditioning, such as stimulus-reward contingency (e.g. that stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects with an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL only occurred for positive contingencies, but not for neutral or negative contingencies. These results suggest that the formation of VPL is influenced by similar rules that guide the process of Conditioning. PMID:26028984

  17. Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers

    DTIC Science & Technology

    2013-09-01

    Excerpt (fragmentary record): experimental conditions were chosen to simulate testing of cognitively impaired observers. A new visual nystagmus stimulus was developed to test visual motion processing in the presence of incoherent motion noise.

  18. Probing feedforward and feedback contributions to awareness with visual masking and transcranial magnetic stimulation.

    PubMed

    Tapia, Evelina; Beck, Diane M

    2014-01-01

    A number of influential theories posit that visual awareness relies not only on the initial, stimulus-driven (i.e., feedforward) sweep of activation but also on recurrent feedback activity within and between brain regions. These theories of awareness draw heavily on data from masking paradigms in which visibility of one stimulus is reduced due to the presence of another stimulus. More recently transcranial magnetic stimulation (TMS) has been used to study the temporal dynamics of visual awareness. TMS over occipital cortex affects performance on visual tasks at distinct time points and in a manner that is comparable to visual masking. We draw parallels between these two methods and examine evidence for the neural mechanisms by which visual masking and TMS suppress stimulus visibility. Specifically, both methods have been proposed to affect feedforward as well as feedback signals when applied at distinct time windows relative to stimulus onset and as a result modify visual awareness. Most recent empirical evidence, moreover, suggests that while visual masking and TMS impact stimulus visibility comparably, the processes these methods affect may not be as similar as previously thought. In addition to reviewing both masking and TMS studies that examine feedforward and feedback processes in vision, we raise questions to guide future studies and further probe the necessary conditions for visual awareness.

  19. Time-resolved neuroimaging of visual short term memory consolidation by post-perceptual attention shifts.

    PubMed

    Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan

    2016-01-15

    Post-perceptual cues can enhance visual short term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered as well as the mechanisms by which post-perceptual selective attention enhances short term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100 ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300 ms or 850 ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e., sensory stimulus characteristics were available for up to 850 ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus, which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short term memory encoding. While N700 could reflect visual post-processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  1. Modality-dependent effect of motion information in sensory-motor synchronised tapping.

    PubMed

    Ono, Kentaro

    2018-05-14

    Synchronised action is important for everyday life. Generally, the auditory domain is more sensitive for coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remains unknown. This study compared tapping accuracy with a stationary or moving stimulus in both auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired audio-motor synchronisation, an advantage of auditory modality compared to visual modality still existed. These findings are likely the result of higher temporal resolution in the auditory domain, which is likely due to the physiological and structural differences in the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Probing feedforward and feedback contributions to awareness with visual masking and transcranial magnetic stimulation

    PubMed Central

    Tapia, Evelina; Beck, Diane M.

    2014-01-01

    A number of influential theories posit that visual awareness relies not only on the initial, stimulus-driven (i.e., feedforward) sweep of activation but also on recurrent feedback activity within and between brain regions. These theories of awareness draw heavily on data from masking paradigms in which visibility of one stimulus is reduced due to the presence of another stimulus. More recently transcranial magnetic stimulation (TMS) has been used to study the temporal dynamics of visual awareness. TMS over occipital cortex affects performance on visual tasks at distinct time points and in a manner that is comparable to visual masking. We draw parallels between these two methods and examine evidence for the neural mechanisms by which visual masking and TMS suppress stimulus visibility. Specifically, both methods have been proposed to affect feedforward as well as feedback signals when applied at distinct time windows relative to stimulus onset and as a result modify visual awareness. Most recent empirical evidence, moreover, suggests that while visual masking and TMS impact stimulus visibility comparably, the processes these methods affect may not be as similar as previously thought. In addition to reviewing both masking and TMS studies that examine feedforward and feedback processes in vision, we raise questions to guide future studies and further probe the necessary conditions for visual awareness. PMID:25374548

  3. Predictive information speeds up visual awareness in an individuation task by modulating threshold setting, not processing efficiency.

    PubMed

    De Loof, Esther; Van Opstal, Filip; Verguts, Tom

    2016-04-01

    Theories on visual awareness claim that predicted stimuli reach awareness faster than unpredicted ones. In the current study, we disentangle whether prior information about the upcoming stimulus affects visual awareness of stimulus location (i.e., individuation) by modulating processing efficiency or threshold setting. Analogous research on stimulus identification revealed that prior information modulates threshold setting. However, as identification and individuation are two functionally and neurally distinct processes, the mechanisms underlying identification cannot simply be extrapolated directly to individuation. The goal of this study was therefore to investigate how individuation is influenced by prior information about the upcoming stimulus. To do so, a drift diffusion model was fitted to estimate the processing efficiency and threshold setting for predicted versus unpredicted stimuli in a cued individuation paradigm. Participants were asked to locate a picture, following a cue that was congruent, incongruent or neutral with respect to the picture's identity. Pictures were individuated faster in the congruent and neutral condition compared to the incongruent condition. In the diffusion model analysis, the processing efficiency was not significantly different across conditions. However, the threshold setting was significantly higher following an incongruent cue compared to both congruent and neutral cues. Our results indicate that predictive information about the upcoming stimulus influences visual awareness by shifting the threshold for individuation rather than by enhancing processing efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
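    A minimal sketch (illustrative parameter values, not the fitted model from this study) of the distinction the drift diffusion analysis draws: "processing efficiency" corresponds to the drift rate and "threshold setting" to the decision boundary, so raising the boundary lengthens response times even when evidence accumulates equally efficiently. In Python:

      import numpy as np

      def diffusion_rt(drift, boundary, dt=0.001, noise_sd=1.0, seed=1):
          # Simulate a single diffusion-to-bound trial; return decision time in seconds.
          rng = np.random.default_rng(seed)
          x, t = 0.0, 0.0
          while abs(x) < boundary:
              x += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()
              t += dt
          return t

      # Same drift rate (efficiency), different boundaries (threshold setting):
      print(diffusion_rt(drift=2.0, boundary=1.0))  # congruent/neutral cue: lower threshold
      print(diffusion_rt(drift=2.0, boundary=1.5))  # incongruent cue: higher threshold, slower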

  4. Effect of stimulus size and luminance on the rod-, cone-, and melanopsin-mediated pupillary light reflex

    PubMed Central

    Park, Jason C.; McAnany, J. Jason

    2015-01-01

    This study determined if the pupillary light reflex (PLR) driven by brief stimulus presentations can be accounted for by the product of stimulus luminance and area (i.e., corneal flux density, CFD) under conditions biased toward the rod, cone, and melanopsin pathways. Five visually normal subjects participated in the study. Stimuli consisted of 1-s short- and long-wavelength flashes that spanned a large range of luminance and angular subtense. The stimuli were presented in the central visual field in the dark (rod and melanopsin conditions) and against a rod-suppressing short-wavelength background (cone condition). Rod- and cone-mediated PLRs were measured at the maximum constriction after stimulus onset whereas the melanopsin-mediated PLR was measured 5–7 s after stimulus offset. The rod- and melanopsin-mediated PLRs were well accounted for by CFD, such that doubling the stimulus luminance had the same effect on the PLR as doubling the stimulus area. Melanopsin-mediated PLRs were elicited only by short-wavelength, large (>16°) stimuli with luminance greater than 10 cd/m2, but when present, the melanopsin-mediated PLR was well accounted for by CFD. In contrast, CFD could not account for the cone-mediated PLR because the PLR was approximately independent of stimulus size but strongly dependent on stimulus luminance. These findings highlight important differences in how stimulus luminance and size combine to govern the PLR elicited by brief flashes under rod-, cone-, and melanopsin-mediated conditions. PMID:25788707
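    A minimal worked example (units assumed from the abstract: luminance in cd/m2, area in deg2) of the corneal flux density relation, under which doubling luminance and doubling area are interchangeable for the rod- and melanopsin-mediated PLR:

      def corneal_flux_density(luminance_cd_m2, area_deg2):
          # CFD is simply the product of stimulus luminance and stimulus area.
          return luminance_cd_m2 * area_deg2

      print(corneal_flux_density(10.0, 100.0))  # baseline stimulus
      print(corneal_flux_density(20.0, 100.0))  # luminance doubled
      print(corneal_flux_density(10.0, 200.0))  # area doubled: same CFD as doubling luminance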

  5. Medial Auditory Thalamic Stimulation as a Conditioned Stimulus for Eyeblink Conditioning in Rats

    ERIC Educational Resources Information Center

    Campolattaro, Matthew M.; Halverson, Hunter E.; Freeman, John H.

    2007-01-01

    The neural pathways that convey conditioned stimulus (CS) information to the cerebellum during eyeblink conditioning have not been fully delineated. It is well established that pontine mossy fiber inputs to the cerebellum convey CS-related stimulation for different sensory modalities (e.g., auditory, visual, tactile). Less is known about the…

  6. Left neglect dyslexia and the effect of stimulus duration.

    PubMed

    Arduino, Lisa S; Vallar, Giuseppe; Burani, Cristina

    2006-01-01

    The present study investigated the effects of the duration of the stimulus on the reading performance of right-brain-damaged patients with left neglect dyslexia. Three Italian patients read aloud words and nonwords, under conditions of unlimited time of stimulus exposure and of timed presentation. In the untimed condition, the majority of the patients' errors involved the left side of the letter string (i.e., neglect dyslexia errors). Conversely, in the timed condition, although the overall level of performance decreased, errors were more evenly distributed across the whole letter string (i.e., visual - nonlateralized - errors). This reduction of neglect errors with a reduced time of presentation of the stimulus may reflect the read out of elements of the letter string from a preserved visual storage component, such as iconic memory. Conversely, a time-unlimited presentation of the stimulus may bring about the rightward bias that characterizes the performance of neglect patients, possibly by a capture of the patients' attention by the final (rightward) letters of the string.

  7. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane.

    PubMed

    Rosenblatt, Steven David; Crane, Benjamin Thomas

    2015-01-01

    A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.

  8. Facilitation of listening comprehension by visual information under noisy listening condition

    NASA Astrophysics Data System (ADS)

    Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi

    2009-02-01

    Comprehension of a sentence was measured under a wide range of delay conditions between auditory and visual stimuli in an environment with low auditory clarity, with pink noise at levels of -10 dB and -15 dB. Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (= 132 ms) or less, that the image was not helpful for comprehension when the delay was 8 frames (= 264 ms) or more, and that in some cases of the largest delay (32 frames) the video image interfered with comprehension.

  9. Dynamic Network Communication in the Human Functional Connectome Predicts Perceptual Variability in Visual Illusion.

    PubMed

    Wang, Zhiwei; Zeljic, Kristina; Jiang, Qinying; Gu, Yong; Wang, Wei; Wang, Zheng

    2018-01-01

    Ubiquitous variability between individuals in visual perception is difficult to standardize and has thus essentially been ignored. Here we construct a quantitative psychophysical measure of illusory rotary motion based on the Pinna-Brelstaff figure (PBF) in 73 healthy volunteers and investigate the neural circuit mechanisms underlying perceptual variation using functional magnetic resonance imaging (fMRI). We acquired fMRI data from a subset of 42 subjects during spontaneous and 3 stimulus conditions: expanding PBF, expanding modified-PBF (illusion-free) and expanding modified-PBF with physical rotation. Brain-wide graph analysis of stimulus-evoked functional connectivity patterns yielded a functionally segregated architecture containing 3 discrete hierarchical networks, commonly shared between rest and stimulation conditions. Strikingly, communication efficiency and strength between 2 networks predominantly located in visual areas robustly predicted individual perceptual differences solely in the illusory stimulus condition. These unprecedented findings demonstrate that stimulus-dependent, not spontaneous, dynamic functional integration between distributed brain networks contributes to perceptual variability in humans. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. Establishing Auditory-Tactile-Visual Equivalence Classes in Children with Autism and Developmental Delays

    ERIC Educational Resources Information Center

    Mullen, Stuart; Dixon, Mark R.; Belisle, Jordan; Stanley, Caleb

    2017-01-01

    The current study sought to evaluate the efficacy of a stimulus equivalence training procedure in establishing auditory-tactile-visual stimulus classes with 2 children with autism and developmental delays. Participants were exposed to vocal-tactile (A-B) and tactile-picture (B-C) conditional discrimination training and were tested for the…

  11. Role of somatosensory and vestibular cues in attenuating visually induced human postural sway

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Benolken, Martha S.

    1993-01-01

    The purpose was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomenon was observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal and vestibular loss subjects were nearly identical, implying that (1) normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) vestibular loss subjects did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system 'gain' was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost three times greater than the amplitude of the visual stimulus in normals and vestibular loss subjects. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about four in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied that (1) the vestibular loss subjects did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system 'gain' were not used to compensate for a vestibular deficit, and (2) the threshold for the use of vestibular cues in normals was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.

  12. Effects of Temporal Features and Order on the Apparent duration of a Visual Stimulus

    PubMed Central

    Bruno, Aurelio; Ayhan, Inci; Johnston, Alan

    2012-01-01

    The apparent duration of a visual stimulus has been shown to be influenced by its speed. For low speeds, apparent duration increases linearly with stimulus speed. This effect has been ascribed to the number of changes that occur within a visual interval. Accordingly, a higher number of changes should produce an increase in apparent duration. In order to test this prediction, we asked subjects to compare the relative duration of a 10-Hz drifting comparison stimulus with a standard stimulus that contained a different number of changes in different conditions. The standard could be static, drifting at 10 Hz, or mixed (a combination of variable duration static and drifting intervals). In this last condition the number of changes was intermediate between the static and the continuously drifting stimulus. For all standard durations, the mixed stimulus looked significantly compressed (∼20% reduction) relative to the drifting stimulus. However, no difference emerged between the static (that contained no changes) and the mixed stimuli (which contained an intermediate number of changes). We also observed that when the standard was displayed first, it appeared compressed relative to when it was displayed second with a magnitude that depended on standard duration. These results are at odds with a model of time perception that simply reflects the number of temporal features within an interval in determining the perceived passing of time. PMID:22461778

  13. Eye movements and the span of the effective stimulus in visual search.

    PubMed

    Bertera, J H; Rayner, K

    2000-04-01

    The span of the effective stimulus during visual search through an unstructured alphanumeric array was investigated by using eye-contingent-display changes while the subjects searched for a target letter. In one condition, a window exposing the search array moved in synchrony with the subjects' eye movements, and the size of the window was varied. Performance reached asymptotic levels when the window was 5 degrees. In another condition, a foveal mask moved in synchrony with each eye movement, and the size of the mask was varied. The foveal mask conditions were much more detrimental to search behavior than the window conditions, indicating the importance of foveal vision during search. The size of the array also influenced performance, but performance reached asymptote for all array sizes tested at the same window size, and the effect of the foveal mask was the same for all array sizes. The results indicate that both acuity and difficulty of the search task influenced the span of the effective stimulus during visual search.

  14. Suppressed visual looming stimuli are not integrated with auditory looming signals: Evidence from continuous flash suppression.

    PubMed

    Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond

    2015-01-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  15. Stimulus-related activity during conditional associations in monkey perirhinal cortex neurons depends on upcoming reward outcome.

    PubMed

    Ohyama, Kaoru; Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Shidara, Munetaka; Sato, Chikara

    2012-11-28

    Acquiring the significance of events based on reward-related information is critical for animals to survive and to conduct social activities. The importance of the perirhinal cortex for reward-related information processing has been suggested. To examine whether or not neurons in this cortex represent reward information flexibly when a visual stimulus indicates either a rewarded or unrewarded outcome, neuronal activity in the macaque perirhinal cortex was examined using a conditional-association cued-reward task. The task design allowed us to study how the neuronal responses depended on the animal's prediction of whether it would or would not be rewarded. Two visual stimuli, a color stimulus as Cue1 followed by a pattern stimulus as Cue2, were sequentially presented. Each pattern stimulus was conditionally associated with both rewarded and unrewarded outcomes depending on the preceding color stimulus. We found activity that depended upon the two reward conditions during Cue2, i.e., during pattern stimulus presentation. This response appeared after the response dependent upon the image identity of Cue2. A response delineating a specific cue sequence also appeared between the responses dependent upon the identity of Cue2 and the reward conditions. Thus, when Cue1 sets the context for whether or not Cue2 indicates a reward, this region represents the meaning of Cue2, i.e., the reward conditions, independent of the identity of Cue2. These results suggest that neurons in the perirhinal cortex do more than associate a single stimulus with a reward to achieve flexible representations of reward information.

  16. Stimulus change as a factor in response maintenance with free food available.

    PubMed Central

    Osborne, S R; Shelby, M

    1975-01-01

    Rats bar pressed for food on a reinforcement schedule in which every response was reinforced, even though a dish of pellets was present. Initially, auditory and visual stimuli accompanied response-produced food presentation. With stimulus feedback as an added consequence of bar pressing, responding was maintained in the presence of free food; without stimulus feedback, responding decreased to a low level. Auditory feedback maintained slightly more responding than did visual feedback, and both together maintained more responding than did either separately. Almost no responding occurred when the only consequence of bar pressing was stimulus feedback. The data indicated conditioned and sensory reinforcement effects of response-produced stimulus feedback. PMID:1202121

  17. Compound Stimulus Extinction Reduces Spontaneous Recovery in Humans

    ERIC Educational Resources Information Center

    Coelho, Cesar A. O.; Dunsmoor, Joseph E.; Phelps, Elizabeth A.

    2015-01-01

    Fear-related behaviors are prone to relapse following extinction. We tested in humans a compound extinction design ("deepened extinction") shown in animal studies to reduce post-extinction fear recovery. Adult subjects underwent fear conditioning to a visual and an auditory conditioned stimulus (CSA and CSB, respectively) separately…

  18. Variability and Correlations in Primary Visual Cortical Neurons Driven by Fixational Eye Movements

    PubMed Central

    McFarland, James M.; Cumming, Bruce G.

    2016-01-01

    The ability to distinguish between elements of a sensory neuron's activity that are stimulus independent versus driven by the stimulus is critical for addressing many questions in systems neuroscience. This is typically accomplished by measuring neural responses to repeated presentations of identical stimuli and identifying the trial-variable components of the response as noise. In awake primates, however, small “fixational” eye movements (FEMs) introduce uncontrolled trial-to-trial differences in the visual stimulus itself, potentially confounding this distinction. Here, we describe novel analytical methods that directly quantify the stimulus-driven and stimulus-independent components of visual neuron responses in the presence of FEMs. We apply this approach, combined with precise model-based eye tracking, to recordings from primary visual cortex (V1), finding that standard approaches that ignore FEMs typically miss more than half of the stimulus-driven neural response variance, creating substantial biases in measures of response reliability. We show that these effects are likely not isolated to the particular experimental conditions used here, such as the choice of visual stimulus or spike measurement time window, and thus will be a more general problem for V1 recordings in awake primates. We also demonstrate that measurements of the stimulus-driven and stimulus-independent correlations among pairs of V1 neurons can be greatly biased by FEMs. These results thus illustrate the potentially dramatic impact of FEMs on measures of signal and noise in visual neuron activity and also demonstrate a novel approach for controlling for these eye-movement-induced effects. SIGNIFICANCE STATEMENT Distinguishing between the signal and noise in a sensory neuron's activity is typically accomplished by measuring neural responses to repeated presentations of an identical stimulus. For recordings from the visual cortex of awake animals, small “fixational” eye movements (FEMs) inevitably introduce trial-to-trial variability in the visual stimulus, potentially confounding such measures. Here, we show that FEMs often have a dramatic impact on several important measures of response variability for neurons in primary visual cortex. We also present an analytical approach for quantifying signal and noise in visual neuron activity in the presence of FEMs. These results thus highlight the importance of controlling for FEMs in studies of visual neuron function, and demonstrate novel methods for doing so. PMID:27277801

  19. Audio-visual synchrony and spatial attention enhance processing of dynamic visual stimulation independently and in parallel: A frequency-tagging study.

    PubMed

    Covic, Amra; Keitel, Christian; Porcu, Emanuele; Schröger, Erich; Müller, Matthias M

    2017-11-01

    The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. Gabor patches further "pulsed" (i.e. showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse-rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed tracking the neural processing of simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when respective stimulus positions were attended vs. unattended. Strikingly, unique patterns of effects on pulse- and flicker-driven SSRs indicated that spatial attention and audio-visual synchrony facilitated early visual processing in parallel and via different cortical processes. We found attention effects to resemble the classical top-down gain effect, facilitating both flicker- and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e. pulse-driven SSRs), possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.
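
    As a rough illustration of how frequency-tagged steady-state responses can be quantified in the spectral domain, the sketch below reads out the amplitude at one tagged frequency from a single-channel epoch. The sampling rate, epoch length and normalisation are assumptions made for the example, not the authors' pipeline.

    ```python
    import numpy as np

    def ssr_amplitude(epoch, fs, tagged_freq):
        """Amplitude of a steady-state response at one tagged frequency.
        epoch: 1-D voltage trace spanning an integer number of stimulation cycles.
        fs: sampling rate in Hz."""
        amp = 2 * np.abs(np.fft.rfft(epoch)) / epoch.size    # single-sided amplitude spectrum
        freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
        return amp[np.argmin(np.abs(freqs - tagged_freq))]   # nearest frequency bin

    # Toy example: a noisy 3.14 Hz "pulse-driven" response sampled at 500 Hz for 50 s
    fs = 500
    t = np.arange(0, 50, 1 / fs)
    epoch = 0.5 * np.sin(2 * np.pi * 3.14 * t) + np.random.randn(t.size)
    print(ssr_amplitude(epoch, fs, 3.14))   # roughly 0.5 plus a small noise contribution
    ```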

  20. Selection for associative learning of colour stimuli reveals correlated evolution of this learning ability across multiple stimuli and rewards.

    PubMed

    Liefting, Maartje; Hoedjes, Katja M; Lann, Cécile Le; Smid, Hans M; Ellers, Jacintha

    2018-05-16

    We are only starting to understand how variation in cognitive ability can result from local adaptations to environmental conditions. A major question in this regard is to what extent selection on cognitive ability in a specific context affects that ability in general through correlated evolution. To address this question we performed artificial selection on visual associative learning in female Nasonia vitripennis wasps. Using appetitive conditioning in which a visual stimulus was offered in association with a host reward, the ability to learn visual associations was enhanced within 10 generations of selection. To test for correlated evolution affecting this form of learning, the ability to readily form learned associations in females was also tested using an olfactory instead of a visual stimulus in the appetitive conditioning. Additionally, we assessed whether the improved associative learning ability was expressed across sexes by colour-conditioning males with a mating reward. Both females and males from the selected lines consistently demonstrated an increased associative learning ability compared to the control lines, independent of learning context or conditioned stimulus. No difference in relative volume of brain neuropils was detected between the selected and control lines. This article is protected by copyright. All rights reserved.

  1. On the use of continuous flash suppression for the study of visual processing outside of awareness

    PubMed Central

    Yang, Eunice; Brascamp, Jan; Kang, Min-Suk; Blake, Randolph

    2014-01-01

    The interocular suppression technique termed continuous flash suppression (CFS) has become an immensely popular tool for investigating visual processing outside of awareness. The emerging picture from studies using CFS is that extensive processing of a visual stimulus, including its semantic and affective content, occurs despite suppression from awareness of that stimulus by CFS. However, the current implementation of CFS in many studies examining processing outside of awareness has several drawbacks that may be improved upon for future studies using CFS. In this paper, we address some of those shortcomings, particularly ones that affect the assessment of unawareness during CFS, and ones to do with the use of “visible” conditions that are often included as a comparison to a CFS condition. We also discuss potential biases in stimulus processing as a result of spatial attention and feature-selective suppression. We suggest practical guidelines that minimize the effects of those limitations in using CFS to study visual processing outside of awareness. PMID:25071685

  2. Dissociable effects of inter-stimulus interval and presentation duration on rapid face categorization.

    PubMed

    Retter, Talia L; Jiang, Fang; Webster, Michael A; Rossion, Bruno

    2018-04-01

    Fast periodic visual stimulation combined with electroencephalography (FPVS-EEG) has unique sensitivity and objectivity in measuring rapid visual categorization processes. It constrains image processing time by presenting stimuli rapidly through brief stimulus presentation durations and short inter-stimulus intervals. However, the selective impact of these temporal parameters on visual categorization is largely unknown. Here, we presented natural images of objects at a rate of 10 or 20 per second (10 or 20 Hz), with faces appearing once per second (1 Hz), leading to two distinct frequency-tagged EEG responses. Twelve observers were tested with three squarewave image presentation conditions: 1) with an ISI, a traditional 50% duty cycle at 10 Hz (50-ms stimulus duration separated by a 50-ms ISI); 2) removing the ISI and matching the rate, a 100% duty cycle at 10 Hz (100-ms duration with 0-ms ISI); 3) removing the ISI and matching the stimulus presentation duration, a 100% duty cycle at 20 Hz (50-ms duration with 0-ms ISI). The face categorization response was significantly decreased in the 20 Hz 100% condition. The conditions at 10 Hz showed similar face-categorization responses, peaking maximally over the right occipito-temporal (ROT) cortex. However, the onset of the 10 Hz 100% response was delayed by about 20 ms over the ROT region relative to the 10 Hz 50% condition, likely due to immediate forward-masking by preceding images. Taken together, these results help to interpret how the FPVS-EEG paradigm sets temporal constraints on visual image categorization. Copyright © 2018 Elsevier Ltd. All rights reserved.
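
    One common way to quantify such a frequency-tagged categorization response is to sum spectral amplitude over harmonics of the face-presentation rate while skipping harmonics of the base stimulation rate. The sketch below illustrates that idea under an assumed frequency resolution and harmonic count; it is not necessarily the exact analysis used in this study.

    ```python
    import numpy as np

    def face_categorization_response(amplitude_spectrum, freqs, face_rate=1.0,
                                     base_rate=10.0, n_harmonics=12):
        """Sum spectral amplitude over harmonics of the face-presentation rate,
        skipping harmonics that coincide with the general (base) stimulation rate."""
        total = 0.0
        for k in range(1, n_harmonics + 1):
            f = k * face_rate
            if np.isclose(f % base_rate, 0.0):     # 10 Hz, 20 Hz, ... belong to the base response
                continue
            total += amplitude_spectrum[np.argmin(np.abs(freqs - f))]
        return total

    # Toy spectrum with 0.125 Hz resolution and an injected 1 Hz face response
    freqs = np.arange(0, 30, 0.125)
    spectrum = np.full(freqs.size, 0.02)
    spectrum[np.argmin(np.abs(freqs - 1.0))] += 1.0
    print(face_categorization_response(spectrum, freqs))
    ```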

  3. Optical images of visible and invisible percepts in the primary visual cortex of primates

    PubMed Central

    Macknik, Stephen L.; Haglund, Michael M.

    1999-01-01

    We optically imaged a visual masking illusion in primary visual cortex (area V-1) of rhesus monkeys to ask whether activity in the early visual system more closely reflects the physical stimulus or the generated percept. Visual illusions can be a powerful way to address this question because they have the benefit of dissociating the stimulus from perception. We used an illusion in which a flickering target (a bar oriented in visual space) is rendered invisible by two counter-phase flickering bars, called masks, which flank and abut the target. The target and masks, when shown separately, each generated correlated activity on the surface of the cortex. During the illusory condition, however, optical signals generated in the cortex by the target disappeared although the image of the masks persisted. The optical image thus was correlated with perception but not with the physical stimulus. PMID:10611363

  4. Role of somatosensory and vestibular cues in attenuating visually induced human postural sway

    NASA Technical Reports Server (NTRS)

    Peterka, R. J.; Benolken, M. S.

    1995-01-01

    The purpose of this study was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal subjects and subjects experiencing vestibular loss were nearly identical, implying (1) that normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) that subjects with vestibular loss did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system "gain" was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost 3 times greater than the amplitude of the visual stimulus in normal subjects and subjects with vestibular loss. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about 4 in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied (1) that the subjects experiencing vestibular loss did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system "gain" were not used to compensate for a vestibular deficit, and (2) that the threshold for the use of vestibular cues in normal subjects was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.
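
    Purely to illustrate the saturation behaviour described above, a toy stimulus-response function is sketched below. It is not the authors' control-system model; the gain and the assumed 1/frequency fall-off of the saturation ceiling are invented for the example.

    ```python
    import numpy as np

    def visually_induced_sway(stimulus_amplitude_deg, stimulus_freq_hz,
                              gain=0.9, ceiling_at_0p1hz_deg=4.0):
        """Toy model: sway grows linearly with visual stimulus amplitude until it
        hits a saturation ceiling that shrinks as stimulus frequency increases."""
        ceiling = ceiling_at_0p1hz_deg * (0.1 / stimulus_freq_hz)   # assumed 1/f fall-off
        return np.minimum(gain * stimulus_amplitude_deg, ceiling)

    amps = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    print(visually_induced_sway(amps, 0.1))   # saturates at 4.0 deg
    print(visually_induced_sway(amps, 0.2))   # lower ceiling at the higher frequency
    ```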

  5. Tilt and Translation Motion Perception during Pitch Tilt with Visual Surround Translation

    NASA Technical Reports Server (NTRS)

    O'Sullivan, Brita M.; Harm, Deborah L.; Reschke, Millard F.; Wood, Scott J.

    2006-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Previous studies suggest that multisensory integration is critical for discriminating linear accelerations arising from tilt and translation head motion. Visual input is especially important at low frequencies where canal input is declining. The NASA Tilt Translation Device (TTD) was designed to recreate postflight orientation disturbances by exposing subjects to matching tilt self-motion with conflicting visual surround translation. Previous studies have demonstrated that brief exposures to pitch tilt with fore-aft visual surround translation produced changes in compensatory vertical eye movement responses, postural equilibrium, and motion sickness symptoms. Adaptation appeared greatest with visual scene motion leading (versus lagging) the tilt motion, and the adaptation time constant appeared to be approximately 30 min. The purpose of this study was to compare motion perception when the visual surround translation was in-phase versus out-of-phase with pitch tilt. The in-phase stimulus presented visual surround motion one would experience if the linear acceleration was due to fore-aft self-translation within a stationary surround, while the out-of-phase stimulus had the visual scene motion leading the tilt by 90 deg as previously used. The tilt stimuli in these conditions were asymmetrical, ranging from an upright orientation to 10 deg pitch back. Another objective of the study was to compare motion perception with the in-phase stimulus when the tilts were asymmetrical relative to upright (0 to 10 deg back) versus symmetrical (10 deg forward to 10 deg back). Twelve subjects (6M, 6F, 22-55 yrs) were tested during 3 sessions separated by at least one week. During each of the three sessions (out-of-phase asymmetrical, in-phase asymmetrical, in-phase symmetrical), subjects were exposed to visual surround translation synchronized with pitch tilt at 0.1 Hz for a total of 30 min. Tilt and translation motion perception was obtained from verbal reports and a joystick mounted on a linear stage. Horizontal vergence and vertical eye movements were obtained with a binocular video system. Responses were also obtained during darkness before and following 15 min and 30 min of visual surround translation. Each of the three stimulus conditions involving visual surround translation elicited a significantly reduced sense of perceived tilt and strong linear vection (perceived translation) compared to pre-exposure tilt stimuli in darkness. This increase in perceived translation with reduction in tilt perception was also present in darkness following 15 and 30 min exposures, provided the tilt stimuli were not interrupted. Although not significant, there was a trend for the in-phase asymmetrical stimulus to elicit a stronger sense of both translation and tilt than the out-of-phase asymmetrical stimulus. Surprisingly, the in-phase asymmetrical stimulus also tended to elicit a stronger sense of peak-to-peak translation than the in-phase symmetrical stimulus, even though the range of linear acceleration during the symmetrical stimulus was twice that of the asymmetrical stimulus. These results are consistent with the hypothesis that the central nervous system resolves the ambiguity of inertial motion sensory cues by integrating inputs from visual, vestibular, and somatosensory systems.

  6. Role of inter-hemispheric transfer in generating visual evoked potentials in V1-damaged brain hemispheres

    PubMed Central

    Kavcic, Voyko; Triplett, Regina L.; Das, Anasuya; Martin, Tim; Huxlin, Krystel R.

    2015-01-01

    Partial cortical blindness is a visual deficit caused by unilateral damage to the primary visual cortex, a condition previously considered beyond hopes of rehabilitation. However, recent data demonstrate that patients may recover both simple and global motion discrimination following intensive training in their blind field. The present experiments characterized motion-induced neural activity of cortically blind (CB) subjects prior to the onset of visual rehabilitation. This was done to provide information about visual processing capabilities available to mediate training-induced visual improvements. Visual Evoked Potentials (VEPs) were recorded from two experimental groups consisting of 9 CB subjects and 9 age-matched, visually-intact controls. VEPs were collected following lateralized stimulus presentation to each of the 4 visual field quadrants. VEP waveforms were examined for both stimulus-onset (SO) and motion-onset (MO) related components in postero-lateral electrodes. While stimulus presentation to intact regions of the visual field elicited normal SO-P1, SO-N1, SO-P2 and MO-N2 amplitudes and latencies in contralateral brain regions of CB subjects, these components were not observed contralateral to stimulus presentation in blind quadrants of the visual field. In damaged brain hemispheres, SO-VEPs were only recorded following stimulus presentation to intact visual field quadrants, via inter-hemispheric transfer. MO-VEPs were only recorded from damaged left brain hemispheres, possibly reflecting a native left/right asymmetry in inter-hemispheric connections. The present findings suggest that damaged brain hemispheres contain areas capable of responding to visual stimulation. However, in the absence of training or rehabilitation, these areas only generate detectable VEPs in response to stimulation of the intact hemifield of vision. PMID:25575450

  7. Release of inattentional blindness by high working memory load: elucidating the relationship between working memory and selective attention.

    PubMed

    de Fockert, Jan W; Bremner, Andrew J

    2011-12-01

    An unexpected stimulus often remains unnoticed if attention is focused elsewhere. This inattentional blindness has been shown to be increased under conditions of high memory load. Here we show that increasing working memory load can also have the opposite effect of reducing inattentional blindness (i.e., improving stimulus detection) if stimulus detection is competing for attention with a concurrent visual task. Participants were required to judge which of two lines was the longer while holding in working memory either one digit (low load) or six digits (high load). An unexpected visual stimulus was presented once alongside the line judgment task. Detection of the unexpected stimulus was significantly improved under conditions of higher working memory load. This improvement in performance prompts the striking conclusion that an effect of cognitive load is to increase attentional spread, thereby enhancing our ability to detect perceptual stimuli to which we would normally be inattentionally blind under less taxing cognitive conditions. We discuss the implications of these findings for our understanding of the relationship between working memory and selective attention. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Graded Neuronal Modulations Related to Visual Spatial Attention.

    PubMed

    Mayo, J Patrick; Maunsell, John H R

    2016-05-11

    Studies of visual attention in monkeys typically measure neuronal activity when the stimulus event to be detected occurs at a cued location versus when it occurs at an uncued location. But this approach does not address how neuronal activity changes relative to conditions where attention is unconstrained by cueing. Human psychophysical studies have used neutral cueing conditions and found that neutrally cued behavioral performance is generally intermediate to that of cued and uncued conditions (Posner et al., 1978; Mangun and Hillyard, 1990; Montagna et al., 2009). To determine whether the neuronal correlates of visual attention during neutral cueing are similarly intermediate, we trained macaque monkeys to detect changes in stimulus orientation that were more likely to occur at one location (cued) than another (uncued), or were equally likely to occur at either stimulus location (neutral). Consistent with human studies, performance was best when the location was cued, intermediate when both locations were neutrally cued, and worst when the location was uncued. Neuronal modulations in visual area V4 were also graded as a function of cue validity and behavioral performance. By recording from both hemispheres simultaneously, we investigated the possibility of switching attention between stimulus locations during neutral cueing. The results failed to support a unitary "spotlight" of attention. Overall, our findings indicate that attention-related changes in V4 are graded to accommodate task demands. Studies of the neuronal correlates of attention in monkeys typically use visual cues to manipulate where attention is focused ("cued" vs "uncued"). Human psychophysical studies often also include neutrally cued trials to study how attention naturally varies between points of interest. But the neuronal correlates of this neutral condition are unclear. We measured behavioral performance and neuronal activity in cued, uncued, and neutrally cued blocks of trials. Behavioral performance and neuronal responses during neutral cueing were intermediate to those of the cued and uncued conditions. We found no signatures of a single mechanism of attention that switches between stimulus locations. Thus, attention-related changes in neuronal activity are largely hemisphere-specific and graded according to task demands. Copyright © 2016 the authors 0270-6474/16/365353-09$15.00/0.

  9. Graded Neuronal Modulations Related to Visual Spatial Attention

    PubMed Central

    Maunsell, John H. R.

    2016-01-01

    Studies of visual attention in monkeys typically measure neuronal activity when the stimulus event to be detected occurs at a cued location versus when it occurs at an uncued location. But this approach does not address how neuronal activity changes relative to conditions where attention is unconstrained by cueing. Human psychophysical studies have used neutral cueing conditions and found that neutrally cued behavioral performance is generally intermediate to that of cued and uncued conditions (Posner et al., 1978; Mangun and Hillyard, 1990; Montagna et al., 2009). To determine whether the neuronal correlates of visual attention during neutral cueing are similarly intermediate, we trained macaque monkeys to detect changes in stimulus orientation that were more likely to occur at one location (cued) than another (uncued), or were equally likely to occur at either stimulus location (neutral). Consistent with human studies, performance was best when the location was cued, intermediate when both locations were neutrally cued, and worst when the location was uncued. Neuronal modulations in visual area V4 were also graded as a function of cue validity and behavioral performance. By recording from both hemispheres simultaneously, we investigated the possibility of switching attention between stimulus locations during neutral cueing. The results failed to support a unitary “spotlight” of attention. Overall, our findings indicate that attention-related changes in V4 are graded to accommodate task demands. SIGNIFICANCE STATEMENT Studies of the neuronal correlates of attention in monkeys typically use visual cues to manipulate where attention is focused (“cued” vs “uncued”). Human psychophysical studies often also include neutrally cued trials to study how attention naturally varies between points of interest. But the neuronal correlates of this neutral condition are unclear. We measured behavioral performance and neuronal activity in cued, uncued, and neutrally cued blocks of trials. Behavioral performance and neuronal responses during neutral cueing were intermediate to those of the cued and uncued conditions. We found no signatures of a single mechanism of attention that switches between stimulus locations. Thus, attention-related changes in neuronal activity are largely hemisphere-specific and graded according to task demands. PMID:27170131

  10. Solid shape discrimination from vision and haptics: natural objects (Capsicum annuum) and Gibson's "feelies".

    PubMed

    Norman, J Farley; Phillips, Flip; Holmin, Jessica S; Norman, Hideko F; Beers, Amanda M; Boswell, Alexandria M; Cheeseman, Jacob R; Stethen, Angela G; Ronning, Cecilia

    2012-10-01

    A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.

  11. On the role of covarying functions in stimulus class formation and transfer of function.

    PubMed Central

    Markham, Rebecca G; Markham, Michael R

    2002-01-01

    This experiment investigated whether directly trained covarying functions are necessary for stimulus class formation and transfer of function in humans. Initial class training was designed to establish two respondent-based stimulus classes by pairing two visual stimuli with shock and two other visual stimuli with no shock. Next, two operant discrimination functions were trained to one stimulus of each putative class. The no-shock group received the same training and testing in all phases, except no stimuli were ever paired with shock. The data indicated that skin conductance response conditioning did not occur for the shock groups or for the no-shock group. Tests showed transfer of the established discriminative functions, however, only for the shock groups, indicating the formation of two stimulus classes only for those participants who received respondent class training. The results suggest that transfer of function does not depend on first covarying the stimulus class functions. PMID:12507017

  12. Aural, visual, and pictorial stimulus formats in false recall.

    PubMed

    Beauchamp, Heather M

    2002-12-01

    The present investigation is an initial simultaneous examination of the influence of three stimulus formats on false memories. Several pilot tests were conducted to develop new category associate stimulus lists. 73 women and 26 men (M age=21.1 yr.) were in one of three conditions: they either heard words, were shown words, or were shown pictures highly related to critical nonpresented items. As expected, recall of critical nonpresented stimuli was significantly greater for aural lists than for visually presented words and pictorial images. These findings demonstrate that the accuracy of memory is influenced by the format of the information encoded.

  13. The Neural Basis of Taste-visual Modal Conflict Control in Appetitive and Aversive Gustatory Context.

    PubMed

    Xiao, Xiao; Dupuis-Roy, Nicolas; Jiang, Jun; Du, Xue; Zhang, Mingmin; Zhang, Qinglin

    2018-02-21

    The functional magnetic resonance imaging (fMRI) technique was used to investigate brain activations related to conflict control in a taste-visual cross-modal pairing task. On each trial, participants had to decide whether the taste of a gustatory stimulus matched or did not match the expected taste of the food item depicted in an image. There were four conditions: Negative match (NM; sour gustatory stimulus and image of sour food), negative mismatch (NMM; sour gustatory stimulus and image of sweet food), positive match (PM; sweet gustatory stimulus and image of sweet food), positive mismatch (PMM; sweet gustatory stimulus and image of sour food). Blood oxygenation level-dependent (BOLD) contrasts between the NMM and the NM conditions revealed an increased activity in the middle frontal gyrus (MFG) (BA 6), the lingual gyrus (LG) (BA 18), and the postcentral gyrus. Furthermore, the NMM minus NM BOLD differences observed in the MFG were correlated with the NMM minus NM differences in response time. These activations were specifically associated with conflict control during the aversive gustatory stimulation. BOLD contrasts between the PMM and the PM condition revealed no significant positive activation, which supported the hypothesis that the human brain is especially sensitive to aversive stimuli. Altogether, these results suggest that the MFG is associated with the taste-visual cross-modal conflict control. A possible role of the LG as an information conflict detector at an early perceptual stage is further discussed, along with a possible involvement of the postcentral gyrus in the processing of the taste-visual cross-modal sensory contrast. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  14. Roles of octopaminergic and dopaminergic neurons in appetitive and aversive memory recall in an insect.

    PubMed

    Mizunami, Makoto; Unoki, Sae; Mori, Yasuhiro; Hirashima, Daisuke; Hatano, Ai; Matsumoto, Yukihisa

    2009-08-04

    In insect classical conditioning, octopamine (the invertebrate counterpart of noradrenaline) or dopamine has been suggested to mediate reinforcing properties of appetitive or aversive unconditioned stimulus, respectively. However, the roles of octopaminergic and dopaminergic neurons in memory recall have remained unclear. We studied the roles of octopaminergic and dopaminergic neurons in appetitive and aversive memory recall in olfactory and visual conditioning in crickets. We found that pharmacological blockade of octopamine and dopamine receptors impaired appetitive and aversive memory recall, respectively, thereby suggesting that activation of octopaminergic and dopaminergic neurons and the resulting release of octopamine and dopamine are needed for appetitive and aversive memory recall, respectively. On the basis of this finding, we propose a new model in which it is assumed that two types of synaptic connections are formed by conditioning and are activated during memory recall, one type being connections from neurons representing conditioned stimulus to neurons inducing conditioned response and the other being connections from neurons representing conditioned stimulus to octopaminergic or dopaminergic neurons representing appetitive or aversive unconditioned stimulus, respectively. The former is called 'stimulus-response connection' and the latter is called 'stimulus-stimulus connection' by theorists studying classical conditioning in higher vertebrates. Our model predicts that pharmacological blockade of octopamine or dopamine receptors during the first stage of second-order conditioning does not impair second-order conditioning, because it impairs the formation of the stimulus-response connection but not the stimulus-stimulus connection. The results of our study with cross-modal second-order conditioning were in full accordance with this prediction. We suggest that insect classical conditioning involves the formation of two kinds of memory traces, which correspond to the stimulus-stimulus and stimulus-response connections. This is the first study to suggest that classical conditioning in insects involves, as does classical conditioning in higher vertebrates, the formation of stimulus-stimulus connection and its activation for memory recall, which are often called cognitive processes.

  15. Cortical networks involved in visual awareness independent of visual attention.

    PubMed

    Webb, Taylor W; Igelström, Kajsa M; Schurger, Aaron; Graziano, Michael S A

    2016-11-29

    It is now well established that visual attention, as measured with standard spatial attention tasks, and visual awareness, as measured by report, can be dissociated. It is possible to attend to a stimulus with no reported awareness of the stimulus. We used a behavioral paradigm in which people were aware of a stimulus in one condition and unaware of it in another condition, but the stimulus drew a similar amount of spatial attention in both conditions. The paradigm allowed us to test for brain regions active in association with awareness independent of level of attention. Participants performed the task in an MRI scanner. We looked for brain regions that were more active in the aware than the unaware trials. The largest cluster of activity was obtained in the temporoparietal junction (TPJ) bilaterally. Local independent component analysis (ICA) revealed that this activity contained three distinct, but overlapping, components: a bilateral, anterior component; a left dorsal component; and a right dorsal component. These components had brain-wide functional connectivity that partially overlapped the ventral attention network and the frontoparietal control network. In contrast, no significant activity in association with awareness was found in the banks of the intraparietal sulcus, a region connected to the dorsal attention network and traditionally associated with attention control. These results show the importance of separating awareness and attention when testing for cortical substrates. They are also consistent with a recent proposal that awareness is associated with ventral attention areas, especially in the TPJ.

  16. Ventral Lateral Geniculate Input to the Medial Pons Is Necessary for Visual Eyeblink Conditioning in Rats

    ERIC Educational Resources Information Center

    Halverson, Hunter E.; Freeman, John H.

    2010-01-01

    The conditioned stimulus (CS) pathway that is necessary for visual delay eyeblink conditioning was investigated in the current study. Rats were initially given eyeblink conditioning with stimulation of the ventral nucleus of the lateral geniculate (LGNv) as the CS followed by conditioning with light and tone CSs in separate training phases.…

  17. Disruption of visual awareness during the attentional blink is reflected by selective disruption of late-stage neural processing

    PubMed Central

    Harris, Joseph A.; McMahon, Alex R.; Woldorff, Marty G.

    2015-01-01

    Any information represented in the brain holds the potential to influence behavior. It is therefore of broad interest to determine the extent and quality of neural processing of stimulus input that occurs with and without awareness. The attentional blink is a useful tool for dissociating neural and behavioral measures of perceptual visual processing across conditions of awareness. The extent of higher-order visual information beyond basic sensory signaling that is processed during the attentional blink remains controversial. To determine what neural processing at the level of visual-object identification occurs in the absence of awareness, electrophysiological responses to images of faces and houses were recorded both within and outside of the attentional blink period during a rapid serial visual presentation (RSVP) stream. Electrophysiological results were sorted according to behavioral performance (correctly identified targets versus missed targets) within these blink and non-blink periods. An early index of face-specific processing (the N170, 140–220 ms post-stimulus) was observed regardless of whether the subject demonstrated awareness of the stimulus, whereas a later face-specific effect with the same topographic distribution (500–700 ms post-stimulus) was only seen for accurate behavioral discrimination of the stimulus content. The present findings suggest a multi-stage process of object-category processing, with only the later phase being associated with explicit visual awareness. PMID:23859644

  18. Frontal brain deactivation during a non-verbal cognitive judgement bias test in sheep.

    PubMed

    Guldimann, Kathrin; Vögeli, Sabine; Wolf, Martin; Wechsler, Beat; Gygax, Lorenz

    2015-02-01

    Animal welfare concerns have raised an interest in animal affective states. These states also play an important role in the proximate control of behaviour. Due to their potential to modulate short-term emotional reactions, one specific focus is on long-term affective states, that is, mood. These states can be assessed by using non-verbal cognitive judgement bias paradigms. Here, we conducted a spatial variant of such a test on 24 focal animals that were kept under either unpredictable, stimulus-poor or predictable, stimulus-rich housing conditions to induce differential mood states. Based on functional near-infrared spectroscopy, we measured haemodynamic frontal brain reactions during 10 s in which the sheep could observe the configuration of the cognitive judgement bias trial before indicating their assessment based on the go/no-go reaction. We used (generalised) mixed-effects models to evaluate the data. Sheep from the unpredictable, stimulus-poor housing conditions took longer and were less likely to reach the learning criterion and reacted slightly more optimistically in the cognitive judgement bias test than sheep from the predictable, stimulus-rich housing conditions. A frontal cortical increase in deoxy-haemoglobin [HHb] and a decrease in oxy-haemoglobin [O2Hb] were observed during the visual assessment of the test situation by the sheep, indicating a frontal cortical brain deactivation. This deactivation was more pronounced with the negativity of the test situation, which was reflected by the provenance of the sheep from the unpredictable, stimulus-poor housing conditions, the proximity of the cue to the negatively reinforced cue location, or the absence of a go reaction in the trial. It seems that (1) sheep from the unpredictable, stimulus-poor in comparison to sheep from the predictable, stimulus-rich housing conditions dealt less easily with the test conditions rich in stimuli, that (2) long-term housing conditions seemingly did not influence mood--which may be related to the difficulty of tracking a constant long-term state in the brain--and that (3) visual assessment of an emotional stimulus leads to frontal brain deactivation in sheep, specifically if that stimulus is negative. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
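
    The binding-window asymmetry described here can be pictured with a toy simultaneity-judgment curve whose width differs for auditory-leading and visual-leading SOAs. The centre, widths and 50% criterion below are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    def p_simultaneous(soa_ms, center=20.0, sigma_auditory_lead=80.0, sigma_visual_lead=150.0):
        """Toy asymmetric simultaneity curve: negative SOA = auditory-leading (narrow side),
        positive SOA = visual-leading (wide side)."""
        sigma = np.where(soa_ms < center, sigma_auditory_lead, sigma_visual_lead)
        return np.exp(-0.5 * ((soa_ms - center) / sigma) ** 2)

    soas = np.arange(-400, 401, 10)
    inside = soas[p_simultaneous(soas) >= 0.5]    # SOAs judged simultaneous more often than not
    print("approximate binding window:", inside.min(), "to", inside.max(), "ms")
    ```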

  20. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    PubMed Central

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  1. Masking disrupts reentrant processing in human visual cortex.

    PubMed

    Fahrenfort, J J; Scholte, H S; Lamme, V A F

    2007-09-01

    In masking, a stimulus is rendered invisible through the presentation of a second stimulus shortly after the first. Over the years, authors have typically explained masking by postulating some early disruption process. In these feedforward-type explanations, the mask somehow "catches up" with the target stimulus, disrupting its processing either through lateral or interchannel inhibition. However, studies from recent years indicate that visual perception--and most notably visual awareness itself--may depend strongly on cortico-cortical feedback connections from higher to lower visual areas. This has led some researchers to propose that masking derives its effectiveness from selectively interrupting these reentrant processes. In this experiment, we used electroencephalogram measurements to determine what happens in the human visual cortex during detection of a texture-defined square under nonmasked (seen) and masked (unseen) conditions. Electroencephalogram derivatives that are typically associated with reentrant processing turn out to be absent in the masked condition. Moreover, extrastriate visual areas are still activated early on by both seen and unseen stimuli, as shown by scalp surface Laplacian current source-density maps. This conclusively shows that feedforward processing is preserved, even when subject performance is at chance as determined by objective measures. From these results, we conclude that masking derives its effectiveness, at least partly, from disrupting reentrant processing, thereby interfering with the neural mechanisms of figure-ground segmentation and visual awareness itself.

  2. Putative inhibitory training of a stimulus makes it a facilitator: a within-subject comparison of visual and auditory stimuli in autoshaping.

    PubMed

    Nakajima, S

    2000-03-14

    Pigeons were trained with the A+, AB-, ABC+, AD- and ADE+ task where each of stimulus A and stimulus compounds ABC and ADE signalled food (positive trials), and each of stimulus compounds AB and AD signalled no food (negative trials). Stimuli A, B, C and E were small visual figures localised on a response key, and stimulus D was a white noise. Stimulus B was more effective than D as an inhibitor of responding to A during the training. After the birds learned to respond exclusively on the positive trials, effects of B and D on responding to C and E, respectively, were tested by comparing C, BC, E and DE trials. Stimulus B continuously facilitated responding to C on the BC test trials, but D's facilitative effect was observed only on the first DE test trial. Stimulus B also facilitated responding to E on BE test trials. Implications for the Rescorla-Wagner elemental model and the Pearce configural model of Pavlovian conditioning were discussed.

  3. Response-specifying cue for action interferes with perception of feature-sharing stimuli.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2010-06-01

    Perceiving a visual stimulus is more difficult when a to-be-executed action is compatible with that stimulus, which is known as blindness to response-compatible stimuli. The present study explored how the factors constituting the action event (i.e., response-specifying cue, response intention, and response feature) affect the occurrence of this blindness effect. The response-specifying cue varied along the horizontal and vertical dimensions, while the response buttons were arranged diagonally. Participants responded based on one dimension randomly determined in a trial-by-trial manner. The response intention varied along a single dimension, whereas the response location and the response-specifying cue varied within both vertical and horizontal dimensions simultaneously. Moreover, the compatibility between the visual stimulus and the response location and the compatibility between that stimulus and the response-specifying cue was separately determined. The blindness effect emerged exclusively based on the feature correspondence between the response-specifying cue of the action task and the visual target of the perceptual task. The size of this stimulus-stimulus (S-S) blindness effect did not differ significantly across conditions, showing no effect of response intention and response location. This finding emphasizes the effect of stimulus factors, rather than response factors, of the action event as a source of the blindness to response-compatible stimuli.

  4. Stimulus selectivity and response latency in putative inhibitory and excitatory neurons of the primate inferior temporal cortex

    PubMed Central

    Mruczek, Ryan E. B.

    2012-01-01

    The cerebral cortex is composed of many distinct classes of neurons. Numerous studies have demonstrated corresponding differences in neuronal properties across cell types, but these comparisons have largely been limited to conditions outside of awake, behaving animals. Thus the functional role of the various cell types is not well understood. Here, we investigate differences in the functional properties of two widespread and broad classes of cells in inferior temporal cortex of macaque monkeys: inhibitory interneurons and excitatory projection cells. Cells were classified as putative inhibitory or putative excitatory neurons on the basis of their extracellular waveform characteristics (e.g., spike duration). Consistent with previous intracellular recordings in cortical slices, putative inhibitory neurons had higher spontaneous firing rates and higher stimulus-evoked firing rates than putative excitatory neurons. Additionally, putative excitatory neurons were more susceptible to spike waveform adaptation following very short interspike intervals. Finally, we compared two functional properties of each neuron's stimulus-evoked response: stimulus selectivity and response latency. First, putative excitatory neurons showed stronger stimulus selectivity compared with putative inhibitory neurons. Second, putative inhibitory neurons had shorter response latencies compared with putative excitatory neurons. Selectivity differences were maintained and latency differences were enhanced during a visual search task emulating more natural viewing conditions. Our results suggest that short-latency inhibitory responses are likely to sculpt visual processing in excitatory neurons, yielding a sparser visual representation. PMID:22933717
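
    Waveform-based classification of this kind is often reduced to a trough-to-peak duration criterion. The sketch below illustrates that idea; the 350 µs cut-off, the sampling rate and the synthetic waveform are assumptions rather than the criteria used in this study.

    ```python
    import numpy as np

    def classify_by_spike_width(mean_waveform, fs_hz, cutoff_us=350.0):
        """Label a mean extracellular spike waveform by its trough-to-peak duration."""
        trough = np.argmin(mean_waveform)
        peak = trough + np.argmax(mean_waveform[trough:])     # first maximum after the trough
        width_us = (peak - trough) / fs_hz * 1e6
        label = "putative inhibitory" if width_us < cutoff_us else "putative excitatory"
        return label, width_us

    # Synthetic broad-spiking waveform sampled at 30 kHz (arbitrary units)
    fs = 30000
    t = np.arange(60) / fs
    waveform = -np.exp(-((t - 0.0005) / 0.0002) ** 2) + 0.4 * np.exp(-((t - 0.0011) / 0.0004) ** 2)
    print(classify_by_spike_width(waveform, fs))    # -> putative excitatory, ~600 µs
    ```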

  5. Visual-somatosensory integration in aging: Does stimulus location really matter?

    PubMed Central

    MAHONEY, JEANNETTE R.; WANG, CUILING; DUMAS, KRISTINA; HOLTZER, ROEE

    2014-01-01

    Individuals are constantly bombarded by sensory stimuli across multiple modalities that must be integrated efficiently. Multisensory integration (MSI) is said to be governed by stimulus properties including space, time, and magnitude. While there is a paucity of research detailing MSI in aging, we have demonstrated that older adults reveal the greatest reaction time (RT) benefit when presented with simultaneous visual-somatosensory (VS) stimuli. To our knowledge, the differential RT benefit of visual and somatosensory stimuli presented within and across spatial hemifields has not been investigated in aging. Eighteen older adults (Mean = 74 years; 11 female), who were determined to be non-demented and without medical or psychiatric conditions that may affect their performance, participated in this study. Participants received eight randomly presented stimulus conditions (four unisensory and four multisensory) and were instructed to make speeded foot-pedal responses as soon as they detected any stimulation, regardless of stimulus type and location of unisensory inputs. Results from a linear mixed effect model, adjusted for speed of processing and other covariates, revealed that RTs to all multisensory pairings were significantly faster than those elicited to averaged constituent unisensory conditions (p < 0.01). Similarly, race model violation did not differ based on unisensory spatial location (p = 0.41). In summary, older adults demonstrate significant VS multisensory RT effects to stimuli both within and across spatial hemifields. PMID:24698637
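
    The race-model test mentioned in this record is commonly based on Miller's inequality, which bounds the multisensory RT distribution by the sum of the two unisensory distributions. The sketch below implements that generic test on simulated reaction times; the quantile grid and toy RT distributions are assumptions, not the study's data or exact procedure.

    ```python
    import numpy as np

    def race_model_violation(rt_multi, rt_uni_a, rt_uni_b,
                             quantiles=np.arange(0.05, 1.0, 0.05)):
        """Miller's inequality at a grid of quantiles of the multisensory RT distribution:
        positive values mean P(RT_multi <= t) exceeds P(RT_a <= t) + P(RT_b <= t)."""
        cdf = lambda rts, t: np.mean(rts <= t)
        ts = np.quantile(rt_multi, quantiles)
        return np.array([cdf(rt_multi, t) - min(1.0, cdf(rt_uni_a, t) + cdf(rt_uni_b, t))
                         for t in ts])

    rng = np.random.default_rng(1)
    rt_visual = rng.normal(420, 60, 200)    # hypothetical unisensory visual RTs (ms)
    rt_somato = rng.normal(430, 60, 200)    # hypothetical unisensory somatosensory RTs (ms)
    rt_vs = rng.normal(370, 55, 200)        # hypothetical multisensory RTs (ms)
    print(np.round(race_model_violation(rt_vs, rt_visual, rt_somato), 3))
    ```

    Positive values at any quantile indicate responses faster than the race model allows, i.e., evidence for multisensory integration rather than mere statistical facilitation.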

  6. Blur adaptation: contrast sensitivity changes and stimulus extent.

    PubMed

    Venkataraman, Abinaya Priya; Winter, Simon; Unsbo, Peter; Lundström, Linda

    2015-05-01

    A prolonged exposure to foveal defocus is well known to affect the visual functions in the fovea. However, the effects of peripheral blur adaptation on foveal vision, or vice versa, are still unclear. In this study, we therefore examined the changes in contrast sensitivity function from baseline, following blur adaptation to small as well as laterally extended stimuli in four subjects. The small field stimulus (7.5° visual field) was a 30-min video of forest scenery projected on a screen and the large field stimulus consisted of seven tiles of the 7.5° stimulus stacked horizontally. Both stimuli were used for adaptation with optical blur (+2.00D trial lens) as well as for clear control conditions. After small field blur adaptation foveal contrast sensitivity improved in the mid spatial frequency region. However, these changes neither spread to the periphery nor occurred for the large field blur adaptation. To conclude, visual performance after adaptation is dependent on the lateral extent of the adaptation stimulus. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Effect of eye position during human visual-vestibular integration of heading perception.

    PubMed

    Crane, Benjamin T

    2017-09-01

    Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.
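
    The reliability weighting referred to above is often written as the standard maximum-likelihood cue-combination rule. The sketch below applies that rule to the roughly 15° offset between the visual and inertial headings reported here; the noise levels (sigmas) are invented for illustration and are not the study's estimates.

    ```python
    def fused_heading_deg(h_visual, sigma_visual, h_inertial, sigma_inertial):
        """Reliability-weighted combination of a visual and an inertial heading estimate."""
        w_visual = (1 / sigma_visual ** 2) / (1 / sigma_visual ** 2 + 1 / sigma_inertial ** 2)
        return w_visual * h_visual + (1 - w_visual) * h_inertial

    # With 25 deg eccentric gaze the two cues point ~15 deg apart (about -10.2 vs +4.6 deg).
    # High visual coherence (reliable vision) pulls the estimate toward the visual heading:
    print(fused_heading_deg(-10.2, sigma_visual=3.0, h_inertial=4.6, sigma_inertial=6.0))
    # Low coherence (noisy vision) pulls it toward the inertial heading:
    print(fused_heading_deg(-10.2, sigma_visual=12.0, h_inertial=4.6, sigma_inertial=6.0))
    ```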

  8. Tachistoscopic exposure and masking of real three-dimensional scenes

    PubMed Central

    Pothier, Stephen; Philbeck, John; Chichka, David; Gajewski, Daniel A.

    2010-01-01

    Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking. PMID:19182129

  9. Stimulus meanings alter illusory self-motion (vection)--experimental examination of the train illusion.

    PubMed

    Seno, Takeharu; Fukuda, Haruaki

    2012-01-01

    Over the last 100 years, numerous studies have examined the effective visual stimulus properties for inducing illusory self-motion (known as vection). This vection is often experienced more strongly in daily life than under controlled experimental conditions. One well-known example of vection in real life is the so-called 'train illusion'. In the present study, we showed that this train illusion can also be generated in the laboratory using virtual computer graphics-based motion stimuli. We also demonstrated that this vection can be modified by altering the meaning of the visual stimuli (i.e., top-down effects). Importantly, we showed that the semantic meaning of a stimulus can inhibit or facilitate vection, even when there is no physical change to the stimulus.

  10. Acquisition of Conditioning between Methamphetamine and Cues in Healthy Humans

    PubMed Central

    Mayo, Leah M.; de Wit, Harriet

    2016-01-01

    Environmental stimuli repeatedly paired with drugs of abuse can elicit conditioned responses that are thought to promote future drug seeking. We recently showed that healthy volunteers acquired conditioned responses to auditory and visual stimuli after just two pairings with methamphetamine (MA, 20 mg, oral). This study extended these findings by systematically varying the number of drug-stimuli pairings. We expected that more pairings would result in stronger conditioning. Three groups of healthy adults were randomly assigned to receive 1, 2 or 4 pairings (Groups P1, P2 and P4, Ns = 13, 16, 16, respectively) of an auditory-visual stimulus with MA, and another stimulus with placebo (PBO). Drug-cue pairings were administered in an alternating, counterbalanced order, under double-blind conditions, during 4 hr sessions. MA produced prototypic subjective effects (mood, ratings of drug effects) and alterations in physiology (heart rate, blood pressure). Although subjects did not exhibit increased behavioral preference for, or emotional reactivity to, the MA-paired cue after conditioning, they did exhibit an increase in attentional bias (initial gaze) toward the drug-paired stimulus. Further, subjects who had four pairings reported “liking” the MA-paired cue more than the PBO cue after conditioning. Thus, the number of drug-stimulus pairings, varying from one to four, had only modest effects on the strength of conditioned responses. Further studies investigating the parameters under which drug conditioning occurs will help to identify risk factors for developing drug abuse, and provide new treatment strategies. PMID:27548681

  11. Conditioned pain modulation is minimally influenced by cognitive evaluation or imagery of the conditioning stimulus

    PubMed Central

    Bernaba, Mario; Johnson, Kevin A; Kong, Jiang-Ti; Mackey, Sean

    2014-01-01

    Purpose Conditioned pain modulation (CPM) is an experimental approach for probing endogenous analgesia by which one painful stimulus (the conditioning stimulus) may inhibit the perceived pain of a subsequent stimulus (the test stimulus). Animal studies, using objective measures such as neuronal firing, suggest that CPM is mediated by a spino–bulbo–spinal loop. In humans, pain ratings are often used as the end point. Because pain self-reports are subject to cognitive influences, we tested whether cognitive factors would influence CPM results in healthy humans. Methods We conducted a within-subject, crossover study of healthy adults to determine the extent to which CPM is affected by 1) threatening and reassuring evaluation of, and 2) imagery alone of, a cold conditioning stimulus. We used a heat stimulus individualized to 5/10 on a visual analog scale as the testing stimulus and computed the magnitude of CPM by subtracting the postconditioning rating from the baseline pain rating of the heat stimulus. Results We found that although evaluation can increase the pain rating of the conditioning stimulus, it did not significantly alter the magnitude of CPM. We also found that imagery of cold pain alone did not result in a statistically significant CPM effect. Conclusion Our results suggest that CPM is primarily dependent on sensory input, and that the cortical processes of evaluation and imagery have little impact on CPM. These findings lend support to CPM as a useful tool for probing endogenous analgesia through subcortical mechanisms. PMID:25473310

  12. Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.

    PubMed

    Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J

    2007-06-01

    The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.

  13. Stimulation of the Lateral Geniculate, Superior Colliculus, or Visual Cortex is Sufficient for Eyeblink Conditioning in Rats

    ERIC Educational Resources Information Center

    Halverson, Hunter E.; Hubbard, Erin M.; Freeman, John H.

    2009-01-01

    The role of the cerebellum in eyeblink conditioning is well established. Less work has been done to identify the necessary conditioned stimulus (CS) pathways that project sensory information to the cerebellum. A possible visual CS pathway has been hypothesized that consists of parallel inputs to the pontine nuclei from the lateral geniculate…

  14. Conditioned suppression, punishment, and aversion

    NASA Technical Reports Server (NTRS)

    Orme-Johnson, D. W.; Yarczower, M.

    1974-01-01

    The aversive action of visual stimuli was studied in two groups of pigeons which received response-contingent or noncontingent electric shocks in cages with translucent response keys. Presentation of grain for 3 sec, contingent on key pecking, was the visual stimulus associated with conditioned punishment or suppression. The responses of the pigeons in three different experiments are compared.

  15. Memorable Audiovisual Narratives Synchronize Sensory and Supramodal Neural Responses

    PubMed Central

    2016-01-01

    Abstract Our brains integrate information across sensory modalities to generate perceptual experiences and form memories. However, it is difficult to determine the conditions under which multisensory stimulation will benefit or hinder the retrieval of everyday experiences. We hypothesized that the determining factor is the reliability of information processing during stimulus presentation, which can be measured through intersubject correlation of stimulus-evoked activity. We therefore presented biographical auditory narratives and visual animations to 72 human subjects visually, auditorily, or combined, while neural activity was recorded using electroencephalography. Memory for the narrated information, contained in the auditory stream, was tested 3 weeks later. While the visual stimulus alone led to no meaningful retrieval, this related stimulus improved memory when it was combined with the story, even when it was temporally incongruent with the audio. Further, individuals with better subsequent memory elicited neural responses during encoding that were more correlated with their peers. Surprisingly, portions of this predictive synchronized activity were present regardless of the sensory modality of the stimulus. These data suggest that the strength of sensory and supramodal activity is predictive of memory performance after 3 weeks, and that neural synchrony may explain the mnemonic benefit of the functionally uninformative visual context observed for these real-world stimuli. PMID:27844062
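    A minimal sketch of the leave-one-out intersubject correlation measure referred to above, assuming a simple (subjects × samples) array of stimulus-locked EEG activity from a single electrode; the array shape, subject count, and simulated data are illustrative assumptions, not the authors' pipeline.

    ```python
    import numpy as np

    def intersubject_correlation(eeg):
        """Leave-one-out intersubject correlation (ISC).

        eeg: array of shape (n_subjects, n_samples) holding stimulus-locked
             activity from one electrode (hypothetical preprocessed data).
        Returns one ISC value per subject: the Pearson correlation between that
        subject's time course and the mean time course of all other subjects.
        """
        n_subjects = eeg.shape[0]
        isc = np.empty(n_subjects)
        for s in range(n_subjects):
            others = np.delete(eeg, s, axis=0).mean(axis=0)
            isc[s] = np.corrcoef(eeg[s], others)[0, 1]
        return isc

    # Example: subjects whose encoding responses track their peers more closely
    # (higher ISC) would be predicted to show better recall weeks later.
    rng = np.random.default_rng(0)
    shared = rng.standard_normal(1000)                    # stimulus-driven component
    eeg = 0.6 * shared + rng.standard_normal((72, 1000))  # 72 simulated subjects
    print(intersubject_correlation(eeg).round(2))
    ```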

  16. Visuocortical Changes During Delay and Trace Aversive Conditioning: Evidence From Steady-State Visual Evoked Potentials

    PubMed Central

    Miskovic, Vladimir; Keil, Andreas

    2015-01-01

    The visual system is biased towards sensory cues that have been associated with danger or harm through temporal co-occurrence. An outstanding question about conditioning-induced changes in visuocortical processing is the extent to which they are driven primarily by top-down factors such as expectancy or by low-level factors such as the temporal proximity between conditioned stimuli and aversive outcomes. Here, we examined this question using two different differential aversive conditioning experiments: participants learned to associate a particular grating stimulus with an aversive noise that was presented either in close temporal proximity (delay conditioning experiment) or after a prolonged stimulus-free interval (trace conditioning experiment). In both experiments we probed cue-related cortical responses by recording steady-state visual evoked potentials (ssVEPs). Although behavioral ratings indicated that all participants successfully learned to discriminate between the grating patterns that predicted the presence versus absence of the aversive noise, selective amplification of population-level responses in visual cortex for the conditioned danger signal was observed only when the grating and the noise were temporally contiguous. Our findings are in line with notions purporting that changes in the electrocortical response of visual neurons induced by aversive conditioning are a product of Hebbian associations among sensory cell assemblies rather than being driven entirely by expectancy-based, declarative processes. PMID:23398582
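    Steady-state visual evoked potential amplitude is conventionally read out from the Fourier spectrum of the stimulus-locked EEG at the flicker (tagging) frequency of the grating. The abstract does not state the flicker rate or recording parameters, so the 15 Hz tag, 500 Hz sampling rate, and simulated epoch below are purely illustrative assumptions, not the authors' analysis.

    ```python
    import numpy as np

    def ssvep_amplitude(signal, fs, tag_freq):
        """Return the spectral amplitude at the stimulation (tag) frequency.

        signal: 1-D stimulus-locked EEG segment (hypothetical, already averaged).
        fs: sampling rate in Hz; tag_freq: flicker frequency of the grating in Hz.
        """
        spectrum = np.abs(np.fft.rfft(signal)) / len(signal) * 2  # single-sided amplitude
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        return spectrum[np.argmin(np.abs(freqs - tag_freq))]

    # Illustrative use: a grating assumed to flicker at 15 Hz, 2-s epoch at 500 Hz.
    fs, tag = 500, 15.0
    t = np.arange(0, 2.0, 1.0 / fs)
    epoch = 0.8 * np.sin(2 * np.pi * tag * t) + np.random.default_rng(1).standard_normal(t.size)
    print(round(ssvep_amplitude(epoch, fs, tag), 2))  # close to 0.8, the driven amplitude
    ```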

  17. The Extraction of Information From Visual Persistence

    ERIC Educational Resources Information Center

    Erwin, Donald E.

    1976-01-01

    This research sought to distinguish among three concepts of visual persistence by substituting the physical presence of the target stimulus while simultaneously inhibiting the formation of a persisting representation. Reportability of information about the stimuli was compared to a condition in which visual persistence was allowed to fully develop…

  18. Visual awareness suppression by pre-stimulus brain stimulation; a neural effect.

    PubMed

    Jacobs, Christianne; Goebel, Rainer; Sack, Alexander T

    2012-01-02

    Transcranial magnetic stimulation (TMS) has established the functional relevance of early visual cortex (EVC) for visual awareness with great temporal specificity non-invasively in conscious human volunteers. Many studies have found a suppressive effect when TMS was applied over EVC 80-100 ms after the onset of the visual stimulus (post-stimulus TMS time window). Yet, a few studies found task performance to also suffer when TMS was applied even before visual stimulus presentation (pre-stimulus TMS time window). This pre-stimulus TMS effect, however, remains debated, and its origin has mainly been ascribed to TMS-induced eye-blinking artifacts. Here, we applied chronometric TMS over EVC during the execution of a visual discrimination task, covering an exhaustive range of visual stimulus-locked TMS time windows ranging from 80 ms before to 300 ms after stimulus onset. Electrooculographic (EOG) recordings, sham TMS stimulation, and vertex TMS stimulation controlled for different types of non-neural TMS effects. Our findings clearly reveal TMS-induced masking effects for both pre- and post-stimulus time windows, and for both objective visual discrimination performance and subjective visibility. Importantly, all effects proved to be still present after post hoc removal of eye blink trials, suggesting a neural origin for the pre-stimulus TMS suppression effect on visual awareness. We speculate based on our data that TMS exerts its pre-stimulus effect via generation of a neural state which interacts with subsequent visual input. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. More Than the Verbal Stimulus Matters: Visual Attention in Language Assessment for People With Aphasia Using Multiple-Choice Image Displays

    PubMed Central

    Ivanova, Maria V.; Hallowell, Brooke

    2017-01-01

    Purpose Language comprehension in people with aphasia (PWA) is frequently evaluated using multiple-choice displays: PWA are asked to choose the image that best corresponds to the verbal stimulus in a display. When a nontarget image is selected, comprehension failure is assumed. However, stimulus-driven factors unrelated to linguistic comprehension may influence performance. In this study we explore the influence of physical image characteristics of multiple-choice image displays on visual attention allocation by PWA. Method Eye fixations of 41 PWA were recorded while they viewed 40 multiple-choice image sets presented with and without verbal stimuli. Within each display, 3 images (majority images) were the same and 1 (singleton image) differed in terms of 1 image characteristic. The mean proportion of fixation duration (PFD) allocated across majority images was compared against the PFD allocated to singleton images. Results PWA allocated significantly greater PFD to the singleton than to the majority images in both nonverbal and verbal conditions. Those with greater severity of comprehension deficits allocated greater PFD to nontarget singleton images in the verbal condition. Conclusion When using tasks that rely on multiple-choice displays and verbal stimuli, one cannot assume that verbal stimuli will override the effect of visual-stimulus characteristics. PMID:28520866
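    The proportion-of-fixation-duration (PFD) comparison described above reduces to a ratio of per-image fixation time to total fixation time within a display; the sketch below uses invented fixation times for one hypothetical four-image display and is not the authors' analysis code.

    ```python
    def proportion_fixation_duration(fixation_ms):
        """fixation_ms: dict mapping image label -> total fixation time (ms)
        for one multiple-choice display. Returns the PFD for each image."""
        total = sum(fixation_ms.values())
        return {img: t / total for img, t in fixation_ms.items()}

    # Hypothetical display: three identical "majority" images and one "singleton".
    display = {"majority_1": 410, "majority_2": 380, "majority_3": 395, "singleton": 815}
    pfd = proportion_fixation_duration(display)
    mean_majority = sum(pfd[k] for k in pfd if k.startswith("majority")) / 3
    print(round(pfd["singleton"], 2), round(mean_majority, 2))  # singleton attracts more gaze
    ```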

  20. The Effects of Visual Stimuli on the Spoken Narrative Performance of School-Age African American Children

    ERIC Educational Resources Information Center

    Mills, Monique T.

    2015-01-01

    Purpose: This study investigated the fictional narrative performance of school-age African American children across 3 elicitation contexts that differed in the type of visual stimulus presented. Method: A total of 54 children in Grades 2 through 5 produced narratives across 3 different visual conditions: no visual, picture sequence, and single…

  1. Using complex auditory-visual samples to produce emergent relations in children with autism.

    PubMed

    Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P

    2010-03-01

    Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually and emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.

  2. Self-organization of head-centered visual responses under ecological training conditions.

    PubMed

    Mender, Bedeho M W; Stringer, Simon M

    2014-01-01

    We have studied the development of head-centered visual responses in an unsupervised self-organizing neural network model which was trained under ecological training conditions. Four independent spatio-temporal characteristics of the training stimuli were explored to investigate the feasibility of the self-organization under more ecological conditions. First, the number of head-centered visual training locations was varied over a broad range. Model performance improved as the number of training locations approached the continuous sampling of head-centered space. Second, the model depended on periods of time during which visual targets remained stationary in head-centered space while it performed saccades around the scene, and the severity of this constraint was explored by introducing increasing levels of random eye movement and stimulus dynamics. Model performance was robust over a range of randomization. Third, the model was trained on visual scenes where multiple simultaneous targets were always visible. Model self-organization was successful, despite never being exposed to a visual target in isolation. Fourth, the durations of fixations during training were made stochastic. With suitable changes to the learning rule, the model self-organized successfully. These findings suggest that the fundamental learning mechanism upon which the model rests is robust to the many forms of stimulus variability under ecological training conditions.

  3. Brief daily exposures to Asian females reverses perceptual narrowing for Asian faces in Caucasian infants

    PubMed Central

    Anzures, Gizelle; Wheeler, Andrea; Quinn, Paul C.; Pascalis, Olivier; Slater, Alan M.; Heron-Delaney, Michelle; Tanaka, James W.; Lee, Kang

    2012-01-01

    Perceptual narrowing in the visual, auditory, and multisensory domains has its developmental origins in infancy. The present study shows that experimentally induced experience can reverse the effects of perceptual narrowing on infants’ visual recognition memory of other-race faces. Caucasian 8- to 10-month-olds who could not discriminate between novel and familiarized Asian faces at the beginning of testing were given brief daily experience with Asian female faces in the experimental condition and Caucasian female faces in the control condition. At the end of three weeks, only infants who received daily experience with Asian females showed above-chance recognition of novel Asian female and male faces. Further, infants in the experimental condition showed greater efficiency in learning novel Asian females compared to infants in the control condition. Thus, visual experience with a novel stimulus category can reverse the effects of perceptual narrowing in infancy via improved stimulus recognition and encoding. PMID:22625845

  4. Orienting attention in visual space by nociceptive stimuli: investigation with a temporal order judgment task based on the adaptive PSI method.

    PubMed

    Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry

    2017-07-01

    Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated whether, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, to facilitate the processing of visual stimuli occurring in the same side of space as the limb on which the nociceptive stimulus was applied. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to each hand in either side of space, had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally on one single hand, or bilaterally, on both hands simultaneously. Results show that, as compared to the bilateral condition, participants' judgments were biased to the advantage of the visual stimuli that occurred in the same side of space as the hand on which a unilateral, nociceptive stimulus was applied. This effect was present in a time window ranging from 200 to 600 ms, but importantly, biases increased with decreasing time interval. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.

  5. Female-female mounting among goats stimulates sexual performance in males.

    PubMed

    Shearer, Meagan K; Katz, Larry S

    2006-06-01

    The hypothesis that female-female mounting is a form of proceptive behavior in goats, in that male goats are aroused by the visual cues of this mounting behavior, was tested. Once a week, male goats were randomly selected and placed in a test pen in which they were allowed to observe one of six selected social or sexual stimulus conditions. The stimulus conditions were one familiar male with two estrous females (MEE); three estrous females that displayed female-female mounting (Em); three estrous females that did not mount (Enm); three non-estrous females (NE); three familiar males (M); and no animals in the pen (Empty). After 10 min, the stimulus animals were removed, and an estrous female was placed in the test pen with the male for a 20-min sexual performance test. During sexual performance tests, the frequencies and latencies of all sexual behaviors were recorded. This procedure was repeated so all males (n = 6) were tested once each test day, and all the stimulus conditions were presented each test day. This was repeated weekly until all males had been exposed to each stimulus condition. Viewing mounting behavior, whether male-female or female-female, increased the total number of sexual behaviors displayed, increased ejaculation frequency, and decreased latency to first mount and ejaculation, post-ejaculatory interval, and the interval between ejaculations. We conclude that male goats are aroused by the visual cues of mounting behavior, and that female-female mounting is a form of proceptive behavior in goats.

  6. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    PubMed

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  7. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA

    PubMed Central

    Wilbiks, Jonathan M. P.; Dyson, Benjamin J.

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus. PMID:27977790

  8. Order of stimulus presentation influences children's acquisition in receptive identification tasks.

    PubMed

    Petursdottir, Anna Ingeborg; Aguilar, Gabriella

    2016-03-01

    Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition in receptive identification tasks under 2 conditions: when the sample was presented before the comparisons (sample first) and when the comparisons were presented before the sample (comparison first). Participants included 4 typically developing kindergarten-age boys. Stimuli, which included birds and flags, were presented on a computer screen. Acquisition in the 2 conditions was compared in an adapted alternating-treatments design combined with a multiple baseline design across stimulus sets. All participants took fewer trials to meet the mastery criterion in the sample-first condition than in the comparison-first condition. © 2015 Society for the Experimental Analysis of Behavior.

  9. Preattentive binding of auditory and visual stimulus features.

    PubMed

    Winkler, István; Czigler, István; Sussman, Elyse; Horváth, János; Balázs, Lászlo

    2005-02-01

    We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended or task-irrelevant and ignored. In the latter case, the primary task was a continuous, demanding within-modality task. The test sequences were composed of two frequently occurring stimuli, which differed from each other in two stimulus features (standard stimuli) and two infrequently occurring stimuli (deviants), which combined one feature from one standard stimulus with the other feature of the other standard stimulus. Deviant stimuli elicited MMN responses of similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities, (2) there exist preattentive processes of feature binding, however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.

  10. Contextual Control by Function and Form of Transfer of Functions

    ERIC Educational Resources Information Center

    Perkins, David R.; Dougher, Michael J.; Greenway, David E.

    2007-01-01

    This study investigated conditions leading to contextual control by stimulus topography over transfer of functions. Three 4-member stimulus equivalence classes, each consisting of four (A, B, C, D) topographically distinct visual stimuli, were established for 5 college students. Across classes, designated A stimuli were open-ended linear figures,…

  11. Effects of perceptual load and socially meaningful stimuli on crossmodal selective attention in Autism Spectrum Disorder and neurotypical samples.

    PubMed

    Tyndall, Ian; Ragless, Liam; O'Hora, Denis

    2018-04-01

    The present study examined whether increasing visual perceptual load differentially affected both Socially Meaningful and Non-socially Meaningful auditory stimulus awareness in neurotypical (NT, n = 59) adults and Autism Spectrum Disorder (ASD, n = 57) adults. On a target trial, an unexpected critical auditory stimulus (CAS), either a Non-socially Meaningful ('beep' sound) or Socially Meaningful ('hi') stimulus, was played concurrently with the presentation of the visual task. Under conditions of low visual perceptual load both NT and ASD samples reliably noticed the CAS at similar rates (77-81%), whether the CAS was Socially Meaningful or Non-socially Meaningful. However, during high visual perceptual load NT and ASD participants reliably noticed the meaningful CAS (NT = 71%, ASD = 67%), but NT participants were unlikely to notice the Non-meaningful CAS (20%), whereas ASD participants reliably noticed it (80%), suggesting an inability to engage selective attention to ignore non-salient irrelevant distractor stimuli in ASD. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Neuroimaging investigations of dorsal stream processing and effects of stimulus synchrony in schizophrenia.

    PubMed

    Sanfratello, Lori; Aine, Cheryl; Stephen, Julia

    2018-05-25

    Impairments in auditory and visual processing are common in schizophrenia (SP). In the unisensory realm, visual deficits are primarily noted for the dorsal visual stream. In addition, insensitivity to timing offsets between stimuli is widely reported for SP. The aim of the present study was to test at the physiological level differences in dorsal/ventral stream visual processing and timing sensitivity between SP and healthy controls (HC) using MEG and a simple auditory/visual task utilizing a variety of multisensory conditions. The paradigm included all combinations of synchronous/asynchronous and central/peripheral stimuli, yielding 4 task conditions. Both HC and SP groups showed activation in parietal areas (dorsal visual stream) during all multisensory conditions, with parietal areas showing decreased activation for SP relative to HC, and a significantly delayed peak of activation for SP in intraparietal sulcus (IPS). We also observed a differential effect of stimulus synchrony on HC and SP parietal response. Furthermore, a negative correlation was found between SP positive symptoms and activity in IPS. Taken together, our results provide evidence of impairment of the dorsal visual stream in SP during a multisensory task, along with an altered response to timing offsets between presented multisensory stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Multiple serial picture presentation with millisecond resolution using a three-way LC-shutter-tachistoscope

    PubMed Central

    Fischmeister, Florian Ph.S.; Leodolter, Ulrich; Windischberger, Christian; Kasess, Christian H.; Schöpf, Veronika; Moser, Ewald; Bauer, Herbert

    2010-01-01

    Throughout recent years there has been an increasing interest in studying unconscious visual processes. Such conditions of unawareness are typically achieved by either a sufficient reduction of the stimulus presentation time or visual masking. However, there are growing concerns about the reliability of the presentation devices used. As all these devices show great variability in presentation parameters, the processing of visual stimuli becomes dependent on the display device; e.g., minimal changes in the physical stimulus properties may have an enormous impact on stimulus processing by the sensory system and on the actual experience of the stimulus. Here we present a custom-built three-way LC-shutter-tachistoscope that allows experimental setups with both precise and reliable stimulus delivery and millisecond resolution. This tachistoscope consists of three LCD-projectors equipped with zoom lenses to enable stimulus presentation via a built-in mirror-system onto a back projection screen from an adjacent room. Two high-speed liquid crystal shutters are mounted serially in front of each projector to control the stimulus duration. To verify the intended properties empirically, different sequences of presentation times were run while changes in optical power were measured using a photoreceiver. The obtained results demonstrate that interfering variabilities in stimulus parameters and stimulus rendering are markedly reduced. Together with the possibility to collect external signals and to send trigger-signals to other devices, this tachistoscope represents a highly flexible and easy-to-set-up research tool not only for the study of unconscious processing in the brain but for vision research in general. PMID:20122963

  14. Shades of yellow: interactive effects of visual and odour cues in a pest beetle

    PubMed Central

    Stevenson, Philip C.; Belmain, Steven R.

    2016-01-01

    Background: The visual ecology of pest insects is poorly studied compared to the role of odour cues in determining their behaviour. Furthermore, the combined effects of both odour and vision on insect orientation are frequently ignored, but could impact behavioural responses. Methods: A locomotion compensator was used to evaluate use of different visual stimuli by a major coleopteran pest of stored grains (Sitophilus zeamais), with and without the presence of host odours (known to be attractive to this species), in an open-loop setup. Results: Some visual stimuli—in particular, one shade of yellow, solid black and high-contrast black-against-white stimuli—elicited positive orientation behaviour from the beetles in the absence of odour stimuli. When host odours were also present, at 90° to the source of the visual stimulus, the beetles presented with yellow and vertical black-on-white grating patterns changed their walking course and typically adopted a path intermediate between the two stimuli. The beetles presented with a solid black-on-white target continued to orient more strongly towards the visual than the odour stimulus. Discussion: Visual stimuli can strongly influence orientation behaviour, even in species where use of visual cues is sometimes assumed to be unimportant, while the outcomes from exposure to multimodal stimuli are unpredictable and need to be determined under differing conditions. The importance of the two modalities of stimulus (visual and olfactory) in food location is likely to depend upon relative stimulus intensity and motivational state of the insect. PMID:27478707

  15. Human postural responses to motion of real and virtual visual environments under different support base conditions.

    PubMed

    Mergner, T; Schweigart, G; Maurer, C; Blümle, A

    2005-12-01

    The role of visual orientation cues for human control of upright stance is still not well understood. We, therefore, investigated stance control during motion of a visual scene as stimulus, varying the stimulus parameters and the contribution from other senses (vestibular and leg proprioceptive cues present or absent). Eight normal subjects and three patients with chronic bilateral loss of vestibular function participated. They stood on a motion platform inside a cabin with an optokinetic pattern on its interior walls. The cabin was sinusoidally rotated in anterior-posterior (a-p) direction with the horizontal rotation axis through the ankle joints (f = 0.05-0.4 Hz; A_max = 0.25°-4°; v_max = 0.08-10°/s). The subjects' centre of mass (COM) angular position was calculated from opto-electronically measured body sway parameters. The platform was either kept stationary or moved by coupling its position 1:1 to a-p hip position ('body sway referenced', BSR, platform condition), by which proprioceptive feedback of ankle joint angle became inactivated. The visual stimulus evoked in-phase COM excursions (visual responses) in all subjects. (1) In normal subjects on a stationary platform, the visual responses showed saturation with both increasing velocity and displacement of the visual stimulus. The saturation showed up abruptly when visually evoked COM velocity and displacement reached approximately 0.1°/s and 0.1°, respectively. (2) In normal subjects on a BSR platform (proprioceptive feedback disabled), the visual responses showed similar saturation characteristics, but at clearly higher COM velocity and displacement values (approximately 1°/s and 1°, respectively). (3) In patients on a stationary platform (no vestibular cues), the visual responses were basically similar to those of the normal subjects, apart from somewhat higher gain values and less-pronounced saturation effects. (4) In patients on a BSR platform (no vestibular and proprioceptive cues, presumably only somatosensory graviceptive and visual cues), the visual responses showed an abnormal increase in gain with increasing stimulus frequency in addition to a displacement saturation. On the normal subjects we performed additional experiments in which we varied the gain of the visual response by using a 'virtual reality' visual stimulus or by applying small lateral platform tilts. This did not affect the saturation characteristics of the visual response to a considerable degree. We compared the present results to previous psychophysical findings on motion perception, noting similarities of the saturation characteristics in (1) with leg proprioceptive detection thresholds of approximately 0.1°/s and 0.1° and those in (2) with vestibular detection thresholds of 1°/s and 1°, respectively. From the psychophysical data one might hypothesise that a proprioceptive postural mechanism limits the visually evoked body excursions if these excursions exceed 0.1°/s and 0.1° in condition (1) and that a vestibular mechanism is doing so at 1°/s and 1° in (2). To better understand this, we performed computer simulations using a posture control model with multiple sensory feedbacks. We had recently designed the model to describe postural responses to body pull and platform tilt stimuli. Here, we added a visual input and adjusted its gain to fit the simulated data to the experimental data. The saturation characteristics of the visual responses of the normals were well mimicked by the simulations. They were caused by central thresholds of proprioceptive, vestibular and somatosensory signals in the model, which, however, differed from the psychophysical thresholds. Yet, we demonstrate in a theoretical approach that for condition (1) the model can be made monomodal proprioceptive with the psychophysical 0.1°/s and 0.1° thresholds, and for (2) monomodal vestibular with the psychophysical 1°/s and 1° thresholds, and still shows the corresponding saturation characteristics (whereas our original model covers both conditions without adjustments). The model simulations also predicted the almost normal visual responses of patients on a stationary platform and their clearly abnormal responses on a BSR platform.

  16. Night vision in barn owls: visual acuity and contrast sensitivity under dark adaptation.

    PubMed

    Orlowski, Julius; Harmening, Wolf; Wagner, Hermann

    2012-12-06

    Barn owls are effective nocturnal predators. We tested their visual performance at low light levels and determined the visual acuity and contrast sensitivity of three barn owls behaviorally at stimulus luminances ranging from photopic to fully scotopic levels (23.5 to 1.5 × 10⁻⁶ cd/m²). Contrast sensitivity and visual acuity decreased only slightly from photopic to scotopic conditions. Peak grating acuity was at mesopic (4 × 10⁻² cd/m²) conditions. Barn owls retained a quarter of their maximal acuity when luminance decreased by 5.5 log units. We argue that the visual system of barn owls is designed to yield as much visual acuity under low light conditions as possible, thereby sacrificing resolution at photopic conditions.

  17. The effect of integration masking on visual processing in perceptual categorization.

    PubMed

    Hélie, Sébastien

    2017-08-01

    Learning to recognize and categorize objects is an essential cognitive skill allowing animals to function in the world. However, animals rarely have access to a canonical view of an object in an uncluttered environment. Hence, it is essential to study categorization under noisy, degraded conditions. In this article, we explore how the brain processes categorization stimuli in low signal-to-noise conditions using multivariate pattern analysis. We used an integration masking paradigm with mask opacity of 50%, 60%, and 70% inside a magnetic resonance imaging scanner. The results show that mask opacity affects the blood-oxygen-level-dependent (BOLD) signal in visual processing areas (V1, V2, V3, and V4) but does not affect the BOLD signal in brain areas traditionally associated with categorization (prefrontal cortex, striatum, hippocampus). This suggests that when a stimulus is difficult to extract from its background (e.g., low signal-to-noise ratio), the visual system extracts the stimulus, and activity in areas typically associated with categorization is not affected by the difficulty level of the visual conditions. We conclude with implications of this result for research on visual attention, categorization, and the integration of these fields. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Merging Psychophysical and Psychometric Theory to Estimate Global Visual State Measures from Forced-Choices

    NASA Astrophysics Data System (ADS)

    Massof, Robert W.; Schmidt, Karen M.; Laby, Daniel M.; Kirschen, David; Meadows, David

    2013-09-01

    Visual acuity, a forced-choice psychophysical measure of visual spatial resolution, is the sine qua non of clinical visual impairment testing in ophthalmology and optometry patients with visual system disorders ranging from refractive error to retinal, optic nerve, or central visual system pathology. Visual acuity measures are standardized against a norm, but it is well known that visual acuity depends on a variety of stimulus parameters, including contrast and exposure duration. This paper asks if it is possible to estimate a single global visual state measure from visual acuity measures as a function of stimulus parameters that can represent the patient's overall visual health state with a single variable. Psychophysical theory (at the sensory level) and psychometric theory (at the decision level) are merged to identify the conditions that must be satisfied to derive a global visual state measure from parameterised visual acuity measures. A global visual state measurement model is developed and tested with forced-choice visual acuity measures from 116 subjects with no visual impairments and 560 subjects with uncorrected refractive error. The results are in agreement with the expectations of the model.

  19. Fear of falling and postural reactivity in patients with glaucoma.

    PubMed

    Daga, Fábio B; Diniz-Filho, Alberto; Boer, Erwin R; Gracitelli, Carolina P B; Abe, Ricardo Y; Medeiros, Felipe A

    2017-01-01

    To investigate the relationship between postural metrics obtained by dynamic visual stimulation in a virtual reality environment and the presence of fear of falling in glaucoma patients. This cross-sectional study included 35 glaucoma patients and 26 controls who underwent evaluation of postural balance by a force platform during presentation of static and dynamic visual stimuli with head-mounted goggles (Oculus Rift). In the dynamic condition, a peripheral translational stimulus was used to induce vection and assess postural reactivity. Standard deviations of torque moments (SDTM) were calculated as indicative of postural stability. Fear of falling was assessed by a standardized questionnaire. The relationship between a summary score of fear of falling and postural metrics was investigated using linear regression models, adjusting for potentially confounding factors. Subjects with glaucoma reported greater fear of falling compared to controls (-0.21 vs. 0.27; P = 0.039). In glaucoma patients, postural metrics during the dynamic visual stimulus were more strongly associated with fear of falling (R² = 18.8%; P = 0.001) than during the static (R² = 3.0%; P = 0.005) and dark-field (R² = 5.7%; P = 0.007) conditions. In the univariable model, fear of falling was not significantly associated with binocular standard perimetry mean sensitivity (P = 0.855). In the multivariable model, each 1 N·m larger SDTM in the anteroposterior direction during the dynamic stimulus was associated with a worsening of 0.42 units in the fear of falling questionnaire score (P = 0.001). In glaucoma patients, postural reactivity to a dynamic visual stimulus using a virtual reality environment was more strongly associated with fear of falling than visual field testing and traditional balance assessment.

  20. Fear of falling and postural reactivity in patients with glaucoma

    PubMed Central

    Daga, Fábio B.; Diniz-Filho, Alberto; Boer, Erwin R.; Gracitelli, Carolina P. B.; Abe, Ricardo Y.; Medeiros, Felipe A.

    2017-01-01

    Purpose To investigate the relationship between postural metrics obtained by dynamic visual stimulation in a virtual reality environment and the presence of fear of falling in glaucoma patients. Methods This cross-sectional study included 35 glaucoma patients and 26 controls who underwent evaluation of postural balance by a force platform during presentation of static and dynamic visual stimuli with head-mounted goggles (Oculus Rift). In the dynamic condition, a peripheral translational stimulus was used to induce vection and assess postural reactivity. Standard deviations of torque moments (SDTM) were calculated as indicative of postural stability. Fear of falling was assessed by a standardized questionnaire. The relationship between a summary score of fear of falling and postural metrics was investigated using linear regression models, adjusting for potentially confounding factors. Results Subjects with glaucoma reported greater fear of falling compared to controls (-0.21 vs. 0.27; P = 0.039). In glaucoma patients, postural metrics during the dynamic visual stimulus were more strongly associated with fear of falling (R² = 18.8%; P = 0.001) than during the static (R² = 3.0%; P = 0.005) and dark-field (R² = 5.7%; P = 0.007) conditions. In the univariable model, fear of falling was not significantly associated with binocular standard perimetry mean sensitivity (P = 0.855). In the multivariable model, each 1 N·m larger SDTM in the anteroposterior direction during the dynamic stimulus was associated with a worsening of 0.42 units in the fear of falling questionnaire score (P = 0.001). Conclusion In glaucoma patients, postural reactivity to a dynamic visual stimulus using a virtual reality environment was more strongly associated with fear of falling than visual field testing and traditional balance assessment. PMID:29211742
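    As a rough illustration of the metrics above: SDTM is simply the standard deviation of the force-platform torque time series, and the reported univariable association can be pictured as a linear fit of fear-of-falling scores on SDTM. All numbers below are invented for illustration; they are not the study data.

    ```python
    import numpy as np

    def sdtm(torque_nm):
        """Standard deviation of torque moments (N·m) from a force-platform recording."""
        return float(np.std(torque_nm, ddof=1))

    # Invented example: anteroposterior torque during a dynamic visual stimulus
    # for five hypothetical subjects, plus their fear-of-falling summary scores.
    rng = np.random.default_rng(2)
    sdtm_values = np.array([sdtm(rng.normal(0.0, s, 3000)) for s in (0.6, 0.9, 1.3, 1.8, 2.4)])
    fof_scores = np.array([-0.4, -0.1, 0.2, 0.5, 0.8])  # hypothetical questionnaire scores

    slope, intercept = np.polyfit(sdtm_values, fof_scores, 1)  # univariable linear fit
    print(round(slope, 2))  # change in fear-of-falling score per 1 N·m increase in SDTM
    ```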

  1. Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age

    ERIC Educational Resources Information Center

    Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.

    2013-01-01

    The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…

  2. Selective attention to visual compound stimuli in squirrel monkeys (Saimiri sciureus).

    PubMed

    Ploog, Bertram O

    2011-05-01

    Five squirrel monkeys served under a simultaneous discrimination paradigm with visual compound stimuli that allowed measurement of excitatory and inhibitory control exerted by individual stimulus components (form and luminance/"color"), which could not be presented in isolation (i.e., form could not be presented without color). After performance exceeded a criterion of 75% correct during training, unreinforced test trials with stimuli comprising recombined training stimulus components were interspersed while the overall reinforcement rate remained constant for training and testing. The training-testing series was then repeated with reversed reinforcement contingencies. The findings were that color acquired greater excitatory control than form under the original condition, that no such difference was found for the reversal condition or for inhibitory control under either condition, and that overall inhibitory control was less pronounced than excitatory control. The remarkably accurate performance throughout suggested that a forced 4-s delay between the stimulus presentation and the opportunity to respond was effective in reducing "impulsive" responding, which has implications for suppressing impulsive responding in children with autism and with attention deficit disorder. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Visual evoked potentials through night vision goggles.

    PubMed

    Rabin, J

    1994-04-01

    Night vision goggles (NVG's) have widespread use in military and civilian environments. NVG's amplify ambient illumination making performance possible when there is insufficient illumination for normal vision. While visual performance through NVG's is commonly assessed by measuring threshold functions such as visual acuity, few attempts have been made to assess vision through NVG's at suprathreshold levels of stimulation. Such information would be useful to better understand vision through NVG's across a range of stimulus conditions. In this study visual evoked potentials (VEP's) were used to evaluate vision through NVG's across a range of stimulus contrasts. The amplitude and latency of the VEP varied linearly with log contrast. A comparison of VEP's recorded with and without NVG's was used to estimate contrast attenuation through the device. VEP's offer an objective, electrophysiological tool to assess visual performance through NVG's at both threshold and suprathreshold levels of visual stimulation.

  4. Working memory can enhance unconscious visual perception.

    PubMed

    Pan, Yi; Cheng, Qiu-Ping; Luo, Qian-Ying

    2012-06-01

    We demonstrate that unconscious processing of a stimulus property can be enhanced when there is a match between the contents of working memory and the stimulus presented in the visual field. Participants first held a cue (a colored circle) in working memory and then searched for a brief masked target shape presented simultaneously with a distractor shape. When participants reported having no awareness of the target shape at all, search performance was more accurate in the valid condition, where the target matched the cue in color, than in the neutral condition, where the target mismatched the cue. This effect cannot be attributed to bottom-up perceptual priming from the presentation of a memory cue, because unconscious perception was not enhanced when the cue was merely perceptually identified but not actively held in working memory. These findings suggest that reentrant feedback from the contents of working memory modulates unconscious visual perception.

  5. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    PubMed

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Internal state of monkey primary visual cortex (V1) predicts figure-ground perception.

    PubMed

    Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A F

    2003-04-15

    When stimulus information enters the visual cortex, it is rapidly processed for identification. However, sometimes the processing of the stimulus is inadequate and the subject fails to notice the stimulus. Human psychophysical studies show that this occurs during states of inattention or absent-mindedness. At a neurophysiological level, it remains unclear what these states are. To study the role of cortical state in perception, we analyzed neural activity in the monkey primary visual cortex before the appearance of a stimulus. We show that, before the appearance of a reported stimulus, neural activity was stronger and more correlated than for a not-reported stimulus. This indicates that the strength of neural activity and the functional connectivity between neurons in the primary visual cortex participate in the perceptual processing of stimulus information. Thus, to detect a stimulus, the visual cortex needs to be in an appropriate state.

  7. Expectation and Surprise Determine Neural Population Responses in the Ventral Visual Stream

    PubMed Central

    Egner, Tobias; Monti, Jim M.; Summerfield, Christopher

    2014-01-01

    Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogenous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se. PMID:21147999
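    The predictive-coding account contrasted with feature detection above can be caricatured with two terms per region: a representational unit carrying the expectation and an error unit carrying the rectified mismatch between input and expectation. The toy function and numbers below are only meant to reproduce the qualitative interaction reported (face and house responses converging under high face expectation); they are not the authors' computational model.

    ```python
    def ffa_response(face_evidence, face_expectation, w_rep=1.0, w_err=1.0):
        """Toy predictive-coding readout: summed activity of a representational
        unit (the prediction) and an error unit (evidence minus prediction, rectified)."""
        prediction_error = max(face_evidence - face_expectation, 0.0)  # "face surprise"
        return w_rep * face_expectation + w_err * prediction_error     # expectation + error

    for expectation in (0.25, 0.75):              # low vs high face expectation
        face = ffa_response(1.0, expectation)     # face stimulus (evidence = 1)
        house = ffa_response(0.0, expectation)    # house stimulus (evidence = 0)
        print(f"expectation={expectation}: face={face:.2f}, house={house:.2f}")
    # Face and house responses differ sharply under low expectation and converge
    # under high expectation, an interaction a pure feature detector would not show.
    ```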

  8. Sex differences in audiovisual discrimination learning by Bengalese finches (Lonchura striata var. domestica).

    PubMed

    Seki, Yoshimasa; Okanoya, Kazuo

    2008-02-01

    Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.

  9. Cortical response tracking the conscious experience of threshold duration visual stimuli indicates visual perception is all or none

    PubMed Central

    Sekar, Krithiga; Findley, William M.; Poeppel, David; Llinás, Rodolfo R.

    2013-01-01

    At perceptual threshold, some stimuli are available for conscious access whereas others are not. Such threshold inputs are useful tools for investigating the events that separate conscious awareness from unconscious stimulus processing. Here, viewing unmasked, threshold-duration images was combined with recording magnetoencephalography to quantify differences among perceptual states, ranging from no awareness to ambiguity to robust perception. A four-choice scale was used to assess awareness: “didn’t see” (no awareness), “couldn’t identify” (awareness without identification), “unsure” (awareness with low certainty identification), and “sure” (awareness with high certainty identification). Stimulus-evoked neuromagnetic signals were grouped according to behavioral response choices. Three main cortical responses were elicited. The earliest response, peaking at ∼100 ms after stimulus presentation, showed no significant correlation with stimulus perception. A late response (∼290 ms) showed moderate correlation with stimulus awareness but could not adequately differentiate conscious access from its absence. By contrast, an intermediate response peaking at ∼240 ms was observed only for trials in which stimuli were consciously detected. That this signal was similar for all conditions in which awareness was reported is consistent with the hypothesis that conscious visual access is relatively sharply demarcated. PMID:23509248

  10. A Classical Conditioning Procedure for the Hearing Assessment of Multiply Handicapped Persons.

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; And Others

    1989-01-01

    Hearing assessments of multiply handicapped children/adolescents were conducted using classical conditioning (with an air puff as unconditioned stimulus) and operant conditioning (with a modified visual reinforcement audiometry procedure or edible reinforcement). Findings indicate that classical conditioning was successful with 21 of the 23…

  11. Entrainment to a real time fractal visual stimulus modulates fractal gait dynamics.

    PubMed

    Rhea, Christopher K; Kiefer, Adam W; D'Andrea, Susan E; Warren, William H; Aaron, Roy K

    2014-08-01

    Fractal patterns characterize healthy biological systems and are considered to reflect the ability of the system to adapt to varying environmental conditions. Previous research has shown that fractal patterns in gait are altered following natural aging or disease, and this has potential negative consequences for gait adaptability that can lead to increased risk of injury. However, the flexibility of a healthy neurological system to exhibit different fractal patterns in gait has yet to be explored, and this is a necessary step toward understanding human locomotor control. Fifteen participants walked for 15 min on a treadmill, either in the absence of a visual stimulus or while they attempted to couple the timing of their gait with a visual metronome that exhibited a persistent fractal pattern (contained long-range correlations) or a random pattern (contained no long-range correlations). The stride-to-stride intervals of the participants were recorded via analog foot pressure switches and submitted to detrended fluctuation analysis (DFA) to determine if the fractal patterns during the visual metronome conditions differed from the baseline (no metronome) condition. DFA α in the baseline condition was 0.77±0.09. The fractal patterns in the stride-to-stride intervals were significantly altered when walking to the fractal metronome (DFA α=0.87±0.06) and to the random metronome (DFA α=0.61±0.10) (both p<.05 when compared to the baseline condition), indicating that a global change in gait dynamics was observed. A variety of strategies were identified at the local level with a cross-correlation analysis, indicating that local behavior did not account for the consistent global changes. Collectively, the results show that gait dynamics can be shifted in a prescribed manner using a visual stimulus and the shift appears to be a global phenomenon. Copyright © 2014 Elsevier B.V. All rights reserved.
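
    For readers unfamiliar with the analysis, the following is a minimal Python sketch of detrended fluctuation analysis as commonly applied to stride-interval series. It is not the authors' code; the window sizes and the simple linear detrending are assumptions typical of such analyses.

      import numpy as np

      def dfa_alpha(intervals, min_win=4, max_win=None):
          # Detrended fluctuation analysis of a stride-interval series; returns the scaling exponent alpha.
          x = np.asarray(intervals, dtype=float)
          y = np.cumsum(x - x.mean())                      # integrated, mean-centred series
          n = len(y)
          max_win = max_win or n // 4
          sizes = np.unique(np.logspace(np.log10(min_win), np.log10(max_win), 16).astype(int))
          fluctuations = []
          for s in sizes:
              n_seg = n // s
              segs = y[:n_seg * s].reshape(n_seg, s)
              t = np.arange(s)
              rms = []
              for seg in segs:
                  coef = np.polyfit(t, seg, 1)             # linear detrend within each window
                  rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
              fluctuations.append(np.mean(rms))
          # Slope of log fluctuation vs log window size: ~0.5 uncorrelated, ~1.0 persistent long-range correlations
          return np.polyfit(np.log(sizes), np.log(fluctuations), 1)[0]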

  12. On the Role of Mentalizing Processes in Aesthetic Appreciation: An ERP Study.

    PubMed

    Beudt, Susan; Jacobsen, Thomas

    2015-01-01

    We used event-related brain potentials to explore the impact of mental perspective taking on processes of aesthetic appreciation of visual art. Participants (non-experts) were first presented with information about the life and attitudes of a fictitious artist. Subsequently, they were cued trial-wise to make an aesthetic judgment regarding an image depicting a piece of abstract art either from their own perspective or from the imagined perspective of the fictitious artist [i.e., theory of mind (ToM) condition]. Positive self-referential judgments were made more quickly and negative self-referential judgments were made more slowly than the corresponding judgments from the imagined perspective. Event-related potential analyses revealed significant differences between the two tasks both within the preparation period (i.e., during the cue-stimulus interval) and within the stimulus presentation period. For the ToM condition we observed a relative centro-parietal negativity during the preparation period (700-330 ms preceding picture onset) and a relative centro-parietal positivity during the stimulus presentation period (700-1100 ms after stimulus onset). These findings suggest that different subprocesses are involved in aesthetic appreciation and judgment of visual abstract art from one's own vs. from another person's perspective.

  13. Evaluative conditioning increases with temporal contiguity. The influence of stimulus order and stimulus interval on evaluative conditioning.

    PubMed

    Gast, Anne; Langer, Sebastian; Sengewald, Marie-Ann

    2016-10-01

    Evaluative conditioning (EC) is a change in valence that is due to pairing a conditioned stimulus (CS) with another, typically valent, unconditioned stimulus (US). This paper investigates how basic presentation parameters moderate EC effects. In two studies we tested the effectiveness of different temporal relations of the CS and the US, that is, the order in which the stimuli were presented and the temporal distance between them. Both studies showed that the size of EC effects was independent of the presentation order of CS and US within a stimulus pair. Contrary to classical conditioning effects, EC effects are thus not most pronounced after CS-first presentations. Furthermore, as shown in Experiment 2, EC effects increased in magnitude as the temporal interval between CS and US presentations decreased. Experiment 1 showed the largest EC effects in the condition with simultaneous presentations - which can be seen as the condition with the temporally closest presentation. In this experiment stimuli were presented in two different modalities, which might have facilitated simultaneous processing. In Experiment 2, in which all stimuli were presented visually, this advantage of simultaneous presentation was not found. We discuss practical and theoretical implications of our findings. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. A magnetoencephalography study of visual processing of pain anticipation.

    PubMed

    Machado, Andre G; Gopalakrishnan, Raghavan; Plow, Ela B; Burgess, Richard C; Mosher, John C

    2014-07-15

    Anticipating pain is important for avoiding injury; however, in chronic pain patients, anticipatory behavior can become maladaptive, leading to sensitization and limiting function. Knowledge of networks involved in pain anticipation and conditioning over time could help devise novel, better-targeted therapies. With the use of magnetoencephalography, we evaluated in 10 healthy subjects the neural processing of pain anticipation. Anticipatory cortical activity elicited by consecutive visual cues that signified an imminent painful stimulus was compared with activity elicited by cues signifying a nonpainful stimulus or no stimulus. We found that the neural processing of visually evoked pain anticipation involves the primary visual cortex along with cingulate and frontal regions. Visual cortex could quickly and independently encode and discriminate between visual cues associated with pain anticipation and no pain during preconscious phases following object presentation. When evaluating the effect of task repetition on participating cortical areas, we found that activity of prefrontal and cingulate regions was mostly prominent early on when subjects were still naive to a cue's contextual meaning. Visual cortical activity was significant throughout later phases. Although visual cortex may precisely and time efficiently decode cues anticipating pain or no pain, prefrontal areas establish the context associated with each cue. These findings have important implications for processes involved in pain anticipation and maladaptive pain conditioning. Copyright © 2014 the American Physiological Society.

  15. Place avoidance learning and memory in a jumping spider.

    PubMed

    Peckmezian, Tina; Taylor, Phillip W

    2017-03-01

    Using a conditioned passive place avoidance paradigm, we investigated the relative importance of three experimental parameters on learning and memory in a salticid, Servaea incana. Spiders encountered an aversive electric shock stimulus paired with one side of a two-sided arena. Our three parameters were the ecological relevance of the visual stimulus, the time interval between trials and the time interval before test. We paired electric shock with either a black or white visual stimulus, as prior studies in our laboratory have demonstrated that S. incana prefer dark 'safe' regions to light ones. We additionally evaluated the influence of two temporal features (time interval between trials and time interval before test) on learning and memory. Spiders exposed to the shock stimulus learned to associate shock with the visual background cue, but the extent to which they did so was dependent on which visual stimulus was present and the time interval between trials. Spiders trained with a long interval between trials (24 h) maintained performance throughout training, whereas spiders trained with a short interval (10 min) maintained performance only when the safe side was black. When the safe side was white, performance worsened steadily over time. There was no difference between spiders tested after a short (10 min) or long (24 h) interval before test. These results suggest that the ecological relevance of the stimuli used and the duration of the interval between trials can influence learning and memory in jumping spiders.

  16. Reduced Perceptual Exclusivity during Object and Grating Rivalry in Autism

    PubMed Central

    Freyberg, J.; Robertson, C.E.; Baron-Cohen, S.

    2015-01-01

    Background: The dynamics of binocular rivalry may be a behavioural footprint of excitatory and inhibitory neural transmission in visual cortex. Given the presence of atypical visual features in Autism Spectrum Conditions (ASC), and evidence in support of the idea of an imbalance in excitatory/inhibitory neural transmission in ASC, we hypothesized that binocular rivalry might prove a simple behavioural marker of such a transmission imbalance in the autistic brain. In support of this hypothesis, we previously reported a slower rate of rivalry in ASC, driven by reduced perceptual exclusivity. Methods: We tested whether atypical dynamics of binocular rivalry in ASC are specific to certain stimulus features. 53 participants (26 with ASC, matched for age, sex and IQ) participated in binocular rivalry experiments in which the dynamics of rivalry were measured at two levels of stimulus complexity, low (grayscale gratings) and high (coloured objects). Results: Individuals with ASC experienced a slower rate of rivalry, driven by longer transitional states between dominant percepts. These exaggerated transitional states were present at both low and high levels of stimulus complexity, suggesting that atypical rivalry dynamics in autism are robust with respect to stimulus choice. Interactions between stimulus properties and rivalry dynamics in autism indicate that achromatic grating stimuli produce stronger group differences. Conclusion: These results confirm the finding of atypical dynamics of binocular rivalry in ASC. These dynamics were present for stimuli of both low and high levels of visual complexity, suggesting an imbalance in competitive interactions throughout the visual system of individuals with ASC. PMID:26382002

  17. Effects of a Blacklight Visual Field on Eye-Contact Training of Spastic Cerebral Palsied Children.

    ERIC Educational Resources Information Center

    Poland, D. J.; Doebler, L. K.

    1980-01-01

    Four subjects, aged six to seven, identified as visually impaired, were given training in making eye contact with a stimulus under both white and black light visual field conditions. All subjects performed better under the black light condition, even overcoming the expected practice effect when white light training followed black light training. (Author/SJL)

  18. Spatial attention improves reliability of fMRI retinotopic mapping signals in occipital and parietal cortex

    PubMed Central

    Bressler, David W.; Silver, Michael A.

    2010-01-01

    Spatial attention improves visual perception and increases the amplitude of neural responses in visual cortex. In addition, spatial attention tasks and fMRI have been used to discover topographic visual field representations in regions outside visual cortex. We therefore hypothesized that requiring subjects to attend to a retinotopic mapping stimulus would facilitate the characterization of visual field representations in a number of cortical areas. In our study, subjects attended either a central fixation point or a wedge-shaped stimulus that rotated about the fixation point. Response reliability was assessed by computing coherence between the fMRI time series and a sinusoid with the same frequency as the rotating wedge stimulus. When subjects attended to the rotating wedge instead of ignoring it, the reliability of retinotopic mapping signals increased by approximately 50% in early visual cortical areas (V1, V2, V3, V3A/B, V4) and ventral occipital cortex (VO1) and by approximately 75% in lateral occipital (LO1, LO2) and posterior parietal (IPS0, IPS1 and IPS2) cortical areas. Additionally, one 5-minute run of retinotopic mapping in the attention-to-wedge condition produced responses as reliable as the average of three to five (early visual cortex) or more than five (lateral occipital, ventral occipital, and posterior parietal cortex) attention-to-fixation runs. These results demonstrate that allocating attention to the retinotopic mapping stimulus substantially reduces the amount of scanning time needed to determine the visual field representations in occipital and parietal topographic cortical areas. Attention significantly increased response reliability in every cortical area we examined and may therefore be a general mechanism for improving the fidelity of neural representations of sensory stimuli at multiple levels of the cortical processing hierarchy. PMID:20600961
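
    As a rough illustration of the reliability measure described above, the sketch below computes a coherence-like statistic for a single voxel: the spectral amplitude at the wedge-rotation frequency divided by the total amplitude across non-DC frequencies. This is a simplification; published retinotopy pipelines typically restrict the denominator to a frequency band and exclude harmonics, and the function and parameter names here are assumptions.

      import numpy as np

      def retinotopy_coherence(ts, stim_freq, tr):
          # ts: one voxel's BOLD time series; stim_freq: wedge rotation frequency (Hz); tr: repetition time (s)
          ts = np.asarray(ts, dtype=float)
          ts = ts - ts.mean()
          amp = np.abs(np.fft.rfft(ts))
          freqs = np.fft.rfftfreq(len(ts), d=tr)
          k = np.argmin(np.abs(freqs - stim_freq))        # frequency bin closest to the stimulus frequency
          return amp[k] / np.sqrt(np.sum(amp[1:] ** 2))   # amplitude at f0 relative to total non-DC amplitude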

  19. A comparison of methods for teaching receptive labeling to children with autism spectrum disorders.

    PubMed

    Grow, Laura L; Carr, James E; Kodak, Tiffany M; Jostad, Candice M; Kisamore, April N

    2011-01-01

    Many early intervention curricular manuals recommend teaching auditory-visual conditional discriminations (i.e., receptive labeling) using the simple-conditional method in which component simple discriminations are taught in isolation and in the presence of a distracter stimulus before the learner is required to respond conditionally. Some have argued that this procedure might be susceptible to faulty stimulus control such as stimulus overselectivity (Green, 2001). Consequently, there has been a call for the use of alternative teaching procedures such as the conditional-only method, which involves conditional discrimination training from the onset of intervention. The purpose of the present study was to compare the simple-conditional and conditional-only methods for teaching receptive labeling to 3 young children diagnosed with autism spectrum disorders. The data indicated that the conditional-only method was a more reliable and efficient teaching procedure. In addition, several error patterns emerged during training using the simple-conditional method. The implications of the results with respect to current teaching practices in early intervention programs are discussed.

  20. Summation in autoshaping is affected by the similarity of the visual stimuli to the stimulation they replace.

    PubMed

    Pearce, John M; Redhead, Edward S; George, David N

    2002-04-01

    Pigeons received autoshaping with 2 stimuli, A and B, presented in adjacent regions on a television screen. Conditioning with each stimulus was therefore accompanied by stimulation that was displaced from the screen whenever the other stimulus was presented. Test trials with AB revealed stronger responding if this displaced stimulation was similar to, rather than different from, A and B. For a further experiment the training just described included trials with A and B accompanied by an additional, similar, stimulus. Responding during test trials with AB was stronger if the additional trials signaled the presence rather than the absence of food. The results are explained with a configural theory of conditioning.

  1. Dissociating neural variability related to stimulus quality and response times in perceptual decision-making.

    PubMed

    Bode, Stefan; Bennett, Daniel; Sewell, David K; Paton, Bryan; Egan, Gary F; Smith, Philip L; Murawski, Carsten

    2018-03-01

    According to sequential sampling models, perceptual decision-making is based on accumulation of noisy evidence towards a decision threshold. The speed with which a decision is reached is determined by both the quality of incoming sensory information and random trial-by-trial variability in the encoded stimulus representations. To investigate those decision dynamics at the neural level, participants made perceptual decisions while functional magnetic resonance imaging (fMRI) was conducted. On each trial, participants judged whether an image presented under conditions of high, medium, or low visual noise showed a piano or a chair. Higher stimulus quality (lower visual noise) was associated with increased activation in bilateral medial occipito-temporal cortex and ventral striatum. Lower stimulus quality was related to stronger activation in posterior parietal cortex (PPC) and dorsolateral prefrontal cortex (DLPFC). When stimulus quality was fixed, faster response times were associated with a positive parametric modulation of activation in medial prefrontal and orbitofrontal cortex, while slower response times were again related to more activation in PPC, DLPFC and insula. Our results suggest that distinct neural networks were sensitive to the quality of stimulus information, and to trial-to-trial variability in the encoded stimulus representations, but that reaching a decision was a consequence of their joint activity. Copyright © 2018 Elsevier Ltd. All rights reserved.
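
    The accumulation-to-threshold idea invoked above can be illustrated with a toy diffusion simulation. This is a generic sequential sampling sketch, not the authors' model; the parameter values, the response labels ("piano"/"chair"), and the non-decision time are assumptions.

      import numpy as np

      def simulate_trial(drift, threshold=1.0, noise=1.0, dt=0.001, non_decision=0.3, max_t=5.0, rng=None):
          # Accumulate noisy evidence until it leaves the interval (-threshold, +threshold).
          # Higher stimulus quality (less visual noise) would correspond to a larger drift rate.
          rng = rng or np.random.default_rng()
          evidence, t = 0.0, 0.0
          while abs(evidence) < threshold and t < max_t:
              evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
              t += dt
          choice = "piano" if evidence > 0 else "chair"
          return choice, t + non_decision               # response time = decision time + non-decision time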

  2. Setting and changing feature priorities in visual short-term memory.

    PubMed

    Kalogeropoulou, Zampeta; Jagadeesh, Akshay V; Ohl, Sven; Rolfs, Martin

    2017-04-01

    Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM protecting information that would otherwise be forgotten.
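
    The "reduced guessing, increased report precision" decomposition mentioned above is typically obtained by fitting a mixture model to the distribution of report errors: a uniform guessing component plus a von Mises component centred on the target. The sketch below shows one common way to do this and is not necessarily the authors' analysis; the starting values and bounds are assumptions, and for orientation reports the errors are usually rescaled to a full circle before fitting.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import vonmises

      def fit_mixture(errors):
          # errors: report errors in radians, wrapped to [-pi, pi]
          errors = np.asarray(errors, dtype=float)
          def neg_log_lik(params):
              g, kappa = params                          # guess rate, memory precision (concentration)
              p = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
              return -np.sum(np.log(p + 1e-12))
          res = minimize(neg_log_lik, x0=[0.2, 5.0],
                         bounds=[(1e-3, 0.999), (1e-2, 100.0)])
          return res.x                                   # estimated (guess rate, kappa)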

  3. Exploring conflict- and target-related movement of visual attention.

    PubMed

    Wendt, Mike; Garling, Marco; Luna-Rodriguez, Aquiles; Jacobsen, Thomas

    2014-01-01

    Intermixing trials of a visual search task with trials of a modified flanker task, the authors investigated whether the presentation of conflicting distractors at only one side (left or right) of a target stimulus triggers shifts of visual attention towards the contralateral side. Search time patterns provided evidence for lateral attention shifts only when participants performed the flanker task under an instruction assumed to widen the focus of attention, demonstrating that instruction-based control settings of an otherwise identical task can impact performance in an unrelated task. Contrasting conditions with response-related and response-unrelated distractors showed that shifting attention does not depend on response conflict and may be explained as stimulus-conflict-related withdrawal or target-related deployment of attention.

  4. Impaired Contingent Attentional Capture Predicts Reduced Working Memory Capacity in Schizophrenia

    PubMed Central

    Mayer, Jutta S.; Fukuda, Keisuke; Vogel, Edward K.; Park, Sohee

    2012-01-01

    Although impairments in working memory (WM) are well documented in schizophrenia, the specific factors that cause these deficits are poorly understood. In this study, we hypothesized that a heightened susceptibility to attentional capture at an early stage of visual processing would result in working memory encoding problems. 30 patients with schizophrenia and 28 demographically matched healthy participants were presented with a search array and asked to report the orientation of the target stimulus. In some of the trials, a flanker stimulus preceded the search array that either matched the color of the target (relevant-flanker capture) or appeared in a different color (irrelevant-flanker capture). Working memory capacity was determined in each individual using the visual change detection paradigm. Patients needed considerably more time to find the target in the no-flanker condition. After adjusting the individual exposure time, both groups showed equivalent capture costs in the irrelevant-flanker condition. However, in the relevant-flanker condition, capture costs were increased in patients compared to controls when the stimulus onset asynchrony between the flanker and the search array was high. Moreover, the increase in relevant capture costs correlated negatively with working memory capacity. This study demonstrates preserved stimulus-driven attentional capture but impaired contingent attentional capture associated with low working memory capacity in schizophrenia. These findings suggest a selective impairment of top-down attentional control in schizophrenia, which may impair working memory encoding. PMID:23152783

  5. Impaired contingent attentional capture predicts reduced working memory capacity in schizophrenia.

    PubMed

    Mayer, Jutta S; Fukuda, Keisuke; Vogel, Edward K; Park, Sohee

    2012-01-01

    Although impairments in working memory (WM) are well documented in schizophrenia, the specific factors that cause these deficits are poorly understood. In this study, we hypothesized that a heightened susceptibility to attentional capture at an early stage of visual processing would result in working memory encoding problems. 30 patients with schizophrenia and 28 demographically matched healthy participants were presented with a search array and asked to report the orientation of the target stimulus. In some of the trials, a flanker stimulus preceded the search array that either matched the color of the target (relevant-flanker capture) or appeared in a different color (irrelevant-flanker capture). Working memory capacity was determined in each individual using the visual change detection paradigm. Patients needed considerably more time to find the target in the no-flanker condition. After adjusting the individual exposure time, both groups showed equivalent capture costs in the irrelevant-flanker condition. However, in the relevant-flanker condition, capture costs were increased in patients compared to controls when the stimulus onset asynchrony between the flanker and the search array was high. Moreover, the increase in relevant capture costs correlated negatively with working memory capacity. This study demonstrates preserved stimulus-driven attentional capture but impaired contingent attentional capture associated with low working memory capacity in schizophrenia. These findings suggest a selective impairment of top-down attentional control in schizophrenia, which may impair working memory encoding.

  6. Changes in compensatory eye movements associated with simulated stimulus conditions of spaceflight

    NASA Technical Reports Server (NTRS)

    Harm, Deborah L.; Zografos, Linda M.; Skinner, Noel C.; Parker, Donald E.

    1993-01-01

    Compensatory vertical eye movement gain (CVEMG) was recorded during pitch oscillation in darkness before, during, and immediately after exposures to the stimulus rearrangement produced by the Preflight Adaptation Trainer (PAT) Tilt-Translation Device (TTD). The TTD is designed to elicit adaptive responses that are similar to those observed in microgravity-adapted astronauts. The data from Experiment 1 yielded a statistically significant CVEMG decrease following 15 min of exposure to a stimulus rearrangement condition where the phase angle between subject pitch tilt and visual scene translation was 270 deg; statistically significant gain decreases were not observed following exposures either to a condition where the phase angle between subject pitch and scene translation was 90 deg or to a no-stimulus-rearrangement condition. Experiment 2 replicated the 270-deg-phase condition from Experiment 1 and extended the exposure duration from 30 to 45 min. Statistically significant additional changes in CVEMG associated with the increased exposure duration were not observed. The adaptation time constant estimated from the combined data from Experiments 1 and 2 was 29 min.

  7. Changes in Compensatory Eye Movements Associated with Simulated Stimulus Conditions of Spaceflight

    NASA Technical Reports Server (NTRS)

    Harm, Deborah L.; Zografos, Linda M.; Skinner, Noel C.; Parker, Donald E.

    1993-01-01

    Compensatory vertical eye movement gain (CVEMG) was recorded during pitch oscillation in darkness before, during and immediately after exposures to the stimulus rearrangement produced by the Preflight Adaptation Trainer (PAT) Tilt-Translation Device (TTD). The TTD is designed to elicit adaptive responses that are similar to those observed in microgravity-adapted astronauts. The data from Experiment 1 yielded a statistically significant CVEMG decrease following 15 minutes of exposure to a stimulus rearrangement condition where the phase angle between subject pitch tilt and visual scene translation was 270 degrees; statistically significant gain decreases were not observed following exposures either to a condition where the phase angle between subject pitch and scene translation was 90 degrees or to a no-stimulus-rearrangement condition. Experiment 2 replicated the 270 degree phase condition from Experiment 1 and extended the exposure duration from 30 to 45 minutes. Statistically significant additional changes in CVEMG associated with the increased exposure duration were not observed. The adaptation time constant estimated from the combined data from Experiments 1 and 2 was 29 minutes.

  8. Visual and proprioceptive interaction in patients with bilateral vestibular loss

    PubMed Central

    Cutfield, Nicholas J.; Scott, Gregory; Waldman, Adam D.; Sharp, David J.; Bronstein, Adolfo M.

    2014-01-01

    Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients. PMID:25061564

  9. Visual and proprioceptive interaction in patients with bilateral vestibular loss.

    PubMed

    Cutfield, Nicholas J; Scott, Gregory; Waldman, Adam D; Sharp, David J; Bronstein, Adolfo M

    2014-01-01

    Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients.

  10. Attention to the Color of a Moving Stimulus Modulates Motion-Signal Processing in Macaque Area MT: Evidence for a Unified Attentional System.

    PubMed

    Katzner, Steffen; Busse, Laura; Treue, Stefan

    2009-01-01

    Directing visual attention to spatial locations or to non-spatial stimulus features can strongly modulate responses of individual cortical sensory neurons. Effects of attention typically vary in magnitude, not only between visual cortical areas but also between individual neurons from the same area. Here, we investigate whether the size of attentional effects depends on the match between the tuning properties of the recorded neuron and the perceptual task at hand. We recorded extracellular responses from individual direction-selective neurons in the middle temporal area (MT) of rhesus monkeys trained to attend either to the color or the motion signal of a moving stimulus. We found that effects of spatial and feature-based attention in MT, which are typically observed in tasks allocating attention to motion, were very similar even when attention was directed to the color of the stimulus. We conclude that attentional modulation can occur in extrastriate cortex, even under conditions without a match between the tuning properties of the recorded neuron and the perceptual task at hand. Our data are consistent with theories of object-based attention describing a transfer of attention from relevant to irrelevant features, within the attended object and across the visual field. These results argue for a unified attentional system that modulates responses to a stimulus across cortical areas, even if a given area is specialized for processing task-irrelevant aspects of that stimulus.

  11. fMRI during natural sleep as a method to study brain function during early childhood.

    PubMed

    Redcay, Elizabeth; Kennedy, Daniel P; Courchesne, Eric

    2007-12-01

    Many techniques to study early functional brain development lack the whole-brain spatial resolution that is available with fMRI. We utilized a relatively novel method in which fMRI data were collected from children during natural sleep. Stimulus-evoked responses to auditory and visual stimuli as well as stimulus-independent functional networks were examined in typically developing 2-4-year-old children. Reliable fMRI data were collected from 13 children during presentation of auditory stimuli (tones, vocal sounds, and nonvocal sounds) in a block design. Twelve children were presented with visual flashing lights at 2.5 Hz. When analyses combined all three types of auditory stimulus conditions as compared to rest, activation included bilateral superior temporal gyri/sulci (STG/S) and right cerebellum. Direct comparisons between conditions revealed significantly greater responses to nonvocal sounds and tones than to vocal sounds in a number of brain regions including superior temporal gyrus/sulcus, medial frontal cortex and right lateral cerebellum. The response to visual stimuli was localized to occipital cortex. Furthermore, stimulus-independent functional connectivity MRI analyses (fcMRI) revealed functional connectivity between STG and other temporal regions (including contralateral STG) and medial and lateral prefrontal regions. Functional connectivity with an occipital seed was localized to occipital and parietal cortex. In sum, 2-4 year olds showed a differential fMRI response both between stimulus modalities and between stimuli in the auditory modality. Furthermore, superior temporal regions showed functional connectivity with numerous higher-order regions during sleep. We conclude that the use of sleep fMRI may be a valuable tool for examining functional brain organization in young children.
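
    The stimulus-independent connectivity analysis described above is, at its core, a seed-based correlation: the mean time series from a seed region (e.g., STG or an occipital seed) is correlated with every other voxel's time series. A minimal sketch, assuming preprocessed data in a voxels-by-time array; this is not the authors' pipeline.

      import numpy as np

      def seed_connectivity(data, seed_ts):
          # data: (n_voxels, n_timepoints) preprocessed BOLD; seed_ts: (n_timepoints,) mean seed time series
          data_c = data - data.mean(axis=1, keepdims=True)
          seed_c = seed_ts - seed_ts.mean()
          num = data_c @ seed_c
          den = np.sqrt((data_c ** 2).sum(axis=1) * (seed_c ** 2).sum())
          return num / den                               # Pearson correlation of each voxel with the seed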

  12. Stimulus size and eccentricity in visually induced perception of horizontally translational self-motion.

    PubMed

    Nakamura, S; Shimojo, S

    1998-10-01

    The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli when stimulus sizes were the same. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.

  13. Weighting Mean and Variability during Confidence Judgments

    PubMed Central

    de Gardelle, Vincent; Mamassian, Pascal

    2015-01-01

    Humans can not only perform some visual tasks with great precision, they can also judge how good they are in these tasks. However, it remains unclear how observers produce such metacognitive evaluations, and how these evaluations might be dissociated from the performance in the visual task. Here, we hypothesized that some stimulus variables could affect confidence judgments above and beyond their impact on performance. In a motion categorization task on moving dots, we manipulated the mean and the variance of the motion directions, to obtain a low-mean low-variance condition and a high-mean high-variance condition with matched performances. Critically, in terms of confidence, observers were not indifferent between these two conditions. Observers exhibited marked preferences, which were heterogeneous across individuals, but stable within each observer when assessed one week later. Thus, confidence and performance are dissociable and observers’ confidence judgments put different weights on the stimulus variables that limit performance. PMID:25793275

  14. Visually cued motor synchronization: modulation of fMRI activation patterns by baseline condition.

    PubMed

    Cerasa, Antonio; Hagberg, Gisela E; Bianciardi, Marta; Sabatini, Umberto

    2005-01-03

    A well-known issue in functional neuroimaging studies, regarding motor synchronization, is to design suitable control tasks able to discriminate between the brain structures involved in primary time-keeper functions and those related to other processes such as attentional effort. The aim of this work was to investigate how the predictability of stimulus onsets in the baseline condition modulates the activity in brain structures related to processes involved in time-keeper functions during the performance of a visually cued motor synchronization task (VM). The rationale behind this choice derives from the notion that using different stimulus predictability can vary the subject's attention and, consequently, the neural activity. For this purpose, baseline levels of BOLD activity were obtained from 12 subjects during a conventional-baseline condition: maintained fixation of the visual rhythmic stimuli presented in the VM task, and a random-baseline condition: maintained fixation of visual stimuli occurring randomly. fMRI analysis demonstrated that while brain areas with a documented role in basic time processing are detected independent of the baseline condition (right cerebellum, bilateral putamen, left thalamus, left superior temporal gyrus, left sensorimotor cortex, left dorsal premotor cortex and supplementary motor area), the ventral premotor cortex, caudate nucleus, insula and inferior frontal gyrus exhibited a baseline-dependent activation. We conclude that maintained fixation of unpredictable visual stimuli can be employed in order to reduce or eliminate neural activity related to attentional components present in the synchronization task.

  15. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    PubMed

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
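
    Quantifying the steady-state responses "in the spectral domain", as described above, amounts to reading out the amplitude of the trial-averaged EEG at each tagging frequency (here 3.14 and 3.63 Hz). A minimal sketch, assuming a single-channel trial-averaged signal; actual pipelines typically also use neighbouring frequency bins for noise estimates, and the function name is an assumption.

      import numpy as np

      def ssr_amplitude(signal, srate, freq):
          # signal: trial-averaged single-channel EEG; srate: sampling rate (Hz); freq: tagging frequency (Hz)
          signal = np.asarray(signal, dtype=float)
          signal = signal - signal.mean()
          amp = np.abs(np.fft.rfft(signal)) * 2 / len(signal)   # single-sided amplitude spectrum
          freqs = np.fft.rfftfreq(len(signal), d=1.0 / srate)
          return amp[np.argmin(np.abs(freqs - freq))]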

  16. Rethinking volitional control over task choice in multitask environments: use of a stimulus set selection strategy in voluntary task switching.

    PubMed

    Arrington, Catherine M; Weaver, Starla M

    2015-01-01

    Under conditions of volitional control in multitask environments, subjects may engage in a variety of strategies to guide task selection. The current research examines whether subjects may sometimes use a top-down control strategy of selecting a task-irrelevant stimulus dimension, such as location, to guide task selection. We term this approach a stimulus set selection strategy. Using a voluntary task switching procedure, subjects voluntarily switched between categorizing letter and number stimuli that appeared in two, four, or eight possible target locations. Effects of stimulus availability, manipulated by varying the stimulus onset asynchrony between the two target stimuli, and location repetition were analysed to assess the use of a stimulus set selection strategy. Considered across position conditions, Experiment 1 showed effects of both stimulus availability and location repetition on task choice, suggesting that subjects may have been using a stimulus set selection strategy on some trials only in the 2-position condition, where selection based on location always results in a target at the selected location. Experiment 2 replicated and extended these findings in a visually more cluttered environment. These results indicate that, contrary to current models of task selection in voluntary task switching, the top-down control of task selection may occur in the absence of the formation of an intention to perform a particular task.

  17. The malleability of emotional perception: Short-term plasticity in retinotopic neurons accompanies the formation of perceptual biases to threat.

    PubMed

    Thigpen, Nina N; Bartsch, Felix; Keil, Andreas

    2017-04-01

    Emotional experience changes visual perception, leading to the prioritization of sensory information associated with threats and opportunities. These emotional biases have been extensively studied by basic and clinical scientists, but their underlying mechanism is not known. The present study combined measures of brain-electric activity and autonomic physiology to establish how threat biases emerge in human observers. Participants viewed stimuli designed to differentially challenge known properties of different neuronal populations along the visual pathway: location, eye, and orientation specificity. Biases were induced using aversive conditioning with only 1 combination of eye, orientation, and location predicting a noxious loud noise and replicated in a separate group of participants. Selective heart rate-orienting responses for the conditioned threat stimulus indicated bias formation. Retinotopic visual brain responses were persistently and selectively enhanced after massive aversive learning for only the threat stimulus and dissipated after extinction training. These changes were location-, eye-, and orientation-specific, supporting the hypothesis that short-term plasticity in primary visual neurons mediates the formation of perceptual biases to threat. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. The dynamic-stimulus advantage of visual symmetry perception.

    PubMed

    Niimi, Ryosuke; Watanabe, Katsumi; Yokosawa, Kazuhiko

    2008-09-01

    It has been speculated that visual symmetry perception from dynamic stimuli involves mechanisms different from those for static stimuli. However, previous studies found no evidence that dynamic stimuli lead to active temporal processing and improve symmetry detection. In this study, four psychophysical experiments investigated temporal processing in symmetry perception using both dynamic and static stimulus presentations of dot patterns. In Experiment 1, rapid successive presentations of symmetric patterns (e.g., 16 patterns per 853 ms) produced more accurate discrimination of orientations of symmetry axes than static stimuli (single pattern presented through 853 ms). In Experiments 2-4, we confirmed that the dynamic-stimulus advantage depended upon presentation of a large number of unique patterns within a brief period (853 ms) in the dynamic conditions. Evidently, human vision takes advantage of temporal processing for symmetry perception from dynamic stimuli.

  19. Integration of nonthematic details in pictures and passages.

    PubMed

    Viera, C L; Homa, D L

    1991-01-01

    Nonthematic details in naturalistic scenes were manipulated to produce four stimulus versions: color photos, black-white copies, and elaborated and unelaborated line drawings (Experiment 1); analogous verbal descriptions of each visual version were produced for Experiment 2. In Experiment 1, two or three different versions of a scene were presented in the mixed condition; the same version of the scene was repeated either two or three times in the same condition, and a 1-presentation control condition was also included. In Experiment 2, the same presentation conditions were used across different groups of subjects who either viewed the pictures or heard the descriptions. An old/new recognition test was given in which the nonstudied versions of the studied items were used as foils. Higher false recognition performances for the mixed condition were found for the visual materials in both experiments, and in the second experiment the verbal materials produced equivalently high levels of false recognition for both same and mixed conditions. Additionally, in Experiment 2 the patterns of performances across material conditions were differentially affected by the manipulation of detail in the four stimulus versions. These differences across materials suggest that the integration of semantically consistent details across temporally separable presentations is facilitated when the stimuli do not provide visual/physical attributes to enhance discrimination of different presentations. Further, the evidence derived from the visual scenes in both experiments indicates that the semantic schema abstracted from a picture is not the sole mediator of recognition performance.

  20. Characteristics of implicit chaining in cotton-top tamarins (Saguinus oedipus).

    PubMed

    Locurto, Charles; Gagne, Matthew; Nutile, Lauren

    2010-07-01

    In human cognition there has been considerable interest in observing the conditions under which subjects learn material without explicit instructions to learn. In the present experiments, we adapted this issue to nonhumans by asking what subjects learn in the absence of explicit reinforcement for correct responses. Two experiments examined the acquisition of sequence information by cotton-top tamarins (Saguinus oedipus) when such learning was not demanded by the experimental contingencies. An implicit chaining procedure was used in which visual stimuli were presented serially on a touchscreen. Subjects were required to touch one stimulus to advance to the next stimulus. Stimulus presentations followed a pattern, but learning the pattern was not necessary for reinforcement. In Experiment 1 the chain consisted of five different visual stimuli that were presented in the same order on each trial. Each stimulus could occur at any one of six touchscreen positions. In Experiment 2 the same visual element was presented serially in the same five locations on each trial, thereby allowing a behavioral pattern to be correlated with the visual pattern. In this experiment two new tests, a Wild-Card test and a Running-Start test, were used to assess what was learned in this procedure. Results from both experiments indicated that tamarins acquired more information from an implicit chain than was required by the contingencies of reinforcement. These results contribute to the developing literature on nonhuman analogs of implicit learning.

  1. Dissociable Roles of Different Types of Working Memory Load in Visual Detection

    PubMed Central

    Konstantinou, Nikos; Lavie, Nilli

    2013-01-01

    We contrasted the effects of different types of working memory (WM) load on detection. Considering the sensory-recruitment hypothesis of visual short-term memory (VSTM) within load theory (e.g., Lavie, 2010) led us to predict that VSTM load would reduce visual-representation capacity, thus leading to reduced detection sensitivity during maintenance, whereas load on WM cognitive control processes would reduce priority-based control, thus leading to enhanced detection sensitivity for a low-priority stimulus. During the retention interval of a WM task, participants performed a visual-search task while also asked to detect a masked stimulus in the periphery. Loading WM cognitive control processes (with the demand to maintain a random digit order [vs. fixed in conditions of low load]) led to enhanced detection sensitivity. In contrast, loading VSTM (with the demand to maintain the color and positions of six squares [vs. one in conditions of low load]) reduced detection sensitivity, an effect comparable with that found for manipulating perceptual load in the search task. The results confirmed our predictions and established a new functional dissociation between the roles of different types of WM load in the fundamental visual perception process of detection. PMID:23713796
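
    Detection sensitivity in such yes/no masked-detection tasks is conventionally indexed by d' from signal detection theory. A minimal sketch using a log-linear correction for extreme response rates; the correction choice is an assumption, not necessarily what the authors used.

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          # Log-linear correction keeps hit and false-alarm rates away from 0 and 1
          hit_rate = (hits + 0.5) / (hits + misses + 1.0)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)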

  2. Interval timing in children: effects of auditory and visual pacing stimuli and relationships with reading and attention variables.

    PubMed

    Birkett, Emma E; Talcott, Joel B

    2012-01-01

    Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.
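
    The "decomposition of timing variance" into central timekeeping and peripheral implementation components usually follows the Wing-Kristofferson model (originally formulated for unpaced continuation tapping): motor variance equals the negative lag-1 autocovariance of the inter-tap intervals, and clock variance is what remains. A sketch under those standard assumptions; the paper may use a related variant suited to paced tapping.

      import numpy as np

      def wing_kristofferson(intervals):
          # intervals: series of inter-tap intervals (ms)
          x = np.asarray(intervals, dtype=float) - np.mean(intervals)
          total_var = x.var(ddof=1)
          lag1_acov = np.mean(x[:-1] * x[1:])              # lag-1 autocovariance
          motor_var = max(0.0, -lag1_acov)                 # peripheral (implementation) variance
          clock_var = max(0.0, total_var - 2 * motor_var)  # central timekeeper variance
          return clock_var, motor_var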

  3. TMS effects on subjective and objective measures of vision: stimulation intensity and pre- versus post-stimulus masking.

    PubMed

    de Graaf, Tom A; Cornelsen, Sonja; Jacobs, Christianne; Sack, Alexander T

    2011-12-01

    Transcranial magnetic stimulation (TMS) can be used to mask visual stimuli, disrupting visual task performance or preventing visual awareness. While TMS masking studies generally fix stimulation intensity, we hypothesized that varying the intensity of TMS pulses in a masking paradigm might inform several ongoing debates concerning TMS disruption of vision as measured subjectively versus objectively, and pre-stimulus (forward) versus post-stimulus (backward) TMS masking. We here show that both pre-stimulus TMS pulses and post-stimulus TMS pulses could strongly mask visual stimuli. We found no dissociations between TMS effects on the subjective and objective measures of vision for any masking window or intensity, ruling out the option that TMS intensity levels determine whether dissociations between subjective and objective vision are obtained. For the post-stimulus time window particularly, we suggest that these data provide new constraints for (e.g. recurrent) models of vision and visual awareness. Finally, our data are in line with the idea that pre-stimulus masking operates differently from conventional post-stimulus masking. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Attention Modulates Visual-Tactile Interaction in Spatial Pattern Matching

    PubMed Central

    Göschl, Florian; Engel, Andreas K.; Friese, Uwe

    2014-01-01

    Factors influencing crossmodal interactions are manifold and operate in a stimulus-driven, bottom-up fashion, as well as via top-down control. Here, we evaluate the interplay of stimulus congruence and attention in a visual-tactile task. To this end, we used a matching paradigm requiring the identification of spatial patterns that were concurrently presented visually on a computer screen and haptically to the fingertips by means of a Braille stimulator. Stimulation in our paradigm was always bimodal with only the allocation of attention being manipulated between conditions. In separate blocks of the experiment, participants were instructed to (a) focus on a single modality to detect a specific target pattern, (b) pay attention to both modalities to detect a specific target pattern, or (c) to explicitly evaluate if the patterns in both modalities were congruent or not. For visual as well as tactile targets, congruent stimulus pairs led to quicker and more accurate detection compared to incongruent stimulation. This congruence facilitation effect was more prominent under divided attention. Incongruent stimulation led to behavioral decrements under divided attention as compared to selectively attending a single sensory channel. Additionally, when participants were asked to evaluate congruence explicitly, congruent stimulation was associated with better performance than incongruent stimulation. Our results extend previous findings from audiovisual studies, showing that stimulus congruence also resulted in behavioral improvements in visuotactile pattern matching. The interplay of stimulus processing and attentional control seems to be organized in a highly flexible fashion, with the integration of signals depending on both bottom-up and top-down factors, rather than occurring in an ‘all-or-nothing’ manner. PMID:25203102

  5. Nervus terminalis innervation of the goldfish retina and behavioral visual sensitivity.

    PubMed

    Davis, R E; Kyle, A; Klinger, P D

    1988-08-31

    The possibility that axon terminals of the nervus terminalis in the goldfish retina regulate visual sensitivity was examined psychophysically. Fish were classically conditioned to respond in darkness to a diffuse red light conditioned stimulus. Bilateral ablation of the olfactory bulb and telencephalon had no significant effect on response threshold, which was measured by a staircase method. Retinopetal nervus terminalis fibres thus appear to play no role in maintaining scotopic photosensitivity.
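
    The staircase threshold measurement referred to above can be illustrated with a simple 1-up/1-down rule that lowers stimulus intensity after a response and raises it after a failure to respond, averaging the reversal points as the threshold estimate. A generic sketch, not the authors' procedure; the step size and reversal count are assumptions.

      def staircase(respond, start, step, n_reversals=8):
          # respond(intensity) -> True if a conditioned response occurred at that intensity
          intensity, direction, reversals = start, 0, []
          while len(reversals) < n_reversals:
              new_direction = -1 if respond(intensity) else +1
              if direction != 0 and new_direction != direction:
                  reversals.append(intensity)              # record turn-around (reversal) points
              direction = new_direction
              intensity += direction * step
          return sum(reversals) / len(reversals)           # threshold near the 50% response point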

  6. Topographic brain mapping of emotion-related hemisphere asymmetries.

    PubMed

    Roschmann, R; Wittling, W

    1992-03-01

    The study used topographic brain mapping of visual evoked potentials to investigate emotion-related hemisphere asymmetries. The stimulus material consisted of color photographs of human faces, grouped into two emotion-related categories: normal faces (neutral stimuli) and faces deformed by dermatological diseases (emotional stimuli). The pictures were presented tachistoscopically to 20 adult right-handed subjects. Brain activity was recorded by 30 EEG electrodes with linked ears as reference. The waveforms were averaged separately with respect to each of the two stimulus conditions. Statistical analysis by means of significance probability mapping revealed significant differences between stimulus conditions for two periods of time, indicating right hemisphere superiority in emotion-related processing. The results are discussed in terms of a 2-stage-model of emotional processing in the cerebral hemispheres.

  7. On the Role of Mentalizing Processes in Aesthetic Appreciation: An ERP Study

    PubMed Central

    Beudt, Susan; Jacobsen, Thomas

    2015-01-01

    We used event-related brain potentials to explore the impact of mental perspective taking on processes of aesthetic appreciation of visual art. Participants (non-experts) were first presented with information about the life and attitudes of a fictitious artist. Subsequently, they were cued trial-wise to make an aesthetic judgment regarding an image depicting a piece of abstract art either from their own perspective or from the imagined perspective of the fictitious artist [i.e., theory of mind (ToM) condition]. Positive self-referential judgments were made more quickly and negative self-referential judgments were made more slowly than the corresponding judgments from the imagined perspective. Event-related potential analyses revealed significant differences between the two tasks both within the preparation period (i.e., during the cue-stimulus interval) and within the stimulus presentation period. For the ToM condition we observed a relative centro-parietal negativity during the preparation period (700–330 ms preceding picture onset) and a relative centro-parietal positivity during the stimulus presentation period (700–1100 ms after stimulus onset). These findings suggest that different subprocesses are involved in aesthetic appreciation and judgment of visual abstract art from one’s own vs. from another person’s perspective. PMID:26617506

  8. Pedunculopontine tegmental nucleus lesions impair stimulus-reward learning in autoshaping and conditioned reinforcement paradigms.

    PubMed

    Inglis, W L; Olmstead, M C; Robbins, T W

    2000-04-01

    The role of the pedunculopontine tegmental nucleus (PPTg) in stimulus-reward learning was assessed by testing the effects of PPTg lesions on performance in visual autoshaping and conditioned reinforcement (CRf) paradigms. Rats with PPTg lesions were unable to learn an association between a conditioned stimulus (CS) and a primary reward in either paradigm. In the autoshaping experiment, PPTg-lesioned rats approached the CS+ and CS- with equal frequency, and the latencies to respond to the two stimuli did not differ. PPTg lesions also disrupted discriminated approaches to an appetitive CS in the CRf paradigm and completely abolished the acquisition of responding with CRf. These data are discussed in the context of a possible cognitive function of the PPTg, particularly in terms of lesion-induced disruptions of attentional processes that are mediated by the thalamus.

  9. N-methyl-D-aspartate receptor antagonist MK-801 impairs learning but not memory fixation or expression of classical fear conditioning in goldfish (Carassius auratus).

    PubMed

    Xu, X; Davis, R E

    1992-04-01

    The amnestic effects of the noncompetitive NMDA receptor antagonist MK-801 on visually mediated classical fear conditioning in goldfish (Carassius auratus) were examined in five experiments. MK-801 was administered 30 min before the training session on Day 1 to look for anterograde amnestic effects, immediately after training to look for retrograde amnestic effects, and before the training or test session, or both, to look for state-dependence effects. The results showed that MK-801 produced anterograde amnesia at doses that did not produce retrograde amnesia or state dependency and did not impair the expression of conditioned or unconditioned branchial suppression responses (BSRs) to the conditioned stimulus. The results indicate that MK-801 disrupts the mechanism of learning of the conditioned stimulus-unconditioned stimulus relation. Evidence is also presented that the learning processes that are disrupted by MK-801 occur during the initial stage of BSR conditioning.

  10. Neuronal correlates of the visually elicited escape response of the crab Chasmagnathus upon seasonal variations, stimuli changes and perceptual alterations.

    PubMed

    Sztarker, Julieta; Tomsic, Daniel

    2008-06-01

    When confronted with predators, animals are forced to take crucial decisions such as the timing and manner of escape. In the case of the crab Chasmagnathus, cumulative evidence suggests that the escape response to a visual danger stimulus (VDS) can be accounted for by the response of a group of lobula giant (LG) neurons. To further investigate this hypothesis, we examined the relationship between behavioral and neuronal activities within a variety of experimental conditions that affected the level of escape. The intensity of the escape response to VDS was influenced by seasonal variations, changes in stimulus features, and whether the crab perceived stimuli monocularly or binocularly. These experimental conditions consistently affected the response of LG neurons in a way that closely matched the effects observed at the behavioral level. In other words, the intensity of the stimulus-elicited spike activity of LG neurons faithfully reflected the intensity of the escape response. These results support the idea that the LG neurons from the lobula of crabs are deeply involved in the decision for escaping from VDS.

  11. Gait bradykinesia in Parkinson's disease: a change in the motor program which controls the synergy of gait.

    PubMed

    Warabi, Tateo; Furuyama, Hiroyasu; Sugai, Eri; Kato, Masamichi; Yanagisawa, Nobuo

    2018-01-01

    This study examined how gait bradykinesia in Parkinson's disease is related to motor programming. Thirty-five patients with idiopathic Parkinson's disease and nine age-matched healthy subjects participated. After the patients fixated on a visual fixation target (conditioning stimulus), voluntary gait was triggered by a visual on-stimulus. While the subject walked on a level floor, soleus and tibialis anterior EMG latencies and the y-axis vector of the sole-floor reaction force were examined. Three paradigms were used to dissociate the off- and on-latencies. In the gap task, the fixation target was turned off 200 ms before the on-stimulus appeared (a 200-ms gap), so that EMG latency was not influenced by the fixation target. In the overlap task, the on-stimulus was turned on while the fixation target was still present (200-ms overlap). In the no-gap task, the fixation target was turned off and the on-stimulus was turned on simultaneously. The onset of the EMG pause following the tonic soleus EMG was defined as the off-latency of posture (termination), and the onset of the tibialis anterior EMG burst as the on-latency of gait (initiation). In the gap task, the on-latency was unchanged in all subjects. In Parkinson's disease, the fixation target prolonged both the off- and on-latencies in the overlap task. In all tasks, the off-latency was prolonged and the off- and on-latencies were desynchronized, which changed the synergic movement into a slow, short-stepped gait. The synergy of gait thus appears to be regulated by two independent sensory-motor programs operating at the off- and on-latency levels. In Parkinson's disease, delayed gait initiation reflected difficulty terminating the sensory-motor program controlling fixation, and dynamic gait bradykinesia reflected difficulty (a long off-latency) in terminating the motor program of the prior posture or movement.

  12. A Gaze Independent Brain-Computer Interface Based on Visual Stimulation through Closed Eyelids

    NASA Astrophysics Data System (ADS)

    Hwang, Han-Jeong; Ferreria, Valeria Y.; Ulrich, Daniel; Kilic, Tayfun; Chatziliadis, Xenofon; Blankertz, Benjamin; Treder, Matthias

    2015-10-01

    A classical brain-computer interface (BCI) based on visual event-related potentials (ERPs) is of limited application value for paralyzed patients with severe oculomotor impairments. In this study, we introduce a novel gaze-independent BCI paradigm that can potentially be used by such end-users because visual stimuli are administered on closed eyelids. The paradigm involved verbally presented questions with 3 possible answers. Online BCI experiments were conducted with twelve healthy subjects, who selected one option by attending to one of three different visual stimuli. It was confirmed that typical cognitive ERPs can be clearly modulated by attention to a target stimulus in this eyes-closed, gaze-independent condition, and further classified with high accuracy during online operation (74.58% ± 17.85 s.d.; chance level 33.33%), demonstrating the effectiveness of the proposed novel visual ERP paradigm. Also, stimulus-specific eye movements observed during stimulation were verified as reflex responses to light stimuli, and they did not contribute to classification. To the best of our knowledge, this study is the first to show the possibility of using a gaze-independent visual ERP paradigm in an eyes-closed condition, thereby providing another communication option for severely locked-in patients suffering from complex ocular dysfunctions.

  13. Visual word form familiarity and attention in lateral difference during processing Japanese Kana words.

    PubMed

    Nakagawa, A; Sukigara, M

    2000-09-01

    The purpose of this study was to examine the relationship between familiarity and laterality in reading Japanese Kana words. In two divided-visual-field experiments, three- or four-character Hiragana or Katakana words were presented in both familiar and unfamiliar scripts, to which subjects performed lexical decisions. Experiment 1, using three stimulus durations (40, 100, 160 ms), suggested that only in the unfamiliar script condition did increasing the stimulus presentation time affect performance differently in the two visual fields. To examine this lateral difference during the processing of unfamiliar scripts as related to attentional laterality, a concurrent auditory shadowing task was added in Experiment 2. The results suggested that processing words in an unfamiliar script requires attention, which could be left-hemisphere lateralized, while orthographically familiar kana words can be processed automatically on the basis of their word-level orthographic representations or visual word form. Copyright 2000 Academic Press.

  14. The time-course of the cross-modal semantic modulation of visual picture processing by naturalistic sounds and spoken words.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2013-01-01

    The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.

  15. The influence of spontaneous activity on stimulus processing in primary visual cortex.

    PubMed

    Schölvinck, M L; Friston, K J; Rees, G

    2012-02-01

    Spontaneous activity in the resting human brain has been studied extensively; however, how such activity affects the local processing of a sensory stimulus is relatively unknown. Here, we examined the impact of spontaneous activity in primary visual cortex on neuronal and behavioural responses to a simple visual stimulus, using functional MRI. Stimulus-evoked responses remained essentially unchanged by spontaneous fluctuations, combining with them in a largely linear fashion (i.e., with little evidence for an interaction). However, interactions between spontaneous fluctuations and stimulus-evoked responses were evident behaviourally; high levels of spontaneous activity tended to be associated with increased stimulus detection at perceptual threshold. Our results extend those found in studies of spontaneous fluctuations in motor cortex and higher order visual areas, and suggest a fundamental role for spontaneous activity in stimulus processing. Copyright © 2011. Published by Elsevier Inc.

  16. Similar brain networks for detecting visuo-motor and visuo-proprioceptive synchrony.

    PubMed

    Balslev, Daniela; Nielsen, Finn A; Lund, Torben E; Law, Ian; Paulson, Olaf B

    2006-05-15

    The ability to recognize feedback from own movement as opposed to the movement of someone else is important for motor control and social interaction. The neural processes involved in feedback recognition are incompletely understood. Two competing hypotheses have been proposed: the stimulus is compared with either (a) the proprioceptive feedback or with (b) the motor command and if they match, then the external stimulus is identified as feedback. Hypothesis (a) predicts that the neural mechanisms or brain areas involved in distinguishing self from other during passive and active movement are similar, whereas hypothesis (b) predicts that they are different. In this fMRI study, healthy subjects saw visual cursor movement that was either synchronous or asynchronous with their active or passive finger movements. The aim was to identify the brain areas where the neural activity depended on whether the visual stimulus was feedback from own movement and to contrast the functional activation maps for active and passive movement. We found activity increases in the right temporoparietal cortex in the condition with asynchronous relative to synchronous visual feedback from both active and passive movements. However, no statistically significant difference was found between these sets of activated areas when the active and passive movement conditions were compared. With a posterior probability of 0.95, no brain voxel had a contrast effect above 0.11% of the whole-brain mean signal. These results do not support the hypothesis that recognition of visual feedback during active and passive movement relies on different brain areas.

  17. V1 projection zone signals in human macular degeneration depend on task, not stimulus.

    PubMed

    Masuda, Yoichiro; Dumoulin, Serge O; Nakadomari, Satoshi; Wandell, Brian A

    2008-11-01

    We used functional magnetic resonance imaging to assess abnormal cortical signals in humans with juvenile macular degeneration (JMD). These signals have been interpreted as indicating large-scale cortical reorganization. Subjects viewed a stimulus passively or performed a task; the task was either related or unrelated to the stimulus. During passive viewing, or while performing tasks unrelated to the stimulus, there were large unresponsive V1 regions. These regions included the foveal projection zone, and we refer to them as the lesion projection zone (LPZ). In 3 JMD subjects, we observed highly significant responses in the LPZ while they performed stimulus-related judgments. In control subjects, where we presented the stimulus only within the peripheral visual field, there was no V1 response in the foveal projection zone in any condition. The difference between JMD and control responses can be explained by hypotheses that have very different implications for V1 reorganization. In controls retinal afferents carry signals indicating the presence of a uniform (zero-contrast) region of the visual field. Deletion of retinal input may 1) spur the formation of new cortical pathways that carry task-dependent signals (reorganization), or 2) unmask preexisting task-dependent cortical signals that ordinarily are suppressed by the deleted signals (no reorganization).

  18. V1 Projection Zone Signals in Human Macular Degeneration Depend on Task, not Stimulus

    PubMed Central

    Dumoulin, Serge O.; Nakadomari, Satoshi; Wandell, Brian A.

    2008-01-01

    We used functional magnetic resonance imaging to assess abnormal cortical signals in humans with juvenile macular degeneration (JMD). These signals have been interpreted as indicating large-scale cortical reorganization. Subjects viewed a stimulus passively or performed a task; the task was either related or unrelated to the stimulus. During passive viewing, or while performing tasks unrelated to the stimulus, there were large unresponsive V1 regions. These regions included the foveal projection zone, and we refer to them as the lesion projection zone (LPZ). In 3 JMD subjects, we observed highly significant responses in the LPZ while they performed stimulus-related judgments. In control subjects, where we presented the stimulus only within the peripheral visual field, there was no V1 response in the foveal projection zone in any condition. The difference between JMD and control responses can be explained by hypotheses that have very different implications for V1 reorganization. In controls retinal afferents carry signals indicating the presence of a uniform (zero-contrast) region of the visual field. Deletion of retinal input may 1) spur the formation of new cortical pathways that carry task-dependent signals (reorganization), or 2) unmask preexisting task-dependent cortical signals that ordinarily are suppressed by the deleted signals (no reorganization). PMID:18250083

  19. Double Dissociation of Conditioning and Declarative Knowledge Relative to the Amygdala and Hippocampus in Humans

    NASA Astrophysics Data System (ADS)

    Bechara, Antoine; Tranel, Daniel; Damasio, Hanna; Adolphs, Ralph; Rockland, Charles; Damasio, Antonio R.

    1995-08-01

    A patient with selective bilateral damage to the amygdala did not acquire conditioned autonomic responses to visual or auditory stimuli but did acquire the declarative facts about which visual or auditory stimuli were paired with the unconditioned stimulus. By contrast, a patient with selective bilateral damage to the hippocampus failed to acquire the facts but did acquire the conditioning. Finally, a patient with bilateral damage to both amygdala and hippocampal formation acquired neither the conditioning nor the facts. These findings demonstrate a double dissociation of conditioning and declarative knowledge relative to the human amygdala and hippocampus.

  20. Verbal Recall of Auditory and Visual Signals by Normal and Deficient Reading Children.

    ERIC Educational Resources Information Center

    Levine, Maureen Julianne

    Verbal recall of bisensory memory tasks was compared among 48 boys aged 9 to 12 years in three groups: normal readers, primary deficit readers, and secondary deficit readers. Auditory and visual stimulus pairs composed of digits, which incorporated variations of intersensory and intrasensory conditions, were administered to Ss through a Bell and…

  1. Comparison on driving fatigue related hemodynamics activated by auditory and visual stimulus

    NASA Astrophysics Data System (ADS)

    Deng, Zishan; Gao, Yuan; Li, Ting

    2018-02-01

    As one of the main causes of traffic accidents, driving fatigue deserves researchers' attention, and its detection and monitoring during long-term driving require new techniques. Since functional near-infrared spectroscopy (fNIRS) can be applied to detect cerebral hemodynamic responses, it is a promising candidate for fatigue-level detection. Here, we performed three different kinds of experiments on a driver and recorded his cerebral hemodynamic responses during long hours of driving, using our fNIRS-based device. Each experiment lasted for 7 hours, and one of three specific tests, probing the driver's responses to sounds, traffic lights, and direction signs, respectively, was administered every hour. The results showed that, in the first few hours, visual stimuli induced fatigue more readily than auditory stimuli, and visual stimuli from traffic-light scenes induced fatigue more readily than visual stimuli from direction signs. We also found that fatigue-related hemodynamic responses increased fastest for auditory stimuli, next fastest for traffic-light scenes, and slowest for direction-sign scenes. Our study thus compared auditory, visual color, and visual character stimuli in their propensity to cause driving fatigue, which is meaningful for driving safety management.

  2. Effects of auditory and visual modalities in recall of words.

    PubMed

    Gadzella, B M; Whitehead, D A

    1975-02-01

    Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of data showed the auditory modality was superior to visual (pictures) ones but was not significantly different from visual (printed words) modality. In visual modalities, printed words were superior to colored pictures. Generally, conditions with multiple modes of representation of stimuli were significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.

  3. Evidence for an All-Or-None Perceptual Response: Single-Trial Analyses of Magnetoencephalography Signals Indicate an Abrupt Transition Between Visual Perception and Its Absence

    PubMed Central

    Sekar, Krithiga; Findley, William M.; Llinás, Rodolfo R.

    2014-01-01

    Whether consciousness is an all-or-none or graded phenomenon is an area of inquiry that has received considerable interest in neuroscience and is as yet still debated. In this magnetoencephalography (MEG) study we used a single stimulus paradigm with sub-threshold, threshold and supra-threshold duration inputs to assess whether stimulus perception is continuous with or abruptly differentiated from unconscious stimulus processing in the brain. By grouping epochs according to stimulus identification accuracy and exposure duration, we were able to investigate whether a high-amplitude perception-related cortical event was (1) only evoked for conditions where perception was most probable, (2) had invariant amplitude once evoked, and (3) was largely absent for conditions where perception was least probable (criteria satisfying an all-or-none hypothesis). We found that averaged evoked responses showed a gradual increase in amplitude with increasing perceptual strength. However, single trial analyses demonstrated that stimulus perception was correlated with an all-or-none response, the temporal precision of which increased systematically as perception transitioned from ambiguous to robust states. Due to poor signal-to-noise resolution of single trial data, whether perception-related responses, whenever present, were invariant in amplitude could not be unambiguously demonstrated. However, our findings strongly suggest that visual perception of simple stimuli is associated with an all-or-none cortical evoked response, the temporal precision of which varies as a function of perceptual strength. PMID:22020091

  4. A COMPARISON OF METHODS FOR TEACHING RECEPTIVE LABELING TO CHILDREN WITH AUTISM SPECTRUM DISORDERS

    PubMed Central

    Grow, Laura L; Carr, James E; Kodak, Tiffany M; Jostad, Candice M; Kisamore, April N

    2011-01-01

    Many early intervention curricular manuals recommend teaching auditory-visual conditional discriminations (i.e., receptive labeling) using the simple-conditional method in which component simple discriminations are taught in isolation and in the presence of a distracter stimulus before the learner is required to respond conditionally. Some have argued that this procedure might be susceptible to faulty stimulus control such as stimulus overselectivity (Green, 2001). Consequently, there has been a call for the use of alternative teaching procedures such as the conditional-only method, which involves conditional discrimination training from the onset of intervention. The purpose of the present study was to compare the simple-conditional and conditional-only methods for teaching receptive labeling to 3 young children diagnosed with autism spectrum disorders. The data indicated that the conditional-only method was a more reliable and efficient teaching procedure. In addition, several error patterns emerged during training using the simple-conditional method. The implications of the results with respect to current teaching practices in early intervention programs are discussed. PMID:21941380

  5. Temporal expectancy in the context of a theory of visual attention.

    PubMed

    Vangkilde, Signe; Petersen, Anders; Bundesen, Claus

    2013-10-19

    Temporal expectation is expectation with respect to the timing of an event such as the appearance of a certain stimulus. In this paper, temporal expectancy is investigated in the context of the theory of visual attention (TVA), and we begin by summarizing the foundations of this theoretical framework. Next, we present a parametric experiment exploring the effects of temporal expectation on perceptual processing speed in cued single-stimulus letter recognition with unspeeded motor responses. The length of the cue-stimulus foreperiod was exponentially distributed with one of six hazard rates varying between blocks. We hypothesized that this manipulation would result in a distinct temporal expectation in each hazard rate condition. Stimulus exposures were varied such that both the temporal threshold of conscious perception (t0, in ms) and the perceptual processing speed (v, in letters per second) could be estimated using TVA. We found that the temporal threshold t0 was unaffected by temporal expectation, but the perceptual processing speed v was a strikingly linear function of the logarithm of the hazard rate of the stimulus presentation. We argue that the effects on the v values were generated by changes in perceptual biases, suggesting that our perceptual biases are directly related to our temporal expectations.
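
    The two TVA parameters estimated here enter the standard single-stimulus exponential model of report accuracy. The following is a minimal sketch of that relation with illustrative values; in particular, the coefficients linking v to the log hazard rate are assumptions, not the fitted results:

      import math

      def tva_accuracy(exposure_ms, t0_ms, v_per_s):
          """Single-stimulus TVA: report accuracy rises exponentially with the
          effective exposure beyond the perceptual threshold t0."""
          effective_s = max(exposure_ms - t0_ms, 0.0) / 1000.0
          return 1.0 - math.exp(-v_per_s * effective_s)

      def processing_speed(hazard_rate, a=20.0, b=8.0):
          """Illustrative linear dependence of v on the log hazard rate (coefficients hypothetical)."""
          return a + b * math.log(hazard_rate)

      for hazard in (0.5, 1.0, 2.0, 4.0):
          v = processing_speed(hazard)
          print(f"hazard={hazard}: v={v:.1f} letters/s, accuracy at 80 ms = {tva_accuracy(80, 20, v):.2f}")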

  6. Retinotopy and attention to the face and house images in the human visual cortex.

    PubMed

    Wang, Bin; Yan, Tianyi; Ohno, Seiichiro; Kanazawa, Susumu; Wu, Jinglong

    2016-06-01

    Attentional modulation of the neural activities in human visual areas has been well demonstrated. However, the retinotopic activities that are driven by face and house images and attention to face and house images remain unknown. In the present study, we used images of faces and houses to estimate the retinotopic activities that were driven by both the images and attention to the images, driven by attention to the images, and driven by the images. Generally, our results show that both face and house images produced similar retinotopic activities in visual areas, which were only observed in the attention + stimulus and the attention conditions, but not in the stimulus condition. The fusiform face area (FFA) responded to faces that were presented on the horizontal meridian, whereas the parahippocampal place area (PPA) rarely responded to houses at any visual field location. We further analyzed the amplitudes of the neural responses to the target wedge. In V1, V2, V3, V3A, lateral occipital area 1 (LO-1), and hV4, the neural responses to the attended target wedge were significantly greater than those to the unattended target wedge. However, in LO-2, ventral occipital areas 1 and 2 (VO-1 and VO-2) and FFA and PPA, the differences were not significant. We proposed that these areas likely have large fields of attentional modulation for face and house images and exhibit responses to both the target wedge and the background stimuli. In addition, we proposed that the absence of retinotopic activity in the stimulus condition might imply no perceived difference between the target wedge and the background stimuli.

  7. Evidence for top-down control of eye movements during visual decision making.

    PubMed

    Glaholt, Mackenzie G; Wu, Mei-Chun; Reingold, Eyal M

    2010-05-01

    Participants' eye movements were monitored while they viewed displays containing 6 exemplars from one of several categories of everyday items (belts, sunglasses, shirts, shoes), with a column of 3 items presented on the left and another column of 3 items presented on the right side of the display. Participants were either required to choose which of the two sets of 3 items was the most expensive (2-AFC) or which of the 6 items was the most expensive (6-AFC). Importantly, the stimulus display, and the relevant stimulus dimension, were held constant across conditions. Consistent with the hypothesis of top-down control of eye movements during visual decision making, we documented greater selectivity in the processing of stimulus information in the 6-AFC than the 2-AFC decision. In addition, strong spatial biases in looking behavior were demonstrated, but these biases were largely insensitive to the instructional manipulation, and did not substantially influence participants' choices.

  8. Synergistic interaction between baclofen administration into the median raphe nucleus and inconsequential visual stimuli on investigatory behavior of rats

    PubMed Central

    Vollrath-Smith, Fiori R.; Shin, Rick

    2011-01-01

    Rationale Noncontingent administration of amphetamine into the ventral striatum or systemic nicotine increases responses rewarded by inconsequential visual stimuli. When these drugs are contingently administered, rats learn to self-administer them. We recently found that rats self-administer the GABAB receptor agonist baclofen into the median (MR) or dorsal (DR) raphe nuclei. Objectives We examined whether noncontingent administration of baclofen into the MR or DR increases rats’ investigatory behavior rewarded by a flash of light. Results Contingent presentations of a flash of light slightly increased lever presses. Whereas noncontingent administration of baclofen into the MR or DR did not reliably increase lever presses in the absence of visual stimulus reward, the same manipulation markedly increased lever presses rewarded by the visual stimulus. Heightened locomotor activity induced by intraperitoneal injections of amphetamine (3 mg/kg) failed to concur with increased lever pressing for the visual stimulus. These results indicate that the observed enhancement of visual stimulus seeking is distinct from an enhancement of general locomotor activity. Visual stimulus seeking decreased when baclofen was co-administered with the GABAB receptor antagonist, SCH 50911, confirming the involvement of local GABAB receptors. Seeking for visual stimulus also abated when baclofen administration was preceded by intraperitoneal injections of the dopamine antagonist, SCH 23390 (0.025 mg/kg), suggesting enhanced visual stimulus seeking depends on intact dopamine signals. Conclusions Baclofen administration into the MR or DR increased investigatory behavior induced by visual stimuli. Stimulation of GABAB receptors in the MR and DR appears to disinhibit the motivational process involving stimulus–approach responses. PMID:21904820

  9. Surround-Masking Affects Visual Estimation Ability

    PubMed Central

    Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.

    2017-01-01

    Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contributes toward number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround masking results may help understanding of the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845

  10. Taking Attention Away from the Auditory Modality: Context-dependent Effects on Early Sensory Encoding of Speech.

    PubMed

    Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath

    2018-05-24

    Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
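
    The abstract does not specify the decoding algorithm, so the following is only a generic illustration of single-trial decoding: a cross-validated linear classifier over flattened epoch features, with synthetic data standing in for the recorded responses (all array shapes and labels are assumptions, not the authors' pipeline):

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Synthetic stand-in: 200 epochs of (channels x time points) features, one of four speech sounds each.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 64 * 25))
      y = rng.integers(0, 4, size=200)

      decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
      scores = cross_val_score(decoder, X, y,
                               cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
      print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")  # near chance for random data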

  11. Performance effects of nicotine during selective attention, divided attention, and simple stimulus detection: an fMRI study.

    PubMed

    Hahn, Britta; Ross, Thomas J; Wolkenberg, Frank A; Shakleya, Diaa M; Huestis, Marilyn A; Stein, Elliot A

    2009-09-01

    Attention-enhancing effects of nicotine appear to depend on the nature of the attentional function. Underlying neuroanatomical mechanisms, too, may vary depending on the function modulated. This functional magnetic resonance imaging study recorded blood oxygen level-dependent (BOLD) activity in minimally deprived smokers during tasks of simple stimulus detection, selective attention, or divided attention after single-blind application of a transdermal nicotine (21 mg) or placebo patch. Smokers' performance in the placebo condition was unimpaired as compared with matched nonsmokers. Nicotine reduced reaction time (RT) in the stimulus detection and selective attention but not divided attention condition. Across all task conditions, nicotine reduced activation in frontal, temporal, thalamic, and visual regions and enhanced deactivation in so-called "default" regions. Thalamic effects correlated with RT reduction selectively during stimulus detection. An interaction with task condition was observed in middle and superior frontal gyri, where nicotine reduced activation only during stimulus detection. A visuomotor control experiment provided evidence against nonspecific effects of nicotine. In conclusion, although prefrontal activity partly displayed differential modulation by nicotine, most BOLD effects were identical across tasks, despite differential performance effects, suggesting that common neuronal mechanisms can selectively benefit different attentional functions. Overall, the effects of nicotine may be explained by increased functional efficiency and downregulated task-independent "default" functions.

  12. Visual and auditory accessory stimulus offset and the Simon effect.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2010-10-01

    We investigated the effect on the right and left responses of the disappearance of a task-irrelevant stimulus located on the right or left side. Participants pressed a right or left response key on the basis of the color of a centrally located visual target. Visual (Experiment 1) or auditory (Experiment 2) task-irrelevant accessory stimuli appeared or disappeared at locations to the right or left of the central target. In Experiment 1, responses were faster when onset or offset of the visual accessory stimulus was spatially congruent with the response. In Experiment 2, responses were again faster when onset of the auditory accessory stimulus and the response were on the same side. However, responses were slightly slower when offset of the auditory accessory stimulus and the response were on the same side than when they were on opposite sides. These findings indicate that transient change information is crucial for a visual Simon effect, whereas sustained stimulation from an ongoing stimulus also contributes to an auditory Simon effect.

  13. Near-field visual acuity of pigeons: effects of head location and stimulus luminance.

    PubMed

    Hodos, W; Leibowitz, R W; Bonbright, J C

    1976-03-01

    Two pigeons were trained to discriminate a grating stimulus from a blank stimulus of equivalent luminance in a three-key chamber. The stimuli and blanks were presented behind a transparent center key. The procedure was a conditional discrimination in which pecks on the left key were reinforced if the blank had been present behind the center key and pecks on the right key were reinforced if the grating had been present behind the center key. The spatial frequency of the stimuli was varied in each session from four to 29.5 lines per millimeter in accordance with a variation of the method of constant stimuli. The number of lines per millimeter that the subjects could discriminate at threshold was determined from psychometric functions. Data were collected at five values of stimulus luminance ranging from −0.07 to 3.29 log cd/m². The distance from the stimulus to the anterior nodal point of the eye, which was determined from measurements taken from high-speed motion-picture photographs of three additional pigeons and published intraocular measurements, was 62.0 mm. This distance and the grating detection thresholds were used to calculate the visual acuity of the birds at each level of luminance. Acuity improved with increasing luminance to a peak value of 0.52, which corresponds to a visual angle of 1.92 min, at a luminance of 2.33 log cd/m². Further increase in luminance produced a small decline in acuity.
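
    The acuity figure above follows from simple geometry: the angle subtended by a single grating line at the anterior nodal point, 62.0 mm from the stimulus. A short worked check (assuming one line corresponds to 1/g mm at g lines per millimeter):

      import math

      def line_angle_arcmin(lines_per_mm, nodal_distance_mm=62.0):
          """Visual angle of one grating line at the anterior nodal point, in minutes of arc."""
          line_width_mm = 1.0 / lines_per_mm
          return math.degrees(math.atan(line_width_mm / nodal_distance_mm)) * 60.0

      angle = line_angle_arcmin(29.0)                 # near the best grating threshold reported
      print(round(angle, 2), round(1.0 / angle, 2))   # roughly 1.9 arcmin, acuity roughly 0.52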

  14. I Think, Therefore Eyeblink

    PubMed Central

    Weidemann, Gabrielle; Satkunarajah, Michelle; Lovibond, Peter F.

    2016-01-01

    Can conditioning occur without conscious awareness of the contingency between the stimuli? We trained participants on two separate reaction time tasks that ensured attention to the experimental stimuli. The tasks were then interleaved to create a differential Pavlovian contingency between visual stimuli from one task and an airpuff stimulus from the other. Many participants were unaware of the contingency and failed to show differential eyeblink conditioning, despite attending to a salient stimulus that was contingently and contiguously related to the airpuff stimulus over many trials. Manipulation of awareness by verbal instruction dramatically increased awareness and differential eyeblink responding. These findings cast doubt on dual-system theories, which propose an automatic associative system independent of cognition, and provide strong evidence that cognitive processes associated with awareness play a causal role in learning. PMID:26905277

  15. Differential Responses to a Visual Self-Motion Signal in Human Medial Cortical Regions Revealed by Wide-View Stimulation

    PubMed Central

    Wada, Atsushi; Sakano, Yuichi; Ando, Hiroshi

    2016-01-01

    Vision is important for estimating self-motion, which is thought to involve optic-flow processing. Here, we investigated the fMRI response profiles in visual area V6, the precuneus motion area (PcM), and the cingulate sulcus visual area (CSv)—three medial brain regions recently shown to be sensitive to optic-flow. We used wide-view stereoscopic stimulation to induce robust self-motion processing. Stimuli included static, randomly moving, and coherently moving dots (simulating forward self-motion). We varied the stimulus size and the presence of stereoscopic information. A combination of univariate and multi-voxel pattern analyses (MVPA) revealed that fMRI responses in the three regions differed from each other. The univariate analysis identified optic-flow selectivity and an effect of stimulus size in V6, PcM, and CSv, among which only CSv showed a significantly lower response to random motion stimuli compared with static conditions. Furthermore, MVPA revealed an optic-flow specific multi-voxel pattern in the PcM and CSv, where the discrimination of coherent motion from both random motion and static conditions showed above-chance prediction accuracy, but that of random motion from static conditions did not. Additionally, while area V6 successfully classified different stimulus sizes regardless of motion pattern, this classification was only partial in PcM and was absent in CSv. This may reflect the known retinotopic representation in V6 and the absence of such clear visuospatial representation in CSv. We also found significant correlations between the strength of subjective self-motion and univariate activation in all examined regions except for primary visual cortex (V1). This neuro-perceptual correlation was significantly higher for V6, PcM, and CSv when compared with V1, and higher for CSv when compared with the visual motion area hMT+. Our convergent results suggest the significant involvement of CSv in self-motion processing, which may give rise to its percept. PMID:26973588

  16. Stimulus specificity of a steady-state visual-evoked potential-based brain-computer interface.

    PubMed

    Ng, Kian B; Bradley, Andrew P; Cunnington, Ross

    2012-06-01

    The mechanisms of neural excitation and inhibition when given a visual stimulus are well studied. It has been established that changing stimulus specificity such as luminance contrast or spatial frequency can alter the neuronal activity and thus modulate the visual-evoked response. In this paper, we study the effect that stimulus specificity has on the classification performance of a steady-state visual-evoked potential-based brain-computer interface (SSVEP-BCI). For example, we investigate how closely two visual stimuli can be placed before they compete for neural representation in the cortex and thus influence BCI classification accuracy. We characterize stimulus specificity using the four stimulus parameters commonly encountered in SSVEP-BCI design: temporal frequency, spatial size, number of simultaneously displayed stimuli and their spatial proximity. By varying these quantities and measuring the SSVEP-BCI classification accuracy, we are able to determine the parameters that provide optimal performance. Our results show that superior SSVEP-BCI accuracy is attained when stimuli are placed spatially more than 5° apart, with size that subtends at least 2° of visual angle, when using a tagging frequency of between high alpha and beta band. These findings may assist in deciding the stimulus parameters for optimal SSVEP-BCI design.
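
    As background for the frequency-tagging approach evaluated here (this is not the authors' classifier), a minimal SSVEP decision rule compares spectral power at the candidate tagging frequencies and selects the strongest:

      import numpy as np

      def classify_ssvep(epoch, fs, tag_freqs):
          """Return the tagging frequency with the most spectral power in a single-channel epoch."""
          power = np.abs(np.fft.rfft(epoch)) ** 2
          freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
          scores = [power[np.argmin(np.abs(freqs - f))] for f in tag_freqs]
          return tag_freqs[int(np.argmax(scores))]

      # Synthetic check: a 12 Hz oscillation in noise should be assigned to the 12 Hz target.
      fs = 250
      t = np.arange(0, 4, 1.0 / fs)
      epoch = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)
      print(classify_ssvep(epoch, fs, tag_freqs=[10.0, 12.0, 15.0]))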

  17. Stimulus specificity of a steady-state visual-evoked potential-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Ng, Kian B.; Bradley, Andrew P.; Cunnington, Ross

    2012-06-01

    The mechanisms of neural excitation and inhibition when given a visual stimulus are well studied. It has been established that changing stimulus specificity such as luminance contrast or spatial frequency can alter the neuronal activity and thus modulate the visual-evoked response. In this paper, we study the effect that stimulus specificity has on the classification performance of a steady-state visual-evoked potential-based brain-computer interface (SSVEP-BCI). For example, we investigate how closely two visual stimuli can be placed before they compete for neural representation in the cortex and thus influence BCI classification accuracy. We characterize stimulus specificity using the four stimulus parameters commonly encountered in SSVEP-BCI design: temporal frequency, spatial size, number of simultaneously displayed stimuli and their spatial proximity. By varying these quantities and measuring the SSVEP-BCI classification accuracy, we are able to determine the parameters that provide optimal performance. Our results show that superior SSVEP-BCI accuracy is attained when stimuli are placed spatially more than 5° apart, with size that subtends at least 2° of visual angle, when using a tagging frequency of between high alpha and beta band. These findings may assist in deciding the stimulus parameters for optimal SSVEP-BCI design.

  18. The effect of synesthetic associations between the visual and auditory modalities on the Colavita effect.

    PubMed

    Stekelenburg, Jeroen J; Keetels, Mirjam

    2016-05-01

    The Colavita effect refers to the phenomenon that when confronted with an audiovisual stimulus, observers report more often to have perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than synesthetically incongruent (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone) combinations. Participants had to identify stimulus type (visual, auditory or audiovisual). The study replicated the Colavita effect because participants reported more often the visual than auditory component of the audiovisual stimuli. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window and premotor cortex, inferior parietal lobule and the posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.

  19. Gestalt perception modulates early visual processing.

    PubMed

    Herrmann, C S; Bosch, V

    2001-04-17

    We examined whether early visual processing reflects perceptual properties of a stimulus in addition to physical features. We recorded event-related potentials (ERPs) of 13 subjects in a visual classification task. We used four different stimuli which were all composed of four identical elements. One of the stimuli constituted an illusory Kanizsa square, another was composed of the same number of collinear line segments but the elements did not form a Gestalt. In addition, a target and a control stimulus were used which were arranged differently. These stimuli allow us to differentiate the processing of collinear line elements (stimulus features) and illusory figures (perceptual properties). The visual N170 in response to the illusory figure was significantly larger as compared to the other collinear stimulus. This is taken to indicate that the visual N170 reflects cognitive processes of Gestalt perception in addition to attentional processes and physical stimulus properties.

  20. Effects of refractive errors on visual evoked magnetic fields.

    PubMed

    Suzuki, Masaya; Nagae, Mizuki; Nagata, Yuko; Kumagai, Naoya; Inui, Koji; Kakigi, Ryusuke

    2015-11-09

    The latency and amplitude of visual evoked cortical responses are known to be affected by refractive states, suggesting that they may be used as an objective index of refractive errors. In order to establish an easy and reliable method for this purpose, we herein examined the effects of refractive errors on visual evoked magnetic fields (VEFs). Binocular VEFs following the presentation of a simple grating of 0.16 cd/m² in the lower visual field were recorded in 12 healthy volunteers and compared among four refractive states: 0D, +1D, +2D, and +4D, by using plus lenses. The low-luminance visual stimulus evoked a main MEG response at approximately 120 ms (M100) that reversed its polarity between the upper and lower visual field stimulations and originated from the occipital midline area. When refractive errors were induced by plus lenses, the latency of M100 increased, while its amplitude decreased with an increase in power of the lens. Differences from the control condition (+0D) were significant for all three lenses examined. The results of dipole analyses showed that evoked fields for the control (+0D) condition were explainable by one dipole in the primary visual cortex (V1), while other sources, presumably in V3 or V6, slightly contributed to shape M100 for the +2D or +4D condition. The present results showed that the latency and amplitude of M100 are both useful indicators for assessing refractive states. The contribution of neural sources other than V1 to M100 was modest under the 0D and +1D conditions. Considering the nature of M100 activity, including its high sensitivity to spatial frequency and its lower visual field dominance, a simple low-luminance grating stimulus at an optimal spatial frequency in the lower visual field appears appropriate for obtaining data with high S/N ratios and reducing the load on subjects.

  1. Task-dependent V1 responses in human retinitis pigmentosa.

    PubMed

    Masuda, Yoichiro; Horiguchi, Hiroshi; Dumoulin, Serge O; Furuta, Ayumu; Miyauchi, Satoru; Nakadomari, Satoshi; Wandell, Brian A

    2010-10-01

    In functional MRI (fMRI) measurements during passive viewing, subjects with macular degeneration (MD) show a large unresponsive lesion projection zone (LPZ) in V1. fMRI responses can be evoked from the LPZ when subjects engage in a stimulus-related task. The authors report fMRI measurements on a different class of subjects, those with retinitis pigmentosa (RP), who have intact foveal vision but peripheral visual field loss. The authors measured three RP subjects and two control subjects. fMRI was performed while the subjects viewed drifting contrast pattern stimuli. The subjects passively viewed the stimuli or performed a stimulus-related task. During passive viewing, the BOLD response in the posterior calcarine cortex of all RP subjects was in phase with the stimulus. A bordering, anterior LPZ could be identified by responses that were in opposite phase to the stimulus. When the RP subjects made stimulus-related judgments, however, the LPZ responses changed: the responses modulated in phase with the stimulus and task. In control subjects, the responses in a simulated V1 LPZ were unchanged between the passive and the stimulus-related judgment conditions. Task-dependent LPZ responses are present in RP subjects, similar to responses measured in MD subjects. The results are consistent with the hypothesis that deleting the retinal input to the LPZ unmasks preexisting extrastriate feedback signals that are present across V1. The authors discuss the implications of this hypothesis for visual therapy designed to replace the missing V1 LPZ inputs and to restore vision.

  2. Episodic memory for spatial context biases spatial attention.

    PubMed

    Ciaramelli, Elisa; Lin, Olivia; Moscovitch, Morris

    2009-01-01

    The study explores the bottom-up attentional consequences of episodic memory retrieval. Individuals studied words (Experiment 1) or pictures (Experiment 2) presented on the left or on the right of the screen. They then viewed studied and new stimuli in the centre of the screen. One second after the appearance of each stimulus, participants had to respond to a dot presented on the left or on the right of the screen. The dot could follow a stimulus that had been presented, during the study phase, on the same side as the dot (congruent condition), a stimulus that had been presented on the opposite side (incongruent condition), or a new stimulus (neutral condition). Subjects were faster to respond to the dot in the congruent compared to the incongruent condition, with an overall right visual field advantage in Experiment 1. The memory-driven facilitation effect correlated with subjects' re-experiencing of the encoding context (R responses; Experiment 1), but not with their explicit memory for the side of items' presentation (source memory; Experiment 2). The results indicate that memory contents are attended automatically and can bias the deployment of attention. The degree to which memory and attention interact appears related to subjective but not objective indicators of memory strength.

  3. Brief Communication: visual-field superiority as a function of stimulus type and content: further evidence.

    PubMed

    Basu, Anamitra; Mandal, Manas K

    2004-07-01

    The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a JAVA-based computer program with recognition accuracy as the dependent measure. Unilaterally, rather than bilaterally, presented stimuli were significantly better recognized. Words were significantly better recognized than faces in the right visual-field; the difference was nonsignificant in the left visual-field. Emotional content elicited left visual-field and neutral content elicited right visual-field advantages. Copyright Taylor and Francis Inc.

  4. Information processing capacity while wearing personal protective eyewear.

    PubMed

    Wade, Chip; Davis, Jerry; Marzilli, Thomas S; Weimar, Wendi H

    2006-08-15

    It is difficult to overemphasize the function vision plays in information processing, specifically in maintaining postural control. Vision appears to be an immediate, effortless event; suggesting that eyes need only to be open to employ the visual information provided by the environment. This study is focused on investigating the effect of Occupational Safety and Health Administration regulated personal protective eyewear (29 CFR 1910.133) on physiological and cognitive factors associated with information processing capabilities. Twenty-one college students between the ages of 19 and 25 years were randomly tested in each of three eyewear conditions (control, new and artificially aged) on an inclined and horizontal support surface for auditory and visual stimulus reaction time. Data collection trials consisted of 50 randomly selected (25 auditory, 25 visual) stimuli over a 10-min surface-eyewear condition trial. Auditory stimulus reaction time was significantly affected by the surface by eyewear interaction (F2,40 = 7.4; p < 0.05). Similarly, analysis revealed a significant surface by eyewear interaction in reaction time following the visual stimulus (F2,40 = 21.7; p < 0.05). The current findings do not trivialize the importance of personal protective eyewear usage in an occupational setting; rather, they suggest the value of future research focused on the effect that personal protective eyewear has on the physiological, cognitive and biomechanical contributions to postural control. These findings suggest that while personal protective eyewear may serve to protect an individual from eye injury, an individual's use of such personal protective eyewear may have deleterious effects on sensory information associated with information processing and postural control.

  5. Neural Responses in Parietal and Occipital Areas in Response to Visual Events Are Modulated by Prior Multisensory Stimuli

    PubMed Central

    Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.

    2013-01-01

    The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939

  6. Stimulus similarity determines the prevalence of behavioral laterality in a visual discrimination task for mice

    PubMed Central

    Treviño, Mario

    2014-01-01

    Animal choices depend on direct sensory information, but also on the dynamic changes in the magnitude of reward. In visual discrimination tasks, the emergence of lateral biases in the choice record from animals is often described as a behavioral artifact, because these are highly correlated with error rates affecting psychophysical measurements. Here, we hypothesized that biased choices could constitute a robust behavioral strategy to solve discrimination tasks of graded difficulty. We trained mice to swim in a two-alternative visual discrimination task with escape from water as the reward. Their prevalence of making lateral choices increased with stimulus similarity and was present in conditions of high discriminability. While lateralization occurred at the individual level, it was absent, on average, at the population level. Biased choice sequences obeyed the generalized matching law and increased task efficiency when stimulus similarity was high. A mathematical analysis revealed that strongly-biased mice used information from past rewards but not past choices to make their current choices. We also found that the amount of lateralized choices made during the first day of training predicted individual differences in the average learning behavior. This framework provides useful analysis tools to study individualized visual-learning trajectories in mice. PMID:25524257
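
    For readers less familiar with the matching-law analysis mentioned above, the generalized matching law in its standard logarithmic form is sketched below; the notation (choice counts B, earned rewards R, sensitivity s, bias b) follows common usage rather than the authors' own symbols.

```latex
% Generalized matching law, standard form (illustrative notation, not the authors').
% B_L, B_R: numbers of left/right choices; R_L, R_R: rewards obtained on each side;
% s: sensitivity to the reward ratio; b: lateral bias of the individual animal.
\[
  \log\!\left(\frac{B_L}{B_R}\right) = s \,\log\!\left(\frac{R_L}{R_R}\right) + \log b
\]
```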

  7. Human Pupillary Dilation Response to Deviant Auditory Stimuli: Effects of Stimulus Properties and Voluntary Attention

    PubMed Central

    Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto

    2016-01-01

    A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residuals of attention on to-be-ignored oddballs due to the concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention. PMID:26924959

  8. Emotional conditioning to masked stimuli and modulation of visuospatial attention.

    PubMed

    Beaver, John D; Mogg, Karin; Bradley, Brendan P

    2005-03-01

    Two studies investigated the effects of conditioning to masked stimuli on visuospatial attention. During the conditioning phase, masked snakes and spiders were paired with a burst of white noise, or paired with an innocuous tone, in the conditioned stimulus (CS)+ and CS- conditions, respectively. Attentional allocation to the CSs was then assessed with a visual probe task, in which the CSs were presented unmasked (Experiment 1) or both unmasked and masked (Experiment 2), together with fear-irrelevant control stimuli (flowers and mushrooms). In Experiment 1, participants preferentially allocated attention to CS+ relative to control stimuli. Experiment 2 suggested that this attentional bias depended on the perceived aversiveness of the unconditioned stimulus and did not require conscious recognition of the CSs during both acquisition and expression. Copyright 2005 APA, all rights reserved.

  9. Integration time for the perception of depth from motion parallax.

    PubMed

    Nawrot, Mark; Stroyan, Keith

    2012-04-15

    The perception of depth from relative motion is believed to be a slow process that "builds-up" over a period of observation. However, in the case of motion parallax, the potential accuracy of the depth estimate suffers as the observer translates during the viewing period. Our recent quantitative model for the perception of depth from motion parallax proposes that relative object depth (d) can be determined from retinal image motion (dθ/dt), pursuit eye movement (dα/dt), and fixation distance (f) by the formula: d/f≈dθ/dα. Given the model's dynamics, it is important to know the integration time required by the visual system to recover dα and dθ, and then estimate d. Knowing the minimum integration time reveals the incumbent error in this process. A depth-phase discrimination task was used to determine the time necessary to perceive depth-sign from motion parallax. Observers remained stationary and viewed a briefly translating random-dot motion parallax stimulus. Stimulus duration varied between trials. Fixation on the translating stimulus was monitored and enforced with an eye-tracker. The study found that relative depth discrimination can be performed with presentations as brief as 16.6 ms, with only two stimulus frames providing both retinal image motion and the stimulus window motion for pursuit (mean range=16.6-33.2 ms). This was found for conditions in which, prior to stimulus presentation, the eye was engaged in ongoing pursuit or the eye was stationary. A large high-contrast masking stimulus disrupted depth-discrimination for stimulus presentations less than 70-75 ms in both pursuit and stationary conditions. This interval might be linked to ocular-following response eye-movement latencies. We conclude that neural mechanisms serving depth from motion parallax generate a depth estimate much more quickly than previously believed. We propose that additional sluggishness might be due to the visual system's attempt to determine the maximum dθ/dα ratio for a selection of points on a complicated stimulus. Copyright © 2012 Elsevier Ltd. All rights reserved.
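
    As a purely illustrative check of the motion/pursuit ratio quoted above (the numbers below are hypothetical, not taken from the study), the depth estimate follows directly from the ratio of retinal image speed to pursuit speed:

```latex
% Worked example of d/f \approx d\theta/d\alpha with hypothetical values:
% retinal image speed d\theta/dt = 0.5 deg/s, pursuit speed d\alpha/dt = 5 deg/s,
% fixation distance f = 1 m.
\[
  \frac{d}{f} \approx \frac{d\theta/dt}{d\alpha/dt} = \frac{0.5}{5} = 0.1
  \quad\Longrightarrow\quad d \approx 0.1\ \mathrm{m}.
\]
```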

  10. Sequential pictorial presentation of neural interaction in the retina. 2. The depolarizing and hyperpolarizing bipolar cells at rod terminals.

    PubMed

    Sjöstrand, F S

    2002-01-01

    Each rod is connected to one depolarizing and one hyperpolarizing bipolar cell. The synaptic connections of cone processes to each bipolar cell and presynaptically to the two rod-bipolar cell synapses establish conditions for lateral interaction at this level. Thus, the cones raise the threshold for bipolar cell depolarization, which is the basis for spatial brightness contrast enhancement and consequently for high visual acuity (Sjöstrand, 2001a). The cones facilitate ganglion cell depolarization by the bipolar cells and cone input prevents horizontal cell blocking of depolarization of the depolarizing bipolar cell, extending rod vision to low illumination. The combination of reduced cone input and transient hyperpolarization of the hyperpolarizing bipolar cell at onset of a light stimulus facilitates ganglion cell depolarization extensively at onset of the stimulus while no corresponding enhancement applies to the ganglion cell response at cessation of the stimulus, possibly establishing conditions for discrimination between on- vs. off-signals in the visual centre. Reduced cone input and hyperpolarization of the hyperpolarizing bipolar cell at onset of a light stimulus account for Granit's (1941) 'preexcitatory inhibition'. Presynaptic inhibition keeps the transmitter concentration low in the synaptic gap at rod-bipolar cell and bipolar cell-ganglion cell synapses, securing proportional and amplified postsynaptic responses at these synapses. Perfect timing of variations in facilitatory and inhibitory input to the ganglion cell confines the duration of ganglion cell depolarization at onset and at cessation of a light stimulus to that of a single synaptic transmission.

  11. Neural Correlates of Individual Differences in Infant Visual Attention and Recognition Memory

    PubMed Central

    Reynolds, Greg D.; Guy, Maggie W.; Zhang, Dantong

    2010-01-01

    Past studies have identified individual differences in infant visual attention based upon peak look duration during initial exposure to a stimulus. Colombo and colleagues (e.g., Colombo & Mitchell, 1990) found that infants that demonstrate brief visual fixations (i.e., short lookers) during familiarization are more likely to demonstrate evidence of recognition memory during subsequent stimulus exposure than infants that demonstrate long visual fixations (i.e., long lookers). The current study utilized event-related potentials to examine possible neural mechanisms associated with individual differences in visual attention and recognition memory for 6- and 7.5-month-old infants. Short- and long-looking infants viewed images of familiar and novel objects during ERP testing. There was a stimulus type by looker type interaction at temporal and frontal electrodes on the late slow wave (LSW). Short lookers demonstrated a LSW that was significantly greater in amplitude in response to novel stimulus presentations. No significant differences in LSW amplitude were found based on stimulus type for long lookers. These results indicate deeper processing and recognition memory of the familiar stimulus for short lookers. PMID:21666833

  12. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study.

    PubMed

    Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence

    2017-09-25

    At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Stroboscopic Training Enhances Anticipatory Timing.

    PubMed

    Smith, Trevor Q; Mitroff, Stephen R

    The dynamic aspects of sports often place heavy demands on visual processing. As such, an important goal for sports training should be to enhance visual abilities. Recent research has suggested that training in a stroboscopic environment, where visual experiences alternate between visible and obscured, may provide a means of improving attentional and visual abilities. The current study explored whether stroboscopic training could impact anticipatory timing, the ability to predict where a moving stimulus will be at a specific point in time. Anticipatory timing is a critical skill for both sports and non-sports activities, and thus finding training improvements could have broad impacts. Participants completed a pre-training assessment that used a Bassin Anticipation Timer to measure their abilities to accurately predict the timing of a moving visual stimulus. Immediately after this initial assessment, the participants completed training trials, but in one of two conditions. Those in the Control condition proceeded as before with no change. Those in the Strobe condition completed the training trials while wearing specialized eyewear that had lenses that alternated between transparent and opaque (100 ms visible alternating with 150 ms opaque). Post-training assessments were administered immediately after training, 10 minutes after training, and 10 days after training. Compared to the Control group, the Strobe group was significantly more accurate immediately after training, was more likely to respond early than to respond late immediately after training and 10 minutes later, and was more consistent in their timing estimates immediately after training and 10 minutes later.

  14. Reaching to virtual targets: The oblique effect reloaded in 3-D.

    PubMed

    Kaspiris-Rousellis, Christos; Siettos, Constantinos I; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2017-02-20

    Perceiving and reproducing direction of visual stimuli in 2-D space produces the visual oblique effect, which manifests as increased precision in the reproduction of cardinal compared to oblique directions. A second cognitive oblique effect emerges when stimulus information is degraded (such as when reproducing stimuli from memory) and manifests as a systematic distortion where reproduced directions close to the cardinal axes deviate toward the oblique, leading to space expansion at cardinal and contraction at oblique axes. We studied the oblique effect in 3-D using a virtual reality system to present a large number of stimuli, covering the surface of an imaginary half sphere, to which subjects had to reach. We used two conditions, one with no delay (no-memory condition) and one where a three-second delay intervened between stimulus presentation and movement initiation (memory condition). A visual oblique effect was observed for the reproduction of cardinal directions compared to oblique, which did not differ with memory condition. A cognitive oblique effect also emerged, which was significantly larger in the memory compared to the no-memory condition, leading to distortion of directional space with expansion near the cardinal axes and compression near the oblique axes on the hemispherical surface. This effect provides evidence that existing models of 2-D directional space categorization could be extended in the natural 3-D space. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Concurrent visual and tactile steady-state evoked potentials index allocation of inter-modal attention: a frequency-tagging study.

    PubMed

    Porcu, Emanuele; Keitel, Christian; Müller, Matthias M

    2013-11-27

    We investigated effects of inter-modal attention on concurrent visual and tactile stimulus processing by means of stimulus-driven oscillatory brain responses, so-called steady-state evoked potentials (SSEPs). To this end, we frequency-tagged a visual (7.5 Hz) and a tactile stimulus (20 Hz), and participants were cued, on a trial-by-trial basis, to attend to either vision or touch to perform a detection task in the cued modality. SSEPs driven by the stimulation comprised stimulus frequency-following (i.e. fundamental frequency) as well as frequency-doubling (i.e. second harmonic) responses. We observed that inter-modal attention to vision increased amplitude and phase synchrony of the fundamental frequency component of the visual SSEP while the second harmonic component showed an increase in phase synchrony only. In contrast, inter-modal attention to touch increased SSEP amplitude of the second harmonic but not of the fundamental frequency, while leaving phase synchrony unaffected in both responses. Our results show that inter-modal attention generally influences concurrent stimulus processing in vision and touch, thus extending earlier audio-visual findings to a visuo-tactile stimulus situation. The pattern of results, however, suggests differences in the neural implementation of inter-modal attentional influences on visual vs. tactile stimulus processing. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. Temporal expectancy in the context of a theory of visual attention

    PubMed Central

    Vangkilde, Signe; Petersen, Anders; Bundesen, Claus

    2013-01-01

    Temporal expectation is expectation with respect to the timing of an event such as the appearance of a certain stimulus. In this paper, temporal expectancy is investigated in the context of the theory of visual attention (TVA), and we begin by summarizing the foundations of this theoretical framework. Next, we present a parametric experiment exploring the effects of temporal expectation on perceptual processing speed in cued single-stimulus letter recognition with unspeeded motor responses. The length of the cue–stimulus foreperiod was exponentially distributed with one of six hazard rates varying between blocks. We hypothesized that this manipulation would result in a distinct temporal expectation in each hazard rate condition. Stimulus exposures were varied such that both the temporal threshold of conscious perception (t0, in ms) and the perceptual processing speed (v, in letters per second) could be estimated using TVA. We found that the temporal threshold t0 was unaffected by temporal expectation, but the perceptual processing speed v was a strikingly linear function of the logarithm of the hazard rate of the stimulus presentation. We argue that the effects on the v values were generated by changes in perceptual biases, suggesting that our perceptual biases are directly related to our temporal expectations. PMID:24018716
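
    The foreperiod manipulation described above can be summarized compactly. The sketch below states the constant-hazard property of the exponential foreperiod distribution and the reported linear relationship; the coefficients a and b are placeholders for illustration, not estimates from the paper.

```latex
% Exponentially distributed foreperiods have a constant hazard rate \lambda:
\[
  f(t) = \lambda e^{-\lambda t}, \qquad
  h(t) = \frac{f(t)}{1 - F(t)} = \lambda .
\]
% Reported finding: perceptual processing speed is linear in the log of the hazard rate,
\[
  v(\lambda) \approx a + b \,\log \lambda .
\]
```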

  17. Acceptance Presentation and Research Study Summary: Research in Educational Communications and Technology. 1982 Association for Educational Communications and Technology Young Researcher Award, Research and Theory Division.

    ERIC Educational Resources Information Center

    Canelos, James

    An internal cognitive variable--mental imagery representation--was studied using a set of three information-processing strategies under external stimulus visual display conditions for various learning levels. The copy strategy provided verbal and visual dual-coding and required formation of a vivid mental image. The relational strategy combined…

  18. Tracking the Sensory Environment: An ERP Study of Probability and Context Updating in ASD

    PubMed Central

    Westerfield, Marissa A.; Zinni, Marla; Vo, Khang; Townsend, Jeanne

    2014-01-01

    We recorded visual event-related brain potentials (ERPs) from 32 adult male participants (16 high-functioning participants diagnosed with Autism Spectrum Disorder (ASD) and 16 control participants, ranging in age from 18–53 yrs) during a three-stimulus oddball paradigm. Target and non-target stimulus probability was varied across three probability conditions, whereas the probability of a third non-target stimulus was held constant in all conditions. P3 amplitude to target stimuli was more sensitive to probability in ASD than in typically developing (TD) participants, whereas P3 amplitude to non-target stimuli was less responsive to probability in ASD participants. This suggests that neural responses to changes in event probability are attention-dependent in high-functioning ASD. The implications of these findings for higher-level behaviors such as prediction and planning are discussed. PMID:24488156

  19. The modulation of auditory novelty processing by working memory load in school age children and adults: a combined behavioral and event-related potential study

    PubMed Central

    2010-01-01

    Background We investigated the processing of task-irrelevant and unexpected novel sounds and its modulation by working-memory load in children aged 9-10 and in adults. Environmental sounds (novels) were embedded amongst frequently presented standard sounds in an auditory-visual distraction paradigm. Each sound was followed by a visual target. In two conditions, participants evaluated the position of a visual stimulus (0-back, low load) or compared the position of the current stimulus with the one two trials before (2-back, high load). Processing of novel sounds was measured with reaction times, hit rates and the auditory event-related brain potentials (ERPs) Mismatch Negativity (MMN), P3a, Reorienting Negativity (RON) and visual P3b. Results In both memory load conditions novels impaired task performance in adults whereas they improved performance in children. Auditory ERPs reflected age-related differences in the time-window of the MMN as children showed a positive ERP deflection to novels whereas adults lack an MMN. The attention switch towards the task-irrelevant novel (reflected by P3a) was comparable between the age groups. Adults showed more efficient reallocation of attention (reflected by RON) under load conditions than children. Finally, the P3b elicited by the visual target stimuli was reduced in both age groups when the preceding sound was a novel. Conclusion Our results give new insights into the development of novelty processing as they (1) reveal that task-irrelevant novel sounds can result in contrary effects on the performance in a visual primary task in children and adults, (2) show a positive ERP deflection to novels rather than an MMN in children, and (3) reveal effects of auditory novels on visual target processing. PMID:20929535

  20. The pigeon's distant visual acuity as a function of viewing angle.

    PubMed

    Uhlrich, D J; Blough, P M; Blough, D S

    1982-01-01

    Distant visual acuity was determined for several viewing angles in two restrained White Carneaux pigeons. The behavioral technique was a classical conditioning procedure that paired presentation of sinusoidal gratings with shock. A conditioned heart rate acceleration during the grating presentation indicated resolution of the grating. The birds' acuity was fairly uniform across a large range of their lateral visual field; performance decreased slightly for posterior stimulus placement and sharply for frontal placements. The data suggest that foveal viewing is relatively less advantageous for acuity in pigeons than in humans. The data are also consistent with the current view that pigeons are myopic in frontal vision.

  1. Visual statistical learning is not reliably modulated by selective attention to isolated events

    PubMed Central

    Musz, Elizabeth; Weber, Matthew J.; Thompson-Schill, Sharon L.

    2014-01-01

    Recent studies of visual statistical learning (VSL) indicate that the visual system can automatically extract temporal and spatial relationships between objects. We report several attempts to replicate and extend earlier work (Turk-Browne et al., 2005) in which observers performed a cover task on one of two interleaved stimulus sets, resulting in learning of temporal relationships that occur in the attended stream, but not those present in the unattended stream. Across four experiments, we exposed observers to a similar or identical familiarization protocol, directing attention to one of two interleaved stimulus sets; afterward, we assessed VSL efficacy for both sets using either implicit response-time measures or explicit familiarity judgments. In line with prior work, we observe learning for the attended stimulus set. However, unlike previous reports, we also observe learning for the unattended stimulus set. When instructed to selectively attend to only one of the stimulus sets and ignore the other set, observers could extract temporal regularities for both sets. Our efforts to experimentally decrease this effect by changing the cover task (Experiment 1) or the complexity of the statistical regularities (Experiment 3) were unsuccessful. A fourth experiment using a different assessment of learning likewise failed to show an attentional effect. Simulations drawing random samples from our first three experiments (n=64) confirm that the distribution of attentional effects in our sample closely approximates the null. We offer several potential explanations for our failure to replicate earlier findings, and discuss how our results suggest limiting conditions on the relevance of attention to VSL. PMID:25172196
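
    The resampling check mentioned above can be illustrated with a generic sign-flip permutation test. The sketch below is not the authors' code; the per-participant effects are simulated placeholder values, and the only claim is the logic of comparing an observed mean effect against a null distribution built by resampling.

```python
# Illustrative sketch (not the authors' analysis code) of a sign-flip permutation
# test on per-participant attentional effects (attended-stream learning minus
# unattended-stream learning).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-participant attentional effects (n = 64), e.g. differences in
# familiarity-judgment accuracy between attended and unattended stimulus sets.
effects = rng.normal(loc=0.01, scale=0.10, size=64)

observed = effects.mean()

# Under the null hypothesis the effect is symmetric around zero, so randomly
# flipping the sign of each participant's effect generates a null distribution.
n_perm = 10_000
null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=effects.size)
    null[i] = (effects * signs).mean()

p_value = np.mean(np.abs(null) >= abs(observed))
print(f"observed mean effect = {observed:.3f}, two-sided p = {p_value:.3f}")
```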

  2. Modification of a prey catching response and the development of behavioral persistence in the fire-bellied toad (Bombina orientalis).

    PubMed

    Ramsay, Zachary J; Ikura, Juntaro; Laberge, Frédéric

    2013-11-01

    The present report investigated how fire-bellied toads (Bombina orientalis) modified their response in a prey catching task in which the attribution of food reward was contingent on snapping toward a visual stimulus of moving prey displayed on a computer screen. Two experiments investigated modification of the snapping response, with different intervals between the opportunity to snap at the visual stimulus and reward administration. The snapping response of unpaired controls was decreased compared with the conditioned toads when hour or day intervals were used, but intervals of 5 min produced only minimal change in snapping. The determinants of extinction of the response toward the visual stimulus were then investigated in 3 experiments. The results of the first experiment suggested that increased resistance to extinction depended mostly on the number of training trials, not on partial reinforcement or the magnitude of reinforcement during training. This was confirmed in a second experiment showing that overtraining resulted in resistance to extinction, and that the pairing of the reward with a response toward the stimulus was necessary for that effect, as opposed to pairing reward solely with the experimental context. The last experiment showed that the time elapsed between training trials also influenced extinction, but only in toads that received few training trials. Overall, the results suggest that toads learning about a prey stimulus progress from an early flexible phase, when an action can be modified by its consequences, to an acquired habit characterized by an increasingly inflexible and automatic response.

  3. Hemispheric differences in visual search of simple line arrays.

    PubMed

    Polich, J; DeFrancesco, D P; Garon, J F; Cohen, W

    1990-01-01

    The effects of perceptual organization on hemispheric visual-information processing were assessed with stimulus arrays composed of short lines arranged in columns. A visual-search task was employed in which subjects judged whether all the lines were vertical (same) or whether a single horizontal line was present (different). Stimulus-display organization was manipulated in two experiments by variation of line density, linear organization, and array size. In general, left-visual-field/right-hemisphere presentations demonstrated more rapid and accurate responses when the display was perceived as a whole. Right-visual-field/left-hemisphere superiorities were observed when the display organization coerced assessment of individual array elements because the physical qualities of the stimulus did not effect a gestalt whole. Response times increased somewhat with increases in array size, although these effects interacted with other stimulus variables. Error rates tended to follow the reaction-time patterns. The results suggest that laterality differences in visual search are governed by stimulus properties which contribute to, or inhibit, the perception of a display as a gestalt. The implications of these findings for theoretical interpretations of hemispheric specialization are discussed.

  4. The impact of task demand on visual word recognition.

    PubMed

    Yang, J; Zevin, J

    2014-07-11

    The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Color-dependent learning in restrained Africanized honey bees.

    PubMed

    Jernigan, C M; Roubik, D W; Wcislo, W T; Riveros, A J

    2014-02-01

    Associative color learning has been demonstrated to be very poor using restrained European honey bees unless the antennae are amputated. Consequently, our understanding of proximate mechanisms in visual information processing is handicapped. Here we test learning performance of Africanized honey bees under restrained conditions with visual and olfactory stimulation using the proboscis extension response (PER) protocol. Restrained individuals were trained to learn an association between a color stimulus and a sugar-water reward. We evaluated performance for 'absolute' learning (learned association between a stimulus and a reward) and 'discriminant' learning (discrimination between two stimuli). Restrained Africanized honey bees (AHBs) readily learned the association of color stimulus for both blue and green LED stimuli in absolute and discriminatory learning tasks within seven presentations, but not with violet as the rewarded color. Additionally, 24-h memory improved considerably during the discrimination task, compared with absolute association (15-55%). We found that antennal amputation was unnecessary and reduced performance in AHBs. Thus color learning can now be studied using the PER protocol with intact AHBs. This finding opens the way towards investigating visual and multimodal learning with application of neural techniques commonly used in restrained honey bees.

  6. Attention stabilizes the shared gain of V4 populations

    PubMed Central

    Rabinowitz, Neil C; Goris, Robbe L; Cohen, Marlene; Simoncelli, Eero P

    2015-01-01

    Responses of sensory neurons represent stimulus information, but are also influenced by internal state. For example, when monkeys direct their attention to a visual stimulus, the response gain of specific subsets of neurons in visual cortex changes. Here, we develop a functional model of population activity to investigate the structure of this effect. We fit the model to the spiking activity of bilateral neural populations in area V4, recorded while the animal performed a stimulus discrimination task under spatial attention. The model reveals four separate time-varying shared modulatory signals, the dominant two of which each target task-relevant neurons in one hemisphere. In attention-directed conditions, the associated shared modulatory signal decreases in variance. This finding provides an interpretable and parsimonious explanation for previous observations that attention reduces variability and noise correlations of sensory neurons. Finally, the recovered modulatory signals reflect previous reward, and are predictive of subsequent choice behavior. DOI: http://dx.doi.org/10.7554/eLife.08998.001 PMID:26523390
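
    A minimal sketch of the idea behind a shared multiplicative gain acting on a neural population is given below. This is an illustrative toy model, not the authors' fitted model: attention is caricatured simply as a reduction in the variance of a trial-to-trial shared gain, which in turn lowers pairwise noise correlations.

```python
# Toy simulation: a population of Poisson-spiking neurons whose trial-by-trial
# firing rates are scaled by a shared multiplicative gain. Reducing the variance
# of that gain ("attention") reduces the average noise correlation.
import numpy as np

rng = np.random.default_rng(1)

def simulate_population(n_neurons=50, n_trials=200, gain_sd=0.3):
    base_rates = rng.uniform(5.0, 30.0, size=n_neurons)      # mean spikes per trial
    gains = np.exp(rng.normal(0.0, gain_sd, size=n_trials))   # shared gain per trial
    rates = gains[:, None] * base_rates[None, :]               # trials x neurons
    return rng.poisson(rates)

unattended = simulate_population(gain_sd=0.3)   # larger shared fluctuations
attended = simulate_population(gain_sd=0.1)     # "attention" stabilizes the gain

def mean_noise_correlation(counts):
    c = np.corrcoef(counts.T)                   # neuron-by-neuron correlations
    off_diag = c[~np.eye(c.shape[0], dtype=bool)]
    return off_diag.mean()

print("mean noise correlation, unattended:", round(mean_noise_correlation(unattended), 3))
print("mean noise correlation, attended:  ", round(mean_noise_correlation(attended), 3))
```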

  7. Differential priming effects of color-opponent subliminal stimulation on visual magnetic responses.

    PubMed

    Hoshiyama, Minoru; Kakigi, Ryusuke; Takeshima, Yasuyuki; Miki, Kensaku; Watanabe, Shoko

    2006-10-01

    We investigated the effects of subliminal stimulation on visible stimulation to demonstrate the priority of facial discrimination processing, using a unique, indiscernible, color-opponent subliminal (COS) stimulation. We recorded event-related magnetic cortical fields (ERF) by magnetoencephalography (MEG) after the presentation of a face or flower stimulus with COS conditioning using a face, flower, random pattern, and blank. The COS stimulation enhanced the response to visible stimulation when the figure in the COS stimulation was identical to the target visible stimulus, but more so for the face than for the flower stimulus. The ERF component modulated by the COS stimulation was estimated to be located in the ventral temporal cortex. We speculated that the enhancement was caused by an interaction of the responses after subthreshold stimulation by the COS stimulation and the suprathreshold stimulation after target stimulation, such as in the processing for categorization or discrimination. We also speculated that the face was processed with priority at the level of the ventral temporal cortex during visual processing outside of consciousness.

  8. Nonconscious emotional activation colors first impressions: a regulatory role for conscious awareness.

    PubMed

    Lapate, Regina C; Rokers, Bas; Li, Tianyi; Davidson, Richard J

    2014-02-01

    Emotions can color people's attitudes toward unrelated objects in the environment. Existing evidence suggests that such emotional coloring is particularly strong when emotion-triggering information escapes conscious awareness. But is emotional reactivity stronger after nonconscious emotional provocation than after conscious emotional provocation, or does conscious processing specifically change the association between emotional reactivity and evaluations of unrelated objects? In this study, we independently indexed emotional reactivity and coloring as a function of emotional-stimulus awareness to disentangle these accounts. Specifically, we recorded skin-conductance responses to spiders and fearful faces, along with subsequent preferences for novel neutral faces during visually aware and unaware states. Fearful faces increased skin-conductance responses comparably in both stimulus-aware and stimulus-unaware conditions. Yet only when visual awareness was precluded did skin-conductance responses to fearful faces predict decreased likability of neutral faces. These findings suggest a regulatory role for conscious awareness in breaking otherwise automatic associations between physiological reactivity and evaluative emotional responses.

  9. Task- and age-dependent effects of visual stimulus properties on children's explicit numerosity judgments.

    PubMed

    Defever, Emmy; Reynvoet, Bert; Gebuis, Titia

    2013-10-01

    Researchers investigating numerosity processing manipulate the visual stimulus properties (e.g., surface). This is done to control for the confound between numerosity and its visual properties and should allow the examination of pure number processes. Nevertheless, several studies have shown that, despite different visual controls, visual cues remained to exert their influence on numerosity judgments. This study, therefore, investigated whether the impact of the visual stimulus manipulations on numerosity judgments is dependent on the task at hand (comparison task vs. same-different task) and whether this impact changes throughout development. In addition, we examined whether the influence of visual stimulus manipulations on numerosity judgments plays a role in the relation between performance on numerosity tasks and mathematics achievement. Our findings confirmed that the visual stimulus manipulations affect numerosity judgments; more important, we found that these influences changed with increasing age and differed between the comparison and the same-different tasks. Consequently, direct comparisons between numerosity studies using different tasks and age groups are difficult. No meaningful relationship between the performance on the comparison and same-different tasks and mathematics achievement was found in typically developing children, nor did we find consistent differences between children with and without mathematical learning disability (MLD). Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Human discrimination of visual direction of motion with and without smooth pursuit eye movements

    NASA Technical Reports Server (NTRS)

    Krukowski, Anton E.; Pirog, Kathleen A.; Beutter, Brent R.; Brooks, Kevin R.; Stone, Leland S.

    2003-01-01

    It has long been known that ocular pursuit of a moving target has a major influence on its perceived speed (Aubert, 1886; Fleischl, 1882). However, little is known about the effect of smooth pursuit on the perception of target direction. Here we compare the precision of human visual-direction judgments under two oculomotor conditions (pursuit vs. fixation). We also examine the impact of stimulus duration (200 ms vs. 800 ms) and absolute direction (cardinal vs. oblique). Our main finding is that direction discrimination thresholds in the fixation and pursuit conditions are indistinguishable. Furthermore, the two oculomotor conditions showed oblique effects of similar magnitudes. These data suggest that the neural direction signals supporting perception are the same with or without pursuit, despite remarkably different retinal stimulation. During fixation, the stimulus information is restricted to large, purely peripheral retinal motion, while during steady-state pursuit, the stimulus information consists of small, unreliable foveal retinal motion and a large efference-copy signal. A parsimonious explanation of our findings is that the signal limiting the precision of direction judgments is a neural estimate of target motion in head-centered (or world-centered) coordinates (i.e., a combined retinal and eye motion signal) as found in the medial superior temporal area (MST), and not simply an estimate of retinal motion as found in the middle temporal area (MT).

  11. Is nevtral NEUTRAL? Visual similarity effects in the early phases of written-word recognition.

    PubMed

    Marcet, Ana; Perea, Manuel

    2017-08-01

    For simplicity, contemporary models of written-word recognition and reading have unspecified feature/letter levels: they predict that the visually similar substituted-letter nonword PEQPLE is as effective at activating the word PEOPLE as the visually dissimilar substituted-letter nonword PEYPLE. Previous empirical evidence on the effects of visual similarity across letters during written-word recognition is scarce and inconclusive. To examine whether visual similarity across letters plays a role early in word processing, we conducted two masked priming lexical decision experiments (stimulus-onset asynchrony = 50 ms). The substituted-letter primes were visually very similar to the target letters (u/v in Experiment 1 and i/j in Experiment 2; e.g., nevtral-NEUTRAL). For comparison purposes, we included an identity prime condition (neutral-NEUTRAL) and a dissimilar-letter prime condition (neztral-NEUTRAL). Results showed that the similar-letter prime condition produced faster word identification times than the dissimilar-letter prime condition. We discuss how models of written-word recognition should be amended to capture visual similarity effects across letters.

  12. Visual and auditory synchronization deficits among dyslexic readers as compared to non-impaired readers: a cross-correlation algorithm analysis

    PubMed Central

    Sela, Itamar

    2014-01-01

    Visual and auditory temporal processing and crossmodal integration are crucial factors in the word decoding process. The speed of processing (SOP) gap (asynchrony) between these two modalities, which has been suggested as related to the dyslexia phenomenon, is the focus of the current study. Nineteen dyslexic and 17 non-impaired adult university readers were given stimuli in a reaction time (RT) procedure where participants were asked to identify whether the stimulus type was only visual, only auditory or crossmodally integrated. Accuracy, RT, and Event Related Potential (ERP) measures were obtained for each of the three conditions. An algorithm to measure the contribution of the temporal SOP of each modality to the crossmodal integration in each group of participants was developed. Results obtained using this model for the analysis of the current study data indicated that in the crossmodal integration condition the presence of the auditory modality at the pre-response time frame (between 170 and 240 ms after stimulus presentation) increased processing speed in the visual modality among the non-impaired readers, but not in the dyslexic group. The differences between the temporal SOP of the modalities among the dyslexics and the non-impaired readers give additional support to the theory that an asynchrony between the visual and auditory modalities is a cause of dyslexia. PMID:24959125
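
    As context for the cross-correlation approach named in the title, a generic lag-estimation example is sketched below. It is not the authors' specific algorithm; the traces are synthetic stand-ins for visual and auditory evoked responses, and the function simply reports the lag at which the two z-scored signals align best.

```python
# Generic illustration of cross-correlation lag estimation between two evoked
# time series (synthetic data; not the algorithm developed in the study).
import numpy as np

def xcorr_lag(x, y, fs):
    """Return the lag in ms by which x trails y (positive: x occurs later than y)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    corr = np.correlate(x, y, mode="full")
    lags = np.arange(-len(y) + 1, len(x))      # lags in samples
    return 1000.0 * lags[np.argmax(corr)] / fs

# Hypothetical visual and auditory response traces sampled at 500 Hz, with the
# auditory response peaking 40 ms before the visual one.
fs = 500
t = np.arange(0, 0.6, 1.0 / fs)
auditory = np.exp(-((t - 0.18) ** 2) / 0.002)
visual = np.exp(-((t - 0.22) ** 2) / 0.002)
print(f"estimated visual-minus-auditory lag: {xcorr_lag(visual, auditory, fs):.1f} ms")
```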

  13. The role of prestimulus activity in visual extinction☆

    PubMed Central

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-01-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398

  14. The role of prestimulus activity in visual extinction.

    PubMed

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-07-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Eccentricity effects in vision and attention.

    PubMed

    Staugaard, Camilla Funch; Petersen, Anders; Vangkilde, Signe

    2016-11-01

    Stimulus eccentricity affects visual processing in multiple ways. Performance on a visual task is often better when target stimuli are presented near or at the fovea compared to the retinal periphery. For instance, reaction times and error rates are often reported to increase with increasing eccentricity. Such findings have been interpreted as purely visual, reflecting neurophysiological differences in central and peripheral vision, as well as attentional, reflecting a central bias in the allocation of attentional resources. Other findings indicate that in some cases, information from the periphery is preferentially processed. Specifically, it has been suggested that visual processing speed increases with increasing stimulus eccentricity, and that this positive correlation is reduced, but not eliminated, when the amount of cortex activated by a stimulus is kept constant by magnifying peripheral stimuli (Carrasco et al., 2003). In this study, we investigated effects of eccentricity on visual attentional capacity with and without magnification, using computational modeling based on Bundesen's (1990) theory of visual attention. Our results suggest a general decrease in attentional capacity with increasing stimulus eccentricity, irrespective of magnification. We discuss these results in relation to the physiology of the visual system, the use of different paradigms for investigating visual perception across the visual field, and the use of different stimulus materials (e.g. Gabor patches vs. letters). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Spatial updating in human parietal cortex

    NASA Technical Reports Server (NTRS)

    Merriam, Elisha P.; Genovese, Christopher R.; Colby, Carol L.

    2003-01-01

    Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.

  17. Moving Stimuli Facilitate Synchronization But Not Temporal Perception

    PubMed Central

    Silva, Susana; Castro, São Luís

    2016-01-01

    Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap. PMID:27909419

  18. Moving Stimuli Facilitate Synchronization But Not Temporal Perception.

    PubMed

    Silva, Susana; Castro, São Luís

    2016-01-01

    Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap.

  19. Decoding conjunctions of direction-of-motion and binocular disparity from human visual cortex.

    PubMed

    Seymour, Kiley J; Clifford, Colin W G

    2012-05-01

    Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus; thus, a classifier could not rely on independent information about motion and binocular disparity to distinguish these conjunctions. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.
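
    For illustration of the multivariate pattern analysis logic described above, the schematic below trains a linear classifier to separate two stimulus classes from voxel activation patterns and estimates accuracy by cross-validation. The data are simulated placeholders, and the pipeline (LinearSVC, 5-fold cross-validation) is a common generic choice rather than the study's exact analysis.

```python
# Schematic MVPA example on simulated voxel patterns (not the study's pipeline).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

n_trials_per_stim, n_voxels = 60, 100
# Hypothetical small, voxel-specific response difference between two conjunction
# stimuli, buried in trial-to-trial noise.
signal = rng.normal(0.0, 0.2, size=n_voxels)
stim_a = rng.normal(0.0, 1.0, size=(n_trials_per_stim, n_voxels)) + signal
stim_b = rng.normal(0.0, 1.0, size=(n_trials_per_stim, n_voxels)) - signal

X = np.vstack([stim_a, stim_b])
y = np.array([0] * n_trials_per_stim + [1] * n_trials_per_stim)

clf = LinearSVC(C=1.0, max_iter=10_000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated decoding accuracy: {scores.mean():.2f}")
```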

  20. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    PubMed

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute a first step toward exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate range of unimodal performance. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Visual training improves perceptual grouping based on basic stimulus features.

    PubMed

    Kurylo, Daniel D; Waxman, Richard; Kidron, Rachel; Silverstein, Steven M

    2017-10-01

    Training on visual tasks improves performance on basic and higher order visual capacities. Such improvement has been linked to changes in connectivity among mediating neurons. We investigated whether training effects occur for perceptual grouping. It was hypothesized that repeated engagement of integration mechanisms would enhance grouping processes. Thirty-six participants underwent 15 sessions of training on a visual discrimination task that required perceptual grouping. Participants viewed 20 × 20 arrays of dots or Gabor patches and indicated whether the array appeared grouped as vertical or horizontal lines. Across trials stimuli became progressively disorganized, contingent upon successful discrimination. Four visual dimensions were examined, in which grouping was based on similarity in luminance, color, orientation, and motion. Psychophysical thresholds of grouping were assessed before and after training. Results indicate that performance in all four dimensions improved with training. Training on a control condition, which paralleled the discrimination task but without a grouping component, produced no improvement. In addition, training on only the luminance and orientation dimensions improved performance for those conditions as well as for grouping by color, on which training had not occurred. However, improvement from partial training did not generalize to motion. Results demonstrate that a training protocol emphasizing stimulus integration enhanced perceptual grouping. Results suggest that neural mechanisms mediating grouping by common luminance and/or orientation contribute to those mediating grouping by color but do not share resources for grouping by common motion. Results are consistent with theories of perceptual learning emphasizing plasticity in early visual processing regions.
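
    The abstract states that stimuli became progressively more disorganized contingent on successful discrimination, which describes an adaptive, staircase-like procedure. The sketch below shows a generic 1-up/1-down rule of that kind; the actual rule, step size, and threshold estimation used in the study are not specified here, and the values are hypothetical.

    ```python
    # Generic 1-up/1-down adaptive rule (assumed, for illustration): the display
    # becomes more disorganized after a correct response and more organized after
    # an error. Disorganization is on a 0 (perfectly grouped) to 1 scale.
    def update_disorganization(level, correct, step=0.05, max_level=1.0):
        if correct:
            return min(max_level, level + step)   # harder: more disorganized
        return max(0.0, level - step)             # easier: more organized

    level = 0.0
    for correct in [True, True, False]:           # correct, correct, error
        level = update_disorganization(level, correct)
    print(round(level, 3))                        # 0.05
    ```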

  2. Pavlovian Discriminative Stimulus Effects of Methamphetamine in Male Japanese quail (Coturnix japonica)

    PubMed Central

    Bolin, B. Levi; Singleton, Destiny L.; Akins, Chana K.

    2014-01-01

    Pavlovian drug discrimination (DD) procedures demonstrate that interoceptive drug stimuli may come to control behavior by informing the status of conditional relationships between stimuli and outcomes. This technique may provide insight into processes that contribute to drug-seeking, relapse, and other maladaptive behaviors associated with drug abuse. The purpose of the current research was to establish a model of Pavlovian DD in male Japanese quail. A Pavlovian conditioning procedure was used such that 3.0 mg/kg methamphetamine served as a feature positive stimulus for brief periods of visual access to a female quail and approach behavior was measured. After acquisition training, generalization tests were conducted with cocaine, nicotine, and haloperidol under extinction conditions. SCH 23390 was used to investigate the involvement of the dopamine D1 receptor subtype in the methamphetamine discriminative stimulus. Results showed that cocaine fully substituted for methamphetamine but nicotine only partially substituted for methamphetamine in quail. Haloperidol dose-dependently decreased approach behavior. Pretreatment with SCH 23390 modestly attenuated the methamphetamine discrimination suggesting that the D1 receptor subtype may be involved in the discriminative stimulus effects of methamphetamine. The findings are discussed in relation to drug abuse and associated negative health consequences. PMID:24965811

  3. Effects of directional uncertainty on visually-guided joystick pointing.

    PubMed

    Berryhill, Marian; Kveraga, Kestutis; Hughes, Howard C

    2005-02-01

    Reaction times generally follow the predictions of Hick's law as stimulus-response uncertainty increases, although notable exceptions include the oculomotor system. Saccadic and smooth pursuit eye movement reaction times are independent of stimulus-response uncertainty. Previous research showed that joystick pointing to targets, a motor analog of saccadic eye movements, is only modestly affected by increased stimulus-response uncertainty; however, a no-uncertainty condition (simple reaction time to 1 possible target) was not included. Here, we re-evaluate manual joystick pointing including a no-uncertainty condition. Analysis indicated simple joystick pointing reaction times were significantly faster than choice reaction times. Choice reaction times (2, 4, or 8 possible target locations) only slightly increased as the number of possible targets increased. These data suggest that, as with joystick tracking (a motor analog of smooth pursuit eye movements), joystick pointing is more closely approximated by a simple/choice step function than the log function predicted by Hick's law.
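
    For readers unfamiliar with the two models being contrasted, the sketch below compares reaction times predicted by Hick's law (a logarithmic increase with the number of stimulus-response alternatives) against a simple/choice step function. The intercepts and slopes are made-up illustrative values, not estimates from the study.

    ```python
    # Illustrative comparison of the two reaction-time models discussed above.
    # Coefficients are made-up values, not fits to the study's data.
    import math

    def rt_hick(n, a=0.20, b=0.15):
        """Hick's law: RT = a + b * log2(n + 1), in seconds, for n alternatives."""
        return a + b * math.log2(n + 1)

    def rt_step(n, simple=0.25, choice=0.40):
        """Step model: one latency for the simple (n = 1) case, another for any choice case."""
        return simple if n == 1 else choice

    for n in (1, 2, 4, 8):
        print(n, round(rt_hick(n), 3), rt_step(n))
    ```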

  4. Visual Space and Object Space in the Cerebral Cortex of Retinal Disease Patients

    PubMed Central

    Spileers, Werner; Wagemans, Johan; Op de Beeck, Hans P.

    2014-01-01

    The lower areas of the hierarchically organized visual cortex are strongly retinotopically organized, with strong responses to specific retinotopic stimuli, and no response to other stimuli outside these preferred regions. Higher areas in the ventral occipitotemporal cortex show a weak eccentricity bias, and are mainly sensitive to object category (e.g., faces versus buildings). This study investigated how the mapping of eccentricity and category sensitivity using functional magnetic resonance imaging is affected by a retinal lesion in two very different low vision patients: a patient with a large central scotoma, affecting central input to the retina (juvenile macular degeneration), and a patient in whom input to the peripheral retina is lost (retinitis pigmentosa). From the retinal degeneration, we can predict specific losses of retinotopic activation. These predictions were confirmed when comparing stimulus activations with a no-stimulus fixation baseline. At the same time, however, seemingly contradictory patterns of activation, unexpected given the retinal degeneration, were observed when different stimulus conditions were directly compared. These unexpected activations were due to position-specific deactivations, indicating the importance of investigating absolute activation (relative to a no-stimulus baseline) rather than relative activation (comparing different stimulus conditions). Data from two controls with simulated scotomas that matched the lesions in the two patients also showed that retinotopic mapping results could be explained by a combination of activations at the stimulated locations and deactivations at unstimulated locations. Category sensitivity was preserved in the two patients. In sum, when we take into account the full pattern of activations and deactivations elicited in retinotopic cortex and throughout the ventral object vision pathway in low vision patients, the pattern of (de)activation is consistent with the retinal loss. PMID:24505449

  5. A behavioural preparation for the study of human Pavlovian conditioning.

    PubMed

    Arcediano, F; Ortega, N; Matute, H

    1996-08-01

    Conditioned suppression is a useful technique for assessing whether subjects have learned a CS-US association, but it is difficult to use in humans because of the need for an aversive US. The purpose of this research was to develop a non-aversive procedure that would produce suppression. Subjects learned to press the space bar of a computer as part of a video game, but they had to stop pressing whenever a visual US appeared, or they would lose points. In Experiment 1, we used an A+/B- discrimination design: The US always followed Stimulus A and never followed Stimulus B. Although no information about the existence of CSs was given to the subjects, suppression ratio results showed a discrimination learning curve; that is, subjects learned to suppress responding in anticipation of the US when Stimulus A was present but not during the presentations of Stimulus B. Experiment 2 explored the potential of this preparation by using two different instruction sets and assessing post-experimental judgements of CS A and CS B in addition to suppression ratios. The results of these experiments suggest that conditioned suppression can be reliably and conveniently used in the human laboratory, providing a bridge between experiments on animal conditioning and experiments on human judgements of causality.
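
    The abstract reports suppression ratios without stating the formula. The sketch below uses the conventional definition, assumed here, in which responding during the CS is divided by the sum of CS and pre-CS responding, so that 0.5 indicates no suppression and 0 indicates complete suppression.

    ```python
    # Conventional suppression ratio (assumed formula):
    #   SR = CS responses / (CS responses + pre-CS responses)
    # 0.5 = no suppression, 0 = complete suppression of responding during the CS.
    def suppression_ratio(cs_responses, pre_cs_responses):
        total = cs_responses + pre_cs_responses
        return cs_responses / total if total > 0 else 0.5   # no responding: treat as neutral

    print(suppression_ratio(5, 20))    # 0.2 -> strong suppression during the CS
    print(suppression_ratio(20, 20))   # 0.5 -> no suppression
    ```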

  6. ERP effects of spatial attention and display search with unilateral and bilateral stimulus displays.

    PubMed

    Lange, J J; Wijers, A A; Mulder, L J; Mulder, G

    1999-07-01

    Two experiments were performed in which the effects of selective spatial attention on the ERPs elicited by unilateral and bilateral stimulus arrays were compared. In Experiment 1, subjects received a series of grating patterns. In the unilateral condition these gratings were presented one at a time, randomly to the right or left of fixation. In the bilateral condition, gratings were presented in pairs, one to each side of fixation. In the unilateral condition standard ERP effects of visual spatial attention were observed. However, in the bilateral condition we failed to observe an attention related posterior contralateral positivity (overlapping the P1 and N1 components, latency interval about 100-250 ms), as reported in several previous studies. In Experiment 2, we investigated whether attention related ERP lateralizations are affected by the task requirement to search among multiple objects in the visual field. We employed a task paradigm identical to that used by Luck et al. (Luck, S.J., Heinze, H.J., Mangun, G.R., Hillyard, S.A., 1990. Visual event-related potentials index focused attention within bilateral stimulus arrays. II. Functional dissociation of P1 and N1 components. Electroencephalogr. Clin. Neurophysiol. 75, 528-542). Four letters were presented to a visual hemifield, simultaneously to both the attended and unattended hemifields in the bilateral conditions, and to one hemifield only in the unilateral conditions. In a focused attention condition, subjects searched for a target letter at a fixed position, whereas they searched for the target letter among all four letters in the divided attention condition (as in the experiment of Luck et al., 1990). In the bilateral focused attention condition, only the contralateral P1 was enhanced. In the bilateral divided attention condition a prolonged posterior positivity was observed over the hemisphere contralateral to the attended hemifield, comparable to the results of Luck et al. (1990). A comparison of the ERPs elicited in the focused and divided attention conditions revealed a prolonged 'search related negativity'. We discuss possible interactions between this negativity and attention related lateralizations. The display search negativity consisted of two phases, one phase comprised a midline occipital negativity, developing first over the ipsilateral scalp, while the second phase involved two symmetrical occipitotemporal negativities, strongly resembling the N1 in their topography. The display search effect could be modelled with a dipole in a medial occipital (possibly striate) region and two symmetrical dipoles in occipitotemporal brain areas. We hypothesize that this effect reflects a process of rechecking the decaying information of iconic memory in the occipitotemporal object recognition pathway.

  7. Sensorimotor integration in human postural control

    NASA Technical Reports Server (NTRS)

    Peterka, R. J.

    2002-01-01

    It is generally accepted that human bipedal upright stance is achieved by feedback mechanisms that generate an appropriate corrective torque based on body-sway motion detected primarily by visual, vestibular, and proprioceptive sensory systems. Because orientation information from the various senses is not always available (eyes closed) or accurate (compliant support surface), the postural control system must somehow adjust to maintain stance in a wide variety of environmental conditions. This is the sensorimotor integration problem that we investigated by evoking anterior-posterior (AP) body sway using pseudorandom rotation of the visual surround and/or support surface (amplitudes 0.5-8 degrees ) in both normal subjects and subjects with severe bilateral vestibular loss (VL). AP rotation of body center-of-mass (COM) was measured in response to six conditions offering different combinations of available sensory information. Stimulus-response data were analyzed using spectral analysis to compute transfer functions and coherence functions over a frequency range from 0.017 to 2.23 Hz. Stimulus-response data were quite linear for any given condition and amplitude. However, overall behavior in normal subjects was nonlinear because gain decreased and phase functions sometimes changed with increasing stimulus amplitude. "Sensory channel reweighting" could account for this nonlinear behavior with subjects showing increasing reliance on vestibular cues as stimulus amplitudes increased. VL subjects could not perform this reweighting, and their stimulus-response behavior remained quite linear. Transfer function curve fits based on a simple feedback control model provided estimates of postural stiffness, damping, and feedback time delay. There were only small changes in these parameters with increasing visual stimulus amplitude. However, stiffness increased as much as 60% with increasing support surface amplitude. To maintain postural stability and avoid resonant behavior, an increase in stiffness should be accompanied by a corresponding increase in damping. Increased damping was achieved primarily by decreasing the apparent time delay of feedback control rather than by changing the damping coefficient (i.e., corrective torque related to body-sway velocity). In normal subjects, stiffness and damping were highly correlated with body mass and moment of inertia, with stiffness always about 1/3 larger than necessary to resist the destabilizing torque due to gravity. The stiffness parameter in some VL subjects was larger compared with normal subjects, suggesting that they may use increased stiffness to help compensate for their loss. Overall results show that the simple act of standing quietly depends on a remarkably complex sensorimotor control system.
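
    The transfer-function and coherence estimates described above can be sketched as follows: Welch power- and cross-spectral densities are computed between the stimulus and the body-sway response, and gain, phase, and coherence are derived from them. The sampling rate, segment length, and signals below are placeholders rather than the study's actual data or settings.

    ```python
    # Sketch of transfer-function and coherence estimation between a stimulus and
    # the body-sway response. Signals, sampling rate, and segment length are
    # placeholders, not the study's data.
    import numpy as np
    from scipy.signal import welch, csd

    fs = 100.0                                    # assumed sampling rate (Hz)
    n = int(300 * fs)                             # 300 s of data
    stimulus = np.random.randn(n)                 # placeholder stimulus time series
    response = 0.5 * np.roll(stimulus, 20)        # placeholder delayed, scaled sway

    f, Pxx = welch(stimulus, fs=fs, nperseg=1024)
    _, Pyy = welch(response, fs=fs, nperseg=1024)
    _, Pxy = csd(stimulus, response, fs=fs, nperseg=1024)

    H = Pxy / Pxx                                 # transfer function estimate
    gain, phase = np.abs(H), np.angle(H, deg=True)
    coherence = np.abs(Pxy) ** 2 / (Pxx * Pyy)    # 0..1: strength of linear relation
    ```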

  8. Task-irrelevant distractors in the delay period interfere selectively with visual short-term memory for spatial locations.

    PubMed

    Marini, Francesco; Scott, Jerry; Aron, Adam R; Ester, Edward F

    2017-07-01

    Visual short-term memory (VSTM) enables the representation of information in a readily accessible state. VSTM is typically conceptualized as a form of "active" storage that is resistant to interference or disruption, yet several recent studies have shown that under some circumstances task-irrelevant distractors may indeed disrupt performance. Here, we investigated how task-irrelevant visual distractors affected VSTM by asking whether distractors induce a general loss of remembered information or selectively interfere with memory representations. In a VSTM task, participants recalled the spatial location of a target visual stimulus after a delay in which distractors were presented on 75% of trials. Notably, the distractor's eccentricity always matched the eccentricity of the target, while in the critical conditions the distractor's angular position was shifted either clockwise or counterclockwise relative to the target. We then computed estimates of recall error for both eccentricity and polar angle. A general interference model would predict an effect of distractors on both polar angle and eccentricity errors, while a selective interference model would predict effects of distractors on angle but not on eccentricity errors. Results showed that for stimulus angle there was an increase in the magnitude and variability of recall errors. However, distractors had no effect on estimates of stimulus eccentricity. Our results suggest that distractors selectively interfere with VSTM for spatial locations.
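
    The analysis above separates recall error into polar-angle and eccentricity components. The sketch below shows one way to compute those two error components from Cartesian target and response coordinates, assuming fixation at the origin; the coordinates are hypothetical.

    ```python
    # Convert a target and a recall response (screen coordinates, fixation at the
    # origin) into separate polar-angle and eccentricity errors. Values are
    # hypothetical.
    import numpy as np

    def polar_errors(target_xy, response_xy):
        tx, ty = target_xy
        rx, ry = response_xy
        t_angle = np.arctan2(ty, tx)
        r_angle = np.arctan2(ry, rx)
        # wrap the angular difference into [-180, 180) degrees
        angle_error = np.degrees((r_angle - t_angle + np.pi) % (2 * np.pi) - np.pi)
        ecc_error = np.hypot(rx, ry) - np.hypot(tx, ty)
        return angle_error, ecc_error

    print(polar_errors((10.0, 0.0), (9.9, 1.0)))  # small angle error (deg), ecc error ~ -0.05
    ```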

  9. Dynamic reweighting of three modalities for sensor fusion.

    PubMed

    Hwang, Sungjae; Agada, Peter; Kiemel, Tim; Jeka, John J

    2014-01-01

    We simultaneously perturbed visual, vestibular and proprioceptive modalities to understand how sensory feedback is re-weighted so that overall feedback remains suited to stabilizing upright stance. Ten healthy young subjects received an 80 Hz vibratory stimulus to their bilateral Achilles tendons (stimulus turns on-off at 0.28 Hz), a ± 1 mA binaural monopolar galvanic vestibular stimulus at 0.36 Hz, and a visual stimulus at 0.2 Hz during standing. The visual stimulus was presented at different amplitudes (0.2, 0.8 deg rotation about ankle axis) to measure: the change in gain (weighting) to vision, an intramodal effect; and a change in gain to vibration and galvanic vestibular stimulation, both intermodal effects. The results showed a clear intramodal visual effect, indicating a de-emphasis on vision when the amplitude of visual stimulus increased. At the same time, an intermodal visual-proprioceptive reweighting effect was observed with the addition of vibration, which is thought to change proprioceptive inputs at the ankles, forcing the nervous system to rely more on vision and vestibular modalities. Similar intermodal effects for visual-vestibular reweighting were observed, suggesting that vestibular information is not a "fixed" reference, but is dynamically adjusted in the sensor fusion process. This is the first time, to our knowledge, that the interplay between the three primary modalities for postural control has been clearly delineated, illustrating a central process that fuses these modalities for accurate estimates of self-motion.

  10. Effects of visual cue and response assignment on spatial stimulus coding in stimulus-response compatibility.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2012-01-01

    Tlauka and McKenna (2000) reported a reversal of the traditional stimulus-response compatibility (SRC) effect (faster responding to a stimulus presented on the same side than to one on the opposite side) when the stimulus appearing on one side of a display is a member of a superordinate unit that is largely on the opposite side. We investigated the effects of a visual cue that explicitly shows a superordinate unit, and of assignment of multiple stimuli within each superordinate unit to one response, on the SRC effect based on superordinate unit position. Three experiments revealed that stimulus-response assignment is critical, while the visual cue plays a minor role, in eliciting the SRC effect based on the superordinate unit position. Findings suggest bidirectional interaction between perception and action and simultaneous spatial stimulus coding according to multiple frames of reference, with contribution of each coding to the SRC effect flexibly varying with task situations.

  11. Subliminal perception of complex visual stimuli.

    PubMed

    Ionescu, Mihai Radu

    2016-01-01

    Rationale: Unconscious perception in various sensory modalities is an active subject of research, though its function and effect on behavior are uncertain. Objective: The present study assessed whether unconscious visual perception can occur with more complex visual stimuli than previously used. Methods and Results: Videos containing slideshows of indifferent complex images, with interspersed frames of interest of various durations, were presented to 24 healthy volunteers. Perception of the stimulus was evaluated with a forced-choice questionnaire, and awareness was quantified by self-assessment on a modified awareness scale with four categories, annexed to each question. At a stimulus duration of 16.66 ms, conscious awareness was not possible and answers regarding the stimulus were random. At 50 ms, above-chance answers were coupled with no self-reported awareness, suggesting unconscious perception of the stimulus. At longer stimulus durations, above-chance answers were accompanied by some degree of conscious awareness. Discussion: At a duration of 50 ms, unconscious perception is possible even with complex visual stimuli. Further studies are recommended, focusing on stimulus durations between 16.66 and 50 ms.

  12. Exploring the perceptual biases associated with believing and disbelieving in paranormal phenomena.

    PubMed

    Simmonds-Moore, Christine

    2014-08-01

    Ninety-five participants (32 believers, 30 disbelievers and 33 neutral believers in the paranormal) participated in an experiment comprising one visual and one auditory block of trials. Each block included one ESP trial, two degraded-stimulus trials, and one random trial. Each trial included 8 screens or epochs of "random" noise. Participants entered a guess if they perceived a stimulus or changed their mind about stimulus identity, rated guesses for confidence and made notes during each trial. Believers and disbelievers did not differ in the number of guesses made, or in their ability to detect degraded stimuli. Believers showed a trend toward faster guesses in some conditions, and made their guesses with significantly higher confidence and more misidentifications than disbelievers. Guesses, misidentifications and faster response latencies were generally more likely in the visual than auditory conditions. ESP performance was no different from chance. ESP performance did not differ between belief groups or sensory modalities. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Conceptual distortions of hand structure are robust to changes in stimulus information.

    PubMed

    Ambroziak, Klaudia B; Tamè, Luigi; Longo, Matthew R

    2018-05-01

    Previous studies showed stereotyped distortions in hand representations. People judge their knuckles as farther forward in the hand than they actually are. The cause of this bias remains unclear. We tested whether both visual and tactile information contribute to the bias. In Experiment 1, participants judged the location of their knuckles by pointing to the location on their palm with: (1) a metal baton (using vision and touch), (2) a metal baton while blindfolded (using touch), or (3) a laser pointer (using vision). Distal mislocalisations were found in all conditions. In Experiment 2, we investigated whether judgments are influenced by visual landmarks such as creases. Participants localized their knuckles on either a photograph of their palm or a silhouette. Distal mislocalisations were apparent in both conditions. These results show that distal biases are resistant to changes in stimulus information, suggesting that such mislocalisations reflect a conceptual mis-representation of hand structure. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.

    PubMed

    Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B

    2003-04-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
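
    The visual enhancement measure Ra is described above as the audiovisual gain relative to the maximum possible improvement over auditory-only performance. The sketch below uses the commonly cited formulation of that idea, Ra = (AV - A) / (1 - A) with scores as proportions correct; the exact formula used in the article is assumed rather than quoted.

    ```python
    # Commonly used visual enhancement measure (assumed formula):
    #   Ra = (AV - A) / (1 - A), with A and AV as proportions correct,
    # i.e., the fraction of the available room for improvement that is realized.
    def visual_enhancement(a_only, av):
        if a_only >= 1.0:
            return 0.0                 # auditory-only already at ceiling
        return (av - a_only) / (1.0 - a_only)

    print(round(visual_enhancement(0.60, 0.84), 2))   # 0.6 -> 60% of possible gain realized
    ```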

  15. Talker and Lexical Effects on Audiovisual Word Recognition by Adults With Cochlear Implants

    PubMed Central

    Kaiser, Adam R.; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B.

    2012-01-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Ra, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech. PMID:14700380

  16. Haltere mechanosensory influence on tethered flight behavior in Drosophila.

    PubMed

    Mureli, Shwetha; Fox, Jessica L

    2015-08-01

    In flies, mechanosensory information from modified hindwings known as halteres is combined with visual information for wing-steering behavior. Haltere input is necessary for free flight, making it difficult to study the effects of haltere ablation under natural flight conditions. We thus used tethered Drosophila melanogaster flies to examine the relationship between halteres and the visual system, using wide-field motion or moving figures as visual stimuli. Haltere input was altered by surgically decreasing its mass, or by removing it entirely. Haltere removal does not affect the flies' ability to flap or steer their wings, but it does increase the temporal frequency at which they modify their wingbeat amplitude. Reducing the haltere mass decreases the optomotor reflex response to wide-field motion, and removing the haltere entirely does not further decrease the response. Decreasing the mass does not attenuate the response to figure motion, but removing the entire haltere does attenuate the response. When flies are allowed to control a visual stimulus in closed-loop conditions, haltereless flies fixate figures with the same acuity as intact flies, but cannot stabilize a wide-field stimulus as accurately as intact flies can. These manipulations suggest that the haltere mass is influential in wide-field stabilization, but less so in figure tracking. In both figure and wide-field experiments, we observe responses to visual motion with and without halteres, indicating that during tethered flight, intact halteres are not strictly necessary for visually guided wing-steering responses. However, the haltere feedback loop may operate in a context-dependent way to modulate responses to visual motion. © 2015. Published by The Company of Biologists Ltd.

  17. Stability of simple/complex classification with contrast and extraclassical receptive field modulation in macaque V1

    PubMed Central

    Henry, Christopher A.

    2013-01-01

    A key property of neurons in primary visual cortex (V1) is the distinction between simple and complex cells. Recent reports in cat visual cortex indicate the categorization of simple and complex can change depending on stimulus conditions. We investigated the stability of the simple/complex classification with changes in drive produced by either contrast or modulation by the extraclassical receptive field (eCRF). These two conditions were reported to increase the proportion of simple cells in cat cortex. The ratio of the modulation depth of the response (F1) to the elevation of response (F0) to a drifting grating (F1/F0 ratio) was used as the measure of simple/complex. The majority of V1 complex cells remained classified as complex with decreasing contrast. Near contrast threshold, an equal proportion of simple and complex cells changed their classification. The F1/F0 ratio was stable between optimal and large stimulus areas even for those neurons that showed strong eCRF suppression. There was no discernible overall effect of surrounding spatial context on the F1/F0 ratio. Simple/complex cell classification is relatively stable across a range of stimulus drives, produced by either contrast or eCRF suppression. PMID:23303859
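
    The F1/F0 ratio used above can be computed from a response averaged over one temporal cycle of the drifting grating: F0 is the mean elevation of the firing rate and F1 is the amplitude of the response modulation at the drift frequency. The sketch below does this for a synthetic, strongly modulated (simple-cell-like) response.

    ```python
    # F1/F0 from a response averaged over one stimulus cycle: F0 is the mean rate,
    # F1 is the modulation amplitude at the grating's drift frequency. The PSTH is
    # synthetic; by convention F1/F0 > 1 is "simple", < 1 is "complex".
    import numpy as np

    def f1_f0_ratio(psth):
        spectrum = np.fft.rfft(psth) / psth.size
        f0 = spectrum[0].real               # mean firing rate
        f1 = 2 * np.abs(spectrum[1])        # amplitude at the fundamental (drift) frequency
        return f1 / f0 if f0 > 0 else np.nan

    t = np.linspace(0, 1, 100, endpoint=False)
    simple_like = 20 + 18 * np.sin(2 * np.pi * t)   # strongly modulated response
    print(round(f1_f0_ratio(simple_like), 2))       # 0.9 -> simple-like
    ```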

  18. Body Context and Posture Affect Mental Imagery of Hands

    PubMed Central

    Ionta, Silvio; Perruchoud, David; Draganski, Bogdan; Blanke, Olaf

    2012-01-01

    Different visual stimuli have been shown to recruit different mental imagery strategies. However the role of specific visual stimuli properties related to body context and posture in mental imagery is still under debate. Aiming to dissociate the behavioural correlates of mental processing of visual stimuli characterized by different body context, in the present study we investigated whether the mental rotation of stimuli showing either hands as attached to a body (hands-on-body) or not (hands-only), would be based on different mechanisms. We further examined the effects of postural changes on the mental rotation of both stimuli. Thirty healthy volunteers verbally judged the laterality of rotated hands-only and hands-on-body stimuli presented from the dorsum- or the palm-view, while positioning their hands on their knees (front postural condition) or behind their back (back postural condition). Mental rotation of hands-only, but not of hands-on-body, was modulated by the stimulus view and orientation. Additionally, only the hands-only stimuli were mentally rotated at different speeds according to the postural conditions. This indicates that different stimulus-related mechanisms are recruited in mental rotation by changing the bodily context in which a particular body part is presented. The present data suggest that, with respect to hands-only, mental rotation of hands-on-body is less dependent on biomechanical constraints and proprioceptive input. We interpret our results as evidence for preferential processing of visual- rather than kinesthetic-based mechanisms during mental transformation of hands-on-body and hands-only, respectively. PMID:22479618

  19. Probing the influence of unconscious fear-conditioned visual stimuli on eye movements.

    PubMed

    Madipakkam, Apoorva Rajiv; Rothkirch, Marcus; Wilbertz, Gregor; Sterzer, Philipp

    2016-11-01

    Efficient threat detection from the environment is critical for survival. Accordingly, fear-conditioned stimuli receive prioritized processing and capture overt and covert attention. However, it is unknown whether eye movements are influenced by unconscious fear-conditioned stimuli. We performed a classical fear-conditioning procedure and subsequently recorded participants' eye movements while they were exposed to fear-conditioned stimuli that were rendered invisible using interocular suppression. Chance-level performance in a forced-choice-task demonstrated unawareness of the stimuli. Differential skin conductance responses and a change in participants' fearfulness ratings of the stimuli indicated the effectiveness of conditioning. However, eye movements were not biased towards the fear-conditioned stimulus. Preliminary evidence suggests a relation between the strength of conditioning and the saccadic bias to the fear-conditioned stimulus. Our findings provide no strong evidence for a saccadic bias towards unconscious fear-conditioned stimuli but tentative evidence suggests that such an effect may depend on the strength of the conditioned response. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. The Effect of Visual Threat on Spatial Attention to Touch

    ERIC Educational Resources Information Center

    Poliakoff, Ellen; Miles, Eleanor; Li, Xinying; Blanchette, Isabelle

    2007-01-01

    Viewing a threatening stimulus can bias visual attention toward that location. Such effects have typically been investigated only in the visual modality, despite the fact that many threatening stimuli are most dangerous when close to or in contact with the body. Recent multisensory research indicates that a neutral visual stimulus, such as a light…

  1. An ERP Study of the Processing of Common and Decimal Fractions: How Different They Are

    PubMed Central

    Zhang, Li; Wang, Qi; Lin, Chongde; Ding, Cody; Zhou, Xinlin

    2013-01-01

    This study explored event-related potential (ERP) correlates of common fractions (1/5) and decimal fractions (0.2). Thirteen subjects performed a numerical magnitude matching task under two conditions. In the common fraction condition, subjects judged whether the magnitude of a nonsymbolic fraction matched that of a common fraction; in the decimal fraction condition, they judged whether it matched that of a decimal fraction. Behavioral results showed significant main effects of condition and numerical distance, but no significant interaction of condition and numerical distance. Electrophysiological data showed that when nonsymbolic fractions were compared to common fractions, they displayed larger N1 and P3 amplitudes than when they were compared to decimal fractions. This finding suggested that visual identification of nonsymbolic fractions was different under the two conditions, which was not due to perceptual differences but to task demands. For symbolic fractions, the condition effect was observed in the N1 and P3 components, revealing stimulus-specific visual identification processing. The effect of numerical distance as an index of numerical magnitude representation was observed in the P2, N3 and P3 components under the two conditions. However, the topography of the distance effect was different under the two conditions, suggesting stimulus-specific semantic processing of common fractions and decimal fractions. PMID:23894491

  2. Visual Evoked Cortical Potential (VECP) Elicited by Sinusoidal Gratings Controlled by Pseudo-Random Stimulation

    PubMed Central

    Araújo, Carolina S.; Souza, Givago S.; Gomes, Bruno D.; Silveira, Luiz Carlos L.

    2013-01-01

    The contributions of contrast detection mechanisms to the visual cortical evoked potential (VECP) have been investigated by studying the contrast-response and spatial frequency-response functions. Previously, the use of m-sequences for stimulus control has been almost restricted to multifocal electrophysiology stimulation and, in some aspects, it substantially differs from conventional VECPs. Single stimulation with spatial contrast temporally controlled by m-sequences has not been extensively tested or compared to multifocal techniques. Our purpose was to evaluate the influence of spatial frequency and contrast of sinusoidal gratings on the VECP elicited by pseudo-random stimulation. Nine normal subjects were stimulated by achromatic sinusoidal gratings driven by a pseudorandom binary m-sequence at seven spatial frequencies (0.4–10 cpd) and three stimulus sizes (4°, 8°, and 16° of visual angle). At 8° subtense, six contrast levels were used (3.12–99%). The first order kernel (K1) did not provide a consistent measurable signal across the spatial frequencies and contrasts tested (the signal was very small or absent), while the second order kernel first (K2.1) and second (K2.2) slices exhibited reliable responses for the stimulus range. The main differences between results obtained with the K2.1 and K2.2 were in the contrast gain as measured in the amplitude versus contrast and amplitude versus spatial frequency functions. The results indicated that K2.1 was dominated by the M pathway, but for some stimulus conditions some P-pathway contribution could be found, while the second slice reflected the P-pathway contribution. The present work extended previous findings on the contributions of the visual pathways to the VECP elicited by pseudorandom stimulation over a wider range of spatial frequencies. PMID:23940546
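
    As a rough illustration of the kernel terminology used above, the sketch below recovers a first-order kernel and one second-order kernel slice by cross-correlating a response with a binary stimulus sequence and with the product of the sequence and a lagged copy of itself. This is a heavily simplified stand-in for real m-sequence (e.g., fast m-transform) analysis, and the sequence, response, and lags are synthetic assumptions.

    ```python
    # Heavily simplified kernel recovery from a binary stimulus sequence:
    # K1 ~ cross-correlation of the response with the sequence; a second-order
    # slice ~ cross-correlation with the product of the sequence and a lagged copy.
    # The sequence, response, and lags are synthetic assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    stim = rng.choice([-1.0, 1.0], size=4096)         # stand-in for a real m-sequence
    response = np.convolve(stim, [0.0, 0.5, 1.0, 0.3], mode="same")
    response += rng.normal(0.0, 0.1, size=stim.size)  # add measurement noise

    def kernel_slice(stim, resp, n_lags=16, lag_between=0):
        probe = stim if lag_between == 0 else stim * np.roll(stim, lag_between)
        return np.array([np.mean(resp * np.roll(probe, k)) for k in range(n_lags)])

    k1 = kernel_slice(stim, response)                        # first-order kernel estimate
    k2_slice1 = kernel_slice(stim, response, lag_between=1)  # analogue of a K2 first slice
    ```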

  3. Visual discrimination transfer and modulation by biogenic amines in honeybees.

    PubMed

    Vieira, Amanda Rodrigues; Salles, Nayara; Borges, Marco; Mota, Theo

    2018-05-10

    For more than a century, visual learning and memory have been studied in the honeybee Apis mellifera using operant appetitive conditioning. Although honeybees show impressive visual learning capacities in this well-established protocol, operant training of free-flying animals cannot be combined with invasive protocols for studying the neurobiological basis of visual learning. In view of this, different attempts have been made to develop new classical conditioning protocols for studying visual learning in harnessed honeybees, though learning performance remains considerably poorer than that for free-flying animals. Here, we investigated the ability of honeybees to use visual information acquired during classical conditioning in a new operant context. We performed differential visual conditioning of the proboscis extension reflex (PER) followed by visual orientation tests in a Y-maze. Classical conditioning and Y-maze retention tests were performed using the same pair of perceptually isoluminant chromatic stimuli, to avoid the influence of phototaxis during free-flying orientation. Visual discrimination transfer was clearly observed, with pre-trained honeybees significantly orienting their flights towards the former positive conditioned stimulus (CS+), thus showing that visual memories acquired by honeybees are resistant to context changes between conditioning and the retention test. We combined this visual discrimination approach with selective pharmacological injections to evaluate the effect of dopamine and octopamine in appetitive visual learning. Both octopaminergic and dopaminergic antagonists impaired visual discrimination performance, suggesting that both these biogenic amines modulate appetitive visual learning in honeybees. Our study brings new insight into cognitive and neurobiological mechanisms underlying visual learning in honeybees. © 2018. Published by The Company of Biologists Ltd.

  4. Differential modulation of visual object processing in dorsal and ventral stream by stimulus visibility.

    PubMed

    Ludwig, Karin; Sterzer, Philipp; Kathmann, Norbert; Hesselmann, Guido

    2016-10-01

    As a functional organization principle in cortical visual information processing, the influential 'two visual systems' hypothesis proposes a division of labor between a dorsal "vision-for-action" and a ventral "vision-for-perception" stream. A core assumption of this model is that the two visual streams are differentially involved in visual awareness: ventral stream processing is closely linked to awareness while dorsal stream processing is not. In this functional magnetic resonance imaging (fMRI) study with human observers, we directly probed the stimulus-related information encoded in fMRI response patterns in both visual streams as a function of stimulus visibility. We parametrically modulated the visibility of face and tool stimuli by varying the contrasts of the masks in a continuous flash suppression (CFS) paradigm. We found that visibility (operationalized by objective and subjective measures) decreased proportionally with increasing log CFS mask contrast. Neuronally, this relationship was closely matched by ventral visual areas, showing a linear decrease of stimulus-related information with increasing mask contrast. Stimulus-related information in dorsal areas also showed a dependency on mask contrast, but the decrease followed a step function rather than a linear function. Together, our results suggest that both the ventral and the dorsal visual stream are linked to visual awareness, but neural activity in ventral areas more closely reflects graded differences in awareness compared to dorsal areas. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Effects of selective excitotoxic lesions of the nucleus accumbens core, anterior cingulate cortex, and central nucleus of the amygdala on autoshaping performance in rats.

    PubMed

    Cardinal, Rudolf N; Parkinson, John A; Lachenal, Guillaume; Halkerston, Katherine M; Rudarakanchana, Nung; Hall, Jeremy; Morrison, Caroline H; Howes, Simon R; Robbins, Trevor W; Everitt, Barry J

    2002-08-01

    The nucleus accumbens core (AcbC), anterior cingulate cortex (ACC), and central nucleus of the amygdala (CeA) are required for normal acquisition of tasks based on stimulus-reward associations. However, it is not known whether they are involved purely in the learning process or are required for behavioral expression of a learned response. Rats were trained preoperatively on a Pavlovian autoshaping task in which pairing a visual conditioned stimulus (CS+) with food causes subjects to approach the CS+ while not approaching an unpaired stimulus (CS-). Subjects then received lesions of the AcbC, ACC, or CeA before being retested. AcbC lesions severely impaired performance; lesioned subjects approached the CS+ significantly less often than controls, failing to discriminate between the CS+ and CS-. ACC lesions also impaired performance but did not abolish discrimination entirely. CeA lesions had no effect on performance. Thus, the CeA is required for learning, but not expression, of a conditioned approach response, implying that it makes a specific contribution to the learning of stimulus-reward associations.

  6. Visual Masking During Pursuit Eye Movements

    ERIC Educational Resources Information Center

    White, Charles W.

    1976-01-01

    Visual masking occurs when one stimulus interferes with the perception of another stimulus. Investigates which matters more for visual masking: that the target and masking stimuli are flashed on the same part of the retina, or that the target and mask appear in the same place. (Author/RK)

  7. Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.

    PubMed

    Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D

    2013-10-01

    Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range. Copyright © 2013 Elsevier Inc. All rights reserved.
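
    Band-limited gamma power of the kind reported above (30-70 Hz and 60-90 Hz) can be sketched by integrating a Welch power spectral density over each band, as below. The sampling rate and the signal are placeholders, not MEG source data, and the study's actual time-frequency analysis is likely more elaborate.

    ```python
    # Integrate a Welch power spectral density over the 30-70 Hz and 60-90 Hz
    # bands. The sampling rate and signal are placeholders, not MEG data.
    import numpy as np
    from scipy.signal import welch
    from scipy.integrate import trapezoid

    fs = 600.0                                  # assumed sampling rate (Hz)
    signal = np.random.randn(int(60 * fs))      # placeholder 60-s time series

    f, psd = welch(signal, fs=fs, nperseg=1024)

    def band_power(freqs, psd, lo, hi):
        mask = (freqs >= lo) & (freqs <= hi)
        return trapezoid(psd[mask], freqs[mask])

    low_gamma = band_power(f, psd, 30.0, 70.0)
    high_gamma = band_power(f, psd, 60.0, 90.0)
    print(low_gamma, high_gamma)
    ```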

  8. Do People Take Stimulus Correlations into Account in Visual Search (Open Source)

    DTIC Science & Technology

    2016-03-10

    Bhardwaj, Manisha; van den Berg, Ronald; Ma, Wei Ji

    Excerpt (truncated in source): ... visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often ... contribute to bridging the gap between artificial and natural visual search tasks.

  9. Perceptual grouping enhances visual plasticity.

    PubMed

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

    Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.

  10. Differentiating aversive conditioning in bistable perception: Avoidance of a percept vs. salience of a stimulus.

    PubMed

    Wilbertz, Gregor; Sterzer, Philipp

    2018-05-01

    Alternating conscious visual perception of bistable stimuli is influenced by several factors. In order to understand the effect of negative valence, we tested the effect of two types of aversive conditioning on dominance durations in binocular rivalry. Participants received either aversive classical conditioning of the stimuli shown alone between rivalry blocks, or aversive percept conditioning of one of the two possible perceptual choices during rivalry. Both groups showed successful aversive conditioning according to skin conductance responses and affective valence ratings. However, while classical conditioning led to an immediate but short-lived increase in dominance durations of the conditioned stimulus, percept conditioning yielded no significant immediate effect but tended to decrease durations of the conditioned percept during extinction. These results show dissociable effects of value learning on perceptual inference in situations of perceptual conflict, depending on whether learning relates to the decision between conflicting perceptual choices or the sensory stimuli per se. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Teaching Equivalence Relations to Individuals with Minimal Verbal Repertoires: Are Visual and Auditory-Visual Discriminations Predictive of Stimulus Equivalence?

    ERIC Educational Resources Information Center

    Vause, Tricia; Martin, Garry L.; Yu, C.T.; Marion, Carole; Sakko, Gina

    2005-01-01

    The relationship between language, performance on the Assessment of Basic Learning Abilities (ABLA) test, and stimulus equivalence was examined. Five participants with minimal verbal repertoires were studied; 3 who passed up to ABLA Level 4, a visual quasi-identity discrimination and 2 who passed ABLA Level 6, an auditory-visual nonidentity…

  12. Women process multisensory emotion expressions more efficiently than men.

    PubMed

    Collignon, O; Girard, S; Gosselin, F; Saint-Amour, D; Lepore, F; Lassonde, M

    2010-01-01

    Despite claims in the popular press, experiments investigating whether female observers are more efficient than male observers at processing expressions of emotion have produced inconsistent findings. In the present study, participants were asked to categorize fear and disgust expressions displayed auditorily, visually, or audio-visually. Results revealed an advantage for women in all conditions of stimulus presentation. We also observed more nonlinear probabilistic summation in the bimodal conditions in female than in male observers, indicating greater neural integration of different sources of sensory-emotional information. These findings indicate robust differences between genders in the multisensory perception of emotion expression.
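
    "Probabilistic summation" is often formalized by comparing observed bimodal performance against the prediction for two independent channels; the sketch below uses that generic formulation, which is assumed here and is not necessarily the exact analysis of the study. Observed bimodal accuracy above the independent prediction indicates supra-additive (nonlinear) integration.

    ```python
    # Generic probability-summation benchmark (assumed formulation): if auditory
    # and visual channels were independent, predicted bimodal accuracy would be
    #   P(AV) = P(A) + P(V) - P(A) * P(V).
    # Observed accuracy above this prediction suggests supra-additive integration.
    def independent_summation(p_auditory, p_visual):
        return p_auditory + p_visual - p_auditory * p_visual

    p_pred = independent_summation(0.70, 0.60)   # 0.88
    observed = 0.95                              # hypothetical bimodal accuracy
    print(observed > p_pred)                     # True -> more than independent summation
    ```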

  13. Smell or vision? The use of different sensory modalities in predator discrimination.

    PubMed

    Fischer, Stefan; Oberhummer, Evelyne; Cunha-Saraiva, Filipa; Gerber, Nina; Taborsky, Barbara

    2017-01-01

    Theory predicts that animals should adjust their escape responses to the perceived predation risk. The information animals obtain about potential predation risk may differ qualitatively depending on the sensory modality by which a cue is perceived. For instance, olfactory cues may reveal better information about the presence or absence of threats, whereas visual information can reliably transmit the position and potential attack distance of a predator. While this suggests a differential use of information perceived through the two sensory channels, the relative importance of visual vs. olfactory cues when distinguishing between different predation threats is still poorly understood. Therefore, we exposed individuals of the cooperatively breeding cichlid Neolamprologus pulcher to a standardized threat stimulus combined with either predator or non-predator cues presented either visually or chemically. We predicted that flight responses towards a threat stimulus are more pronounced if cues of dangerous rather than harmless heterospecifics are presented and that N. pulcher, being an aquatic species, relies more on olfaction when discriminating between dangerous and harmless heterospecifics. N. pulcher responded faster to the threat stimulus, reached a refuge faster and was more likely to enter a refuge when predator cues were perceived. Unexpectedly, the sensory modality used to perceive the cues did not affect the escape response or the duration of the recovery phase. This suggests that N. pulcher are able to discriminate heterospecific cues with similar acuity when using vision or olfaction. We discuss that this ability may be advantageous in aquatic environments where the visibility conditions strongly vary over time. The ability to rapidly discriminate between dangerous predators and harmless heterospecifics is crucial for the survival of prey animals. In seasonally fluctuating environments, sensory conditions may change over the year, which may make the use of multiple sensory modalities for heterospecific discrimination highly beneficial. Here we compared the efficacy of visual and olfactory senses in the discrimination ability of the cooperatively breeding cichlid Neolamprologus pulcher. We presented individual fish with visual or olfactory cues of predators or harmless heterospecifics and recorded their flight response. When exposed to predator cues, individuals responded faster, reached a refuge faster and were more likely to enter the refuge. Unexpectedly, the olfactory and visual senses seemed to be equally efficient in this discrimination task, suggesting that seasonal variation of water conditions experienced by N. pulcher may necessitate the use of multiple sensory channels for the same task.

  14. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspects of the functional processing of visual attention. Although the right posterior parietal cortex (PPC) is known to play a role in certain visual search tasks, little is known about the timing of its involvement. Three visual search tasks of differing difficulty were carried out: an "easy feature task," a "hard feature task," and a "conjunction task." To investigate the time at which the PPC is involved in visual search, we applied TMS at various stimulus onset asynchronies (SOAs) and measured visual search reaction times. Magnetic stimulation was applied over the right or the left PPC with a figure-eight coil. The results show that reaction times in the hard feature task were longer than those in the easy feature task. At SOA = 150 ms, target-present reaction times increased significantly when TMS pulses were applied, compared with the no-TMS condition. We conclude that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation: magnetic stimulation of the right PPC disturbed visual search processing, whereas stimulation of the left PPC had no effect.

  15. Exploring the additive effects of stimulus quality and word frequency: the influence of local and list-wide prime relatedness.

    PubMed

    Scaltritti, Michele; Balota, David A; Peressotti, Francesca

    2013-01-01

    Stimulus quality and word frequency produce additive effects in lexical decision performance, whereas the semantic priming effect interacts with both stimulus quality and word frequency effects. This pattern places important constraints on models of visual word recognition. In Experiment 1, all three variables were investigated within a single speeded pronunciation study. The results indicated that the joint effects of stimulus quality and word frequency were dependent upon prime relatedness. In particular, an additive effect of stimulus quality and word frequency was found after related primes, and an interactive effect was found after unrelated primes. It was hypothesized that this pattern reflects an adaptive reliance on related prime information within the experimental context. In Experiment 2, related primes were eliminated from the list, and the interactive effects of stimulus quality and word frequency found following unrelated primes in Experiment 1 reverted to additive effects for the same unrelated prime conditions. The results are supportive of a flexible lexical processor that adapts to both local prime information and global list-wide context.

  16. Incomplete cortical reorganization in macular degeneration.

    PubMed

    Liu, Tingting; Cheung, Sing-Hang; Schuchard, Ronald A; Glielmi, Christopher B; Hu, Xiaoping; He, Sheng; Legge, Gordon E

    2010-12-01

    Activity in regions of the visual cortex corresponding to central scotomas in subjects with macular degeneration (MD) is considered evidence for functional reorganization in the brain. Three unresolved issues related to cortical activity in subjects with MD were addressed: Is the cortical response to stimuli presented to the preferred retinal locus (PRL) different from other retinal loci at the same eccentricity? What effect do the age of onset and etiology of MD have on cortical responses? How do functional responses in an MD subject's visual cortex vary for task and stimulus conditions? Eight MD subjects (four with age-related onset, AMD, and four with juvenile onset, JMD) and two age-matched normal vision controls participated in three testing conditions while undergoing functional magnetic resonance imaging (fMRI). First, subjects viewed a small stimulus presented at the PRL compared with a non-PRL control location to investigate the role of the PRL. Second, they viewed a full-field flickering checkerboard compared with a small stimulus in the original fovea to investigate brain activation with passive viewing. Third, they performed a one-back task with scene images to investigate brain activation with active viewing. A small stimulus at the PRL generated more extensive cortical activation than at a non-PRL location, but neither yielded activation in the foveal cortical projection. Both passive and active viewing of full-field stimuli left a silent zone at the posterior pole of the occipital cortex, implying a lack of complete cortical reorganization. The silent zone was smaller in the task requiring active viewing compared with the task requiring passive viewing, especially in JMD subjects. The PRL for MD subjects has more extensive cortical representation than a retinal region with matched eccentricity. There is evidence for incomplete functional reorganization of early visual cortex in both JMD and AMD. Functional reorganization is more prominent in JMD. Feedback signals, possibly associated with attention, play an important role in the reorganization.

  17. Incomplete Cortical Reorganization in Macular Degeneration

    PubMed Central

    Cheung, Sing-Hang; Schuchard, Ronald A.; Glielmi, Christopher B.; Hu, Xiaoping; He, Sheng; Legge, Gordon E.

    2010-01-01

    Purpose. Activity in regions of the visual cortex corresponding to central scotomas in subjects with macular degeneration (MD) is considered evidence for functional reorganization in the brain. Three unresolved issues related to cortical activity in subjects with MD were addressed: Is the cortical response to stimuli presented to the preferred retinal locus (PRL) different from other retinal loci at the same eccentricity? What effect do the age of onset and etiology of MD have on cortical responses? How do functional responses in an MD subject's visual cortex vary for task and stimulus conditions? Methods. Eight MD subjects—four with age-related onset (AMD) and four with juvenile onset (JMD)—and two age-matched normal vision controls participated in three testing conditions while undergoing functional magnetic resonance imaging (fMRI). First, subjects viewed a small stimulus presented at the PRL compared with a non-PRL control location to investigate the role of the PRL. Second, they viewed a full-field flickering checkerboard compared with a small stimulus in the original fovea to investigate brain activation with passive viewing. Third, they performed a one-back task with scene images to investigate brain activation with active viewing. Results. A small stimulus at the PRL generated more extensive cortical activation than at a non-PRL location, but neither yielded activation in the foveal cortical projection. Both passive and active viewing of full-field stimuli left a silent zone at the posterior pole of the occipital cortex, implying a lack of complete cortical reorganization. The silent zone was smaller in the task requiring active viewing compared with the task requiring passive viewing, especially in JMD subjects. Conclusions. The PRL for MD subjects has more extensive cortical representation than a retinal region with matched eccentricity. There is evidence for incomplete functional reorganization of early visual cortex in both JMD and AMD. Functional reorganization is more prominent in JMD. Feedback signals, possibly associated with attention, play an important role in the reorganization. PMID:20631240

  18. Uncertainty during pain anticipation: the adaptive value of preparatory processes.

    PubMed

    Seidel, Eva-Maria; Pfabigan, Daniela M; Hahn, Andreas; Sladky, Ronald; Grahl, Arvina; Paul, Katharina; Kraus, Christoph; Küblböck, Martin; Kranz, Georg S; Hummer, Allan; Lanzenberger, Rupert; Windischberger, Christian; Lamm, Claus

    2015-02-01

    Anticipatory processes prepare the organism for upcoming experiences. The aim of this study was to investigate neural responses related to anticipation and processing of painful stimuli occurring with different levels of uncertainty. Twenty-five participants (13 females) took part in an electroencephalography and functional magnetic resonance imaging (fMRI) experiment at separate times. A visual cue announced the occurrence of an electrical painful or nonpainful stimulus, delivered with certainty or uncertainty (50% chance), at some point during the following 15 s. During the first 2 s of the anticipation phase, a strong effect of uncertainty was reflected in a pronounced frontal stimulus-preceding negativity (SPN) and increased fMRI activation in higher visual processing areas. In the last 2 s before stimulus delivery, we observed stimulus-specific preparatory processes indicated by a centroparietal SPN and posterior insula activation that was most pronounced for the certain pain condition. Uncertain anticipation was associated with attentional control processes. During stimulation, the results revealed that unexpected painful stimuli produced the strongest activation in the affective pain processing network and a more pronounced offset-P2. Our results reflect that during early anticipation uncertainty is strongly associated with affective mechanisms and seems to be a more salient event compared to certain anticipation. During the last 2 s before stimulation, attentional control mechanisms are initiated related to the increased salience of uncertainty. Furthermore, stimulus-specific preparatory mechanisms during certain anticipation also shaped the response to stimulation, underlining the adaptive value of stimulus-targeted preparatory activity which is less likely when facing an uncertain event. © 2014 Wiley Periodicals, Inc.

  19. Speed tuning of motion segmentation and discrimination

    NASA Technical Reports Server (NTRS)

    Masson, G. S.; Mestre, D. R.; Stone, L. S.

    1999-01-01

    Motion transparency requires that the visual system distinguish different motion vectors and selectively integrate similar motion vectors over space into the perception of multiple surfaces moving through or over each other. Using large-field (7 degrees x 7 degrees) displays containing two populations of random-dots moving in the same (horizontal) direction but at different speeds, we examined speed-based segmentation by measuring the speed difference above which observers can perceive two moving surfaces. We systematically investigated this 'speed-segmentation' threshold as a function of speed and stimulus duration, and found that it increases sharply for speeds above approximately 8 degrees/s. In addition, speed-segmentation thresholds decrease with stimulus duration out to approximately 200 ms. In contrast, under matched conditions, speed-discrimination thresholds stay low at least out to 16 degrees/s and decrease with increasing stimulus duration at a faster rate than for speed segmentation. Thus, motion segmentation and motion discrimination exhibit different speed selectivity and different temporal integration characteristics. Results are discussed in terms of the speed preferences of different neuronal populations within the primate visual cortex.

  20. Influence of Coactors on Saccadic and Manual Responses

    PubMed Central

    Niehorster, Diederick C.; Jarodzka, Halszka; Holmqvist, Kenneth

    2017-01-01

    Two experiments were conducted to investigate the effects of coaction on saccadic and manual responses. Participants performed the experiments either in a solitary condition or in a group of coactors who performed the same tasks at the same time. In Experiment 1, participants completed a pro- and antisaccade task where they were required to make saccades towards (prosaccades) or away (antisaccades) from a peripheral visual stimulus. In Experiment 2, participants performed a visual discrimination task that required both making a saccade towards a peripheral stimulus and making a manual response in reaction to the stimulus’s orientation. The results showed that performance of stimulus-driven responses was independent of the social context, while volitionally controlled responses were delayed by the presence of coactors. These findings are in line with studies assessing the effect of attentional load on saccadic control during dual-task paradigms. In particular, antisaccades – but not prosaccades – were influenced by the type of social context. Additionally, the number of coactors present in the group had a moderating effect on both saccadic and manual responses. The results support an attentional view of social influences. PMID:28321288

  1. Coordinates of Human Visual and Inertial Heading Perception.

    PubMed

    Crane, Benjamin Thomas

    2015-01-01

    Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition, 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and the coordinate system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.

  2. Coordinates of Human Visual and Inertial Heading Perception

    PubMed Central

    Crane, Benjamin Thomas

    2015-01-01

    Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition, 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and the coordinate system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results. PMID:26267865
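
    The two-parameter model described in this record can be read as a lateral-sensitivity gain plus a coordinate-system offset applied to the presented heading. The sketch below is one plausible formalization of that idea for illustration only; it is not the authors' published model code, and the file name and exact parameterization are assumptions.

    ```python
    # Hedged sketch of a two-parameter heading fit: a gain on the lateral (sine)
    # component of the presented heading and an angular offset of the coordinate
    # frame, fit by minimizing wrapped angular error.
    import numpy as np
    from scipy.optimize import least_squares

    def predicted_heading(theta_deg, lateral_gain, offset_deg):
        t = np.deg2rad(theta_deg - offset_deg)
        return np.rad2deg(np.arctan2(lateral_gain * np.sin(t), np.cos(t))) + offset_deg

    def wrapped_error(params, theta_deg, reported_deg):
        err = reported_deg - predicted_heading(theta_deg, *params)
        return (err + 180.0) % 360.0 - 180.0          # wrap residuals to [-180, 180)

    theta = np.arange(0, 360, 5)                      # presented headings, 5 deg steps
    reported = np.load("reported_headings.npy")       # hypothetical responses, one per heading
    fit = least_squares(wrapped_error, x0=[1.0, 0.0], args=(theta, reported))
    print("lateral gain:", fit.x[0], "offset (deg):", fit.x[1])
    ```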

  3. “Global” visual training and extent of transfer in amblyopic macaque monkeys

    PubMed Central

    Kiorpes, Lynne; Mangal, Paul

    2015-01-01

    Perceptual learning is gaining acceptance as a potential treatment for amblyopia in adults and children beyond the critical period. Many perceptual learning paradigms result in very specific improvement that does not generalize beyond the training stimulus, closely related stimuli, or visual field location. To be of use in amblyopia, a less specific effect is needed. To address this problem, we designed a more general training paradigm intended to effect improvement in visual sensitivity across tasks and domains. We used a “global” visual stimulus, random dot motion direction discrimination with 6 training conditions, and tested for posttraining improvement on a motion detection task and 3 spatial domain tasks (contrast sensitivity, Vernier acuity, Glass pattern detection). Four amblyopic macaques practiced the motion discrimination with their amblyopic eye for at least 20,000 trials. All showed improvement, defined as a change of at least a factor of 2, on the trained task. In addition, all animals showed improvements in sensitivity on at least some of the transfer test conditions, mainly the motion detection task; transfer to the spatial domain was inconsistent but best at fine spatial scales. However, the improvement on the transfer tasks was largely not retained at long-term follow-up. Our generalized training approach is promising for amblyopia treatment, but sustaining improved performance may require additional intervention. PMID:26505868

  4. Accessory stimulus modulates executive function during stepping task

    PubMed Central

    Watanabe, Tatsunori; Koyama, Soichiro; Tanabe, Shigeo

    2015-01-01

    When multiple sensory modalities are presented simultaneously, reaction time can be reduced while interference increases. The purpose of this research was to examine the effects of task-irrelevant acoustic accessory stimuli, presented simultaneously with visual imperative stimuli, on executive function during stepping. Executive functions were assessed by analyzing temporal events and errors in the initial weight transfer of the postural responses prior to a step (anticipatory postural adjustment errors). Eleven healthy young adults stepped forward in response to a visual stimulus. We applied a choice reaction time task and the Simon task, which consisted of congruent and incongruent conditions. Accessory stimuli were randomly presented with the visual stimuli. Compared with trials without accessory stimuli, anticipatory postural adjustment error rates were higher in trials with accessory stimuli in the incongruent condition, and reaction times were shorter in trials with accessory stimuli in all task conditions. When trials were divided according to whether an anticipatory postural adjustment error occurred, reaction times in trials with anticipatory postural adjustment errors were reduced more than those in trials without such errors in the incongruent condition. These results suggest that accessory stimuli modulate the initial motor programming of stepping by lowering the decision threshold and, exclusively under spatial incompatibility, by facilitating automatic response activation. The present findings advance the knowledge of intersensory judgment processes during stepping and may aid in the development of intervention and evaluation tools for individuals at risk of falls. PMID:25925321

  5. Successful inhibitory control over an immediate reward is associated with attentional disengagement in visual processing areas.

    PubMed

    O'Connor, David A; Rossiter, Sarah; Yücel, Murat; Lubman, Dan I; Hester, Robert

    2012-09-01

    We examined the neural basis of the capacity to resist an immediately rewarding stimulus in order to obtain a larger delayed reward. This was investigated with a Go/No-go task employing No-go targets that provided two types of reward outcomes. These were contingent on inhibitory control performance: failure to inhibit Reward No-go targets provided a small monetary reward with immediate feedback; while successful inhibitory control resulted in larger rewards with delayed feedback based on the highest number of consecutive inhibitions. We observed faster Go trial responses with maintained levels of inhibition accuracy during the Reward No-go condition compared to a neutral No-go condition. Comparisons between conditions of BOLD activity showed successful inhibitory control over rewarding No-Go targets was associated with hypoactivity in regions previously associated with regulating emotion and inhibitory control, including insula and right inferior frontal gyrus. In addition, regions previously associated with visual processing centers that are modulated as a function of visual attention, namely the left fusiform and right superior temporal gyri, were hypoactive. These findings suggest a role for attentional disengagement as an aid to withholding response over a rewarding stimulus and are consistent with the notion that gratification can be delayed by directing attention away from immediate rewards. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.

  6. Multiple foci of spatial attention in multimodal working memory.

    PubMed

    Katus, Tobias; Eimer, Martin

    2016-11-15

    The maintenance of sensory information in working memory (WM) is mediated by the attentional activation of stimulus representations that are stored in perceptual brain regions. Using event-related potentials (ERPs), we measured tactile and visual contralateral delay activity (tCDA/CDA components) in a bimodal WM task to concurrently track the attention-based maintenance of information stored in anatomically segregated (somatosensory and visual) brain areas. Participants received tactile and visual sample stimuli on both sides, and in different blocks, memorized these samples on the same side or on opposite sides. After a retention delay, memory was unpredictably tested for touch or vision. In the same side blocks, tCDA and CDA components simultaneously emerged over the same hemisphere, contralateral to the memorized tactile/visual sample set. In opposite side blocks, these two components emerged over different hemispheres, but had the same sizes and onset latencies as in the same side condition. Our results reveal distinct foci of tactile and visual spatial attention that were concurrently maintained on task-relevant stimulus representations in WM. The independence of spatially-specific biasing mechanisms for tactile and visual WM content suggests that multimodal information is stored in distributed perceptual brain areas that are activated through modality-specific processes that can operate simultaneously and largely independently of each other. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Finding an emotional face in a crowd: emotional and perceptual stimulus factors influence visual search efficiency.

    PubMed

    Lundqvist, Daniel; Bruce, Neil; Öhman, Arne

    2015-01-01

    In this article, we examine how emotional and perceptual stimulus factors influence visual search efficiency. In an initial task, we ran a visual search task using a large number of target/distractor emotion combinations. In two subsequent tasks, we then assessed measures of perceptual (rated and computational distances) and emotional (rated valence, arousal and potency) stimulus properties. In a series of regression analyses, we then explored the degree to which target salience (the size of target/distractor dissimilarities) on these emotional and perceptual measures predicts the outcome on search efficiency measures (response times and accuracy) from the visual search task. The results show that both emotional and perceptual stimulus salience contribute to visual search efficiency and that, among the emotional measures, arousal salience was more influential than valence salience. The importance of the arousal factor may help explain the contradictory history of results within this field.
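
    The regression step described above reduces to predicting a search-efficiency measure from the set of salience predictors. A minimal sketch follows; the file and predictor names are assumptions standing in for the measures in the abstract.

    ```python
    # Hedged sketch: multiple regression of mean response time per target/distractor
    # combination on emotional and perceptual salience measures. Column names are
    # hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("salience_and_search.csv")
    model = smf.ols(
        "rt ~ arousal_salience + valence_salience + potency_salience"
        " + rated_distance + computational_distance",
        data=df,
    ).fit()
    print(model.summary())   # coefficient table shows which salience measures predict RT
    ```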

  8. McGurk stimuli for the investigation of multisensory integration in cochlear implant users: The Oldenburg Audio Visual Speech Stimuli (OLAVS).

    PubMed

    Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan

    2017-06-01

    The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.

  9. Fluctuations of visual awareness: Combining motion-induced blindness with binocular rivalry

    PubMed Central

    Jaworska, Katarzyna; Lages, Martin

    2014-01-01

    Binocular rivalry (BR) and motion-induced blindness (MIB) are two phenomena of visual awareness where perception alternates between multiple states despite constant retinal input. Both phenomena have been extensively studied, but the underlying processing remains unclear. It has been suggested that BR and MIB involve the same neural mechanism, but how the two phenomena compete for visual awareness in the same stimulus has not been systematically investigated. Here we introduce BR in a dichoptic stimulus display that can also elicit MIB and examine fluctuations of visual awareness over the course of each trial. Exploiting this paradigm we manipulated stimulus characteristics that are known to influence MIB and BR. In two experiments we found that effects on multistable percepts were incompatible with the idea of a common oscillator. The results suggest instead that local and global stimulus attributes can affect the dynamics of each percept differently. We conclude that the two phenomena of visual awareness share basic temporal characteristics but are most likely influenced by processing at different stages within the visual system. PMID:25240063

  10. Differential effects of ongoing EEG beta and theta power on memory formation

    PubMed Central

    Scholz, Sebastian; Schneider, Signe Luisa

    2017-01-01

    Recently, elevated ongoing pre-stimulus beta power (13–17 Hz) at encoding has been associated with subsequent memory formation for visual stimulus material. It is unclear whether this activity is merely specific to visual processing or whether it reflects a state facilitating general memory formation, independent of stimulus modality. To answer that question, the present study investigated the relationship between neural pre-stimulus oscillations and verbal memory formation in different sensory modalities. For that purpose, a within-subject design was employed to explore differences between successful and failed memory formation in the visual and auditory modality. Furthermore, associative memory was addressed by presenting the stimuli in combination with background images. Results revealed that similar EEG activity in the low beta frequency range (13–17 Hz) is associated with subsequent memory success, independent of stimulus modality. Elevated power prior to stimulus onset differentiated successful from failed memory formation. In contrast, differential effects between modalities were found in the theta band (3–7 Hz), with an increased oscillatory activity before the onset of later remembered visually presented words. In addition, pre-stimulus theta power dissociated between successful and failed encoding of associated context, independent of the stimulus modality of the item itself. We therefore suggest that increased ongoing low beta activity reflects a memory promoting state, which is likely to be moderated by modality-independent attentional or inhibitory processes, whereas high ongoing theta power is suggested as an indicator of the enhanced binding of incoming interlinked information. PMID:28192459
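
    The pre-stimulus measures discussed above are band-limited power estimates computed on the epoch preceding each item. One common way to obtain such estimates is a Welch periodogram averaged within the band of interest, sketched below; the sampling rate, array shapes, and file name are assumptions, not the study's actual pipeline.

    ```python
    # Hedged sketch of single-trial pre-stimulus band power (low beta 13-17 Hz,
    # theta 3-7 Hz) from a (n_trials, n_samples) array of pre-stimulus EEG epochs.
    import numpy as np
    from scipy.signal import welch

    fs = 500.0                                   # sampling rate in Hz (assumed)
    prestim = np.load("prestim_epochs.npy")      # hypothetical (n_trials, n_samples) array

    def band_power(epochs, fs, fmin, fmax):
        freqs, psd = welch(epochs, fs=fs, nperseg=min(256, epochs.shape[-1]), axis=-1)
        mask = (freqs >= fmin) & (freqs <= fmax)
        return psd[..., mask].mean(axis=-1)      # mean power per trial within the band

    low_beta = band_power(prestim, fs, 13, 17)
    theta = band_power(prestim, fs, 3, 7)
    # these per-trial values can then be compared between subsequently remembered
    # and forgotten items (a subsequent-memory contrast)
    ```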

  11. Intermittent regime of brain activity at the early, bias-guided stage of perceptual learning.

    PubMed

    Nikolaev, Andrey R; Gepshtein, Sergei; van Leeuwen, Cees

    2016-11-01

    Perceptual learning improves visual performance. Among the plausible mechanisms of learning, reduction of perceptual bias has been studied the least. Perceptual bias may compensate for lack of stimulus information, but excessive reliance on bias diminishes visual discriminability. We investigated the time course of bias in a perceptual grouping task and studied the associated cortical dynamics in spontaneous and evoked EEG. Participants reported the perceived orientation of dot groupings in ambiguous dot lattices. Performance improved over a 1-hr period as indicated by the proportion of trials in which participants preferred dot groupings favored by dot proximity. The proximity-based responses were compromised by perceptual bias: Vertical groupings were sometimes preferred to horizontal ones, independent of dot proximity. In the evoked EEG activity, greater amplitude of the N1 component for horizontal than vertical responses indicated that the bias was most prominent in conditions of reduced visual discriminability. The prominence of bias decreased in the course of the experiment. Although the bias was still prominent, prestimulus activity was characterized by an intermittent regime of alternating modes of low and high alpha power. Responses were more biased in the former mode, indicating that perceptual bias was deployed actively to compensate for stimulus uncertainty. Thus, early stages of perceptual learning were characterized by episodes of greater reliance on prior visual preferences, alternating with episodes of receptivity to stimulus information. In the course of learning, the former episodes disappeared, and biases reappeared only infrequently.

  12. Dissociating verbal and nonverbal audiovisual object processing.

    PubMed

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  13. Recent Visual Experience Shapes Visual Processing in Rats through Stimulus-Specific Adaptation and Response Enhancement.

    PubMed

    Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans

    2017-03-20

    From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.
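
    The oddball protocol referred to above amounts to a long train of one stimulus with a rare deviant interleaved at low probability. A toy generator is sketched below; the trial count, deviant probability, and adjacency constraint are illustrative choices, not the parameters of this study.

    ```python
    # Toy sketch of an oddball trial sequence: mostly "standard" presentations with a
    # rare "deviant", never allowing two deviants in a row.
    import random

    def oddball_sequence(n_trials=400, p_deviant=0.1, seed=0):
        rng = random.Random(seed)
        seq = []
        for _ in range(n_trials):
            can_deviate = not seq or seq[-1] == "standard"
            seq.append("deviant" if can_deviate and rng.random() < p_deviant else "standard")
        return seq

    trials = oddball_sequence()
    print(trials.count("deviant"), "deviants in", len(trials), "trials")
    ```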

  14. Perceptual expertise and top-down expectation of musical notation engages the primary visual cortex.

    PubMed

    Wong, Yetta Kwailing; Peng, Cynthia; Fratus, Kristyn N; Woodman, Geoffrey F; Gauthier, Isabel

    2014-08-01

    Most theories of visual processing propose that object recognition is achieved in higher visual cortex. However, we show that category selectivity for musical notation can be observed in the first ERP component called the C1 (measured 40-60 msec after stimulus onset) with music-reading expertise. Moreover, the C1 note selectivity was observed only when the stimulus category was blocked but not when the stimulus category was randomized. Under blocking, the C1 activity for notes predicted individual music-reading ability, and behavioral judgments of musical stimuli reflected music-reading skill. Our results challenge current theories of object recognition, indicating that the primary visual cortex can be selective for musical notation within the initial feedforward sweep of activity with perceptual expertise and with a testing context that is consistent with the expertise training, such as blocking the stimulus category for music reading.

  15. Honeybees in a virtual reality environment learn unique combinations of colour and shape.

    PubMed

    Rusch, Claire; Roth, Eatai; Vinauger, Clément; Riffell, Jeffrey A

    2017-10-01

    Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain. © 2017. Published by The Company of Biologists Ltd.

  16. Effects of Systematically Depriving Access to Computer-Based Stimuli on Choice Responding with Individuals with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Reyer, Howard S.; Sturmey, Peter

    2009-01-01

    Three adults with intellectual disabilities participated to investigate the effects of reinforcer deprivation on choice responding. The experimenter identified the most preferred audio-visual (A-V) stimulus and the least preferred visual-only stimulus for each participant. Participants did not have access to the A-V stimulus for 5 min, 5 and 24 h.…

  17. Effects of Visual Speech on Early Auditory Evoked Fields - From the Viewpoint of Individual Variance.

    PubMed

    Yahata, Izumi; Kawase, Tetsuaki; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio

    2017-01-01

    The effects of visual speech (the moving image of the speaker's face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions, the moving image of the same speaker's face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from speaker's face using a strong Gaussian filter: control condition). On average, latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between right and left hemisphere. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information.

  18. Perceptual Grouping Enhances Visual Plasticity

    PubMed Central

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

    Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus feature that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity. PMID:23301100

  19. Spatial attention facilitates assembly of the briefest percepts: Electrophysiological evidence from color fusion.

    PubMed

    Akyürek, Elkan G; van Asselt, E Manon

    2015-12-01

    When two different color stimuli are presented in rapid succession, the resulting percept is sometimes that of a mixture of both colors, due to a perceptual process called color fusion. Although color fusion might seem to occur very early in the visual pathway, and only happens across the briefest of stimulus presentation intervals (< 50 ms), the present study showed that spatial attention can alter the fusion process. In a series of experiments, spatial cues were presented that either validly indicated the location of a pair of (different) color stimuli in successive stimulus arrays, or did not, pointing toward isoluminant gray distractors in the other visual hemifield. Increased color fusion was observed for valid cues across a range of stimulus durations, at the expense of individual color reports. By contrast, perception of repeated, same-color stimulus pairs did not change, suggesting that the enhancement was specific to fusion, not color discrimination per se. Electrophysiological measures furthermore showed that the amplitude of the N1, N2pc, and P3 components of the ERP were differentially modulated during the perception of individual and fused colors, as a function of cueing and stimulus duration. Fusion itself, collapsed across cueing conditions, was reflected uniquely in N1 amplitude. Overall, the results suggest that spatial attention enhances color fusion and decreases competition between stimuli, constituting an adaptive slowdown in service of temporal integration. © 2015 Society for Psychophysiological Research.

  20. Stimulation of the substantia nigra influences the specification of memory-guided saccades

    PubMed Central

    Mahamed, Safraaz; Garrison, Tiffany J.; Shires, Joel

    2013-01-01

    In the absence of sensory information, we rely on past experience or memories to guide our actions. Because previous experimental and clinical reports implicate basal ganglia nuclei in the generation of movement in the absence of sensory stimuli, we ask here whether one output nucleus of the basal ganglia, the substantia nigra pars reticulata (nigra), influences the specification of an eye movement in the absence of sensory information to guide the movement. We manipulated the level of activity of neurons in the nigra by introducing electrical stimulation to the nigra at different time intervals while monkeys made saccades to different locations in two conditions: one in which the target location remained visible and a second in which the target location appeared only briefly, requiring information stored in memory to specify the movement. Electrical manipulation of the nigra occurring during the delay period of the task, when information about the target was maintained in memory, altered the direction and the occurrence of subsequent saccades. Stimulation during other intervals of the memory task or during the delay period of the visually guided saccade task had less effect on eye movements. On stimulated trials, and only when the visual stimulus was absent, monkeys occasionally (∼20% of the time) failed to make saccades. When monkeys made saccades in the absence of a visual stimulus, stimulation of the nigra resulted in a rotation of the endpoints ipsilaterally (∼2°) and increased the reaction time of contralaterally directed saccades. When the visual stimulus was present, stimulation of the nigra resulted in no significant rotation and decreased the reaction time of contralaterally directed saccades slightly. Based on these measurements, stimulation during the delay period of the memory-guided saccade task influenced the metrics of saccades much more than did stimulation during the same period of the visually guided saccade task. Because these effects occurred with manipulation of nigral activity well before the initiation of saccades and in trials in which the visual stimulus was absent, we conclude that information from the basal ganglia influences the specification of an action as it is evolving primarily during performance of memory-guided saccades. When visual information is available to guide the specification of the saccade, as occurs during visually guided saccades, basal ganglia information is less influential. PMID:24259551

  1. Effects of vicarious pain on self-pain perception: investigating the role of awareness

    PubMed Central

    Terrighena, Esslin L; Lu, Ge; Yuen, Wai Ping; Lee, Tatia MC; Keuper, Kati

    2017-01-01

    The observation of pain in others may enhance or reduce self-pain, yet the boundary conditions and factors that determine the direction of such effects are poorly understood. The current study set out to show that visual stimulus awareness plays a crucial role in determining whether vicarious pain primarily activates behavioral defense systems that enhance pain sensitivity and stimulate withdrawal or appetitive systems that attenuate pain sensitivity and stimulate approach. We employed a mixed factorial design with the between-subject factors exposure time (subliminal vs optimal) and vicarious pain (pain vs no pain images), and the within-subject factor session (baseline vs trial) to investigate how visual awareness of vicarious pain images affects subsequent self-pain in the cold-pressor test. Self-pain tolerance, intensity and unpleasantness were evaluated in a sample of 77 healthy participants. Results revealed significant interactions of exposure time and vicarious pain in all three dependent measures. In the presence of visual awareness (optimal condition), vicarious pain compared to no-pain elicited overall enhanced self-pain sensitivity, indexed by reduced pain tolerance and enhanced ratings of pain intensity and unpleasantness. Conversely, in the absence of visual awareness (subliminal condition), vicarious pain evoked decreased self-pain intensity and unpleasantness while pain tolerance remained unaffected. These findings suggest that the activation of defense mechanisms by vicarious pain depends on relatively elaborate cognitive processes, while – strikingly – the appetitive system is activated in highly automatic manner independent from stimulus awareness. Such mechanisms may have evolved to facilitate empathic, protective approach responses toward suffering individuals, ensuring survival of the protective social group. PMID:28831270

  2. Language experience shapes early electrophysiological responses to visual stimuli: the effects of writing system, stimulus length, and presentation duration.

    PubMed

    Xue, Gui; Jiang, Ting; Chen, Chuansheng; Dong, Qi

    2008-02-15

    How language experience affects visual word recognition has been a topic of intense interest. Using event-related potentials (ERPs), the present study compared the early electrophysiological responses (i.e., N1) to familiar and unfamiliar writings under different conditions. Thirteen native Chinese speakers (with English as their second language) were recruited to passively view four types of scripts: Chinese (familiar logographic writings), English (familiar alphabetic writings), Korean Hangul (unfamiliar logographic writings), and Tibetan (unfamiliar alphabetic writings). Stimuli also differed in lexicality (words vs. non-words, for familiar writings only), length (characters/letters vs. words), and presentation duration (100 ms vs. 750 ms). We found no significant differences between words and non-words, and the effect of language experience (familiar vs. unfamiliar) was significantly modulated by stimulus length and writing system, and to a less degree, by presentation duration. That is, the language experience effect (i.e., a stronger N1 response to familiar writings than to unfamiliar writings) was significant only for alphabetic letters, but not for alphabetic and logographic words. The difference between Chinese characters and unfamiliar logographic characters was significant under the condition of short presentation duration, but not under the condition of long presentation duration. Long stimuli elicited a stronger N1 response than did short stimuli, but this effect was significantly attenuated for familiar writings. These results suggest that N1 response might not reliably differentiate familiar and unfamiliar writings. More importantly, our results suggest that N1 is modulated by visual, linguistic, and task factors, which has important implications for the visual expertise hypothesis.

  3. Effect of visual field locus and oscillation frequencies on posture control in an ecological environment.

    PubMed

    Piponnier, Jean-Claude; Hanssens, Jean-Marie; Faubert, Jocelyn

    2009-01-14

    To examine the respective roles of central and peripheral vision in the control of posture, body sway amplitude (BSA) and postural perturbations (given by velocity root mean square or vRMS) were calculated in a group of 19 healthy young adults. The stimulus was a 3D tunnel, either static or moving sinusoidally in the anterior-posterior direction. There were nine visual field conditions: four central conditions (4, 7, 15, and 30 degrees); four peripheral conditions (central occlusions of 4, 7, 15, and 30 degrees); and a full visual field condition (FF). The virtual tunnel respected all the aspects of a real physical tunnel (i.e., stereoscopy and size increase with proximity). The results show that, under static conditions, central and peripheral visual fields appear to have equal importance for the control of stance. In the presence of an optic flow, peripheral vision plays a crucial role in the control of stance, since it is responsible for a compensatory sway, whereas central vision has an accessory role that seems to be related to spatial orientation.
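
    The two outcome measures named above, body sway amplitude and velocity root mean square, can be computed directly from an anterior-posterior position trace. The sketch below shows one plausible reading of those definitions; the sampling rate, file name, and the use of peak-to-peak range for BSA are assumptions rather than the paper's exact formulas.

    ```python
    # Hedged sketch: body sway amplitude (taken here as peak-to-peak range) and
    # velocity root mean square (vRMS) from an anterior-posterior position trace.
    import numpy as np

    fs = 60.0                                  # tracking rate in Hz (assumed)
    pos = np.load("ap_position.npy")           # hypothetical anterior-posterior trace, in meters

    bsa = pos.max() - pos.min()                # sway amplitude as peak-to-peak excursion
    velocity = np.gradient(pos) * fs           # position differentiated to velocity (m/s)
    v_rms = np.sqrt(np.mean(velocity ** 2))    # velocity root mean square
    print(f"BSA = {bsa:.4f} m, vRMS = {v_rms:.4f} m/s")
    ```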

  4. A method for closed-loop presentation of sensory stimuli conditional on the internal brain-state of awake animals

    PubMed Central

    Rutishauser, Ueli; Kotowicz, Andreas; Laurent, Gilles

    2013-01-01

    Brain activity often consists of interactions between internal—or on-going—and external—or sensory—activity streams, resulting in complex, distributed patterns of neural activity. Investigation of such interactions could benefit from closed-loop experimental protocols in which one stream can be controlled depending on the state of the other. We describe here methods to present rapid and precisely timed visual stimuli to awake animals, conditional on features of the animal’s on-going brain state; those features are the presence, power and phase of oscillations in local field potentials (LFP). The system can process up to 64 channels in real time. We quantified its performance using simulations, synthetic data and animal experiments (chronic recordings in the dorsal cortex of awake turtles). The delay from detection of an oscillation to the onset of a visual stimulus on an LCD screen was 47.5 ms and visual-stimulus onset could be locked to the phase of ongoing oscillations at any frequency ≤40 Hz. Our software’s architecture is flexible, allowing on-the-fly modifications by experimenters and the addition of new closed-loop control and analysis components through plugins. The source code of our system “StimOMatic” is available freely as open-source. PMID:23473800
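
    The core of such a closed-loop system is detecting an oscillation and estimating its phase quickly enough to trigger a stimulus at a chosen phase. The sketch below illustrates the signal-processing idea offline with a non-causal Hilbert transform; a real-time implementation such as the one described above must work causally within the reported latency budget, so treat the filter choice, band, and threshold here as assumptions.

    ```python
    # Offline, hedged sketch of phase-conditional triggering: band-pass the LFP,
    # estimate instantaneous phase, and mark samples near a target phase at which a
    # visual stimulus would be presented.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000.0                                 # sampling rate in Hz (assumed)
    lfp = np.load("lfp_channel.npy")            # hypothetical single-channel LFP

    b, a = butter(4, [15 / (fs / 2), 25 / (fs / 2)], btype="band")   # assumed 15-25 Hz band
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))                   # instantaneous phase (rad)

    target_phase = 0.0                          # e.g., the oscillation peak
    dphi = np.angle(np.exp(1j * (phase - target_phase)))             # wrapped phase difference
    trigger_samples = np.where(np.abs(dphi) < 0.05)[0]
    print("candidate trigger samples:", trigger_samples[:10])
    ```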

  5. Oscillatory encoding of visual stimulus familiarity.

    PubMed

    Kissinger, Samuel T; Pak, Alexandr; Tang, Yu; Masmanidis, Sotiris C; Chubykin, Alexander A

    2018-06-18

    Familiarity of the environment changes the way we perceive and encode incoming information. However, the neural substrates underlying this phenomenon are poorly understood. Here we describe a new form of experience-dependent low frequency oscillations in the primary visual cortex (V1) of awake adult male mice. The oscillations emerged in visually evoked potentials (VEPs) and single-unit activity following repeated visual stimulation. The oscillations were sensitive to the spatial frequency content of a visual stimulus and required the muscarinic acetylcholine receptors (mAChRs) for their induction and expression. Finally, ongoing visually evoked theta (4-6 Hz) oscillations boost the VEP amplitude of incoming visual stimuli if the stimuli are presented at the high excitability phase of the oscillations. Our results demonstrate that an oscillatory code can be used to encode familiarity and serves as a gate for oncoming sensory inputs. Significance Statement. Previous experience can influence the processing of incoming sensory information by the brain and alter perception. However, the mechanistic understanding of how this process takes place is lacking. We have discovered that persistent low frequency oscillations in the primary visual cortex encode information about familiarity and the spatial frequency of the stimulus. These familiarity evoked oscillations influence neuronal responses to the oncoming stimuli in a way that depends on the oscillation phase. Our work demonstrates a new mechanism of visual stimulus feature detection and learning. Copyright © 2018 the authors.

  6. Affective valence, stimulus attributes, and P300: color vs. black/white and normal vs. scrambled images.

    PubMed

    Cano, Maya E; Class, Quetzal A; Polich, John

    2009-01-01

    Pictures from the International Affective Picture System (IAPS) were selected to manipulate affective valence (unpleasant, neutral, pleasant) while keeping arousal level the same. The pictures were presented in an oddball paradigm, with a visual pattern used as the standard stimulus. Subjects pressed a button whenever a target was detected. Experiment 1 presented normal pictures in color and black/white. Control stimuli were constructed for both the color and black/white conditions by randomly rearranging 1 cm square fragments of each original picture to produce a "scrambled" image. Experiment 2 presented the same normal color pictures with large, medium, and small scrambled conditions (2, 1, and 0.5 cm squares). The P300 event-related brain potential demonstrated larger amplitudes over frontal areas for positive compared to negative or neutral images for normal color pictures in both experiments. Attenuated and nonsignificant valence effects were obtained for black/white images. Scrambled stimuli in each study yielded no valence effects but demonstrated typical P300 topography that increased from frontal to parietal areas. The findings suggest that P300 amplitude is sensitive to affective picture valence in the absence of stimulus arousal differences, and that stimulus color contributes to ERP valence effects.
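
    The "scrambled" control images described above are built by cutting each picture into small squares and randomly rearranging them. A minimal sketch of that construction follows; the block size in pixels is a stand-in for the physical 1 cm squares, which depend on the display geometry.

    ```python
    # Hedged sketch: scramble an image by randomly permuting non-overlapping square tiles.
    import random
    import numpy as np

    def scramble(image, block=32, seed=0):
        """Return a copy of an H x W x C image with block x block tiles randomly rearranged."""
        h = image.shape[0] // block * block
        w = image.shape[1] // block * block
        img = image[:h, :w].copy()
        tiles = [img[r:r + block, c:c + block].copy()
                 for r in range(0, h, block) for c in range(0, w, block)]
        random.Random(seed).shuffle(tiles)
        k = 0
        for r in range(0, h, block):
            for c in range(0, w, block):
                img[r:r + block, c:c + block] = tiles[k]
                k += 1
        return img

    picture = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)   # stand-in image
    control = scramble(picture)
    ```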

  7. Controlling the spotlight of attention: visual span size and flexibility in schizophrenia.

    PubMed

    Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M

    2011-10-01

    The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
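
    The gaze-contingent moving-window paradigm mentioned above masks everything outside a region centered on the current gaze position on every display refresh, so the window size controls how much of the search display is visible per fixation. The sketch below shows only the masking step; the mask shape, gray level, and function names are illustrative and not tied to any particular experiment toolkit.

    ```python
    # Hedged sketch of the per-frame masking step of a gaze-contingent moving window.
    import numpy as np

    def apply_moving_window(display, gaze_xy, radius_px):
        """Return a copy of a grayscale display (H x W) with pixels outside a circular
        window around gaze_xy = (x, y) replaced by mid-gray."""
        h, w = display.shape
        ys, xs = np.ogrid[:h, :w]
        inside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius_px ** 2
        masked = np.full_like(display, 128)     # mid-gray outside the window
        masked[inside] = display[inside]
        return masked

    frame = np.random.randint(0, 256, (768, 1024), dtype=np.uint8)   # stand-in search display
    visible = apply_moving_window(frame, gaze_xy=(512, 384), radius_px=120)
    ```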

  8. Size matters: large objects capture attention in visual search.

    PubMed

    Proulx, Michael J

    2010-12-23

    Can objects or events ever capture one's attention in a purely stimulus-driven manner? A recent review of the literature set out the criteria required to find stimulus-driven attentional capture independent of goal-directed influences, and concluded that no published study has satisfied that criteria. Here visual search experiments assessed whether an irrelevantly large object can capture attention. Capture of attention by this static visual feature was found. The results suggest that a large object can indeed capture attention in a stimulus-driven manner and independent of displaywide features of the task that might encourage a goal-directed bias for large items. It is concluded that these results are either consistent with the stimulus-driven criteria published previously or alternatively consistent with a flexible, goal-directed mechanism of saliency detection.

  9. Non-Instrumental Movement Inhibition (NIMI) Differentially Suppresses Head and Thigh Movements during Screenic Engagement: Dependence on Interaction

    PubMed Central

    Witchel, Harry J.; Santos, Carlos P.; Ackah, James K.; Westling, Carina E. I.; Chockalingam, Nachiappan

    2016-01-01

    Background: Estimating engagement levels from postural micromovements has been summarized by some researchers as: increased proximity to the screen is a marker for engagement, while increased postural movement is a signal for disengagement or negative affect. However, these findings are inconclusive: the movement hypothesis challenges other findings of dyadic interaction in humans, and experiments on the positional hypothesis diverge from it. Hypotheses: (1) Under controlled conditions, adding a relevant visual stimulus to an auditory stimulus will preferentially result in Non-Instrumental Movement Inhibition (NIMI) of the head. (2) When instrumental movements are eliminated and computer-interaction rate is held constant, for two identically-structured stimuli, cognitive engagement (i.e., interest) will result in measurable NIMI of the body generally. Methods: Twenty-seven healthy participants were seated in front of a computer monitor and speakers. Discrete 3-min stimuli were presented with interactions mediated via a handheld trackball without any keyboard, to minimize instrumental movements of the participant's body. Music videos and audio-only music were used to test hypothesis (1). Time-sensitive, highly interactive stimuli were used to test hypothesis (2). Subjective responses were assessed via visual analog scales. The computer users' movements were quantified using video motion tracking from the lateral aspect. Repeated measures ANOVAs with Tukey post hoc comparisons were performed. Results: For two equivalently-engaging music videos, eliminating the visual content elicited significantly increased non-instrumental movements of the head (while also decreasing subjective engagement); a highly engaging user-selected piece of favorite music led to further increased non-instrumental movement. For two comparable reading tasks, the more engaging reading significantly inhibited (42%) movement of the head and thigh; however, when a highly engaging video game was compared to the boring reading, even though the reading task and the game had similar levels of interaction (trackball clicks), only thigh movement was significantly inhibited, not head movement. Conclusions: NIMI can be elicited by adding a relevant visual accompaniment to an audio-only stimulus or by making a stimulus cognitively engaging. However, these results presume that all other factors are held constant, because total movement rates can be affected by cognitive engagement, instrumental movements, visual requirements, and the time-sensitivity of the stimulus. PMID:26941666

  10. Non-Instrumental Movement Inhibition (NIMI) Differentially Suppresses Head and Thigh Movements during Screenic Engagement: Dependence on Interaction.

    PubMed

    Witchel, Harry J; Santos, Carlos P; Ackah, James K; Westling, Carina E I; Chockalingam, Nachiappan

    2016-01-01

    Estimating engagement levels from postural micromovements has been summarized by some researchers as: increased proximity to the screen is a marker for engagement, while increased postural movement is a signal for disengagement or negative affect. However, these findings are inconclusive: the movement hypothesis challenges other findings of dyadic interaction in humans, and experiments on the positional hypothesis diverge from it. (1) Under controlled conditions, adding a relevant visual stimulus to an auditory stimulus will preferentially result in Non-Instrumental Movement Inhibition (NIMI) of the head. (2) When instrumental movements are eliminated and computer-interaction rate is held constant, for two identically-structured stimuli, cognitive engagement (i.e., interest) will result in measurable NIMI of the body generally. Twenty-seven healthy participants were seated in front of a computer monitor and speakers. Discrete 3-min stimuli were presented with interactions mediated via a handheld trackball without any keyboard, to minimize instrumental movements of the participant's body. Music videos and audio-only music were used to test hypothesis (1). Time-sensitive, highly interactive stimuli were used to test hypothesis (2). Subjective responses were assessed via visual analog scales. The computer users' movements were quantified using video motion tracking from the lateral aspect. Repeated measures ANOVAs with Tukey post hoc comparisons were performed. For two equivalently-engaging music videos, eliminating the visual content elicited significantly increased non-instrumental movements of the head (while also decreasing subjective engagement); a highly engaging user-selected piece of favorite music led to further increased non-instrumental movement. For two comparable reading tasks, the more engaging reading significantly inhibited (42%) movement of the head and thigh; however, when a highly engaging video game was compared to the boring reading, even though the reading task and the game had similar levels of interaction (trackball clicks), only thigh movement was significantly inhibited, not head movement. NIMI can be elicited by adding a relevant visual accompaniment to an audio-only stimulus or by making a stimulus cognitively engaging. However, these results presume that all other factors are held constant, because total movement rates can be affected by cognitive engagement, instrumental movements, visual requirements, and the time-sensitivity of the stimulus.

  11. Orienting attention to visual or verbal/auditory imagery differentially impairs the processing of visual stimuli.

    PubMed

    Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio

    2016-05-15

    When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of thought content on sensory attenuation are still unknown. The present study aims to assess whether the representational format of thoughts, such as visual imagery or inner speech, might differentially affect the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three different conditions: executing a task on the visual probe (externally oriented attention), and two conditions involving inward-turned attention, i.e., generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to the sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When both representational formats were compared, the visual imagery condition showed stronger attenuation in sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in internal thought compared with the external task, with visual imagery showing even more alpha power than the inner speech condition. Our results show, for the first time to our knowledge, that visual attentional processing to external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thoughts. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. The Influence of Stimulus Material on Attention and Performance in the Visual Expectation Paradigm: A Longitudinal Study with 3- And 6-Month-Old Infants

    ERIC Educational Resources Information Center

    Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vierhaus, Marc; Spangler, Sibylle; Borchert, Sonja; Freitag, Claudia; Goertz, Claudia; Graf, Frauke; Gudi, Helene; Kolling, Thorsten; Lamm, Bettina; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun

    2012-01-01

    This longitudinal study examined the influence of stimulus material on attention and expectation learning in the visual expectation paradigm. Female faces were used as attention-attracting stimuli, and non-meaningful visual stimuli of comparable complexity (Greebles) were used as low attention-attracting stimuli. Expectation learning performance…

  13. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    PubMed

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.

  14. Discriminative stimuli that control instrumental tobacco-seeking by human smokers also command selective attention.

    PubMed

    Hogarth, Lee; Dickinson, Anthony; Duka, Theodora

    2003-08-01

    Incentive salience theory states that acquired bias in selective attention for stimuli associated with tobacco-smoke reinforcement controls the selective performance of tobacco-seeking and tobacco-taking behaviour. To support this theory, we assessed whether a stimulus that had acquired control of a tobacco-seeking response in a discrimination procedure would command the focus of visual attention in a subsequent test phase. Smokers received discrimination training in which an instrumental key-press response was followed by tobacco-smoke reinforcement when one visual discriminative stimulus (S+) was present, but not when another stimulus (S-) was present. The skin conductance response to the S+ and S- assessed whether Pavlovian conditioning to the S+ had taken place. In a subsequent test phase, the S+ and S- were presented in the dot-probe task and the allocation of the focus of visual attention to these stimuli was measured. Participants learned to perform the instrumental tobacco-seeking response selectively in the presence of the S+ relative to the S-, and showed a greater skin conductance response to the S+ than the S-. In the subsequent test phase, participants allocated the focus of visual attention to the S+ in preference to the S-. Correlation analysis revealed that the visual attentional bias for the S+ was positively associated with the number of times the S+ had been paired with tobacco-smoke in training, the skin conductance response to the S+, and with subjective craving to smoke. Furthermore, increased exposure to tobacco-smoke in the natural environment was associated with reduced discrimination learning. These data demonstrate that discriminative stimuli that signal that tobacco-smoke reinforcement is available acquire the capacity to command selective attention and elicit instrumental tobacco-seeking behaviour.

  15. Visuomotor properties of corticotectal cells in area 17 and posteromedial lateral suprasylvian (PMLS) cortex of the cat.

    PubMed

    Weyand, T G; Gafka, A C

    2001-01-01

    We studied the visuomotor activity of corticotectal (CT) cells in two visual cortical areas [area 17 and the posteromedial lateral suprasylvian cortex (PMLS)] of the cat. The cats were trained in simple oculomotor tasks, and head position was fixed. Most CT cells in both cortical areas gave a vigorous discharge to a small stimulus used to control gaze when it fell within the retinotopically defined visual field. However, the vigor of the visual response did not predict latency to initiate a saccade, saccade velocity, amplitude, or even whether a saccade would be made, minimizing any potential role these cells might have in premotor or attentional processes. Most CT cells in both areas were selective for direction of stimulus motion, and cells in PMLS showed a direction preference favoring motion away from points of central gaze. CT cells did not discharge with eye movements in the dark. During eye movements in the light, many CT cells in area 17 increased their activity. In contrast, cells in PMLS, including CT cells, were generally unresponsive during saccades. Paradoxically, cells in PMLS responded vigorously to stimuli moving at saccadic velocities, indicating that the oculomotor system suppresses visual activity elicited by moving the retina across an illuminated scene. Nearly all CT cells showed oscillatory activity in the frequency range of 20-90 Hz, especially in response to visual stimuli. However, this activity was capricious; strong oscillations in one trial could disappear in the next despite identical stimulus conditions. Although the CT cells in both of these regions share many characteristics, the direction anisotropy and the suppression of activity during eye movements which characterize the neurons in PMLS suggest that these two areas have different roles in facilitating perceptual/motor processes at the level of the superior colliculus.

  16. Reduced BOLD response to periodic visual stimulation.

    PubMed

    Parkes, Laura M; Fries, Pascal; Kerskens, Christian M; Norris, David G

    2004-01-01

    The blood oxygenation level-dependent (BOLD) response to entrained neuronal firing in the human visual cortex and lateral geniculate nuclei was investigated. Periodic checkerboard flashes at a range of frequencies (4-20 Hz) were used to drive the visual cortex neurons into entrained oscillatory firing. This is compared to a checkerboard flashing aperiodically, with the same average number of flashes per unit time. A magnetoencephalography (MEG) measurement was made to confirm that the periodic paradigm elicited entrainment. We found that for frequencies of 10 and 15 Hz, the periodic stimulus gave a smaller BOLD response than for the aperiodic stimulus. Detailed investigation at 15 Hz showed that the aperiodic stimulus gave a similar BOLD increase regardless of the magnitude of jitter (+/-17 ms compared to +/-33 ms), indicating that flashes need to be precise to at least 17 ms to maintain entrainment. This is also evidence that for aperiodic stimuli, the amplitude of the BOLD response ordinarily reflects the total number of flashes per unit time, irrespective of the precise spacing between them, suggesting that entrainment is the main cause of the BOLD reduction in the periodic condition. The results indicate that, during entrainment, there is a reduction in the neuronal metabolic demand. We suggest that because of the selective frequency band of this effect, it could be connected to synchronised reverberations around an internal feedback loop.
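
    The key manipulation in this study is purely temporal: the periodic and aperiodic trains contain the same number of flashes per unit time and differ only in their spacing. A minimal sketch of how such matched stimulus trains could be generated is given below; the 15 Hz rate and the ±17/±33 ms jitter values follow the abstract, while the function name and implementation are illustrative rather than the authors' code.

      import numpy as np

      def flash_train(freq_hz, duration_s, jitter_ms=0.0, seed=0):
          # Flash onset times in seconds: strictly periodic when jitter_ms == 0,
          # otherwise each onset is perturbed by uniform jitter while the total
          # number of flashes per unit time is preserved.
          rng = np.random.default_rng(seed)
          onsets = np.arange(0.0, duration_s, 1.0 / freq_hz)
          if jitter_ms > 0:
              onsets = onsets + rng.uniform(-jitter_ms, jitter_ms, onsets.size) / 1000.0
              onsets = np.sort(np.clip(onsets, 0.0, duration_s))
          return onsets

      periodic    = flash_train(15.0, 30.0)                   # entraining condition
      aperiodic_a = flash_train(15.0, 30.0, jitter_ms=17.0)   # +/-17 ms jitter
      aperiodic_b = flash_train(15.0, 30.0, jitter_ms=33.0)   # +/-33 ms jitter
      print(len(periodic), len(aperiodic_a), len(aperiodic_b))  # matched flash counts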

  17. Predicting the 'where' and resolving the 'what' of a moving target: a dichotomy of abilities.

    PubMed

    Long, G M; Vogel, C A

    1998-01-01

    Anticipation timing (AT) and dynamic visual acuity (DVA) were assessed in a group of college students (n = 60) under a range of velocity and duration conditions. Subjects participated in two identical sessions 1 week apart. Consistently with previous work, DVA performance worsened as velocity increased and as target duration decreased; and there was a significant improvement from the first to the second session. In contrast, AT performance improved as velocity increased, whereas no improvement from the first to the second session was indicated; but increasing duration again benefited performance. Correlational analyses comparing DVA and AT did not reveal any systematic relationship between the two visual tasks. A follow-up study with different instructions on the AT task revealed the same pattern of AT performance, suggesting the generalizability of the obtained stimulus relationships for the AT task. The importance of the often-overlooked role of stimulus variables on the AT task is discussed.

  18. The question of simultaneity in multisensory integration

    NASA Astrophysics Data System (ADS)

    Leone, Lynnette; McCourt, Mark E.

    2012-03-01

    Early reports of audiovisual (AV) multisensory integration (MI) indicated that unisensory stimuli must evoke simultaneous physiological responses to produce decreases in reaction time (RT) such that for unisensory stimuli with unequal RTs the stimulus eliciting the faster RT had to be delayed relative to the stimulus eliciting the slower RT. The "temporal rule" states that MI depends on the temporal proximity of unisensory stimuli, the neural responses to which must fall within a window of integration. Ecological validity demands that MI should occur only for simultaneous events (which may give rise to non-simultaneous neural activations). However, spurious neural response simultaneities which are unrelated to singular environmental multisensory occurrences must somehow be rejected. Using an RT/race model paradigm we measured AV MI as a function of stimulus onset asynchrony (SOA: +/-200 ms, 50 ms intervals) under fully dark adapted conditions for visual (V) stimuli that were either weak (scotopic 525 nm flashes; 511 ms mean RT) or strong (photopic 630 nm flashes; 356 ms mean RT). Auditory (A) stimulus (1000 Hz pure tone) intensity was constant. Despite the 155 ms slower mean RT to the scotopic versus photopic stimulus, facilitative AV MI in both conditions nevertheless occurred exclusively at an SOA of 0 ms. Thus, facilitative MI demands both physical and physiological simultaneity. We consider the mechanisms by which the nervous system may take account of variations in response latency arising from changes in stimulus intensity in order to selectively integrate only those physiological simultaneities that arise from physical simultaneities.
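
    In an RT paradigm of this kind, facilitative multisensory integration is conventionally identified by comparing the audiovisual RT distribution against the race-model bound, P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t); violations of the bound indicate facilitation beyond what independent parallel processing of the unisensory stimuli predicts. The sketch below shows one way such a test could be computed from single-trial RTs; it is illustrative only and not the authors' analysis code. In an SOA design, the unisensory CDFs would typically be shifted by the corresponding asynchrony before forming the bound.

      import numpy as np

      def race_model_violation(rt_av, rt_a, rt_v, n_points=200):
          # Miller's race-model inequality: P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
          # Returns the evaluation grid and the amount by which the audiovisual CDF
          # exceeds the bound (positive values = violation = facilitative integration).
          rt_av, rt_a, rt_v = map(np.asarray, (rt_av, rt_a, rt_v))
          lo = min(rt_av.min(), rt_a.min(), rt_v.min())
          hi = max(rt_av.max(), rt_a.max(), rt_v.max())
          t = np.linspace(lo, hi, n_points)
          cdf = lambda rts: np.mean(rts[:, None] <= t[None, :], axis=0)
          bound = np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
          return t, cdf(rt_av) - bound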

  19. Stimulus modality and working memory performance in Greek children with reading disabilities: additional evidence for the pictorial superiority hypothesis.

    PubMed

    Constantinidou, Fofi; Evripidou, Christiana

    2012-01-01

    This study investigated the effects of stimulus presentation modality on working memory performance in children with reading disabilities (RD) and in typically developing children (TDC), all native speakers of Greek. It was hypothesized that the visual presentation of common objects would result in improved learning and recall performance as compared to the auditory presentation of stimuli. Twenty children, ages 10-12, diagnosed with RD were matched to 20 TDC age peers. The experimental tasks implemented a multitrial verbal learning paradigm incorporating three modalities: auditory, visual, and auditory plus visual. Significant group differences were noted on language, verbal and nonverbal memory, and measures of executive abilities. A mixed-model MANOVA indicated that children with RD had a slower learning curve and recalled fewer words than TDC across experimental modalities. Both groups of participants benefited from the visual presentation of objects; however, children with RD showed the greatest gains during this condition. In conclusion, working memory for common verbal items is impaired in children with RD; however, performance can be facilitated, and learning efficiency maximized, when information is presented visually. The results provide further evidence for the pictorial superiority hypothesis and the theory that pictorial presentation of verbal stimuli is adequate for dual coding.

  20. Audiovisual Modulation in Mouse Primary Visual Cortex Depends on Cross-Modal Stimulus Configuration and Congruency.

    PubMed

    Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S

    2017-09-06

    The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. Congruent audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors 0270-6474/17/378783-14$15.00/0.

  1. Using an abstract geometry in virtual reality to explore choice behaviour: visual flicker preferences in honeybees.

    PubMed

    Van De Poll, Matthew N; Zajaczkowski, Esmi L; Taylor, Gavin J; Srinivasan, Mandyam V; van Swinderen, Bruno

    2015-11-01

    Closed-loop paradigms provide an effective approach for studying visual choice behaviour and attention in small animals. Different flying and walking paradigms have been developed to investigate behavioural and neuronal responses to competing stimuli in insects such as bees and flies. However, the variety of stimulus choices that can be presented over one experiment is often limited. Current choice paradigms are mostly constrained as single binary choice scenarios that are influenced by the linear structure of classical conditioning paradigms. Here, we present a novel behavioural choice paradigm that allows animals to explore a closed geometry of interconnected binary choices by repeatedly selecting among competing objects, thereby revealing stimulus preferences in an historical context. We used our novel paradigm to investigate visual flicker preferences in honeybees (Apis mellifera) and found significant preferences for 20-25 Hz flicker and avoidance of higher (50-100 Hz) and lower (2-4 Hz) flicker frequencies. Similar results were found when bees were presented with three simultaneous choices instead of two, and when they were given the chance to select previously rejected choices. Our results show that honeybees can discriminate among different flicker frequencies and that their visual preferences are persistent even under different experimental conditions. Interestingly, avoided stimuli were more attractive if they were novel, suggesting that novelty salience can override innate preferences. Our recursive virtual reality environment provides a new approach to studying visual discrimination and choice behaviour in animals. © 2015. Published by The Company of Biologists Ltd.

  2. Using the Freiburg Acuity and Contrast Test to measure visual performance in USAF personnel after PRK.

    PubMed

    Dennis, Richard J; Beer, Jeremy M A; Baldwin, J Bruce; Ivan, Douglas J; Lorusso, Frank J; Thompson, William T

    2004-07-01

    Photorefractive keratectomy (PRK) may be an alternative to spectacle and contact lens wear for United States Air Force (USAF) aircrew and may offer some distinct advantages in operational situations. However, any residual corneal haze or scar formation from PRK could exacerbate the disabling effects of a bright glare source on a complex visual task. The USAF recently completed a longitudinal clinical evaluation of the long-term effects of PRK on visual performance, including the experiment described herein. After baseline data were collected, 20 nonflying active duty USAF personnel underwent PRK. Visual performance was then measured at 6, 12, and 24 months after PRK. Visual acuity (VA) and contrast sensitivity (CS) data were collected by using the Freiburg Acuity and Contrast Test (FrACT), with the subject viewing half of the runs through a polycarbonate windscreen. Experimental runs were completed under 3 glare conditions: no glare source and with either a broadband or a green laser (532-nm) glare annulus (luminance approximately 6090 cd/m²) surrounding the Landolt C stimulus. Systematic effects of PRK on VA relative to baseline were not identified. However, VA was almost 2 full Snellen lines worse with the laser glare source in place versus the broadband glare source. A significant drop-off was observed in CS performance after PRK under conditions of no glare and broadband glare; this was the case both with and without the windscreen. As with VA, laser glare disrupted CS performance significantly and more than broadband glare did. PRK does not appear to have affected VA, but the changes in CS might represent a true decline in visual performance. The greater disruptive effects from laser versus broadband glare may be a result of increased masking from coherent spatial noise (speckle) surrounding the laser stimulus.

  3. Neural processing of visual information under interocular suppression: a critical review

    PubMed Central

    Sterzer, Philipp; Stein, Timo; Ludwig, Karin; Rothkirch, Marcus; Hesselmann, Guido

    2014-01-01

    When dissimilar stimuli are presented to the two eyes, only one stimulus dominates at a time while the other stimulus is invisible due to interocular suppression. When both stimuli are equally potent in competing for awareness, perception alternates spontaneously between the two stimuli, a phenomenon called binocular rivalry. However, when one stimulus is much stronger, e.g., due to higher contrast, the weaker stimulus can be suppressed for prolonged periods of time. A technique that has recently become very popular for the investigation of unconscious visual processing is continuous flash suppression (CFS): High-contrast dynamic patterns shown to one eye can render a low-contrast stimulus shown to the other eye invisible for up to minutes. Studies using CFS have produced new insights but also controversies regarding the types of visual information that can be processed unconsciously as well as the neural sites and the relevance of such unconscious processing. Here, we review the current state of knowledge in regard to neural processing of interocularly suppressed information. Focusing on recent neuroimaging findings, we discuss whether and to what degree such suppressed visual information is processed at early and more advanced levels of the visual processing hierarchy. We review controversial findings related to the influence of attention on early visual processing under interocular suppression, the putative differential roles of dorsal and ventral areas in unconscious object processing, and evidence suggesting privileged unconscious processing of emotional and other socially relevant information. On a more general note, we discuss methodological and conceptual issues, from practical issues of how unawareness of a stimulus is assessed to the overarching question of what constitutes an adequate operational definition of unawareness. Finally, we propose approaches for future research to resolve current controversies in this exciting research area. PMID:24904469

  4. Attention to Attributes and Objects in Working Memory

    PubMed Central

    Cowan, Nelson; Blume, Christopher L.; Saults, J. Scott

    2013-01-01

    It has been debated on the basis of change-detection procedures whether visual working memory is limited by the number of objects, task-relevant attributes within those objects, or bindings between attributes. This debate, however, has been hampered by several limitations, including the use of conditions that vary between studies and the absence of appropriate mathematical models to estimate the number of items in working memory in different stimulus conditions. We re-examined working memory limits in two experiments with a wide array of conditions involving color and shape attributes, relying on a set of new models to fit various stimulus situations. In Experiment 2, a new procedure allowed identical retrieval conditions across different conditions of attention at encoding. The results show that multiple attributes compete for attention, but that retaining the binding between attributes is accomplished only by retaining the attributes themselves. We propose a theoretical account in which a fixed object capacity limit contains within it the possibility of the incomplete retention of object attributes, depending on the direction of attention. PMID:22905929
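
    For context, the standard estimator that such change-detection models build on is Cowan's K for single-probe displays, K = N × (hit rate − false-alarm rate); the new models referred to in the abstract extend this logic to attributes and bindings, and those extensions are not reproduced here. A minimal sketch of the baseline estimator (illustrative only):

      def cowan_k(n_items, hit_rate, false_alarm_rate):
          # Cowan's K for single-probe change detection with n_items in the display:
          # assume K of the N items are held in memory; a change is detected when the
          # probed item is one of them, otherwise the observer guesses. Solving the
          # resulting hit/false-alarm equations gives K = N * (H - FA).
          return n_items * (hit_rate - false_alarm_rate)

      # e.g., 6-item arrays with 80% hits and 20% false alarms -> K of about 3.6 items
      print(cowan_k(6, 0.80, 0.20))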

  5. Effects of Visual Speech on Early Auditory Evoked Fields - From the Viewpoint of Individual Variance

    PubMed Central

    Yahata, Izumi; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio

    2017-01-01

    The effects of visual speech (the moving image of the speaker’s face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions: the moving image of the same speaker’s face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from speaker’s face using a strong Gaussian filter: control condition). On average, the latency of the N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between the right and left hemispheres. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information. PMID:28141836

  6. Stimulus Dependency of Object-Evoked Responses in Human Visual Cortex: An Inverse Problem for Category Specificity

    PubMed Central

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479

  7. Stimulus dependency of object-evoked responses in human visual cortex: an inverse problem for category specificity.

    PubMed

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200-250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.

  8. Visual spatial attention enhances the amplitude of positive and negative fMRI responses to visual stimulation in an eccentricity-dependent manner

    PubMed Central

    Bressler, David W.; Fortenbaugh, Francesca C.; Robertson, Lynn C.; Silver, Michael A.

    2013-01-01

    Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. PMID:23562388

  9. Synchronization to auditory and visual rhythms in hearing and deaf individuals

    PubMed Central

    Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen

    2014-01-01

    A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395

  10. Square or sine: finding a waveform with high success rate of eliciting SSVEP.

    PubMed

    Teng, Fei; Chen, Yixin; Choong, Aik Min; Gustafson, Scott; Reichley, Christopher; Lawhead, Pamela; Waddell, Dwight

    2011-01-01

    Steady state visual evoked potential (SSVEP) is the brain's natural electrical potential response to visual stimuli at specific frequencies. A visual stimulus flashing at a given frequency will entrain the SSVEP at the same frequency, thereby allowing determination of the subject's visual focus. The faster an SSVEP is identified, the higher the information transmission rate the system achieves. Thus, an effective stimulus, defined as one with a high success rate of eliciting an SSVEP and a high signal-to-noise ratio, is desired. Also, researchers observed that harmonic frequencies often appear in the SSVEP at a reduced magnitude. Are the harmonics in the SSVEP elicited by the fundamental stimulating frequency or by artifacts of the stimuli? In this paper, we compare the SSVEP responses of three periodic stimuli: a square wave (with different duty cycles), a triangle wave, and a sine wave, to find an effective stimulus. We also demonstrate the connection between the strength of the harmonics in the SSVEP and the type of stimulus.
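
    The waveform question can be made concrete by comparing the harmonic content of the candidate stimuli themselves: a sine wave carries energy only at the fundamental, a 50% duty-cycle square wave adds odd harmonics, and asymmetric duty cycles add even harmonics as well, so waveform-locked harmonics in the SSVEP could in principle be inherited from the stimulus. A brief sketch of this comparison follows (illustrative; not the authors' stimulation or analysis code).

      import numpy as np
      from scipy.signal import square, sawtooth

      fs, f0, dur = 1000.0, 15.0, 2.0           # sample rate (Hz), flicker frequency, seconds
      t = np.arange(0, dur, 1 / fs)

      waves = {
          "sine":      np.sin(2 * np.pi * f0 * t),
          "square_50": square(2 * np.pi * f0 * t, duty=0.5),
          "square_20": square(2 * np.pi * f0 * t, duty=0.2),
          "triangle":  sawtooth(2 * np.pi * f0 * t, width=0.5),
      }

      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      for name, w in waves.items():
          spec = np.abs(np.fft.rfft(w)) / t.size   # one-sided amplitude spectrum
          harmonics = [round(float(spec[np.argmin(np.abs(freqs - k * f0))]), 3)
                       for k in (1, 2, 3)]
          print(f"{name:10s} amplitude at {f0:.0f}/{2*f0:.0f}/{3*f0:.0f} Hz: {harmonics}")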

  11. Stimulus-dependent modulation of spontaneous low-frequency oscillations in the rat visual cortex.

    PubMed

    Huang, Liangming; Liu, Yadong; Gui, Jianjun; Li, Ming; Hu, Dewen

    2014-08-06

    Research on spontaneous low-frequency oscillations is important to reveal underlying regulatory mechanisms in the brain. The mechanism for the stimulus modulation of low-frequency oscillations is not known. Here, we used the intrinsic optical imaging technique to examine stimulus-modulated low-frequency oscillation signals in the rat visual cortex. The stimulation was presented monocularly as a flashing light with different frequencies and intensities. The phases of low-frequency oscillations in different regions tended to be synchronized and the rhythms typically accelerated within a 30-s period after stimulation. These phenomena were confined to visual stimuli with specific flashing frequencies (12.5-17.5 Hz) and intensities (5-10 mA). The acceleration and synchronization induced by the flashing frequency were more marked than those induced by the intensity. These results show that spontaneous low-frequency oscillations can be modulated by parameter-dependent flashing lights and indicate the potential utility of the visual stimulus paradigm in exploring the origin and function of low-frequency oscillations.

  12. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
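
    The decoding-accuracy index referred to here is typically obtained by cross-validated classification of trial-wise activity patterns with respect to the task-relevant label (gender or emotion), computed separately for the audiovisual, visual-only, and auditory-only conditions and then compared across conditions. Below is a generic sketch of such a pipeline, with placeholder random data standing in for the fMRI patterns; it is illustrative only and not the authors' pipeline.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.normal(size=(80, 500))     # placeholder: 80 trials x 500 voxels
      y = np.repeat([0, 1], 40)          # placeholder labels, e.g. 0 = male, 1 = female
      X[y == 1, :20] += 0.5              # inject a weak signal so decoding exceeds chance

      clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
      accuracy = cross_val_score(clf, X, y, cv=8).mean()   # cross-validated decoding accuracy
      print(round(accuracy, 2))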

  13. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    PubMed

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant, is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except right hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, increased activity around 100 msec which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was only increased to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in left hemisphere further increases between 170 and 260 msec.

  14. Perceptual grouping across eccentricity.

    PubMed

    Tannazzo, Teresa; Kurylo, Daniel D; Bukhari, Farhan

    2014-10-01

    Across the visual field, progressive differences exist in neural processing as well as perceptual abilities. Expansion of stimulus scale across eccentricity compensates for some basic visual capacities, but not for high-order functions. It was hypothesized that as with many higher-order functions, perceptual grouping ability should decline across eccentricity. To test this prediction, psychophysical measurements of grouping were made across eccentricity. Participants indicated the dominant grouping of dot grids in which grouping was based upon luminance, motion, orientation, or proximity. Across trials, the organization of stimuli was systematically decreased until perceived grouping became ambiguous. For all stimulus features, grouping ability remained relatively stable until 40°, beyond which thresholds significantly elevated. The pattern of change across eccentricity varied across stimulus feature, in which stimulus scale, dot size, or stimulus size interacted with eccentricity effects. These results demonstrate that perceptual grouping of such stimuli is not reliant upon foveal viewing, and suggest that selection of dominant grouping patterns from ambiguous displays operates similarly across much of the visual field. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions.

    PubMed

    Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka

    2016-08-04

    Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution.

  16. Visual Vestibular Interaction in the Dynamic Visual Acuity Test during Voluntary Head Rotation

    NASA Technical Reports Server (NTRS)

    Lee, Moo Hoon; Durnford, Simon; Crowley, John; Rupert, Angus

    1996-01-01

    Although intact vestibular function is essential in maintaining spatial orientation, no good screening tests of vestibular function are available to the aviation community. High-frequency voluntary head rotation was selected as a vestibular stimulus to isolate the vestibulo-ocular reflex (VOR) from visual influence. A dynamic visual acuity test that incorporates voluntary head rotation was evaluated as a potential vestibular function screening tool. Twenty-seven normal subjects performed voluntary sinusoidal head rotation at frequencies from 0.7 to 4.0 Hz under three different visual conditions: visually-enhanced VOR, normal VOR, and visually suppressed VOR. Standardized Bailey-Lovie chart letters were presented on a computer monitor in front of the subject, who was then asked to read the letters while rotating his head horizontally. The electro-oculogram and dynamic visual acuity score were recorded and analyzed. There were no significant differences in gain or phase shift among the three visual conditions in the frequency range of 2.8 to 4.0 Hz. The dynamic visual acuity score shifted less than 0.3 logMAR at frequencies under 2.0 Hz. The dynamic visual acuity test at frequencies around 2.0 Hz can be recommended for evaluating vestibular function.

  17. Psychophysiological responses to drug-associated stimuli in chronic heavy cannabis use.

    PubMed

    Wölfling, Klaus; Flor, Herta; Grüsser, Sabine M

    2008-02-01

    Due to learning processes, originally neutral stimuli become drug-associated and can activate an implicit drug memory, which leads to a conditioned arousing 'drug-seeking' state. This condition is accompanied by specific psychophysiological responses. The goal of the present study was the analysis of changes in cortical and peripheral reactivity to cannabis- as well as alcohol-associated pictures compared with emotionally significant drug-unrelated and neutral pictures in long-term heavy cannabis users. Participants were 15 chronic heavy cannabis users and 15 healthy controls. Verbal reports as well as event-related potentials of the electroencephalogram and skin conductance responses were assessed in a cue-reactivity paradigm to determine the psychophysiological effects caused by drug-related visual stimulus material. The evaluation of self-reported craving and emotional processing showed that cannabis stimuli were perceived as more arousing and pleasant and elicited significantly more cannabis craving in cannabis users than in healthy controls. Cannabis users also demonstrated higher cannabis stimulus-induced arousal, as indicated by significantly increased skin conductance and a larger late positivity of the visual event-related brain potential. These findings support the assumption that drug-associated stimuli acquire increased incentive salience in addiction history and induce conditioned physiological patterns, which lead to craving and potentially to drug intake. The potency of visual drug-associated cues to capture attention and to activate drug-specific memory traces and accompanying physiological symptoms embedded in a cycle of abstinence and relapse--even in a 'so-called' soft drug--was assessed for the first time.

  18. Inhibition of Return in the Visual Field

    PubMed Central

    Bao, Yan; Lei, Quan; Fang, Yuan; Tong, Yu; Schill, Kerstin; Pöppel, Ernst; Strasburger, Hans

    2013-01-01

    Inhibition of return (IOR) as an indicator of attentional control is characterized by an eccentricity effect, that is, the more peripheral visual field shows a stronger IOR magnitude relative to the perifoveal visual field. However, it could be argued that this eccentricity effect may not be an attention effect, but due to cortical magnification. To test this possibility, we examined this eccentricity effect in two conditions: the same-size condition in which identical stimuli were used at different eccentricities, and the size-scaling condition in which stimuli were scaled according to the cortical magnification factor (M-scaling), thus stimuli being larger at the more peripheral locations. The results showed that the magnitude of IOR was significantly stronger in the peripheral relative to the perifoveal visual field, and this eccentricity effect was independent of the manipulation of stimulus size (same-size or size-scaling). These results suggest a robust eccentricity effect of IOR which cannot be eliminated by M-scaling. Underlying neural mechanisms of the eccentricity effect of IOR are discussed with respect to both cortical and subcortical structures mediating attentional control in the perifoveal and peripheral visual field. PMID:23820946
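
    The size-scaling condition compensates for cortical magnification by enlarging peripheral stimuli so that they project onto a roughly constant cortical extent. A common approximation scales linear stimulus size by (1 + E/E2), where E is eccentricity and E2 is the eccentricity at which the required size doubles; a value near 2° is often assumed, although the exact factor depends on the magnification estimate adopted and is not necessarily the one used in this study. The eccentricities below are likewise only examples.

      def m_scaled_size(foveal_size_deg, eccentricity_deg, e2_deg=2.0):
          # Linear cortical magnification is approximated as M(E) = M0 / (1 + E/E2),
          # so keeping size * M(E) constant requires size(E) = size(0) * (1 + E/E2).
          return foveal_size_deg * (1.0 + eccentricity_deg / e2_deg)

      for ecc in (0.0, 7.0, 21.0):   # e.g., perifoveal vs. peripheral cue locations
          print(ecc, round(m_scaled_size(1.0, ecc), 2))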

  19. Dynamic circuitry for updating spatial representations. II. Physiological evidence for interhemispheric transfer in area LIP of the split-brain macaque.

    PubMed

    Heiser, Laura M; Berman, Rebecca A; Saunders, Richard C; Colby, Carol L

    2005-11-01

    With each eye movement, a new image impinges on the retina, yet we do not notice any shift in visual perception. This perceptual stability indicates that the brain must be able to update visual representations to take our eye movements into account. Neurons in the lateral intraparietal area (LIP) update visual representations when the eyes move. The circuitry that supports these updated representations remains unknown, however. In this experiment, we asked whether the forebrain commissures are necessary for updating in area LIP when stimulus representations must be updated from one visual hemifield to the other. We addressed this question by recording from LIP neurons in split-brain monkeys during two conditions: stimulus traces were updated either across or within hemifields. Our expectation was that across-hemifield updating activity in LIP would be reduced or abolished after transection of the forebrain commissures. Our principal finding is that LIP neurons can update stimulus traces from one hemifield to the other even in the absence of the forebrain commissures. This finding provides the first evidence that representations in parietal cortex can be updated without the use of direct cortico-cortical links. The second main finding is that updating activity in LIP is modified in the split-brain monkey: across-hemifield signals are reduced in magnitude and delayed in onset compared with within-hemifield signals, which indicates that the pathways for across-hemifield updating are less effective in the absence of the forebrain commissures. Together these findings reveal a dynamic circuit that contributes to updating spatial representations.

  20. A Unifying Motif for Spatial and Directional Surround Suppression.

    PubMed

    Liu, Liu D; Miller, Kenneth D; Pack, Christopher C

    2018-01-24

    In the visual system, the response to a stimulus in a neuron's receptive field can be modulated by stimulus context, and the strength of these contextual influences varies with stimulus intensity. Recent work has shown how a theoretical model, the stabilized supralinear network (SSN), can account for such modulatory influences, using a small set of computational mechanisms. Although the predictions of the SSN have been confirmed in primary visual cortex (V1), its computational principles apply with equal validity to any cortical structure. We have therefore tested the generality of the SSN by examining modulatory influences in the middle temporal area (MT) of the macaque visual cortex, using electrophysiological recordings and pharmacological manipulations. We developed a novel stimulus that can be adjusted parametrically to be larger or smaller in the space of all possible motion directions. We found, as predicted by the SSN, that MT neurons integrate across motion directions for low-contrast stimuli, but that they exhibit suppression by the same stimuli when they are high in contrast. These results are analogous to those found in visual cortex when stimulus size is varied in the space domain. We further tested the mechanisms of inhibition using pharmacological manipulations of inhibitory efficacy. As predicted by the SSN, local manipulation of inhibitory strength altered firing rates, but did not change the strength of surround suppression. These results are consistent with the idea that the SSN can account for modulatory influences along different stimulus dimensions and in different cortical areas. SIGNIFICANCE STATEMENT Visual neurons are selective for specific stimulus features in a region of visual space known as the receptive field, but can be modulated by stimuli outside of the receptive field. The SSN model has been proposed to account for these and other modulatory influences, and tested in V1. As this model is not specific to any particular stimulus feature or brain region, we wondered whether similar modulatory influences might be observed for other stimulus dimensions and other regions. We tested for specific patterns of modulatory influences in the domain of motion direction, using electrophysiological recordings from MT. Our data confirm the predictions of the SSN in MT, suggesting that the SSN computations might be a generic feature of sensory cortex. Copyright © 2018 the authors 0270-6474/18/380989-11$15.00/0.
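
    The SSN's core ingredient is a supralinear (power-law) neuronal input-output function embedded in a recurrent excitatory-inhibitory circuit: weak inputs are amplified by recurrent excitation, whereas strong inputs recruit proportionally more inhibition, producing the transition from integration to suppression described above. A two-population sketch with generic textbook-style parameters (not the model fitted to the MT data) is given below; with these assumed weights the effective gain (E rate divided by input) first rises and then falls as input strength grows.

      import numpy as np

      def ssn_steady_state(ext_input, k=0.04, n=2.0, steps=5000, dt=1e-4):
          # Rate dynamics tau_i * dr_i/dt = -r_i + k * [W r + ext]_+ ** n, with a
          # supralinear exponent n > 1, for one excitatory (E) and one inhibitory (I)
          # population. Weights and time constants are generic illustrative values.
          W = np.array([[1.25, -0.65],    # E <- E, E <- I
                        [1.20, -0.50]])   # I <- E, I <- I
          tau = np.array([0.020, 0.010])  # inhibition faster than excitation
          r = np.zeros(2)
          ext = np.array([ext_input, ext_input])
          for step in range(steps):
              ramp = min(1.0, step / (steps // 2))        # ramp input to avoid onset transients
              drive = np.maximum(W @ r + ramp * ext, 0.0)
              r += dt / tau * (-r + k * drive ** n)
          return r

      for c in (2.0, 10.0, 40.0, 100.0):   # increasing stimulus strength ("contrast")
          r_e, r_i = ssn_steady_state(c)
          print(f"input {c:5.1f} -> E rate {r_e:7.2f}, I rate {r_i:7.2f}, gain {r_e / c:5.2f}")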

  1. A Biophysical Neural Model To Describe Spatial Visual Attention

    NASA Astrophysics Data System (ADS)

    Hugues, Etienne; José, Jorge V.

    2008-02-01

    Visual scenes contain enormous amounts of spatial and temporal information that are transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of neural activity in a specialized visual area known as V4 when the animal is paying attention directly to a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations without attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory input, at the high levels found in vivo. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.
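
    As context for the balanced excitation-inhibition point, the toy simulation below (a generic integrate-and-fire sketch with made-up parameters, not the authors' biophysical V4 model) shows that a neuron driven by nearly balanced excitatory and inhibitory input fires irregularly; the coefficient of variation (CV) of its interspike intervals is printed as a simple index of the response variability such models are constrained to reproduce.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, T = 1e-4, 10.0                      # 0.1 ms steps, 10 s of simulated time
        tau_m = 20e-3                           # membrane time constant (s)
        v_rest, v_thresh, v_reset = -70e-3, -50e-3, -70e-3
        rate_exc, rate_inh = 4400.0, 4000.0     # summed presynaptic rates (Hz), nearly balanced
        w_exc, w_inh = 1e-3, -1e-3              # voltage jump per synaptic event (V)

        v, spike_times = v_rest, []
        for step in range(int(T / dt)):
            n_e = rng.poisson(rate_exc * dt)    # excitatory events in this step
            n_i = rng.poisson(rate_inh * dt)    # inhibitory events in this step
            v += (dt / tau_m) * (v_rest - v) + n_e * w_exc + n_i * w_inh
            if v >= v_thresh:
                spike_times.append(step * dt)
                v = v_reset

        isi = np.diff(spike_times)
        print(f"{len(spike_times)} spikes, ISI CV = {isi.std() / isi.mean():.2f}")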

  2. A Biophysical Neural Model To Describe Spatial Visual Attention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hugues, Etienne; Jose, Jorge V.

    2008-02-14

    Visual scenes contain enormous amounts of spatial and temporal information that are transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of neural activity in a specialized visual area known as V4 when the animal is paying attention directly to a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations without attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory input, at the high levels found in vivo. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.

  3. Visual attention to variation in female facial skin color distribution.

    PubMed

    Fink, Bernhard; Matts, Paul J; Klingenberg, Heiner; Kuntze, Sebastian; Weege, Bettina; Grammer, Karl

    2008-06-01

    The visible skin condition of women is argued to influence human physical attractiveness. Recent research has shown that people are sensitive to variation in skin color distribution, and such variation affects visual perception of female facial attractiveness, healthiness, and age. The eye gaze of 39 males and females, aged 13 to 45 years, was tracked while they viewed images of shape- and topography-standardized stimulus faces that varied only in terms of skin color distribution. The number of fixations and dwell time were significantly higher when viewing stimulus faces with the homogeneous skin color distribution of young people, compared with those typical of older people. In accordance with recent research, facial stimuli with even skin tones were also judged to be younger and received higher attractiveness ratings. Finally, visual attention measures were negatively correlated with perceived age, but positively associated with attractiveness judgments. Variation in visible skin color distribution (independent of facial form and skin surface topography) is able to selectively attract people's attention toward female faces, and this higher attention results in more positive judgments about a woman's face.

  4. Transferability of Dual-Task Coordination Skills after Practice with Changing Component Tasks

    PubMed Central

    Schubert, Torsten; Liepelt, Roman; Kübler, Sebastian; Strobach, Tilo

    2017-01-01

    Recent research has demonstrated that dual-task performance with two simultaneously presented tasks can be substantially improved as a result of practice. Among other mechanisms, theories of dual-task practice relate this improvement to the acquisition of task coordination skills. These skills are assumed (1) to result from dual-task practice, but not from single-task practice, and (2) to be independent of the specific stimulus and response mappings used during practice and, therefore, transferable to new dual-task situations. The present study is the first to provide an elaborated test of these assumptions in a context with well-controlled practice and transfer situations. To this end, we compared the effects of dual-task and single-task practice with a visual and an auditory sensory-motor component task on the dual-task performance in a subsequent transfer session. Importantly, stimulus and stimulus-response mapping conditions in the two component tasks changed repeatedly during practice sessions, which prevents automatized stimulus-response associations from being carried over from practice to transfer. Dual-task performance was found to be improved after practice with the dual tasks, in contrast to single-task practice. These findings are consistent with the assumption that coordination skills had been acquired, which can be transferred to other dual-task situations independently of the specific stimulus and response mapping conditions of the practiced component tasks. PMID:28659844

  5. Temporal properties of material categorization and material rating: visual vs non-visual material features.

    PubMed

    Nagai, Takehiro; Matsushima, Toshiki; Koida, Kowa; Tani, Yusuke; Kitazaki, Michiteru; Nakauchi, Shigeki

    2015-10-01

    Humans can visually recognize material categories of objects, such as glass, stone, and plastic, easily. However, little is known about the kinds of surface quality features that contribute to such material class recognition. In this paper, we examine the relationship between perceptual surface features and material category discrimination performance for pictures of materials, focusing on temporal aspects, including reaction time and effects of stimulus duration. The stimuli were pictures of objects with an identical shape but made of different materials that could be categorized into seven classes (glass, plastic, metal, stone, wood, leather, and fabric). In a pre-experiment, observers rated the pictures on nine surface features, including visual (e.g., glossiness and transparency) and non-visual features (e.g., heaviness and warmness), on a 7-point scale. In the main experiments, observers judged whether two simultaneously presented pictures were classified as the same or different material category. Reaction times and effects of stimulus duration were measured. The results showed that visual feature ratings were correlated with material discrimination performance for short reaction times or short stimulus durations, while non-visual feature ratings were correlated only with performance for long reaction times or long stimulus durations. These results suggest that the mechanisms underlying visual and non-visual feature processing may differ in terms of processing time, although the cause is unclear. Visual surface features may mainly contribute to material recognition in daily life, while non-visual features may contribute only weakly, if at all. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Inertial acceleration as a measure of linear vection: An alternative to magnitude estimation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Carpenter-Smith, Theodore R.; Futamura, Robert G.; Parker, Donald E.

    1995-01-01

    The present study focused on the development of a procedure to assess perceived self-motion induced by visual surround motion - vection. Using an apparatus that permitted independent control of visual and inertial stimuli, prone observers were translated along their head x-axis (fore/aft). The observers' task was to report the direction of self-motion during passive forward and backward translations of their bodies coupled with exposure to various visual surround conditions. The proportion of 'forward' responses was used to calculate each observer's point of subjective equality (PSE) for each surround condition. The results showed that the moving visual stimulus produced a significant shift in the PSE when data from the moving surround condition were compared with the stationary surround and no-vision condition. Further, the results indicated that vection increased monotonically with surround velocities between 4 and 40/s. It was concluded that linear vection can be measured in terms of changes in the amplitude of whole-body inertial acceleration required to elicit equivalent numbers of 'forward' and 'backward' self-motion reports.
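
    As a rough illustration of the PSE computation described above (hypothetical numbers, not the study's data), one can fit a logistic psychometric function to the proportion of 'forward' responses as a function of signed inertial acceleration and read off the 50% point; the shift of that point under a moving versus stationary surround is the proposed vection measure.

        import numpy as np
        from scipy.optimize import curve_fit

        def psychometric(x, pse, slope):
            # Probability of a "forward" response as a function of signed peak acceleration.
            return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

        # Hypothetical accelerations (m/s^2, positive = forward) and response proportions.
        accel = np.array([-0.40, -0.20, -0.10, 0.00, 0.10, 0.20, 0.40])
        p_forward = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.98])

        (pse, slope), _ = curve_fit(psychometric, accel, p_forward, p0=[0.0, 10.0])
        print(f"point of subjective equality: {pse:+.3f} m/s^2")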

  7. A Neural Basis for Interindividual Differences in the McGurk Effect, a Multisensory Speech Illusion

    PubMed Central

    Nath, Audrey R.; Beauchamp, Michael S.

    2011-01-01

    The McGurk effect is a compelling illusion in which humans perceive mismatched audiovisual speech as a completely different syllable. However, some normal individuals do not experience the illusion, reporting that the stimulus sounds the same with or without visual input. Converging evidence suggests that the left superior temporal sulcus (STS) is critical for audiovisual integration during speech perception. We used blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) to measure brain activity as McGurk perceivers and non-perceivers were presented with congruent audiovisual syllables, McGurk audiovisual syllables, and non-McGurk incongruent syllables. The inferior frontal gyrus showed an effect of stimulus condition (greater responses for incongruent stimuli) but not susceptibility group, while the left auditory cortex showed an effect of susceptibility group (greater response in susceptible individuals) but not stimulus condition. Only one brain region, the left STS, showed a significant effect of both susceptibility and stimulus condition. The amplitude of the response in the left STS was significantly correlated with the likelihood of perceiving the McGurk effect: a weak STS response meant that a subject was less likely to perceive the McGurk effect, while a strong response meant that a subject was more likely to perceive it. These results suggest that the left STS is a key locus for interindividual differences in speech perception. PMID:21787869

  8. Using eye tracking to identify faking attempts during penile plethysmography assessment.

    PubMed

    Trottier, Dominique; Rouleau, Joanne-Lucine; Renaud, Patrice; Goyette, Mathieu

    2014-01-01

    Penile plethysmography (PPG) is considered the most rigorous method for sexual interest assessment. Nevertheless, it is subject to faking attempts by participants, which compromises the internal validity of the instrument. To date, various attempts have been made to limit voluntary control of sexual response during PPG assessments, without satisfactory results. This exploratory research examined eye-tracking technologies' ability to identify the presence of cognitive strategies responsible for erectile inhibition during PPG assessment. Eye movements and penile responses for 20 subjects were recorded while exploring animated human-like computer-generated stimuli in a virtual environment under three distinct viewing conditions: (a) the free visual exploration of a preferred sexual stimulus without erectile inhibition; (b) the viewing of a preferred sexual stimulus with erectile inhibition; and (c) the free visual exploration of a non-preferred sexual stimulus. Results suggest that attempts to control erectile responses generate specific eye-movement variations, characterized by a general deceleration of the exploration process and limited exploration of the erogenous zone. Findings indicate that recording eye movements can provide significant information on the presence of competing covert processes responsible for erectile inhibition. The use of eye-tracking technologies during PPG could therefore lead to improved internal validity of the plethysmographic procedure.

  9. Comparing different stimulus configurations for population receptive field mapping in human fMRI

    PubMed Central

    Alvarez, Ivan; de Haas, Benjamin; Clark, Chris A.; Rees, Geraint; Schwarzkopf, D. Samuel

    2015-01-01

    Population receptive field (pRF) mapping is a widely used approach to measuring aggregate human visual receptive field properties by recording non-invasive signals using functional MRI. Despite growing interest, no study to date has systematically investigated the effects of different stimulus configurations on pRF estimates from human visual cortex. Here we compared the effects of three different stimulus configurations on a model-based approach to pRF estimation: size-invariant bars and eccentricity-scaled bars defined in Cartesian coordinates and traveling along the cardinal axes, and a novel simultaneous “wedge and ring” stimulus defined in polar coordinates, systematically covering polar and eccentricity axes. We found that the presence or absence of eccentricity scaling had a significant effect on goodness of fit and pRF size estimates. Further, variability in pRF size estimates was directly influenced by stimulus configuration, particularly for higher visual areas including V5/MT+. Finally, we compared eccentricity estimation between phase-encoded and model-based pRF approaches. We observed a tendency for more peripheral eccentricity estimates using phase-encoded methods, independent of stimulus size. We conclude that both eccentricity scaling and polar rather than Cartesian stimulus configuration are important considerations for optimal experimental design in pRF mapping. While all stimulus configurations produce adequate estimates, simultaneous wedge and ring stimulation produced higher fit reliability, with a significant advantage in reduced acquisition time. PMID:25750620
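
    The model-based approach the abstract describes can be sketched compactly (hypothetical stimulus and parameter values, not the authors' pipeline): a voxel's pRF is modeled as a 2D isotropic Gaussian, and the predicted time course is the overlap of that Gaussian with the binary stimulus aperture at each time point, convolved with a hemodynamic response function (HRF). Fitting then amounts to searching over the pRF center and size to match the measured BOLD signal; the different stimulus configurations compared in the study enter this model only through the aperture masks.

        import numpy as np

        def gaussian_prf(x0, y0, sigma, xx, yy):
            # Isotropic 2D Gaussian population receptive field centered at (x0, y0).
            return np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))

        def predicted_timecourse(apertures, x0, y0, sigma, hrf, extent=10.0):
            # apertures: (n_timepoints, ny, nx) binary stimulus masks in visual-field coordinates.
            n_t, ny, nx = apertures.shape
            xx, yy = np.meshgrid(np.linspace(-extent, extent, nx), np.linspace(-extent, extent, ny))
            prf = gaussian_prf(x0, y0, sigma, xx, yy)
            neural = apertures.reshape(n_t, -1) @ prf.ravel()   # stimulus-pRF overlap per frame
            return np.convolve(neural, hrf)[:n_t]               # hemodynamic blurring

        # Toy example: a bar sweeping left to right; the pRF parameters are hypothetical.
        apertures = np.zeros((40, 64, 64))
        for t in range(40):
            left = int(t * 64 / 40)
            apertures[t, :, left:left + 6] = 1.0
        hrf = np.exp(-np.arange(20) / 4.0)                      # crude stand-in for an HRF
        print(np.round(predicted_timecourse(apertures, x0=2.0, y0=0.0, sigma=1.5, hrf=hrf), 2))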

  10. A further assessment of the Hall-Rodriguez theory of latent inhibition.

    PubMed

    Leung, Hiu Tin; Killcross, A S; Westbrook, R Frederick

    2013-04-01

    The Hall-Rodriguez (G. Hall & G. Rodriguez, 2010, Associative and nonassociative processes in latent inhibition: An elaboration of the Pearce-Hall model, in R. E. Lubow & I. Weiner, Eds., Latent inhibition: Data, theories, and applications to schizophrenia, pp. 114-136, Cambridge, England: Cambridge University Press) theory of latent inhibition predicts that it will be deepened when a preexposed target stimulus is given additional preexposures in compound with (a) a novel stimulus or (b) another preexposed stimulus, and (c) that deepening will be greater when the compound contains a novel rather than another preexposed stimulus. A series of experiments studied these predictions using a fear conditioning procedure with rats. In each experiment, rats were preexposed to 3 stimuli, 1 (A) taken from 1 modality (visual or auditory) and the remaining 2 (X and Y) taken from another modality (auditory or visual). Then A was compounded with X, and Y was compounded with a novel stimulus (B) taken from the same modality as A. A previous series of experiments (H. T. Leung, A. S. Killcross, & R. F. Westbrook, 2011, Additional exposures to a compound of two preexposed stimuli deepen latent inhibition, Journal of Experimental Psychology: Animal Behavior Processes, Vol. 37, pp. 394-406) compared A with Y, finding that A was more latently inhibited than Y, the opposite of what was predicted. The present experiments confirmed that A was more latently inhibited than Y, showed that this was due to A entering the compound more latently inhibited than Y, and finally, that a comparison of X and Y confirmed the 3 predictions made by the theory.

  11. [Microcomputer control of a LED stimulus display device].

    PubMed

    Ohmoto, S; Kikuchi, T; Kumada, T

    1987-02-01

    A visual stimulus display system controlled by a microcomputer was constructed at low cost. The system consists of an LED stimulus display device, a microcomputer, two interface boards, a pointing device (a "mouse") and two kinds of software. The first software package is written in BASIC. Its functions are: to construct stimulus patterns using the mouse, to construct letter patterns (alphabetic characters, digits, symbols, and Japanese letters: kanji, hiragana, katakana), to modify the patterns, to store the patterns on a floppy disc, and to translate the patterns into integer data that are used to display the patterns in the second software package. The second software package, written in BASIC and machine language, controls the display of a sequence of stimulus patterns according to predetermined time schedules in visual experiments.

  12. Beyond a mask and against the bottleneck: retroactive dual-task interference during working memory consolidation of a masked visual target.

    PubMed

    Nieuwenstein, Mark; Wyble, Brad

    2014-06-01

    While studies on visual memory commonly assume that the consolidation of a visual stimulus into working memory is interrupted by a trailing mask, studies on dual-task interference suggest that the consolidation of a stimulus can continue for several hundred milliseconds after a mask. As a result, estimates of the time course of working memory consolidation differ more than an order of magnitude. Here, we contrasted these opposing views by examining if and for how long the processing of a masked display of visual stimuli can be disturbed by a trailing 2-alternative forced choice task (2-AFC; a color discrimination task or a visual or auditory parity judgment task). The results showed that the presence of the 2-AFC task produced a pronounced retroactive interference effect that dissipated across stimulus onset asynchronies of 250-1,000 ms, indicating that the processing elicited by the 2-AFC task interfered with the gradual consolidation of the earlier shown stimuli. Furthermore, this interference effect occurred regardless of whether the to-be-remembered stimuli comprised a string of letters or an unfamiliar complex visual shape, and it occurred regardless of whether these stimuli were masked. Conversely, the interference effect was reduced when the memory load for the 1st task was reduced, or when the 2nd task was a color detection task that did not require decision making. Taken together, these findings show that the formation of a durable and consciously accessible working memory trace for a briefly shown visual stimulus can be disturbed by a trailing 2-AFC task for up to several hundred milliseconds after the stimulus has been masked. By implication, the current findings challenge the common view that working memory consolidation involves an immutable central processing bottleneck, and they also make clear that consolidation does not stop when a stimulus is masked. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  13. Cortical Neural Synchronization Underlies Primary Visual Consciousness of Qualia: Evidence from Event-Related Potentials

    PubMed Central

    Babiloni, Claudio; Marzano, Nicola; Soricelli, Andrea; Cordone, Susanna; Millán-Calenti, José Carlos; Del Percio, Claudio; Buján, Ana

    2016-01-01

    This article reviews three experiments on event-related potentials (ERPs) testing the hypothesis that primary visual consciousness (stimulus self-report) is related to enhanced cortical neural synchronization as a function of stimulus features. ERP peak latency and sources were compared between “seen” trials and “not seen” trials, respectively related and unrelated to the primary visual consciousness. Three salient features of visual stimuli were considered (visuospatial, emotional face expression, and written words). Results showed the typical visual ERP components in both “seen” and “not seen” trials. There was no statistical difference in the ERP peak latencies between the “seen” and “not seen” trials, suggesting a similar timing of the cortical neural synchronization regardless of the primary visual consciousness. In contrast, ERP sources showed differences between “seen” and “not seen” trials. For the visuospatial stimuli, the primary consciousness was related to higher activity in dorsal occipital and parietal sources at about 400 ms post-stimulus. For the emotional face expressions, there was greater activity in parietal and frontal sources at about 180 ms post-stimulus. For the written letters, there was higher activity in occipital, parietal and temporal sources at about 230 ms post-stimulus. These results hint that primary visual consciousness is associated with an enhanced cortical neural synchronization having entirely different spatiotemporal characteristics as a function of the features of the visual stimuli and, possibly, the relative qualia (i.e., visuospatial, face expression, and words). In this framework, the dorsal visual stream may be synchronized in association with the primary consciousness of visuospatial and emotional face contents. Analogously, both dorsal and ventral visual streams may be synchronized in association with the primary consciousness of linguistic contents. In this line of reasoning, the ensemble of the cortical neural networks underpinning the individual visual features would constitute a sort of multi-dimensional palette of colors, shapes, regions of the visual field, movements, emotional face expressions, and words. The synchronization of one or more of these cortical neural networks, each with its peculiar timing, would produce the primary consciousness of one or more of the visual features of the scene. PMID:27445750

  14. Flexible cue combination in the guidance of attention in visual search

    PubMed Central

    Brand, John; Oriet, Chris; Johnson, Aaron P.; Wolfe, Jeremy M.

    2014-01-01

    Hodsoll and Humphreys (2001) have assessed the relative contributions of stimulus-driven and user-driven knowledge on linearly- and nonlinearly-separable search. However, the target feature used to determine linear separability in their task (i.e., target size) was required to locate the target. In the present work, we investigated the contributions of stimulus-driven and user-driven knowledge when a linearly- or nonlinearly-separable feature is available but not required for target identification. We asked observers to complete a series of standard color × orientation conjunction searches in which target size was either linearly- or nonlinearly-separable from the size of the distractors. When guidance by color × orientation and by size information are both available, observers rely on whichever information results in the best search efficiency. This is the case irrespective of whether we provide target foreknowledge by blocking stimulus conditions, suggesting that feature information is used in both a stimulus-driven and user-driven fashion. PMID:25463553

  15. Neural Pathways Conveying Novisual Information to the Visual Cortex

    PubMed Central

    2013-01-01

    The visual cortex has traditionally been considered a stimulus-driven, unimodal system with a hierarchical organization. However, recent animal and human studies have shown that the visual cortex responds to non-visual stimuli, especially in individuals with congenital visual deprivation, indicating the supramodal nature of the functional representation in the visual cortex. To understand the neural substrates of the cross-modal processing of non-visual signals in the visual cortex, we first describe the supramodal nature of the visual cortex. We then review how non-visual signals reach the visual cortex. Moreover, we discuss whether these non-visual pathways are reshaped by early visual deprivation. Finally, the open question about the nature (stimulus-driven or top-down) of non-visual signals is also discussed. PMID:23840972

  16. A neural correlate of working memory in the monkey primary visual cortex.

    PubMed

    Supèr, H; Spekreijse, H; Lamme, V A

    2001-07-06

    The brain frequently needs to store information for short periods. In vision, this means that the perceptual correlate of a stimulus has to be maintained temporarily once the stimulus has been removed from the visual scene. However, it is not known how the visual system transfers sensory information into a memory component. Here, we identify a neural correlate of working memory in the monkey primary visual cortex (V1). We propose that this component may link sensory activity with memory activity.

  17. Effects of stimulus salience on touchscreen serial reversal learning in a mouse model of fragile X syndrome

    PubMed Central

    Dickson, Price E.; Corkill, Beau; McKimm, Eric; Miller, Mellessa M.; Calton, Michele A.; Goldowitz, Daniel; Blaha, Charles D.; Mittleman, Guy

    2013-01-01

    Fragile X syndrome (FXS) is the most common inherited form of intellectual disability in males and the most common genetic cause of autism. Although executive dysfunction is consistently found in humans with FXS, evidence of executive dysfunction in Fmr1 KO mice, a mouse model of FXS, has been inconsistent. One possible explanation for this is that executive dysfunction in Fmr1 KO mice, similar to humans with FXS, is only evident when cognitive demands are high. Using touchscreen operant conditioning chambers, male Fmr1 KO mice and their male wildtype littermates were tested on the acquisition of a pairwise visual discrimination followed by four serial reversals of the response rule. We assessed reversal learning performance under two different conditions. In the first, the correct stimulus was salient and the incorrect stimulus was non-salient. In the second and more challenging condition, the incorrect stimulus was salient and the correct stimulus was non-salient; this increased cognitive load by introducing conflict between sensory-driven (i.e., bottom-up) and task-dependent (i.e., top-down) signals. Fmr1 KOs displayed two distinct impairments relative to wildtype littermates. First, Fmr1 KOs committed significantly more learning-type errors during the second reversal stage, but only under high cognitive load. Second, during the first reversal stage, Fmr1 KOs committed significantly more attempts to collect a reward during the timeout following an incorrect response. These findings indicate that Fmr1 KO mice display executive dysfunction that, in some cases, is only evident under high cognitive load. PMID:23747611

  18. Barack Obama Blindness (BOB): Absence of Visual Awareness to a Single Object.

    PubMed

    Persuh, Marjan; Melara, Robert D

    2016-01-01

    In two experiments, we evaluated whether a perceiver's prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision.

  19. Barack Obama Blindness (BOB): Absence of Visual Awareness to a Single Object

    PubMed Central

    Persuh, Marjan; Melara, Robert D.

    2016-01-01

    In two experiments, we evaluated whether a perceiver’s prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision. PMID:27047362

  20. Attention distributed across sensory modalities enhances perceptual performance

    PubMed Central

    Mishra, Jyoti; Gazzaley, Adam

    2012-01-01

    This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811

  1. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  2. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  3. The coupling of cerebral blood flow and oxygen metabolism with brain activation is similar for simple and complex stimuli in human primary visual cortex.

    PubMed

    Griffeth, Valerie E M; Simon, Aaron B; Buxton, Richard B

    2015-01-01

    Quantitative functional MRI (fMRI) experiments to measure blood flow and oxygen metabolism coupling in the brain typically rely on simple repetitive stimuli. Here we compared such stimuli with a more naturalistic stimulus. Previous work on the primary visual cortex showed that direct attentional modulation evokes a blood flow (CBF) response with a relatively large oxygen metabolism (CMRO2) response in comparison to an unattended stimulus, which evokes a much smaller metabolic response relative to the flow response. We hypothesized that a similar effect would be associated with a more engaging stimulus, and tested this by measuring the primary human visual cortex response to two contrast levels of a radial flickering checkerboard in comparison to the response to free viewing of brief movie clips. We did not find a significant difference in the blood flow-metabolism coupling (n=%ΔCBF/%ΔCMRO2) between the movie stimulus and the flickering checkerboards employing two different analysis methods: a standard analysis using the Davis model and a new analysis using a heuristic model dependent only on measured quantities. This finding suggests that in the primary visual cortex a naturalistic stimulus (in comparison to a simple repetitive stimulus) is either not sufficient to provoke a change in flow-metabolism coupling by attentional modulation as hypothesized, that the experimental design disrupted the cognitive processes underlying the response to a more natural stimulus, or that the technique used is not sensitive enough to detect a small difference. Copyright © 2014 Elsevier Inc. All rights reserved.
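
    For orientation, the Davis model mentioned above expresses the BOLD signal change in terms of the fractional changes in CBF and CMRO2; the sketch below (typical literature parameter values, not the study's fitted values) implements that relationship and the coupling ratio n = %ΔCBF/%ΔCMRO2 defined in the abstract.

        def davis_bold_change(f, r, M=0.08, alpha=0.38, beta=1.5):
            # Fractional BOLD change for f = CBF/CBF_0 and r = CMRO2/CMRO2_0 (Davis model).
            return M * (1.0 - f ** (alpha - beta) * r ** beta)

        def coupling_ratio(f, r):
            # n = %dCBF / %dCMRO2, the flow-metabolism coupling index used in the abstract.
            return (f - 1.0) / (r - 1.0)

        f, r = 1.40, 1.15  # e.g. a 40% CBF increase paired with a 15% CMRO2 increase
        print(f"BOLD change: {100 * davis_bold_change(f, r):.2f}%, coupling n = {coupling_ratio(f, r):.2f}")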

  4. The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio-visual motion in depth.

    PubMed

    Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F

    2015-11-01

    Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. [Analysis of electrically evoked response (EER) in relation to the central visual pathway of the cat (1). Wave shape of the cat EER].

    PubMed

    Fukatsu, Y; Miyake, Y; Sugita, S; Saito, A; Watanabe, S

    1990-11-01

    To analyze the electrically evoked response (EER) in relation to the central visual pathway, the authors studied the properties of wave patterns and peak latencies of the EER in 35 anesthetized adult cats. The cat EER showed two early positive waves on outward current (cornea cathode) stimulation and three or four early positive waves on inward current (cornea anode) stimulation. These waves were recorded within 50 ms after stimulus onset, and were the most consistent components of the cat EER. The stimulus threshold for the EER showed less individual variation than the amplitude did. The difference in stimulus threshold between outward and inward current stimulation was also essentially negligible. The stimulus threshold was higher for early components than for late components. The peak latency of the EER became shorter and the amplitude higher as the stimulus intensity was increased. However, this tendency was reversed and some wavelets started to appear when the stimulus was extremely strong. Recording with short stimulus durations and bipolar electrodes enabled us to reduce the electrical artifact in the EER. These results obtained from cats were compared with those from humans and rabbits.

  6. Effect of ethanol on the visual-evoked potential in rat: dynamics of ON and OFF responses.

    PubMed

    Dulinskas, Redas; Buisas, Rokas; Vengeliene, Valentina; Ruksenas, Osvaldas

    2017-01-01

    The effect of acute ethanol administration on the flash visual-evoked potential (VEP) has been investigated in numerous studies. However, it is still unclear which brain structures are responsible for the differences observed in stimulus onset (ON) and offset (OFF) responses and how these responses are modulated by ethanol. The aim of our study was to investigate the pattern of ON and OFF responses in the visual system, measured as the amplitude and latency of each VEP component following acute administration of ethanol. VEPs were recorded at the onset and offset of a 500 ms visual stimulus in anesthetized male Wistar rats. The effect of alcohol on VEP latency and amplitude was measured for one hour after injection of a 2 g/kg ethanol dose. Three VEP components (N63, P89 and N143) were analyzed. Our results showed that, except for component N143, ethanol increased the latency of both ON and OFF responses in a similar manner. The latency of N143 during the OFF response was not affected by ethanol, but its amplitude was reduced. Our study demonstrated that the activation of the visual system during the ON response to a 500 ms visual stimulus is qualitatively different from that during the OFF response. Ethanol interfered with processing of the stimulus duration at the level of the visual cortex and reduced the activation of cortical regions.

  7. A role of nucleus accumbens dopamine receptors in the nucleus accumbens core, but not shell, in fear prediction error.

    PubMed

    Li, Susan S Y; McNally, Gavan P

    2015-08-01

    Two experiments used an associative blocking design to study the role of dopamine receptors in the nucleus accumbens shell (AcbSh) and core (AcbC) in fear prediction error. Rats in the experimental groups were trained to a visual fear-conditioned stimulus (conditional stimulus [CS]) A in Stage I, whereas rats in the control groups were not. In Stage II, all rats received compound fear conditioning of the visual CSA and an auditory CSB. Rats were later tested for their fear responses to CSB. All rats received microinjections of saline or the D1-D2 receptor antagonist cis-(z)-flupenthixol prior to Stage II. These microinjections targeted either the AcbSh (Experiment 1) or the AcbC (Experiment 2). In each experiment, Stage I fear conditioning of CSA blocked fear learning to CSB. Microinjection of cis-(z)-flupenthixol (10 or 20 μg) into the AcbSh (Experiment 1) had no effect on fear learning or associative blocking. In contrast, microinjection of cis-(z)-flupenthixol (10 or 20 μg) into the AcbC (Experiment 2) attenuated blocking and so enabled fear learning to CSB. These results identify the AcbC as the critical locus for dopamine receptor contributions to fear prediction error and the associative blocking of fear learning. (c) 2015 APA, all rights reserved.

  8. Visual spatial attention enhances the amplitude of positive and negative fMRI responses to visual stimulation in an eccentricity-dependent manner.

    PubMed

    Bressler, David W; Fortenbaugh, Francesca C; Robertson, Lynn C; Silver, Michael A

    2013-06-07

    Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Perceived duration decreases with increasing eccentricity.

    PubMed

    Kliegl, Katrin M; Huckauf, Anke

    2014-07-01

    Previous studies examining the influence of stimulus location on temporal perception yield inhomogeneous and contradicting results. Therefore, the aim of the present study is to soundly examine the effect of stimulus eccentricity. In a series of five experiments, subjects compared the duration of foveal disks to disks presented at different retinal eccentricities on the horizontal meridian. The results show that the perceived duration of a visual stimulus declines with increasing eccentricity. The effect was replicated with various stimulus orders (Experiments 1-3), as well as with cortically magnified stimuli (Experiments 4-5), ruling out that the effect was merely caused by different cortical representation sizes. The apparent decreasing duration of stimuli with increasing eccentricity is discussed with respect to current models of time perception, the possible influence of visual attention and respective underlying physiological characteristics of the visual system. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    PubMed

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alteration with increasing SOA was similar to that for younger adults; however, older adults showed significantly delayed onset of the time-window-of-integration and delayed peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely as SOA increased, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that responses in older adults were slowed and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  11. Neural Dynamics of Audiovisual Synchrony and Asynchrony Perception in 6-Month-Old Infants

    PubMed Central

    Kopp, Franziska; Dietrich, Claudia

    2013-01-01

    Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071

  12. Encoding of Target Detection during Visual Search by Single Neurons in the Human Brain.

    PubMed

    Wang, Shuo; Mamelak, Adam N; Adolphs, Ralph; Rutishauser, Ueli

    2018-06-08

    Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While classically associated with long-term memory, recent lesion and neuroimaging studies show that the MTL also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ∼200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also task-specified stimulus relevance due to top-down goal relevance. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury

    NASA Astrophysics Data System (ADS)

    Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.

    2008-02-01

    Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases such as AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects were evaluated on spatial contrast sensitivity and on a stimulus detection and aiming task. The modeled occluding scotomas, of different sizes, were superimposed on the stimuli presented on the computer display, and were stabilized on the retina using a mono Purkinje eye tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple-conjunction visual search display varying in size (in visual angle), contrast, and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as reaction time (RT) measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect was dependent on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection was slowed more in the complex-background search condition than in the simple-background condition. Detection speed depended on scotoma size and stimulus size. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple-background search condition than in the complex-background condition. Both aiming RT and accuracy (targeting precision) were impaired as a function of scotoma size and stimulus size. The data can be explained by models distinguishing between saliency-based, parallel and serial search processes guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.

  14. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Ventral and Dorsal Pathways Relate Differently to Visual Awareness of Body Postures under Continuous Flash Suppression

    PubMed Central

    Goebel, Rainer

    2018-01-01

    Visual perception includes ventral and dorsal stream processes. However, it is still unclear whether the former is predominantly related to conscious and the latter to nonconscious visual perception as argued in the literature. In this study upright and inverted body postures were rendered either visible or invisible under continuous flash suppression (CFS), while brain activity of human participants was measured with functional MRI (fMRI). Activity in the ventral body-sensitive areas was higher during visible conditions. In comparison, activity in the posterior part of the bilateral intraparietal sulcus (IPS) showed a significant interaction of stimulus orientation and visibility. Our results provide evidence that dorsal stream areas are less associated with visual awareness. PMID:29445766

  16. Electrophysiological evidence for phenomenal consciousness.

    PubMed

    Revonsuo, Antti; Koivisto, Mika

    2010-09-01

    Recent evidence from event-related brain potentials (ERPs) lends support to two central theses in Lamme's theory. The earliest ERP correlate of visual consciousness appears over posterior visual cortex around 100-200 ms after stimulus onset. Its scalp topography and time window are consistent with recurrent processing in the visual cortex. This electrophysiological correlate of visual consciousness is mostly independent of later ERPs reflecting selective attention and working memory functions. Overall, the ERP evidence supports the view that phenomenal consciousness of a visual stimulus emerges earlier than access consciousness, and that attention and awareness are served by distinct neural processes.

  17. Auditory proactive interference in monkeys: The role of stimulus set size and intertrial interval

    PubMed Central

    Bigelow, James; Poremba, Amy

    2013-01-01

    We conducted two experiments to examine the influence of stimulus set size (the number of stimuli that are used throughout the session) and intertrial interval (ITI, the elapsed time between trials) in auditory short-term memory in monkeys. We used an auditory delayed matching-to-sample task wherein the animals had to indicate whether two sounds separated by a 5-s retention interval were the same (match trials) or different (non-match trials). In Experiment 1, we randomly assigned a stimulus set size of 2, 4, 8, 16, 32, 64, or 192 (trial unique) for each session of 128 trials. Consistent with previous visual studies, overall accuracy was consistently lower when smaller stimulus set sizes were used. Further analyses revealed that these effects were primarily caused by an increase in incorrect “same” responses on non-match trials. In Experiment 2, we held the stimulus set size constant at four for each session and alternately set the ITI at 5, 10, or 20 s. Overall accuracy improved by increasing the ITI from 5 to 10 s, but the 10 and 20 s conditions were the same. As in Experiment 1, the overall decrease in accuracy during the 5-s condition was caused by a greater number of false “match” responses on non-match trials. Taken together, Experiments 1 and 2 show that auditory short-term memory in monkeys is highly susceptible to PI caused by stimulus repetition. Additional analyses from Experiment 1 suggest that monkeys may make same/different judgments based on a familiarity criterion that is adjusted by error-related feedback. PMID:23526232

  18. Social cichlid fish change behaviour in response to a visual predator stimulus, but not the odour of damaged conspecifics.

    PubMed

    O'Connor, Constance M; Reddon, Adam R; Odetunde, Aderinsola; Jindal, Shagun; Balshine, Sigal

    2015-12-01

    Predation is one of the primary drivers of fitness for prey species. Therefore, there should be strong selection for accurate assessment of predation risk, and whenever possible, individuals should use all available information to fine-tune their response to the current threat of predation. Here, we used a controlled laboratory experiment to assess the responses of individual Neolamprologus pulcher, a social cichlid fish, to a live predator stimulus, to the odour of damaged conspecifics, or to both indicators of predation risk combined. We found that fish in the presence of the visual predator stimulus showed typical antipredator behaviour. Namely, these fish decreased activity and exploration, spent more time seeking shelter, and more time near conspecifics. Surprisingly, there was no effect of the chemical cue alone, and fish showed a reduced response to the combination of the visual predator stimulus and the odour of damaged conspecifics relative to the visual predator stimulus alone. These results demonstrate that N. pulcher adjust their anti-predator behaviour to the information available about current predation risk, and we suggest a possible role for the use of social information in the assessment of predation risk in a cooperatively breeding fish. Copyright © 2015. Published by Elsevier B.V.

  19. The path to memory is guided by strategy: distinct networks are engaged in associative encoding under visual and verbal strategy and influence memory performance in healthy and impaired individuals

    PubMed Central

    Hales, J. B.; Brewer, J. B.

    2018-01-01

    Given the diversity of stimuli encountered in daily life, a variety of strategies must be used for learning new information. Relating and encoding visual and verbal stimuli into memory has been probed using various tasks and stimulus-types. Engagement of specific subsequent memory and cortical processing regions depends on the stimulus modality of studied material; however, it remains unclear whether different encoding strategies similarly influence regional activity when stimulus-type is held constant. In this study, subjects encoded object pairs using a visual or verbal associative strategy during functional magnetic resonance imaging (fMRI), and subsequent memory was assessed for pairs encoded under each strategy. Each strategy elicited distinct regional processing and subsequent memory effects: middle/superior frontal, lateral parietal, and lateral occipital for visually-associated pairs, and inferior frontal, medial frontal, and medial occipital for verbally-associated pairs. This regional selectivity mimics the effects of stimulus modality, suggesting that cortical involvement in associative encoding is driven by strategy, and not simply by stimulus-type. The clinical relevance of these findings, probed in two patients with recent aphasic strokes, suggests that training with strategies utilizing unaffected cortical regions might improve memory ability in patients with brain damage. PMID:22390467

  20. Does dorsolateral prefrontal cortex (DLPFC) activation return to baseline when sexual stimuli cease? The role of DLPFC in visual sexual stimulation.

    PubMed

    Leon-Carrion, Jose; Martín-Rodríguez, Juan Francisco; Damas-López, Jesús; Pourrezai, Kambiz; Izzetoglu, Kurtulus; Barroso Y Martin, Juan Manuel; Dominguez-Morales, M Rosario

    2007-04-06

    A fundamental question in human sexuality regards the neural substrate underlying sexually-arousing representations. Lesion and neuroimaging studies suggest that dorsolateral pre-frontal cortex (DLPFC) plays an important role in regulating the processing of visual sexual stimulation. The aim of this Functional Near-Infrared Spectroscopy (fNIRS) study was to explore DLPFC structures involved in the processing of erotic and non-sexual films. fNIRS was used to image the evoked-cerebral blood oxygenation (CBO) response in 15 male and 15 female subjects. Our hypothesis is that a sexual stimulus would produce DLPFC activation during the period of direct stimulus perception ("on" period), and that this activation would continue after stimulus cessation ("off" period). A new paradigm was used to measure the relative oxygenated hemoglobin (oxyHb) concentrations in DLPFC while subjects viewed the two selected stimuli (Roman orgy and a non-sexual film clip), and also immediately following stimulus cessation. Viewing of the non-sexual stimulus produced no overshoot in DLPFC, whereas exposure to the erotic stimulus produced rapidly ascendant overshoot, which became even more pronounced following stimulus cessation. We also report on gender differences in the timing and intensity of DLPFC activation in response to a sexually explicit visual stimulus. We found evidence indicating that men experience greater and more rapid sexual arousal when exposed to erotic stimuli than do women. Our results point out that self-regulation of DLPFC activation is modulated by subjective arousal and that cognitive appraisal of the sexual stimulus (valence) plays a secondary role in this regulation.

  1. Parallel perceptual enhancement and hierarchic relevance evaluation in an audio-visual conjunction task.

    PubMed

    Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E

    2008-10-21

    Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex; when the visual but not the auditory stimulus was a target there was an SN over visual cortex; and when both auditory and visual stimuli were targets (i.e., the conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically, with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.

  2. Is Conscious Stimulus Identification Dependent on Knowledge of the Perceptual Modality? Testing the “Source Misidentification Hypothesis”

    PubMed Central

    Overgaard, Morten; Lindeløv, Jonas; Svejstrup, Stinna; Døssing, Marianne; Hvid, Tanja; Kauffmann, Oliver; Mouridsen, Kim

    2013-01-01

    This paper reports an experiment intended to test a particular hypothesis derived from blindsight research, which we name the “source misidentification hypothesis.” According to this hypothesis, a subject may be correct about a stimulus without being correct about how she had access to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness Scale (PAS). We suggest that knowledge about perceptual modality may be a necessary precondition in order to issue correct reports of which stimulus was presented. Furthermore, we find that PAS ratings correlate with correctness, and that subjects are at chance level when reporting no conscious experience of the stimulus. To demonstrate that particular levels of reporting accuracy are obtained, we employ a statistical strategy, which operationally tests the hypothesis of non-equality, such that the usual rejection of the null-hypothesis admits the conclusion of equivalence. PMID:23508677

  3. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
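
    The visual tau variable described above has a simple geometric form: if θ(t) is the optical angle subtended by an approaching object, then τ = θ / (dθ/dt), which approximates the remaining time to contact when closing speed is constant. The short sketch below illustrates that relationship under those assumptions; the object size, speed, and function names are illustrative and are not taken from the study.

        import numpy as np

        def optical_angle(size_m, distance_m):
            """Visual angle (radians) subtended by an object of a given physical size."""
            return 2.0 * np.arctan(size_m / (2.0 * distance_m))

        def tau_estimate(theta, dt):
            """tau = theta / (d theta / dt), computed from a sampled angle trace."""
            return theta / np.gradient(theta, dt)

        # Illustrative approach: a 1.8-m-wide vehicle closing at 10 m/s from 50 m away.
        dt = 0.01                                   # sampling step (s)
        t = np.arange(0.0, 4.0, dt)
        distance = 50.0 - 10.0 * t                  # constant-speed approach
        theta = optical_angle(1.8, distance)
        tau = tau_estimate(theta, dt)

        true_ttc = distance / 10.0
        print(round(tau[100], 2), round(true_ttc[100], 2))   # tau closely tracks the true time to contact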

  4. Massively parallel neural circuits for stereoscopic color vision: encoding, decoding and identification.

    PubMed

    Lazar, Aurel A; Slutskiy, Yevgeniy B; Zhou, Yiyin

    2015-03-01

    Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Top-Down Beta Enhances Bottom-Up Gamma

    PubMed Central

    Thompson, William H.

    2017-01-01

    Several recent studies have demonstrated that the bottom-up signaling of a visual stimulus is subserved by interareal gamma-band synchronization, whereas top-down influences are mediated by alpha-beta band synchronization. These processes may implement top-down control of stimulus processing if top-down and bottom-up mediating rhythms are coupled via cross-frequency interaction. To test this possibility, we investigated Granger-causal influences among awake macaque primary visual area V1, higher visual area V4, and parietal control area 7a during attentional task performance. Top-down 7a-to-V1 beta-band influences enhanced visually driven V1-to-V4 gamma-band influences. This enhancement was spatially specific and largest when beta-band activity preceded gamma-band activity by ∼0.1 s, suggesting a causal effect of top-down processes on bottom-up processes. We propose that this cross-frequency interaction mechanistically subserves the attentional control of stimulus selection. SIGNIFICANCE STATEMENT Contemporary research indicates that the alpha-beta frequency band underlies top-down control, whereas the gamma-band mediates bottom-up stimulus processing. This arrangement inspires an attractive hypothesis, which posits that top-down beta-band influences directly modulate bottom-up gamma band influences via cross-frequency interaction. We evaluate this hypothesis determining that beta-band top-down influences from parietal area 7a to visual area V1 are correlated with bottom-up gamma frequency influences from V1 to area V4, in a spatially specific manner, and that this correlation is maximal when top-down activity precedes bottom-up activity. These results show that for top-down processes such as spatial attention, elevated top-down beta-band influences directly enhance feedforward stimulus-induced gamma-band processing, leading to enhancement of the selected stimulus. PMID:28592697

  6. Physical Features of Visual Images Affect Macaque Monkey’s Preference for These Images

    PubMed Central

    Funahashi, Shintaro

    2016-01-01

    Animals exhibit different degrees of preference toward various visual stimuli. In addition, it has been shown that strongly preferred stimuli can often act as a reward. The aim of the present study was to determine what features determine the strength of the preference for visual stimuli in order to examine neural mechanisms of preference judgment. We used 50 color photographs obtained from the Flickr Material Database (FMD) as original stimuli. Four macaque monkeys performed a simple choice task, in which two stimuli selected randomly from among the 50 stimuli were simultaneously presented on a monitor and monkeys were required to choose either stimulus by eye movements. We considered that the monkeys preferred the chosen stimulus if it continued to look at the stimulus for an additional 6 s and calculated a choice ratio for each stimulus. Each monkey exhibited a different choice ratio for each of the original 50 stimuli. They tended to select clear, colorful and in-focus stimuli. Complexity and clarity were stronger determinants of preference than colorfulness. Images that included greater amounts of spatial frequency components were selected more frequently. These results indicate that particular physical features of the stimulus can affect the strength of a monkey’s preference and that the complexity, clarity and colorfulness of the stimulus are important determinants of this preference. Neurophysiological studies would be needed to examine whether these features of visual stimuli produce more activation in neurons that participate in this preference judgment. PMID:27853424

  7. Components of Attention Modulated by Temporal Expectation

    ERIC Educational Resources Information Center

    Sørensen, Thomas Alrik; Vangkilde, Signe; Bundesen, Claus

    2015-01-01

    By varying the probabilities that a stimulus would appear at particular times after the presentation of a cue and modeling the data by the theory of visual attention (Bundesen, 1990), Vangkilde, Coull, and Bundesen (2012) provided evidence that the speed of encoding a singly presented stimulus letter into visual short-term memory (VSTM) is…

  8. Stimulus information contaminates summation tests of independent neural representations of features

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2002-01-01

    Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with an increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli was chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with an increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not to the combination across independent features.
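
    The stimulus-information confound described above can be made concrete with an ideal-observer calculation. Under the common assumption of independent, equally informative Gaussian-noise channels, an optimal observer that combines k features attains d' equal to sqrt(k) times the single-feature d', so accuracy improves with feature count even without any neural summation of independent codes. The numbers in the sketch below are illustrative and not drawn from the study.

        import numpy as np
        from scipy.stats import norm

        def combined_dprime(dprimes):
            """Optimal combination of independent Gaussian channels: quadratic summation."""
            return np.sqrt(np.sum(np.square(dprimes)))

        def percent_correct_2afc(dprime):
            """Proportion correct in a two-alternative forced-choice task for a given d'."""
            return norm.cdf(dprime / np.sqrt(2.0))

        d_single = 1.0                               # illustrative single-feature sensitivity
        for k in (1, 2, 4):
            d_k = combined_dprime([d_single] * k)
            print(k, round(d_k, 2), round(percent_correct_2afc(d_k), 3))
        # Accuracy rises with k purely because stimulus information rises, which is
        # the confound that an information-equated stimulus set is designed to remove.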

  9. Blood Oxygen Level-Dependent Activation of the Primary Visual Cortex Predicts Size Adaptation Illusion

    PubMed Central

    Pooresmaeili, Arezoo; Arrighi, Roberto; Biagi, Laura; Morrone, Maria Concetta

    2016-01-01

    In natural scenes, objects rarely occur in isolation but appear within a spatiotemporal context. Here, we show that the perceived size of a stimulus is significantly affected by the context of the scene: brief previous presentation of larger or smaller adapting stimuli at the same region of space changes the perceived size of a test stimulus, with larger adapting stimuli causing the test to appear smaller than veridical and vice versa. In a human fMRI study, we measured the blood oxygen level-dependent activation (BOLD) responses of the primary visual cortex (V1) to the contours of large-diameter stimuli and found that activation closely matched the perceptual rather than the retinal stimulus size: the activated area of V1 increased or decreased, depending on the size of the preceding stimulus. A model based on local inhibitory V1 mechanisms simulated the inward or outward shifts of the stimulus contours and hence the perceptual effects. Our findings suggest that area V1 is actively involved in reshaping our perception to match the short-term statistics of the visual scene. PMID:24089504

  10. Response properties of ON-OFF retinal ganglion cells to high-order stimulus statistics.

    PubMed

    Xiao, Lei; Gong, Han-Yan; Gong, Hai-Qing; Liang, Pei-Ji; Zhang, Pu-Ming

    2014-10-17

    Visual stimulus statistics are fundamental parameters that provide a reference for studying visual coding rules. In this study, multi-electrode extracellular recordings were made from bullfrog retinal ganglion cells to explore how neural response properties change with stimulus statistics. Changes in low-order stimulus statistics, such as intensity and contrast, were clearly reflected in the neuronal firing rate. However, changes in high-order statistics, such as skewness and kurtosis, were difficult to distinguish on the basis of firing rate alone. The neuronal temporal filtering and sensitivity characteristics were therefore analyzed further. The peak-to-peak amplitude of the temporal filter and the neuronal sensitivity, obtained from either ON spikes or OFF spikes, exhibited significant changes when the high-order stimulus statistics were changed. These results indicate that in the retina, neuronal response properties may reliably carry complex and subtle visual information. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
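
    The low- and high-order statistics referred to above can be computed in a few lines. The sketch below assumes the stimulus is a sampled luminance sequence; the two generated distributions are invented purely to illustrate matched low-order but different high-order statistics.

        import numpy as np
        from scipy.stats import skew, kurtosis

        def stimulus_statistics(luminance):
            """Mean (intensity), RMS contrast, skewness, and excess kurtosis of a luminance trace."""
            mean = np.mean(luminance)
            contrast = np.std(luminance) / mean       # RMS contrast
            return mean, contrast, skew(luminance), kurtosis(luminance)

        rng = np.random.default_rng(0)
        gaussian_stim = rng.normal(50.0, 10.0, 10000)       # symmetric luminance distribution
        skewed_stim = 30.0 + rng.gamma(2.0, 10.0, 10000)    # same mean, positive skew

        print(stimulus_statistics(gaussian_stim))
        print(stimulus_statistics(skewed_stim))
        # Two stimuli can share the same mean intensity while differing markedly in their
        # higher-order statistics (skewness and kurtosis).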

  11. Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions

    PubMed Central

    Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka

    2016-01-01

    Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution. DOI: http://dx.doi.org/10.7554/eLife.13764.001 PMID:27490481

  12. Effects of Verbal Coding on Learning Disabled and Normal Readers' Visual Short-Term Memory.

    ERIC Educational Resources Information Center

    Swanson, Lee

    The hypothesis that reading difficulty of learning disabled (LD) children is attributable to deficiencies in verbal encoding was investigated with 60 LD and normal children (mean CA=9.1, mean IQ=103.5). Ss were compared on recall of a serial short-term memory task after pre-training of named and unnamed stimulus conditions. Data suggested that…

  13. Age Changes in Attention Control: Assessing the Role of Stimulus Contingencies

    ERIC Educational Resources Information Center

    Brodeur, Darlene A.

    2004-01-01

    Children (ages 5, 7, and 9 years) and young adults completed two visual attention tasks that required them to make a forced choice identification response to a target shape presented in the center of a computer screen. In the first task (high correlation condition) each target was flanked with the same distracters on 80% of the trials (valid…

  14. Orientation-selective Responses in the Mouse Lateral Geniculate Nucleus

    PubMed Central

    Zhao, Xinyu; Chen, Hui; Liu, Xiaorong

    2013-01-01

    The dorsal lateral geniculate nucleus (dLGN) receives visual information from the retina and transmits it to the cortex. In this study, we made extracellular recordings in the dLGN of both anesthetized and awake mice, and found that a surprisingly high proportion of cells were selective for stimulus orientation. The orientation selectivity of dLGN cells was unchanged after silencing the visual cortex pharmacologically, indicating that it is not due to cortical feedback. The orientation tuning of some dLGN cells correlated with their elongated receptive fields, while in others orientation selectivity was observed despite the fact that their receptive fields were circular, suggesting that their retinal input might already be orientation selective. Consistently, we revealed orientation/axis-selective ganglion cells in the mouse retina using multielectrode arrays in an in vitro preparation. Furthermore, the orientation tuning of dLGN cells was largely maintained at different stimulus contrasts, which could be sufficiently explained by a simple linear feedforward model. We also compared the degree of orientation selectivity in different visual structures under the same recording condition. Compared with the dLGN, orientation selectivity is greatly improved in the visual cortex, but is similar in the superior colliculus, another major retinal target. Together, our results demonstrate prominent orientation selectivity in the mouse dLGN, which may potentially contribute to visual processing in the cortex. PMID:23904611

  15. Visual perception of writing and pointing movements.

    PubMed

    Méary, David; Chary, Catherine; Palluel-Germain, Richard; Orliaguet, Jean-Pierre

    2005-01-01

    Studies of movement production have shown that the relationship between the amplitude of a movement and its duration varies according to the type of gesture. In the case of pointing movements the duration increases as a function of distance and width of the target (Fitts' law), whereas for writing movements the duration tends to remain constant across changes in trajectory length (isochrony principle). We compared the visual perception of these two categories of movement. The participants judged the speed of a light spot that portrayed the motion of the end-point of a hand-held pen (pointing or writing). For the two types of gesture we used 8 stimulus sizes (from 2.5 cm to 20 cm) and 32 durations (from 0.2 s to 1.75 s). Viewing each combination of size and duration, participants had to indicate whether the movement speed seemed "fast", "slow", or "correct". Results showed that the participants' perceptual preferences were in agreement with the rules of movement production. The stimulus size was more influential in the pointing condition than in the writing condition. We consider that this finding reflects the influence of common representational resources for perceptual judgment and movement production.
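
    The two production rules mentioned above can be set side by side. Under Fitts' law, movement time grows with the index of difficulty, MT = a + b * log2(2D/W), whereas under the isochrony principle the duration of a writing stroke stays roughly constant while speed scales with trajectory length. The coefficients in the sketch below are illustrative placeholders, not values estimated in the study.

        import numpy as np

        def fitts_movement_time(distance, width, a=0.10, b=0.15):
            """Pointing: MT = a + b * log2(2D / W); a and b are illustrative regression coefficients."""
            return a + b * np.log2(2.0 * distance / width)

        def isochronous_duration(trajectory_length, base_duration=0.6):
            """Writing: duration stays roughly constant regardless of trajectory length."""
            return base_duration

        for length_cm in (2.5, 10.0, 20.0):           # stimulus sizes in the study ranged from 2.5 to 20 cm
            print(length_cm,
                  round(fitts_movement_time(length_cm, width=1.0), 2),
                  isochronous_duration(length_cm))
        # Pointing durations grow with movement amplitude; writing durations do not,
        # mirroring the perceptual preferences reported above.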

  16. A versatile stereoscopic visual display system for vestibular and oculomotor research.

    PubMed

    Kramer, P D; Roberts, D C; Shelhamer, M; Zee, D S

    1998-01-01

    Testing of the vestibular system requires a vestibular stimulus (motion) and/or a visual stimulus. We have developed a versatile, low cost, stereoscopic visual display system, using "virtual reality" (VR) technology. The display system can produce images for each eye that correspond to targets at any virtual distance relative to the subject, and so require the appropriate ocular vergence. We elicited smooth pursuit, "stare" optokinetic nystagmus (OKN) and after-nystagmus (OKAN), vergence for targets at various distances, and short-term adaptation of the vestibulo-ocular reflex (VOR), using both conventional methods and the stereoscopic display. Pursuit, OKN, and OKAN were comparable with both methods. When used with a vestibular stimulus, VR induced appropriate adaptive changes of the phase and gain of the angular VOR. In addition, using the VR display system and a human linear acceleration sled, we adapted the phase of the linear VOR. The VR-based stimulus system not only offers an alternative to more cumbersome means of stimulating the visual system in vestibular experiments, it also can produce visual stimuli that would otherwise be impractical or impossible. Our techniques provide images without the latencies encountered in most VR systems. Its inherent versatility allows it to be useful in several different types of experiments, and because it is software driven it can be quickly adapted to provide a new stimulus. These two factors allow VR to provide considerable savings in time and money, as well as flexibility in developing experimental paradigms.

  17. Attention Determines Contextual Enhancement versus Suppression in Human Primary Visual Cortex.

    PubMed

    Flevaris, Anastasia V; Murray, Scott O

    2015-09-02

    Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus the exact same contextual stimulus arrangement can have completely opposite effects on neural responses-in some cases leading to orientation-tuned suppression and in other cases leading to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depends on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in the primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and underlying neural mechanisms are debated. Here we show that the response in V1 to a stimulus in the same context can either be suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well established properties of visual cortical responses: response normalization and feature-based enhancement. Copyright © 2015 the authors 0270-6474/15/3512273-08$15.00/0.
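
    The model invoked above, combining feature-based attention with response normalization, can be sketched in the spirit of normalization models of attention. This is a schematic one-dimensional illustration rather than the authors' implementation; the tuning width, attentional gain, and semi-saturation constant are assumed values.

        import numpy as np

        def gaussian_tuning(orientations, preferred, width=20.0):
            """Orientation-tuned drive across a population of units (degrees)."""
            return np.exp(-0.5 * ((orientations - preferred) / width) ** 2)

        def normalized_response(orientations, center_ori, flanker_ori, attended_ori,
                                attention_gain=2.0, sigma=0.5):
            """Attention-weighted excitatory drive divided by pooled suppressive drive
            from center plus flankers (divisive normalization)."""
            excitatory = gaussian_tuning(orientations, center_ori)
            attention = 1.0 + (attention_gain - 1.0) * gaussian_tuning(orientations, attended_ori)
            suppressive = (gaussian_tuning(orientations, center_ori)
                           + gaussian_tuning(orientations, flanker_ori))
            return (attention * excitatory) / (sigma + attention * suppressive)

        oris = np.arange(0.0, 180.0)
        iso = normalized_response(oris, 90.0, flanker_ori=90.0, attended_ori=90.0)   # iso-oriented flankers
        orth = normalized_response(oris, 90.0, flanker_ori=0.0, attended_ori=90.0)   # orthogonal flankers
        print(round(iso.max(), 3), round(orth.max(), 3))
        # Whether the iso-oriented surround suppresses or enhances the response depends on how
        # the attentional gain weights the excitatory and suppressive terms.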

  18. Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex

    PubMed Central

    Singer, Wolf; Maass, Wolfgang

    2009-01-01

    It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (≤∼20 ms), and persisted for several 100 ms beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs. PMID:20027205
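
    The simple linear read-out referred to above can be sketched with standard tools. The example below assumes the data are trial-by-trial binned spike counts of an ensemble (trials x neurons) with one stimulus label per trial; the surrogate data, array shapes, and classifier choice are illustrative rather than the study's exact pipeline.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_trials, n_neurons = 200, 100

        # Surrogate data: Poisson spike counts whose mean rates differ slightly between two stimuli.
        labels = rng.integers(0, 2, n_trials)                       # stimulus identity per trial
        rates = 5.0 + 0.8 * labels[:, None] * rng.random(n_neurons)
        counts = rng.poisson(rates)                                 # trials x neurons

        # A linear classifier stands in for the kind of weighted-sum read-out
        # that individual downstream neurons could in principle implement.
        decoder = LogisticRegression(max_iter=1000)
        accuracy = cross_val_score(decoder, counts, labels, cv=5).mean()
        print(round(accuracy, 2))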

  19. A Novel Visual Psychometric Test for Light-Induced Discomfort Using Red and Blue Light Stimuli Under Binocular and Monocular Viewing Conditions.

    PubMed

    Zivcevska, Marija; Lei, Shaobo; Blakeman, Alan; Goltz, Herbert C; Wong, Agnes M F

    2018-03-01

    To develop an objective psychophysical method to quantify light-induced visual discomfort, and to measure the effects of viewing condition and stimulus wavelength. Eleven visually normal subjects participated in the study. Their pupils were dilated (2.5% phenylephrine) before the experiment. A Ganzfeld system presented either red (1.5, 19.1, 38.2, 57.3, 76.3, 152.7, 305.3 cd/m2) or blue (1.4, 7.1, 14.3, 28.6, 42.9, 57.1, 71.4 cd/m2) randomized light intensities (1 s each) in four blocks. Constant white-light stimuli (3 cd/m2, 4 s duration) were interleaved with the chromatic trials. Participants reported each stimulus as either "uncomfortably bright" or "not uncomfortably bright." The experiment was done binocularly and monocularly in separate sessions, and the order of color/viewing condition sequence was randomized across participants. The proportion of "uncomfortable" responses was used to generate individual psychometric functions, from which 50% discomfort thresholds were calculated. Light-induced discomfort was higher under blue compared with red light stimulation, both during binocular (t(10) = 3.58, P < 0.01) and monocular viewing (t(10) = 3.15, P = 0.01). There was also a significant difference in discomfort between viewing conditions, with binocular viewing inducing more discomfort than monocular viewing for blue (P < 0.001), but not for red light stimulation. The light-induced discomfort characteristics reported here are consistent with features of the melanopsin-containing intrinsically photosensitive retinal ganglion cell light irradiance pathway, which may mediate photophobia, a prominent feature in many clinical disorders. This is the first psychometric assessment designed around melanopsin spectral properties that can be customized further to assess photophobia in different clinical populations.
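
    The 50% discomfort thresholds described above come from fitting a psychometric function to the proportion of "uncomfortably bright" responses at each intensity. The sketch below shows one common approach, a logistic function of log luminance fitted by least squares; the luminance levels are the red-stimulus intensities listed in the abstract, but the response proportions are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(log_lum, threshold, slope):
            """Proportion of 'uncomfortably bright' responses as a function of log luminance."""
            return 1.0 / (1.0 + np.exp(-slope * (log_lum - threshold)))

        luminance = np.array([1.5, 19.1, 38.2, 57.3, 76.3, 152.7, 305.3])   # cd/m^2, red condition
        p_uncomfortable = np.array([0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0])     # illustrative proportions

        params, _ = curve_fit(logistic, np.log10(luminance), p_uncomfortable, p0=[1.5, 3.0])
        threshold_cd_m2 = 10 ** params[0]
        print(round(threshold_cd_m2, 1))   # luminance judged uncomfortable on 50% of presentations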

  20. Combined pitch and roll and cybersickness in a virtual environment.

    PubMed

    Bonato, Frederick; Bubka, Andrea; Palmisano, Stephen

    2009-11-01

    Stationary subjects who perceive visually induced illusions of self-motion, or vection, in virtual reality (VR) often experience cybersickness, the symptoms of which are similar to those experienced during motion sickness. An experiment was conducted to test the effects of single- and dual-axis rotation of a virtual environment on cybersickness. It was predicted that VR displays that induced illusory dual-axis (as opposed to single-axis) self-rotations in stationary subjects would generate more sensory conflict and subsequently more cybersickness. Nineteen individuals (5 men, 14 women, mean age = 19.8 yr) viewed the interior of a virtual cube that steadily rotated (at 60°/s) about either the pitch axis or both the pitch and roll axes simultaneously. Subjects completed the Simulator Sickness Questionnaire (SSQ) before a trial and after 5 min of stimulus viewing. Post-treatment total SSQ scores and subscores for nausea, oculomotor symptoms, and disorientation were significantly higher in the dual-axis condition. These results support the hypothesis that a vection-inducing VR stimulus that rotates about two axes generates more cybersickness than a VR stimulus that rotates about only one. In the single-axis condition, sensory conflict and pseudo-Coriolis effects may have led to symptoms. However, in the dual-axis condition, not only was perceived self-motion more complex (two axes compared to one), but the inducing stimulus was also consistent with twice as much self-motion. Hence, the increased likelihood and magnitude of sensory conflict and pseudo-Coriolis effects may have resulted in a higher degree of cybersickness in the dual-axis condition.

  1. Visual stimuli for the P300 brain-computer interface: a comparison of white/gray and green/blue flicker matrices.

    PubMed

    Takano, Kouji; Komatsu, Tomoaki; Hata, Naoki; Nakajima, Yasoichi; Kansaku, Kenji

    2009-08-01

    The white/gray flicker matrix has been used as a visual stimulus for the so-called P300 brain-computer interface (BCI), but the white/gray flash stimuli might induce discomfort. In this study, we investigated the effectiveness of green/blue flicker matrices as visual stimuli. Ten able-bodied, non-trained subjects performed alphabet spelling (Japanese alphabet: Hiragana) using an 8 x 10 matrix with three types of intensification/rest flicker combinations (L, luminance; C, chromatic; LC, luminance and chromatic); both online and offline performances were evaluated. The accuracy rate under the online LC condition was 80.6%. Offline analysis showed that the LC condition was associated with significantly higher accuracy than the L or C condition (Tukey-Kramer, p < 0.05). No significant difference was observed between the L and C conditions. The LC condition, which used the green/blue flicker matrix, was associated with better performance in the P300 BCI. The green/blue chromatic flicker matrix can be an efficient tool for practical BCI applications.

  2. Selective Attention Modulates the Direction of Audio-Visual Temporal Recalibration

    PubMed Central

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time but, stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes. PMID:25004132

  4. Spatio-temporal Dynamics of Audiovisual Speech Processing

    PubMed Central

    Bernstein, Lynne E.; Auer, Edward T.; Wagner, Michael; Ponton, Curtis W.

    2007-01-01

    The cortical processing of auditory-alone, visual-alone, and audiovisual speech information is temporally and spatially distributed, and functional magnetic resonance imaging (fMRI) cannot adequately resolve its temporal dynamics. In order to investigate a hypothesized spatio-temporal organization for audiovisual speech processing circuits, event-related potentials (ERPs) were recorded using electroencephalography (EEG). Stimuli were congruent audiovisual /bα/, incongruent auditory /bα/ synchronized with visual /gα/, auditory-only /bα/, and visual-only /bα/ and /gα/. Current density reconstructions (CDRs) of the ERP data were computed across the latency interval of 50-250 milliseconds. The CDRs demonstrated complex spatio-temporal activation patterns that differed across stimulus conditions. The hypothesized circuit that was investigated here comprised initial integration of audiovisual speech by the middle superior temporal sulcus (STS), followed by recruitment of the intraparietal sulcus (IPS), followed by activation of Broca's area (Miller and d'Esposito, 2005). The importance of spatio-temporally sensitive measures in evaluating processing pathways was demonstrated. Results showed, strikingly, early (< 100 msec) and simultaneous activations in areas of the supramarginal and angular gyrus (SMG/AG), the IPS, the inferior frontal gyrus, and the dorsolateral prefrontal cortex. Also, emergent left hemisphere SMG/AG activation, not predicted based on the unisensory stimulus conditions was observed at approximately 160 to 220 msec. The STS was neither the earliest nor most prominent activation site, although it is frequently considered the sine qua non of audiovisual speech integration. As discussed here, the relatively late activity of the SMG/AG solely under audiovisual conditions is a possible candidate audiovisual speech integration response. PMID:17920933

  5. Effects of aging and involuntary capture of attention on event-related potentials associated with the processing of and the response to a target stimulus

    PubMed Central

    Cid-Fernández, Susana; Lindín, Mónica; Díaz, Fernando

    2014-01-01

    The main aim of the present study was to assess whether aging modulates the effects of involuntary capture of attention by novel stimuli on performance, and on event-related potentials (ERPs) associated with target processing (N2b and P3b) and subsequent response processes (stimulus-locked lateralized readiness potential [sLRP] and response-locked lateralized readiness potential [rLRP]). An auditory-visual distraction-attention task was performed by 77 healthy participants, divided into three age groups (Young: 21–29, Middle-aged: 51–64, Old: 65–84 years old). Participants were asked to attend to visual stimuli and to ignore auditory stimuli. Aging was associated with slowed reaction times, slower target stimulus processing in working memory (WM; longer N2b and P3b latencies), and slower selection and preparation of the motor response (longer sLRP and earlier rLRP onset latencies). In the novel condition relative to the standard condition, we observed, in all three age groups: (1) a distraction effect, reflected in a slowing of reaction times, of stimulus categorization in WM (longer P3b latency), and of motor response selection (longer sLRP onset latency); (2) a facilitation effect on response preparation (later rLRP onset latency); and (3) an increase in arousal (larger amplitudes of all ERPs evaluated, except for N2b amplitude in the Old group). A distraction effect on stimulus evaluation processes (longer N2b latency) was also observed, but only in middle-aged and old participants, indicating that attentional capture slows stimulus evaluation in WM from 50 years of age onwards (with no difference between middle-aged and older adults), but not in young adults. PMID:25294999

  6. Short-term memory for event duration: modality specificity and goal dependency.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2012-11-01

    Time perception is involved in various cognitive functions. This study investigated the characteristics of short-term memory for event duration by examining how the length of the retention period affects inter- and intramodal duration judgment. On each trial, a sample stimulus was followed by a comparison stimulus, after a variable delay period (0.5-5 s). The sample and comparison stimuli were presented in the visual or auditory modality. The participants determined whether the comparison stimulus was longer or shorter than the sample stimulus. The distortion pattern of subjective duration during the delay period depended on the sensory modality of the comparison stimulus but was not affected by that of the sample stimulus. When the comparison stimulus was visually presented, the retained duration of the sample stimulus was shortened as the delay period increased. Contrarily, when the comparison stimulus was presented in the auditory modality, the delay period had little to no effect on the retained duration. Furthermore, whenever the participants did not know the sensory modality of the comparison stimulus beforehand, the effect of the delay period disappeared. These results suggest that the memory process for event duration is specific to sensory modality and that its performance is determined depending on the sensory modality in which the retained duration will be used subsequently.

  7. Toward the influence of temporal attention on the selection of targets in a visual search task: An ERP study.

    PubMed

    Rolke, Bettina; Festl, Freya; Seibold, Verena C

    2016-11-01

    We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.

  8. An investigation of the spatial selectivity of the duration after-effect.

    PubMed

    Maarseveen, Jim; Hogendoorn, Hinze; Verstraten, Frans A J; Paffen, Chris L E

    2017-01-01

    Adaptation to the duration of a visual stimulus causes the perceived duration of a subsequently presented stimulus with a slightly different duration to be skewed away from the adapted duration. This pattern of repulsion following adaptation is similar to that observed for other visual properties, such as orientation, and is considered evidence for the involvement of duration-selective mechanisms in duration encoding. Here, we investigated whether the encoding of duration - by duration-selective mechanisms - occurs early on in the visual processing hierarchy. To this end, we investigated the spatial specificity of the duration after-effect in two experiments. We measured the duration after-effect at adapter-test distances ranging between 0 and 15° of visual angle and for within- and between-hemifield presentations. We replicated the duration after-effect: the test stimulus was perceived to have a longer duration following adaptation to a shorter duration, and a shorter duration following adaptation to a longer duration. Importantly, this duration after-effect occurred at all measured distances, with no evidence for a decrease in the magnitude of the after-effect at larger distances or across hemifields. This shows that adaptation to duration does not result from adaptation occurring early on in the visual processing hierarchy. Instead, it seems likely that duration information is a high-level stimulus property that is encoded later on in the visual processing hierarchy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Inhibition of return in the visual field: the eccentricity effect is independent of cortical magnification.

    PubMed

    Bao, Yan; Lei, Quan; Fang, Yuan; Tong, Yu; Schill, Kerstin; Pöppel, Ernst; Strasburger, Hans

    2013-01-01

    Inhibition of return (IOR) as an indicator of attentional control is characterized by an eccentricity effect, that is, the more peripheral visual field shows a stronger IOR magnitude relative to the perifoveal visual field. However, it could be argued that this eccentricity effect may not be an attention effect, but due to cortical magnification. To test this possibility, we examined this eccentricity effect in two conditions: the same-size condition in which identical stimuli were used at different eccentricities, and the size-scaling condition in which stimuli were scaled according to the cortical magnification factor (M-scaling), thus stimuli being larger at the more peripheral locations. The results showed that the magnitude of IOR was significantly stronger in the peripheral relative to the perifoveal visual field, and this eccentricity effect was independent of the manipulation of stimulus size (same-size or size-scaling). These results suggest a robust eccentricity effect of IOR which cannot be eliminated by M-scaling. Underlying neural mechanisms of the eccentricity effect of IOR are discussed with respect to both cortical and subcortical structures mediating attentional control in the perifoveal and peripheral visual field.
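
    M-scaling, as used above, enlarges peripheral stimuli so that they project onto approximately equal extents of cortex. A widely used inverse-linear approximation of the cortical magnification factor is M(E) = M0 / (1 + E/E2), so equating cortical stimulus size amounts to multiplying stimulus size by (1 + E/E2). The E2 constant and the eccentricities in the sketch below are illustrative values, not the parameters of the study.

        def m_scaled_size(base_size_deg, eccentricity_deg, e2_deg=2.0):
            """Stimulus size needed at a given eccentricity to keep cortical image size
            constant, assuming M(E) = M0 / (1 + E / E2)."""
            return base_size_deg * (1.0 + eccentricity_deg / e2_deg)

        perifoveal = m_scaled_size(1.0, 7.0)     # size equivalent to a 1-deg foveal stimulus at 7 deg
        peripheral = m_scaled_size(1.0, 21.0)    # the same equivalence at 21 deg
        print(round(perifoveal, 2), round(peripheral, 2), round(peripheral / perifoveal, 2))
        # Under these assumptions the peripheral stimulus must be about 2.6 times larger than
        # the perifoveal one to stimulate an equal extent of cortex.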

  10. Simon Effect with and without Awareness of the Accessory Stimulus

    ERIC Educational Resources Information Center

    Treccani, Barbara; Umilta, Carlo; Tagliabue, Mariaelena

    2006-01-01

    The authors investigated whether a Simon effect could be observed in an accessory-stimulus Simon task when participants were unaware of the task-irrelevant accessory cue. In Experiment 1A a central visual target was accompanied by a suprathreshold visual lateral cue. A regular Simon effect (i.e., faster cue-response corresponding reaction times…

  11. Sensitivity and integration in a visual pathway for circadian entrainment in the hamster (Mesocricetus auratus).

    PubMed Central

    Nelson, D E; Takahashi, J S

    1991-01-01

    1. Light-induced phase shifts of the circadian rhythm of wheel-running activity were used to measure the photic sensitivity of a circadian pacemaker and the visual pathway that conveys light information to it in the golden hamster (Mesocricetus auratus). The sensitivity to stimulus irradiance and duration was assessed by measuring the magnitude of phase-shift responses to photic stimuli of different irradiance and duration. The visual sensitivity was also measured at three different phases of the circadian rhythm. 2. The stimulus-response curves measured at different circadian phases suggest that the maximum phase-shift is the only aspect of visual responsivity to change as a function of the circadian day. The half-saturation constants (sigma) for the stimulus-response curves are not significantly different over the three circadian phases tested. The photic sensitivity to irradiance (1/sigma) appears to remain constant over the circadian day. 3. The hamster circadian pacemaker and the photoreceptive system that subserves it are more sensitive to the irradiance of longer-duration stimuli than to irradiance of briefer stimuli. The system is maximally sensitive to the irradiance of stimuli of 300 s and longer in duration. A quantitative model is presented to explain the changes that occur in the stimulus-response curves as a function of photic stimulus duration. 4. The threshold for photic stimulation of the hamster circadian pacemaker is also quite high. The threshold irradiance (the minimum irradiance necessary to induce statistically significant responses) is approximately 10(11) photons cm-2 s-1 for optimal stimulus durations. This threshold is equivalent to a luminance at the cornea of 0.1 cd m-2. 5. We also measured the sensitivity of this visual pathway to the total number of photons in a stimulus. This system is maximally sensitive to photons in stimuli between 30 and 3600 s in duration. The maximum quantum efficiency of photic integration occurs in 300 s stimuli. 6. These results suggest that the visual pathways that convey light information to the mammalian circadian pacemaker possess several unique characteristics. These pathways are relatively insensitive to light irradiance and also integrate light inputs over relatively long durations. This visual system, therefore, possesses an optimal sensitivity of 'tuning' to total photons delivered in stimuli of several minutes in duration. Together these characteristics may make this visual system unresponsive to environmental 'noise' that would interfere with the entrainment of circadian rhythms to light-dark cycles. PMID:1895235
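
    The stimulus-response curves and half-saturation constants (sigma) described above are commonly summarized with a saturating function of irradiance, R(I) = Rmax * I / (I + sigma). The sketch below fits such a function; the irradiance levels and phase shifts are invented for illustration and only follow the units given in the abstract (photons cm-2 s-1).

        import numpy as np
        from scipy.optimize import curve_fit

        def saturating_response(irradiance, r_max, sigma):
            """Phase shift as a saturating function of irradiance; sigma is the half-saturation constant."""
            return r_max * irradiance / (irradiance + sigma)

        irradiance = np.array([1e10, 1e11, 1e12, 1e13, 1e14])     # photons cm^-2 s^-1 (illustrative)
        phase_shift = np.array([5.0, 30.0, 70.0, 95.0, 100.0])    # minutes (illustrative)

        (r_max, sigma), _ = curve_fit(saturating_response, irradiance, phase_shift, p0=[100.0, 1e12])
        print(round(r_max, 1), f"{sigma:.2e}")
        # In the account given above, r_max (the maximum phase shift) changes with circadian phase
        # while sigma, and hence sensitivity to irradiance (1/sigma), stays constant.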

  12. The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal

    NASA Astrophysics Data System (ADS)

    Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin

    2016-05-01

    One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been established between the structure of a visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to 'complex' visual stimuli. We demonstrated that the fractal temporal structure of fixational dynamics shifts towards the fractal dynamics of the visual stimulus (image): images with higher complexity (higher fractality) produce fixational eye movements with lower fractality. Because the brain is the part of the nervous system that governs eye movements, we also analyzed the electroencephalogram (EEG) signal recorded during fixation and found a coupling between the fractality of the image, the EEG, and the fixational eye movements. These relationships can be investigated further and applied to the treatment of different vision disorders.
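
    The fractality of a fixational eye-movement trace or an EEG segment is typically quantified with a fractal-dimension estimator; the abstract does not specify which one, so the sketch below uses the generic Higuchi method as one illustrative possibility.

        import numpy as np

        def higuchi_fd(x, k_max=8):
            """Higuchi fractal dimension of a one-dimensional time series."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            mean_lengths = []
            for k in range(1, k_max + 1):
                lengths_k = []
                for m in range(k):
                    idx = np.arange(m, n, k)
                    curve_length = np.abs(np.diff(x[idx])).sum()
                    # Normalize for the number of samples in this sub-series and the lag k.
                    lengths_k.append(curve_length * (n - 1) / ((len(idx) - 1) * k * k))
                mean_lengths.append(np.mean(lengths_k))
            # The fractal dimension is the slope of log(curve length) against log(1/k).
            slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(mean_lengths), 1)
            return slope

        rng = np.random.default_rng(2)
        white_noise = rng.normal(size=2000)               # rough trace, higher fractal dimension
        random_walk = np.cumsum(rng.normal(size=2000))    # smoother trace, lower fractal dimension
        print(round(higuchi_fd(white_noise), 2), round(higuchi_fd(random_walk), 2))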

  13. Differential effects of visual-spatial attention on response latency and temporal-order judgment.

    PubMed

    Neumann, O; Esselmann, U; Klotz, W

    1993-01-01

    Theorists from both classical structuralism and modern attention research have claimed that attention to a sensory stimulus enhances processing speed. However, they have used different operations to measure this effect, viz., temporal-order judgment (TOJ) and reaction-time (RT) measurement. We report two experiments that compared the effect of a spatial cue on RT and TOJ. Experiment 1 demonstrated that a nonmasked, peripheral cue (the brief brightening of a box) affected both RT and TOJ. However, the former effect was significantly larger than the latter. A masked cue had a smaller, but reliable, effect on TOJ. In Experiment 2, the effects of a masked cue on RT and TOJ were compared under identical stimulus conditions. While the cue had a strong effect on RT, it left TOJ unaffected. These results suggest that a spatial cue may have dissociable effects on response processes and the processes that lead to a conscious percept. Implications for the concept of direct parameter specification and for theories of visual attention are discussed.

  14. Event-related brain potentials and cognitive processes related to perceptual-motor information transmission.

    PubMed

    Kopp, Bruno; Wessel, Karl

    2010-05-01

    In the present study, event-related potentials (ERPs) were recorded to investigate cognitive processes related to the partial transmission of information from stimulus recognition to response preparation. Participants classified two-dimensional visual stimuli with dimensions size and form. One feature combination was designated as the go-target, whereas the other three feature combinations served as no-go distractors. Size discriminability was manipulated across three experimental conditions. N2c and P3a amplitudes were enhanced in response to those distractors that shared the feature from the faster dimension with the target. Moreover, N2c and P3a amplitudes showed a crossover effect: Size distractors evoked more pronounced ERPs under high size discriminability, but form distractors elicited enhanced ERPs under low size discriminability. These results suggest that partial perceptual-motor transmission of information is accompanied by acts of cognitive control and by shifts of attention between the sources of conflicting information. Selection negativity findings imply adaptive allocation of visual feature-based attention across the two stimulus dimensions.

  15. Parietal-Occipital Interactions Underlying Control- and Representation-Related Processes in Working Memory for Nonspatial Visual Features.

    PubMed

    Gosseries, Olivia; Yu, Qing; LaRocque, Joshua J; Starrett, Michael J; Rose, Nathan S; Cowan, Nelson; Postle, Bradley R

    2018-05-02

    Although the manipulation of load is popular in visual working memory research, many studies confound general attentional demands with context binding by drawing memoranda from the same stimulus category. In this fMRI study of human observers (both sexes), we created high- versus low-binding conditions, while holding load constant, by comparing trials requiring memory for the direction of motion of one random dot kinematogram (RDK; 1M trials) versus for three RDKs (3M), or versus one RDK and two color patches (1M2C). Memory precision was highest for 1M trials and comparable for 3M and 1M2C trials. And although delay-period activity in occipital cortex did not differ between the three conditions, returning to baseline for all three, multivariate pattern analysis decoding of a remembered RDK from occipital cortex was also highest for 1M trials and comparable for 3M and 1M2C trials. Delay-period activity in intraparietal sulcus (IPS), although elevated for all three conditions, displayed more sensitivity to demands on context binding than to load per se. The 1M-to-3M increase in IPS signal predicted the 1M-to-3M declines in both behavioral and neural estimates of working memory precision. These effects strengthened along a caudal-to-rostral gradient, from IPS0 to IPS5. Context binding-independent load sensitivity was observed when analyses were lateralized and extended into PFC, with trend-level effects evident in left IPS and strong effects in left lateral PFC. These findings illustrate how visual working memory capacity limitations arise from multiple factors that each recruit dissociable brain systems. SIGNIFICANCE STATEMENT Visual working memory capacity predicts performance on a wide array of cognitive and real-world outcomes. At least two theoretically distinct factors are proposed to influence visual working memory capacity limitations: an amodal attentional resource that must be shared across remembered items; and the demands on context binding. We unconfounded these two factors by varying load with items drawn from the same stimulus category ("high demands on context binding") versus items drawn from different stimulus categories ("low demands on context binding"). The results provide evidence for the dissociability, and the neural bases, of these two theorized factors, and they specify that the functions of intraparietal sulcus may relate more strongly to the control of representations than to the general allocation of attention. Copyright © 2018 the authors 0270-6474/18/384357-10$15.00/0.

  16. Conditioning to colors: a population assay for visual learning in Drosophila.

    PubMed

    van Swinderen, Bruno

    2011-11-01

    Vision is a major sensory modality in Drosophila behavior, with more than one-half of the Drosophila brain devoted to visual processing. The mechanisms of vision in Drosophila can be studied in individuals and in populations of flies by using various paradigms. Although there has never been a widely used population assay for visual learning in Drosophila, some population paradigms have shown significant visual learning. These studies use colors as conditioned stimuli (CS) and shaking as the unconditioned stimulus (US). A simple version of the paradigm, conditioning to colors using a shaking device, is described here. A conditioning chamber, called a crab, is designed to center the flies after shaking by having them tumble down to the lowest point between joined glass tubes forming a V. Thus, vibration should be just strong enough to center most flies. After shaking, flies display a geotactic response and climb up either side of the V, and their choice of which side to climb is influenced by color displays on either side. The proportion of flies on either side determines the flies' natural preference or their learned avoidance of a color associated with shaking.

  17. Conditioned Fear Inhibits c-fos mRNA Expression in the Central Extended Amygdala

    PubMed Central

    Day, Heidi E.W.; Kryskow, Elisa M.; Nyhuis, Tara J.; Herlihy, Lauren; Campeau, Serge

    2008-01-01

    We have shown previously that unconditioned stressors inhibit neurons of the lateral/capsular division of the central nucleus of the amygdala (CEAl/c) and oval division of the bed nucleus of the stria terminalis (BSTov), which form part of the central extended amygdala. The current study investigated whether conditioned fear inhibits c-fos mRNA expression in these regions. Male rats were either trained to associate a visual stimulus (light) with footshock or exposed to the light alone. After training, animals were returned to the apparatus and 2 hours later injected remotely, via a catheter, with amphetamine (2 mg/kg i.p.) to induce c-fos mRNA and allow inhibition of expression to be measured. The rats were then presented with 15 visual stimuli over a 30-minute period. As expected, fear-conditioned animals that were not injected with amphetamine had extremely low levels of c-fos mRNA in the central extended amygdala. In contrast, animals that were trained with the light alone (no fear conditioning) and were injected with amphetamine had high levels of c-fos mRNA in the CEAl/c and BSTov. Animals that underwent fear conditioning and were re-exposed to the conditioned stimulus after amphetamine injection had significantly reduced levels of c-fos mRNA in both the BSTov and CEAl/c, compared to the non-conditioned animals. These data suggest that conditioned fear can inhibit neurons of the central extended amygdala. Because these neurons are GABAergic and project to the medial CEA (an amygdaloid output region), this may be a novel mechanism whereby conditioned fear potentiates amygdaloid output. PMID:18634767

  18. Evaluation of an organic light-emitting diode display for precise visual stimulation.

    PubMed

    Ito, Hiroyuki; Ogawa, Masaki; Sunaga, Shoji

    2013-06-11

    A new type of visual display for high-quality presentation of visual stimuli was assessed. The characteristics of an organic light-emitting diode (OLED) display (Sony PVM-2541, 24.5 in.; Sony Corporation, Tokyo, Japan) were measured in detail from the viewpoint of its applicability to visual psychophysics. We found the new display to be superior to other display types in terms of spatial uniformity, color gamut, and contrast ratio. Luminance transitions were sharper on the OLED display than on a liquid crystal display. Therefore, such OLED displays could replace conventional cathode ray tube displays in vision research for high-quality stimulus presentation. The benefits of using OLED displays in vision research were especially apparent in the fields of low-level vision, where precise control and description of the stimulus are needed, e.g., in mesopic or scotopic vision, color vision, and motion perception.

  19. Emotional facilitation of sensory processing in the visual cortex.

    PubMed

    Schupp, Harald T; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2003-01-01

    A key function of emotion is the preparation for action. However, organization of successful behavioral strategies depends on efficient stimulus encoding. The present study tested the hypothesis that perceptual encoding in the visual cortex is modulated by the emotional significance of visual stimuli. Event-related brain potentials were measured while subjects viewed pleasant, neutral, and unpleasant pictures. Early selective encoding of pleasant and unpleasant images was associated with a posterior negativity, indicating primary sources of activation in the visual cortex. The study also replicated previous findings in that affective cues also elicited enlarged late positive potentials, indexing increased stimulus relevance at higher-order stages of stimulus processing. These results support the hypothesis that sensory encoding of affective stimuli is facilitated implicitly by natural selective attention. Thus, the affect system not only modulates motor output (i.e., favoring approach or avoidance dispositions), but already operates at an early level of sensory encoding.

  20. High-resolution eye tracking using V1 neuron activity

    PubMed Central

    McFarland, James M.; Bondy, Adrian G.; Cumming, Bruce G.; Butts, Daniel A.

    2014-01-01

    Studies of high-acuity visual cortical processing have been limited by the inability to track eye position with sufficient accuracy to precisely reconstruct the visual stimulus on the retina. As a result, studies on primary visual cortex (V1) have been performed almost entirely on neurons outside the high-resolution central portion of the visual field (the fovea). Here we describe a procedure for inferring eye position using multi-electrode array recordings from V1 coupled with nonlinear stimulus processing models. We show that this method can be used to infer eye position with one arc-minute accuracy – significantly better than conventional techniques. This allows for analysis of foveal stimulus processing, and provides a means to correct for eye-movement induced biases present even outside the fovea. This method could thus reveal critical insights into the role of eye movements in cortical coding, as well as their contribution to measures of cortical variability. PMID:25197783
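
    The central idea of the procedure, asking which candidate eye position, fed through each neuron's fitted stimulus-processing model, best explains the recorded spike counts, can be sketched as a maximum-likelihood grid search. Everything below (the Gaussian toy tuning model, Poisson spiking, the one-dimensional grid) is an illustrative assumption rather than the authors' actual estimation method:

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy setup: each neuron's model predicts its firing rate as a function
        # of where the stimulus falls on the retina, i.e. of the (unknown) eye
        # position along one dimension.
        true_eye_pos = 0.7                              # e.g., arc-minutes
        preferred = rng.uniform(-5.0, 5.0, size=40)     # model-derived tuning centers

        def predicted_rate(eye_pos):
            # Gaussian tuning around each neuron's preferred retinal position.
            return 5.0 + 20.0 * np.exp(-0.5 * ((preferred - eye_pos) / 1.5) ** 2)

        # Observed spike counts on one fixation (Poisson around the true rates).
        counts = rng.poisson(predicted_rate(true_eye_pos))

        # Maximum-likelihood grid search over candidate eye positions.
        grid = np.linspace(-5.0, 5.0, 1001)
        loglik = [np.sum(counts * np.log(predicted_rate(g)) - predicted_rate(g))
                  for g in grid]
        print("true:", true_eye_pos, "estimate:", round(grid[int(np.argmax(loglik))], 2))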

  1. Neuronal population coding of perceived and memorized visual features in the lateral prefrontal cortex

    PubMed Central

    Mendoza-Halliday, Diego; Martinez-Trujillo, Julio C.

    2017-01-01

    The primate lateral prefrontal cortex (LPFC) encodes visual stimulus features while they are perceived and while they are maintained in working memory. However, it remains unclear whether perceived and memorized features are encoded by the same or different neurons and population activity patterns. Here we record LPFC neuronal activity while monkeys perceive the motion direction of a stimulus that remains visually available, or memorize the direction if the stimulus disappears. We find neurons with a wide variety of combinations of coding strength for perceived and memorized directions: some neurons encode both to similar degrees while others preferentially or exclusively encode either one. Reading out the combined activity of all neurons, a machine-learning algorithm reliably decodes the motion direction and determines whether it is perceived or memorized. Our results indicate that a functionally diverse population of LPFC neurons provides a substrate for discriminating between perceptual and mnemonic representations of visual features. PMID:28569756
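
    The population read-out described above can be sketched with a standard linear classifier applied to trial-by-trial activity. The synthetic "population activity" and the choice of logistic regression below are illustrative assumptions, not the authors' decoder:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_trials, n_neurons = 400, 60
        direction = rng.integers(0, 8, n_trials)      # 8 motion directions
        condition = rng.integers(0, 2, n_trials)      # 0 = perceived, 1 = memorized

        # Synthetic "population activity": direction is carried by two units with
        # cosine/sine tuning, condition by a third; the rest are noise.
        X = rng.normal(size=(n_trials, n_neurons))
        X[:, 0] += 2.0 * np.cos(2 * np.pi * direction / 8)
        X[:, 1] += 2.0 * np.sin(2 * np.pi * direction / 8)
        X[:, 2] += 2.0 * condition

        clf = LogisticRegression(max_iter=1000)
        print("direction accuracy:", cross_val_score(clf, X, direction, cv=5).mean())
        print("perceived vs. memorized accuracy:",
              cross_val_score(clf, X, condition, cv=5).mean())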

  2. The stimulus-evoked population response in visual cortex of awake monkey is a propagating wave

    PubMed Central

    Muller, Lyle; Reynaud, Alexandre; Chavane, Frédéric; Destexhe, Alain

    2014-01-01

    Propagating waves occur in many excitable media and were recently found in neural systems from retina to neocortex. While propagating waves are clearly present under anaesthesia, whether they also appear during awake and conscious states remains unclear. One possibility is that these waves are systematically missed in trial-averaged data, due to variability. Here we present a method for detecting propagating waves in noisy multichannel recordings. Applying this method to single-trial voltage-sensitive dye imaging data, we show that the stimulus-evoked population response in primary visual cortex of the awake monkey propagates as a travelling wave, with consistent dynamics across trials. A network model suggests that this reliability is the hallmark of the horizontal fibre network of superficial cortical layers. Propagating waves with similar properties occur independently in secondary visual cortex, but maintain precise phase relations with the waves in primary visual cortex. These results show that, in response to a visual stimulus, propagating waves are systematically evoked in several visual areas, generating a consistent spatiotemporal frame for further neuronal interactions. PMID:24770473

  3. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue.

    PubMed

    Booth, Ashley J; Elliott, Mark T

    2015-01-01

    The ease of synchronizing movements to a rhythmic cue is dependent on the modality of the cue presentation: timing accuracy is much higher when synchronizing with discrete auditory rhythms than an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g., a dot following an oscillatory trajectory). Similarly, when synchronizing with an auditory target metronome in the presence of a second visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal only. The present study investigates individuals' ability to synchronize movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli impacted on maintaining synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centered on a large projection screen. The target dot was surrounded by 2, 8, or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100, or 200 ms. We found participants' timing performance was only affected in the phase-lead conditions and when there were large numbers of distractors present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronize movements. Subsequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  4. On the generality of the displaywide contingent orienting hypothesis: can a visual onset capture attention without top-down control settings for displaywide onset?

    PubMed

    Yeh, Su-Ling; Liao, Hsin-I

    2010-10-01

    The contingent orienting hypothesis (Folk, Remington, & Johnston, 1992) states that attentional capture is contingent on top-down control settings induced by task demands. Past studies supporting this hypothesis have identified three kinds of top-down control settings: for target-specific features, for the strategy to search for a singleton, and for visual features in the target display as a whole. Previously, we have found stimulus-driven capture by onset that was not contingent on the first two kinds of settings (Yeh & Liao, 2008). The current study aims to test the third kind: the displaywide contingent orienting hypothesis (Gibson & Kelsey, 1998). Specifically, we ask whether an onset stimulus can still capture attention in the spatial cueing paradigm when attentional control settings for the displaywide onset of the target are excluded by making all letters in the target display emerge from placeholders. Results show that a preceding uninformative onset cue still captured attention to its location in a stimulus-driven fashion, whereas a color cue captured attention only when it was contingent on the setting for displaywide color. These results raise doubts as to the generality of the displaywide contingent orienting hypothesis and help delineate the boundary conditions on this hypothesis. Copyright © 2010 Elsevier B.V. All rights reserved.

  5. Saccadic eye movements do not disrupt the deployment of feature-based attention.

    PubMed

    Kalogeropoulou, Zampeta; Rolfs, Martin

    2017-07-01

    The tight link of saccades to covert spatial attention has been firmly established, yet their relation to other forms of visual selection remains poorly understood. Here we studied the temporal dynamics of feature-based attention (FBA) during fixation and across saccades. Participants reported the orientation (on a continuous scale) of one of two sets of spatially interspersed Gabors (black or white). We tested performance at different intervals between the onset of a colored cue (black or white, indicating which stimulus was the most probable target; red: neutral condition) and the stimulus. FBA built up after cue onset: Benefits (errors for valid vs. neutral cues), costs (invalid vs. neutral), and the overall cueing effect (valid vs. invalid) increased with the cue-stimulus interval. Critically, we also tested visual performance at different intervals after a saccade, when FBA had been fully deployed before saccade initiation. Cueing effects were evident immediately after the saccade and were predicted most accurately and most precisely by fully deployed FBA, indicating that FBA was continuous throughout saccades. Finally, a decomposition of orientation reports into target reports and random guesses confirmed continuity of report precision and guess rates across the saccade. We discuss the role of FBA in perceptual continuity across saccades.

  6. A 2D virtual reality system for visual goal-driven navigation in zebrafish larvae

    PubMed Central

    Jouary, Adrien; Haudrechy, Mathieu; Candelier, Raphaël; Sumbre, German

    2016-01-01

    Animals continuously rely on sensory feedback to adjust motor commands. In order to study the role of visual feedback in goal-driven navigation, we developed a 2D visual virtual reality system for zebrafish larvae. The visual feedback can be set to be similar to what the animal experiences in natural conditions. Alternatively, modification of the visual feedback can be used to study how the brain adapts to perturbations. For this purpose, we first generated a library of free-swimming behaviors from which we learned the relationship between the trajectory of the larva and the shape of its tail. Then, we used this technique to infer the intended displacements of head-fixed larvae, and updated the visual environment accordingly. Under these conditions, larvae were capable of aligning and swimming in the direction of a whole-field moving stimulus and produced the fine changes in orientation and position required to capture virtual prey. We demonstrate the sensitivity of larvae to visual feedback by updating the visual world in real-time or only at the end of the discrete swimming episodes. This visual feedback perturbation caused impaired performance of prey-capture behavior, suggesting that larvae rely on continuous visual feedback during swimming. PMID:27659496

  7. Retinotopic patterns of background connectivity between V1 and fronto-parietal cortex are modulated by task demands

    PubMed Central

    Griffis, Joseph C.; Elkhetali, Abdurahman S.; Burge, Wesley K.; Chen, Richard H.; Visscher, Kristina M.

    2015-01-01

    Attention facilitates the processing of task-relevant visual information and suppresses interference from task-irrelevant information. Modulations of neural activity in visual cortex depend on attention, and likely result from signals originating in fronto-parietal and cingulo-opercular regions of cortex. Here, we tested the hypothesis that attentional facilitation of visual processing is accomplished in part by changes in how brain networks involved in attentional control interact with sectors of V1 that represent different retinal eccentricities. We measured the strength of background connectivity between fronto-parietal and cingulo-opercular regions and different eccentricity sectors in V1 using functional MRI data that were collected while participants performed tasks involving attention to either a centrally presented visual stimulus or a simultaneously presented auditory stimulus. We found that when the visual stimulus was attended, background connectivity between V1 and the left frontal eye fields (FEF), left intraparietal sulcus (IPS), and right IPS varied strongly across different eccentricity sectors in V1 so that foveal sectors were more strongly connected than peripheral sectors. This retinotopic gradient was weaker when the visual stimulus was ignored, indicating that it was driven by attentional effects. Greater task-driven differences between foveal and peripheral sectors in background connectivity to these regions were associated with better performance on the visual task and faster response times on correct trials. These findings are consistent with the notion that attention drives the configuration of task-specific functional pathways that enable the prioritized processing of task-relevant visual information, and show that the prioritization of visual information by attentional processes may be encoded in the retinotopic gradient of connectivity between V1 and fronto-parietal regions. PMID:26106320

  8. Visual motion perception predicts driving hazard perception ability.

    PubMed

    Lacherez, Philippe; Au, Sandra; Wood, Joanne M

    2014-02-01

    The aim was to examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests, including visual acuity, contrast sensitivity, and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving and has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.

  9. Can responses to basic non-numerical visual features explain neural numerosity responses?

    PubMed

    Harvey, Ben M; Dumoulin, Serge O

    2017-04-01

    Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs. Copyright © 2017 Elsevier Inc. All rights reserved.
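
    The model-comparison logic, asking whether a numerosity regressor or a co-varying non-numerical regressor better predicts the measured response, can be sketched with simple cross-validated regression. The synthetic stimulus features and "response" below are illustrative assumptions and stand in for the pRF-based model fits used in the study:

        import numpy as np

        rng = np.random.default_rng(2)

        # Illustrative stimulus sequence: numerosity plus non-numerical features
        # that partially co-vary with it (e.g., total dot area, density).
        numerosity = rng.integers(1, 8, 200).astype(float)
        area = numerosity * rng.uniform(0.8, 1.2, 200)
        density = numerosity / rng.uniform(2.0, 4.0, 200)

        # Synthetic "response" actually driven by (log) numerosity.
        response = np.log(numerosity) + 0.3 * rng.normal(size=200)

        def cv_correlation(feature, y, n_folds=5):
            # Predict y from one regressor with leave-one-fold-out linear
            # regression; return the correlation with held-out data.
            idx = np.arange(len(y))
            preds = np.empty_like(y)
            for fold in range(n_folds):
                test = idx % n_folds == fold
                slope, intercept = np.polyfit(feature[~test], y[~test], 1)
                preds[test] = slope * feature[test] + intercept
            return np.corrcoef(preds, y)[0, 1]

        for name, feat in [("numerosity", numerosity), ("area", area), ("density", density)]:
            print(name, round(cv_correlation(feat, response), 2))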

  10. Attention supports verbal short-term memory via competition between dorsal and ventral attention networks.

    PubMed

    Majerus, Steve; Attout, Lucie; D'Argembeau, Arnaud; Degueldre, Christian; Fias, Wim; Maquet, Pierre; Martinez Perez, Trecy; Stawarczyk, David; Salmon, Eric; Van der Linden, Martial; Phillips, Christophe; Balteau, Evelyne

    2012-05-01

    Interactions between the neural correlates of short-term memory (STM) and attention have been actively studied in the visual STM domain but much less in the verbal STM domain. Here we show that the same attention mechanisms that have been shown to shape the neural networks of visual STM also shape those of verbal STM. Based on previous research in visual STM, we contrasted the involvement of a dorsal attention network centered on the intraparietal sulcus supporting task-related attention and a ventral attention network centered on the temporoparietal junction supporting stimulus-related attention. We observed that, with increasing STM load, the dorsal attention network was activated while the ventral attention network was deactivated, especially during early maintenance. Importantly, activation in the ventral attention network increased in response to task-irrelevant stimuli briefly presented during the maintenance phase of the STM trials but only during low-load STM conditions, which were associated with the lowest levels of activity in the dorsal attention network during encoding and early maintenance. By demonstrating a trade-off between task-related and stimulus-related attention networks during verbal STM, this study highlights the dynamics of attentional processes involved in verbal STM.

  11. Neural Signatures of Stimulus Features in Visual Working Memory—A Spatiotemporal Approach

    PubMed Central

    Jackson, Margaret C.; Klein, Christoph; Mohr, Harald; Shapiro, Kimron L.; Linden, David E. J.

    2010-01-01

    We examined the neural signatures of stimulus features in visual working memory (WM) by integrating functional magnetic resonance imaging (fMRI) and event-related potential data recorded during mental manipulation of colors, rotation angles, and color–angle conjunctions. The N200, negative slow wave, and P3b were modulated by the information content of WM, and an fMRI-constrained source model revealed a progression in neural activity from posterior visual areas to higher order areas in the ventral and dorsal processing streams. Color processing was associated with activity in inferior frontal gyrus during encoding and retrieval, whereas angle processing involved right parietal regions during the delay interval. WM for color–angle conjunctions did not involve any additional neural processes. The finding that different patterns of brain activity underlie WM for color and spatial information is consistent with ideas that the ventral/dorsal “what/where” segregation of perceptual processing influences WM organization. The absence of characteristic signatures of conjunction-related brain activity, which was generally intermediate between the 2 single conditions, suggests that conjunction judgments are based on the coordinated activity of these 2 streams. PMID:19429863

  12. Does bimodal stimulus presentation increase ERP components usable in BCIs?

    NASA Astrophysics Data System (ADS)

    Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Blankertz, Benjamin; Werkhoven, Peter J.

    2012-08-01

    Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, BCI performance (classification accuracies and bitrates) and participants’ task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile or visual-tactile stimuli. We observe that attending to visual-tactile (compared to either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared to visual stimuli, which is consistent with the nonsignificant negative trend of participants’ task performance. We discuss these findings in the light of affected spatial attention at high-level compared to low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.
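
    BCI bitrates of the kind mentioned here are commonly computed from classification accuracy and the number of selectable targets using the Wolpaw information-transfer-rate formula (whether this exact formula was used in the study is not stated in the abstract); a small helper with arbitrary example numbers:

        import math

        def wolpaw_itr(n_classes, accuracy, selections_per_min):
            # Information transfer rate (bits/min) from the Wolpaw formula.
            n, p = n_classes, accuracy
            if p <= 1.0 / n:
                return 0.0                     # at or below chance level
            bits = math.log2(n) + p * math.log2(p)
            if p < 1.0:
                bits += (1.0 - p) * math.log2((1.0 - p) / (n - 1))
            return bits * selections_per_min

        # e.g., a 4-target display at 80% accuracy with 10 selections per minute
        print(round(wolpaw_itr(4, 0.80, 10), 1), "bits/min")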

  13. Colour and luminance contrasts predict the human detection of natural stimuli in complex visual environments.

    PubMed

    White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J

    2017-09-01

    Much of what we know about human colour perception has come from psychophysical studies conducted in tightly-controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminance) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectation based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour. © 2017 The Author(s).
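
    The additive detection model described above can be sketched as a regression of detection outcomes on the two contrast predictors; the simulated trials and the use of logistic regression below are illustrative assumptions, not the authors' analysis:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)

        # Illustrative trials: per-stimulus luminance and saturation contrast
        # against the background, plus whether the observer detected the stimulus.
        lum_contrast = rng.uniform(0.0, 1.0, 500)
        sat_contrast = rng.uniform(0.0, 1.0, 500)
        p_detect = 1.0 / (1.0 + np.exp(-(2.0 * lum_contrast + 1.0 * sat_contrast - 1.5)))
        detected = rng.binomial(1, p_detect)

        # Additive model: detection as a joint function of both contrasts.
        X = np.column_stack([lum_contrast, sat_contrast])
        model = LogisticRegression().fit(X, detected)
        print("luminance weight %.2f, saturation weight %.2f" % tuple(model.coef_[0]))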

  14. Influence of the beta-blocking agent atenolol and other medications on visual reaction time.

    PubMed

    Harms, D; Pachale, E; Nechvatal, D

    1981-11-01

    Visual reaction time, as a measure of vigilance and of the psychophysiological condition of subjects, was determined after combined physical and mental stress to examine the influence of beta blockade. Using the technique of electro-oculography, 40 subjects aged 25.7 +/- 6 years, with a mean blood pressure of 126/79 torr, were studied in a double-blind crossover design after application of placebo or 50 mg atenolol for 3 d. Visual reaction time was defined as the time between display of a peripheral light signal and the start of the eye movement that shifts the direction of gaze from the reference point to the stimulus. The results of the study show that, under these experimental conditions, there is a positive effect of beta-blocker medication on vigilance. Findings of other authors are discussed. To verify the sensitivity of the test method, the effects of the well-known drugs fenethylline hydrochloride, diazepam, oxazepam, and alcohol on visual reaction time were investigated in a preliminary study.

  15. Stimulus-dependent modulation of visual neglect in a touch-screen cancellation task.

    PubMed

    Keller, Ingo; Volkening, Katharina; Garbacenkaite, Ruta

    2015-05-01

    Patients with left-sided neglect frequently show omissions and repetitive behavior on cancellation tests. Using a touch-screen-based cancellation task, we tested how visual feedback and distracters influence the number of omissions and perseverations. Eighteen patients with left-sided visual neglect and 18 healthy controls performed four different cancellation tasks on an iPad touch screen: no feedback (the display did not change during the task), visual feedback (touched targets changed their color from black to green), visual feedback with distracters (20 distracters were evenly embedded in the display; detected targets changed their color from black to green), and vanishing targets (touched targets disappeared from the screen). Except for the condition with vanishing targets, neglect patients had significantly more omissions and perseverations than healthy controls in the remaining three subtests. Both conditions providing feedback by changing the target color showed the highest number of omissions. Erasing targets almost completely eliminated omissions. The highest rate of perseverations was observed in the no-feedback condition. The implementation of distracters led to a moderate number of perseverations. Visual feedback without distracters and vanishing targets almost completely abolished perseverations. Visual feedback and the presence of distracters aggravated hemispatial neglect. This finding is compatible with impaired disengagement from the ipsilesional side as an important factor of visual neglect. Improvement of cancellation behavior with vanishing targets could have therapeutic implications. (c) 2015 APA, all rights reserved.

  16. Effects of set-size and lateral masking in visual search.

    PubMed

    Põder, Endel

    2004-01-01

    In the present research, the roles of lateral masking and central processing limitations in visual search were studied. Two search conditions were used: (1) target differed from distractors by presence/absence of a simple feature; (2) target differed by relative position of the same components only. The number of displayed stimuli (set-size) and the distance between neighbouring stimuli were varied as independently as possible in order to measure the effect of both. The effect of distance between stimuli (lateral masking) was found to be similar in both conditions. The effect of set-size was much larger for relative position stimuli. The results support the view that perception of relative position of stimulus components is limited mainly by the capacity of central processing.

  17. Identifying a "default" visual search mode with operant conditioning.

    PubMed

    Kawahara, Jun-ichiro

    2010-09-01

    The presence of a singleton in a task-irrelevant domain can impair visual search. This impairment, known as attentional capture, depends on the attentional set of the participants. When narrowly searching for a specific feature (the feature search mode), only matching stimuli capture attention. When searching broadly (the singleton detection mode), any oddball captures attention. The present study examined which strategy represents the "default" mode using an operant conditioning approach in which participants were trained, in the absence of explicit instructions, to search for a target in an ambiguous context in which either of the two modes was available. The results revealed that participants behaviorally adopted singleton detection as the default mode but reported using the feature search mode. Conscious strategies did not eliminate capture. These results challenge the view that a conscious set always modulates capture, suggesting that the visual system tends to rely on stimulus salience to deploy attention.

  18. Contralateral cortical organisation of information in visual short-term memory: evidence from lateralized brain activity during retrieval.

    PubMed

    Fortier-Gauthier, Ulysse; Moffat, Nicolas; Dell'Acqua, Roberto; McDonald, John J; Jolicœur, Pierre

    2012-07-01

    We studied brain activity during the retention and retrieval phases of two visual short-term memory (VSTM) experiments. Experiment 1 used a balanced memory array, with one color stimulus in each hemifield, followed by a retention interval and a central probe at the fixation point that designated the target stimulus in memory about which an orientation judgment had to be made. Retrieval of information from VSTM was associated with an event-related lateralization (ERL) with a contralateral negativity relative to the visual field from which the probed stimulus was originally encoded, suggesting a lateralized organization of VSTM. The scalp distribution of the retrieval ERL was more anterior than what is usually associated with simple maintenance activity, which is consistent with the involvement of different brain structures for these distinct visual memory mechanisms. Experiment 2 was like Experiment 1, but used an unbalanced memory array consisting of one lateral color stimulus in a hemifield and one color stimulus on the vertical mid-line. This design enabled us to separate lateralized activity related to target retrieval from distractor processing. Target retrieval generated a negative-going ERL at the electrode sites identified in Experiment 1, suggesting that representations were retrieved from anterior cortical structures. Distractor processing elicited a positive-going ERL at posterior electrode sites, which could be indicative of a return to baseline of retention activity for the discarded memory of the now-irrelevant stimulus, or of an active inhibition mechanism mediating distractor suppression. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Retinotopic Maps, Spatial Tuning, and Locations of Human Visual Areas in Surface Coordinates Characterized with Multifocal and Blocked fMRI Designs

    PubMed Central

    Henriksson, Linda; Karvonen, Juha; Salminen-Vaparanta, Niina; Railo, Henry; Vanni, Simo

    2012-01-01

    The localization of visual areas in the human cortex is typically based on mapping the retinotopic organization with functional magnetic resonance imaging (fMRI). The most common approach is to encode the response phase for a slowly moving visual stimulus and to present the result on an individual's reconstructed cortical surface. The main aims of this study were to develop complementary general linear model (GLM)-based retinotopic mapping methods and to characterize the inter-individual variability of the visual area positions on the cortical surface. We studied 15 subjects with two methods: a 24-region multifocal checkerboard stimulus and a blocked presentation of object stimuli at different visual field locations. The retinotopic maps were based on weighted averaging of the GLM parameter estimates for the stimulus regions. In addition to localizing visual areas, both methods could be used to localize multiple retinotopic regions-of-interest. The two methods yielded consistent retinotopic maps in the visual areas V1, V2, V3, hV4, and V3AB. In the higher-level areas IPS0, VO1, LO1, LO2, TO1, and TO2, retinotopy could only be mapped with the blocked stimulus presentation. The gradual widening of spatial tuning and an increase in the responses to stimuli in the ipsilateral visual field along the hierarchy of visual areas likely reflected the increase in the average receptive field size. Finally, after registration to Freesurfer's surface-based atlas of the human cerebral cortex, we calculated the mean and variability of the visual area positions in the spherical surface-based coordinate system and generated probability maps of the visual areas on the average cortical surface. The inter-individual variability in the area locations decreased when the midpoints were calculated along the spherical cortical surface compared with volumetric coordinates. These results can facilitate both analysis of individual functional anatomy and comparisons of visual cortex topology across studies. PMID:22590626
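
    The "weighted averaging of the GLM parameter estimates" step can be sketched directly: each mapping-stimulus region has a known visual-field center, and a voxel's preferred position is taken as the average of those centers weighted by its positive parameter estimates. The region layout and betas below are synthetic placeholders, not the study's actual 24-region stimulus:

        import numpy as np

        rng = np.random.default_rng(4)

        # Placeholder layout: visual-field centers (x, y in degrees) of the
        # mapping-stimulus regions, and one voxel's GLM parameter estimates.
        n_regions = 24
        angles = np.linspace(0.0, 2.0 * np.pi, n_regions, endpoint=False)
        ecc = np.tile([2.0, 5.0, 9.0], n_regions // 3)
        centers = np.column_stack([ecc * np.cos(angles), ecc * np.sin(angles)])
        betas = rng.normal(0.0, 0.2, n_regions)
        betas[3] += 2.0                       # this voxel responds to region 3

        # Preferred position = average of region centers weighted by the
        # positive parameter estimates.
        w = np.clip(betas, 0.0, None)
        preferred_xy = (centers * w[:, None]).sum(axis=0) / w.sum()
        print("estimated preferred visual-field position:", np.round(preferred_xy, 2))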

  20. The effect of visual salience on memory-based choices.

    PubMed

    Pooresmaeili, Arezoo; Bach, Dominik R; Dolan, Raymond J

    2014-02-01

    Deciding whether a stimulus is the "same" as or "different" from a previously presented one involves integrating incoming sensory information, working memory, and perceptual decision making. Visual selective attention plays a crucial role in selecting the relevant information that informs a subsequent course of action. Previous studies have mainly investigated the role of visual attention during the encoding phase of working memory tasks. In this study, we investigate whether manipulating bottom-up attention by changing stimulus visual salience affects later stages of memory-based decisions. In two experiments, we asked subjects to identify whether a stimulus had the same or a different feature to that of a memorized sample. We manipulated the visual salience of the test stimuli by varying a task-irrelevant feature contrast. Subjects chose a visually salient item more often when they looked for matching features and less often when they looked for a nonmatch. This pattern of results indicates that salient items are more likely to be identified as a match. We interpret the findings in terms of capacity limitations at a comparison stage, where a visually salient item is more likely to exhaust resources, leading it to be prematurely parsed as a match.

  1. Feasibility and performance evaluation of generating and recording visual evoked potentials using ambulatory Bluetooth based system.

    PubMed

    Ellingson, Roger M; Oken, Barry

    2010-01-01

    This report contains the design overview and key performance measurements demonstrating the feasibility of generating and recording ambulatory visual stimulus evoked potentials using the previously reported custom Complementary and Alternative Medicine physiologic data collection and monitoring system, CAMAS. The methods used to generate visual stimuli on a PDA device, and the design of an optical coupling device that converts the display output to an electrical waveform recorded by the CAMAS base unit, are presented. The optical sensor signal, synchronized to the visual stimulus, emulates the stimulus-locked EEG signal input to CAMAS that is normally reviewed for the evoked potential response. Most importantly, the PDA also sends a marker message, synchronized to the visual stimulus, over the wireless Bluetooth connection to the CAMAS base unit; this marker is the critical averaging reference needed to obtain VEP results. The results show that the latency of the wireless marker messaging link is consistent enough to support the generation and recording of visual evoked potentials. The averaged sensor waveforms at multiple CPU speeds are presented and demonstrate the suitability of the Bluetooth interface for portable ambulatory visual evoked potential implementation on our CAMAS platform.
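
    The averaging step described above, extracting epochs of the recorded signal time-locked to the Bluetooth marker messages and averaging them, can be sketched as below. The sampling rate, epoch window, and synthetic signal are assumptions for illustration, not parameters of the CAMAS system:

        import numpy as np

        rng = np.random.default_rng(5)
        fs = 250                                          # sampling rate (Hz), assumed
        signal = rng.normal(0.0, 5.0, fs * 120)           # 2 min of noisy recording
        markers = np.arange(2 * fs, len(signal) - fs, 2 * fs)   # one marker every 2 s

        # Embed a small evoked response after every marker so that averaging
        # across epochs reveals it above the noise.
        t_epoch = np.arange(0, int(0.5 * fs)) / fs
        vep = 2.0 * np.exp(-((t_epoch - 0.1) / 0.03) ** 2)      # peak near 100 ms
        for m in markers:
            signal[m:m + len(t_epoch)] += vep

        # Average epochs time-locked to the markers (the core of VEP extraction).
        epochs = np.stack([signal[m:m + len(t_epoch)] for m in markers])
        average_vep = epochs.mean(axis=0)
        print("peak latency ~ %.0f ms" % (1000 * t_epoch[np.argmax(average_vep)]))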

  2. Electrocortical amplification for emotionally arousing natural scenes: The contribution of luminance and chromatic visual channels

    PubMed Central

    Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias M.; Petro, Nathan M.; Bradley, Margaret M.; Keil, Andreas

    2015-01-01

    Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene’s physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. PMID:25640949

  3. Electrocortical amplification for emotionally arousing natural scenes: the contribution of luminance and chromatic visual channels.

    PubMed

    Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias J; Petro, Nathan M; Bradley, Margaret M; Keil, Andreas

    2015-03-01

    Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene's physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Visual short-term memory load suppresses temporo-parietal junction activity and induces inattentional blindness.

    PubMed

    Todd, J Jay; Fougnie, Daryl; Marois, René

    2005-12-01

    The right temporo-parietal junction (TPJ) is critical for stimulus-driven attention and visual awareness. Here we show that as the visual short-term memory (VSTM) load of a task increases, activity in this region is increasingly suppressed. Correspondingly, increasing VSTM load impairs the ability of subjects to consciously detect the presence of a novel, unexpected object in the visual field. These results not only demonstrate that VSTM load suppresses TPJ activity and induces inattentional blindness, but also offer a plausible neural mechanism for this perceptual deficit: suppression of the stimulus-driven attentional network.

  5. Bipedal vs. unipedal: a comparison between one-foot and two-foot driving in a driving simulator.

    PubMed

    Wang, Dong-Yuan Debbie; Richard, F Dan; Cino, Cullen R; Blount, Trevin; Schmuller, Joseph

    2017-04-01

    Is it better to drive with one foot or with two feet? Although two-foot driving has fostered interminable debate in the media, no scientific and systematic research has assessed this issue, and federal and local state governments have provided no answers. The current study compared traditional unipedal (one-foot driving, using the right foot to control the accelerator and the brake pedal) and bipedal (two-foot driving, using the right foot to control the accelerator and the left foot to control the brake pedal) responses to a visual stimulus in a driving simulator study. Each of 30 undergraduate participants drove in a simulated driving scenario. They responded to a STOP sign displayed at the centre of the screen by bringing their vehicle to a complete stop. Brake RT was shorter under the bipedal condition, while throttle RT showed an advantage under the unipedal condition. Stopping time and distance showed a bipedal advantage, however. We discuss limitations of the current study and implications for the driving task. Before drawing any conclusions from the simulator study, further on-road driving tests are necessary to confirm the obtained bipedal advantages. Practitioner Summary: Traditional unipedal (using the right foot to control the accelerator and the brake pedal) and bipedal (using the right foot to control the accelerator and the left foot to control the brake pedal) responses to a visual stimulus in a driving simulator were compared. Our results showed a bipedal advantage. Promotion: Although two-foot driving has fostered interminable debate in the media, no scientific and systematic research has assessed this issue, and federal and local state governments have provided no answers. Traditional unipedal (one-foot driving, using the right foot to control the accelerator and the brake pedal) and bipedal (using the right foot to control the accelerator and the left foot to control the brake pedal) responses to a visual stimulus in a simulated driving study were compared. Throttle reaction time was faster in the unipedal condition, whereas brake reaction time, stopping time, and stopping distance showed a bipedal advantage. We discuss further theoretical issues and implications for the driving task.

  6. The rapid distraction of attentional resources toward the source of incongruent stimulus input during multisensory conflict.

    PubMed

    Donohue, Sarah E; Todisco, Alexandra E; Woldorff, Marty G

    2013-04-01

    Neuroimaging work on multisensory conflict suggests that the relevant modality receives enhanced processing in the face of incongruency. However, the degree of stimulus processing in the irrelevant modality and the temporal cascade of the attentional modulations in either the relevant or irrelevant modalities are unknown. Here, we employed an audiovisual conflict paradigm with a sensory probe in the task-irrelevant modality (vision) to gauge the attentional allocation to that modality. ERPs were recorded as participants attended to and discriminated spoken auditory letters while ignoring simultaneous bilateral visual letter stimuli that were either fully congruent, fully incongruent, or partially incongruent (one side incongruent, one congruent) with the auditory stimulation. Half of the audiovisual letter stimuli were followed 500-700 msec later by a bilateral visual probe stimulus. As expected, ERPs to the audiovisual stimuli showed an incongruency ERP effect (fully incongruent versus fully congruent) of an enhanced, centrally distributed, negative-polarity wave starting ∼250 msec. More critically here, the sensory ERP components to the visual probes were larger when they followed fully incongruent versus fully congruent multisensory stimuli, with these enhancements greatest on fully incongruent trials with the slowest RTs. In addition, on the slowest-response partially incongruent trials, the P2 sensory component to the visual probes was larger contralateral to the preceding incongruent visual stimulus. These data suggest that, in response to conflicting multisensory stimulus input, the initial cognitive effect is a capture of attention by the incongruent irrelevant-modality input, pulling neural processing resources toward that modality, resulting in rapid enhancement, rather than rapid suppression, of that input.

  7. Examining the Reinforcement-Enhancement Effects of Phencyclidine and Its Interactions with Nicotine on Lever-Pressing for a Visual Stimulus

    PubMed Central

    Swalve, Natashia; Barrett, Scott T.; Bevins, Rick A.; Li, Ming

    2015-01-01

    Nicotine is a widely abused drug, yet its primary reinforcing effect does not seem as potent as that of other stimulants such as cocaine. Recent research on the factors contributing to chronic use of nicotine-containing products has implicated the reinforcement-enhancing effects of nicotine. The present study investigates whether phencyclidine (PCP) may also possess a reinforcement-enhancement effect and how this may interact with the reinforcement-enhancement effect of nicotine. PCP was tested for two reasons: 1) it produces discrepant results on overall reward, similar to those seen with nicotine, and 2) it may elucidate how other compounds may interact with the reinforcement-enhancement of nicotine. Adult male Sprague-Dawley rats were trained to lever press for brief visual stimulus presentations under fixed-ratio (FR) schedules of reinforcement and then were tested with nicotine (0.2 or 0.4 mg/kg) and/or PCP (2.0 mg/kg) over six increasing FR values. A selective increase in active lever-pressing for the visual stimulus with drug treatment was considered evidence of a reinforcement-enhancement effect. PCP and nicotine separately increased active lever pressing for a visual stimulus in a dose-dependent manner and across the different FR schedules. The addition of PCP to nicotine did not increase lever-pressing for the visual stimulus, possibly due to a ceiling effect. The effect of PCP may be driven largely by its locomotor stimulant effects, whereas the effect of nicotine was independent of locomotor stimulation. This dissociation emphasizes that distinct pharmacological properties contribute to the reinforcement-enhancement effects of substances. PMID:26026783

  8. Auditory proactive interference in monkeys: the roles of stimulus set size and intertrial interval.

    PubMed

    Bigelow, James; Poremba, Amy

    2013-09-01

    We conducted two experiments to examine the influences of stimulus set size (the number of stimuli that are used throughout the session) and intertrial interval (ITI, the elapsed time between trials) in auditory short-term memory in monkeys. We used an auditory delayed matching-to-sample task wherein the animals had to indicate whether two sounds separated by a 5-s retention interval were the same (match trials) or different (nonmatch trials). In Experiment 1, we randomly assigned stimulus set sizes of 2, 4, 8, 16, 32, 64, or 192 (trial-unique) for each session of 128 trials. Consistent with previous visual studies, overall accuracy was consistently lower when smaller stimulus set sizes were used. Further analyses revealed that these effects were primarily caused by an increase in incorrect "same" responses on nonmatch trials. In Experiment 2, we held the stimulus set size constant at four for each session and alternately set the ITI at 5, 10, or 20 s. Overall accuracy improved when the ITI was increased from 5 to 10 s, but it was the same across the 10- and 20-s conditions. As in Experiment 1, the overall decrease in accuracy during the 5-s condition was caused by a greater number of false "match" responses on nonmatch trials. Taken together, Experiments 1 and 2 showed that auditory short-term memory in monkeys is highly susceptible to proactive interference caused by stimulus repetition. Additional analyses of the data from Experiment 1 suggested that monkeys may make same-different judgments on the basis of a familiarity criterion that is adjusted by error-related feedback.

  9. 10-Month-Olds Visually Anticipate an Outcome Contingent on Their Own Action

    ERIC Educational Resources Information Center

    Kenward, Ben

    2010-01-01

    It is known that young infants can learn to perform an action that elicits a reinforcer, and that they can visually anticipate a predictable stimulus by looking at its location before it begins. Here, in an investigation of the display of these abilities in tandem, I report that 10-month-olds anticipate a reward stimulus that they generate through…

  10. Cross-modal interaction between visual and olfactory learning in Apis cerana.

    PubMed

    Zhang, Li-Zhen; Zhang, Shao-Wu; Wang, Zi-Long; Yan, Wei-Yu; Zeng, Zhi-Jiang

    2014-10-01

    The power of the small honeybee brain carrying out behavioral and cognitive tasks has been shown repeatedly to be highly impressive. The present study investigates, for the first time, the cross-modal interaction between visual and olfactory learning in Apis cerana. To explore the role and molecular mechanisms of cross-modal learning in A. cerana, the honeybees were trained and tested in a modified Y-maze with seven visual and five olfactory stimuli, where a robust visual threshold for black/white gratings (period of 2.8°-3.8°) and a relative olfactory threshold (concentration of 50-25%) were obtained. Meanwhile, the expression levels of five genes (AcCREB, Acdop1, Acdop2, Acdop3, Actyr1) related to learning and memory were analyzed under different training conditions by real-time RT-PCR. The experimental results indicate that A. cerana could exhibit cross-modal interactions between visual and olfactory learning by reducing the threshold level of the conditioning stimuli, and that these genes may play important roles in the learning process of honeybees.

  11. The Hidden Snake in the Grass: Superior Detection of Snakes in Challenging Attentional Conditions

    PubMed Central

    Soares, Sandra C.; Lindström, Björn; Esteves, Francisco; Öhman, Arne

    2014-01-01

    Snakes have provided a serious threat to primates throughout evolution. Furthermore, bites by venomous snakes still cause significant morbidity and mortality in tropical regions of the world. According to the Snake Detection Theory (SDT; Isbell, 2006, 2009), the vital need to detect camouflaged snakes provided strong evolutionary pressure to develop astute perceptual capacity in animals that were potential targets for snake attacks. We performed a series of behavioral tests that assessed snake detection under conditions that may have been critical for survival. We used spiders as the control stimulus because they are also a common object of phobias and rated negatively by the general population, thus commonly lumped together with snakes as “evolutionary fear-relevant”. Across four experiments (N = 205) we demonstrate an advantage in snake detection, which was particularly obvious under visual conditions known to impede detection of a wide array of common stimuli, for example brief stimulus exposures, stimuli presentation in the visual periphery, and stimuli camouflaged in a cluttered environment. Our results demonstrate a striking independence of snake detection from ecological factors that impede the detection of other stimuli, which suggests that, consistent with the SDT, they reflect a specific biological adaptation. Nonetheless, the empirical tests we report are limited to only one aspect of this rich theory, which integrates findings across a wide array of scientific disciplines. PMID:25493937

  12. The Hidden Snake in the Grass: Superior Detection of Snakes in Challenging Attentional Conditions.

    PubMed

    Soares, Sandra C; Lindström, Björn; Esteves, Francisco; Ohman, Arne

    2014-01-01

    Snakes have provided a serious threat to primates throughout evolution. Furthermore, bites by venomous snakes still cause significant morbidity and mortality in tropical regions of the world. According to the Snake Detection Theory (SDT; Isbell, 2006, 2009), the vital need to detect camouflaged snakes provided strong evolutionary pressure to develop astute perceptual capacity in animals that were potential targets for snake attacks. We performed a series of behavioral tests that assessed snake detection under conditions that may have been critical for survival. We used spiders as the control stimulus because they are also a common object of phobias and rated negatively by the general population, thus commonly lumped together with snakes as "evolutionary fear-relevant". Across four experiments (N = 205) we demonstrate an advantage in snake detection, which was particularly obvious under visual conditions known to impede detection of a wide array of common stimuli, for example brief stimulus exposures, stimuli presentation in the visual periphery, and stimuli camouflaged in a cluttered environment. Our results demonstrate a striking independence of snake detection from ecological factors that impede the detection of other stimuli, which suggests that, consistent with the SDT, they reflect a specific biological adaptation. Nonetheless, the empirical tests we report are limited to only one aspect of this rich theory, which integrates findings across a wide array of scientific disciplines.

  13. A method for real-time visual stimulus selection in the study of cortical object perception.

    PubMed

    Leeds, Daniel D; Tarr, Michael J

    2016-06-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm³ brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; 3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. Copyright © 2016 Elsevier Inc. All rights reserved.
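
    The record above describes a closed-loop protocol in which the next visual stimulus is chosen, while scanning, to drive the targeted region's response higher. The sketch below illustrates only that general closed-loop idea with a toy greedy search over a two-dimensional feature space and a simulated response function; the actual feature spaces, search algorithm, and response estimation in the protocol are more elaborate, so treat this as a hypothetical illustration rather than the authors' method.

      # Toy sketch of closed-loop stimulus selection: after each "measured" response,
      # probe a stimulus near the best-responding point found so far. Illustrative only.
      import numpy as np

      def simulated_bold(point, preferred, rng, noise_sd=0.2):
          # Stand-in for a measured regional response: peaks at a hidden preferred feature point.
          return np.exp(-np.sum((point - preferred) ** 2)) + rng.normal(scale=noise_sd)

      def greedy_search(preferred, n_trials=40, step=0.3, dim=2, seed=0):
          rng = np.random.default_rng(seed)
          best_point = rng.uniform(-2, 2, size=dim)        # first stimulus chosen at random
          best_response = simulated_bold(best_point, preferred, rng)
          for _ in range(n_trials):
              candidate = best_point + rng.normal(scale=step, size=dim)  # probe a nearby stimulus
              response = simulated_bold(candidate, preferred, rng)
              if response > best_response:                 # keep the stimulus that responded more
                  best_point, best_response = candidate, response
          return best_point

      preferred = np.array([0.8, -0.5])    # hidden "preferred stimulus" of the simulated region
      print(greedy_search(preferred))      # the search should end near the preferred point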

  14. A method for real-time visual stimulus selection in the study of cortical object perception

    PubMed Central

    Leeds, Daniel D.; Tarr, Michael J.

    2016-01-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit’s image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across predetermined 1 cm³ brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) Real-time estimation of cortical responses to stimuli is reasonably consistent; 3) Search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. PMID:26973168

  15. Vibrotactile timing: Are vibrotactile judgements of duration affected by repetitive stimulation?

    PubMed

    Jones, Luke A; Ogden, Ruth S

    2016-01-01

    Timing in the vibrotactile modality was explored. Previous research has shown that repetitive auditory stimulation (in the form of click-trains) and visual stimulation (in the form of flickers) can alter duration judgements in a manner consistent with a "speeding up" of an internal clock. In Experiments 1 and 2 we investigated whether repetitive vibrotactile stimulation in the form of vibration trains would also alter duration judgements of either vibrotactile stimuli or visual stimuli. Participants gave verbal estimates of the duration of vibrotactile and visual stimuli that were preceded either by five seconds of 5-Hz vibration trains or by a five-second period of no vibrotactile stimulation, the end of which was signalled by a single vibration pulse (control condition). The results showed that durations were overestimated in the vibrotactile train conditions relative to the control condition; however, the effects were not multiplicative (did not increase with increasing stimulus duration) and as such were not consistent with a speeding up of the internal clock, but rather with an additive attentional effect. An additional finding was that the slope of the vibrotactile (control condition) psychometric function was not significantly different from that of the visual (control condition) function, which replicates a finding from a previous cross-modal comparison of timing.
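
    The additive-versus-multiplicative logic in the record above can be made concrete with a small regression check: a pacemaker speed-up predicts a steeper slope of estimated against real duration, whereas an attentional effect predicts only an intercept shift. The sketch below uses hypothetical numbers, not the study's data.

      # Additive vs. multiplicative effects on duration estimates (hypothetical data).
      import numpy as np

      durations = np.array([400, 800, 1200, 1600, 2000])     # real durations (ms)
      control_estimates = durations * 1.00 + 30               # invented control estimates
      train_estimates = durations * 1.00 + 110                # overestimation with the same slope

      slope_c, intercept_c = np.polyfit(durations, control_estimates, 1)
      slope_t, intercept_t = np.polyfit(durations, train_estimates, 1)
      print(f"control: slope {slope_c:.2f}, intercept {intercept_c:.0f} ms")
      print(f"trains:  slope {slope_t:.2f}, intercept {intercept_t:.0f} ms")
      # Equal slopes with a larger intercept after vibration trains is the additive pattern
      # reported in the record; a pacemaker speed-up would instead raise the slope.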

  16. Visual adaptation and novelty responses in the superior colliculus

    PubMed Central

    Boehnke, Susan E.; Berg, David J.; Marino, Robert M.; Baldi, Pierre F.; Itti, Laurent; Munoz, Douglas P.

    2011-01-01

    The brain's ability to ignore repeating, often redundant, information while enhancing novel information processing is paramount to survival. When stimuli are repeatedly presented, the response of visually-sensitive neurons decreases in magnitude, i.e. neurons adapt or habituate, although the mechanism is not yet known. We monitored activity of visual neurons in the superior colliculus (SC) of rhesus monkeys who actively fixated while repeated visual events were presented. We dissociated adaptation from habituation as mechanisms of the response decrement by using a Bayesian model of adaptation, and by employing a paradigm including rare trials that included an oddball stimulus that was either brighter or dimmer. If the mechanism is adaptation, response recovery should be seen only for the brighter stimulus; if habituation, response recovery (‘dishabituation’) should be seen for both the brighter and dimmer stimulus. We observed a reduction in the magnitude of the initial transient response and an increase in response onset latency with stimulus repetition for all visually responsive neurons in the SC. Response decrement was successfully captured by the adaptation model which also predicted the effects of presentation rate and rare luminance changes. However, in a subset of neurons with sustained activity to visual stimuli, a novelty signal akin to dishabituation was observed late in the visual response profile to both brighter and dimmer stimuli and was not captured by the model. This suggests that SC neurons integrate both rapidly discounted information about repeating stimuli and novelty information about oddball events, to support efficient selection in a cluttered dynamic world. PMID:21864319

  17. The impact of early visual cortex transcranial magnetic stimulation on visual working memory precision and guess rate.

    PubMed

    Rademaker, Rosanne L; van de Ven, Vincent G; Tong, Frank; Sack, Alexander T

    2017-01-01

    Neuroimaging studies have demonstrated that activity patterns in early visual areas predict stimulus properties actively maintained in visual working memory. Yet, the mechanisms by which such information is represented remain largely unknown. In this study, observers remembered the orientations of 4 briefly presented gratings, one in each quadrant of the visual field. A 10Hz Transcranial Magnetic Stimulation (TMS) triplet was applied directly at stimulus offset, or midway through a 2-second delay, targeting early visual cortex corresponding retinotopically to a sample item in the lower hemifield. Memory for one of the four gratings was probed at random, and participants reported this orientation via method of adjustment. Recall errors were smaller when the visual field location targeted by TMS overlapped with that of the cued memory item, compared to errors for stimuli probed diagonally to TMS. This implied topographic storage of orientation information, and a memory-enhancing effect at the targeted location. Furthermore, early pulses impaired performance at all four locations, compared to late pulses. Next, response errors were fit empirically using a mixture model to characterize memory precision and guess rates. Memory was more precise for items proximal to the pulse location, irrespective of pulse timing. Guesses were more probable with early TMS pulses, regardless of stimulus location. Thus, while TMS administered at the offset of the stimulus array might disrupt early-phase consolidation in a non-topographic manner, TMS also boosts the precise representation of an item at its targeted retinotopic location, possibly by increasing attentional resources or by injecting a beneficial amount of noise.
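
    The precision and guess-rate estimates in the record above come from fitting a mixture model to the distribution of report errors. A minimal sketch of the standard two-component formulation (a von Mises component centred on the true orientation plus a uniform guessing component), fitted by maximum likelihood, is given below; the data, starting values, and error scaling are hypothetical, and this is not the authors' analysis code.

      # Two-component mixture model for delayed-estimation errors, assuming errors are
      # given in radians on (-pi, pi]. Illustrative reconstruction only.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import i0

      def neg_log_likelihood(params, errors):
          guess_rate, kappa = params
          vonmises = np.exp(kappa * np.cos(errors)) / (2 * np.pi * i0(kappa))
          uniform = 1.0 / (2 * np.pi)
          likelihood = (1 - guess_rate) * vonmises + guess_rate * uniform
          return -np.sum(np.log(likelihood))

      def fit_mixture(errors):
          """Return (guess_rate, kappa) maximizing the likelihood of the observed errors."""
          result = minimize(neg_log_likelihood, x0=[0.1, 5.0], args=(errors,),
                            bounds=[(1e-6, 1 - 1e-6), (1e-3, 200.0)])
          return result.x

      # Simulate one condition with known parameters and recover them
      rng = np.random.default_rng(0)
      true_guess, true_kappa, n = 0.2, 8.0, 300
      memory_trials = rng.vonmises(0.0, true_kappa, size=n)
      guess_trials = rng.uniform(-np.pi, np.pi, size=n)
      errors = np.where(rng.uniform(size=n) < true_guess, guess_trials, memory_trials)
      print(fit_mixture(errors))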

  18. The impact of early visual cortex transcranial magnetic stimulation on visual working memory precision and guess rate

    PubMed Central

    van de Ven, Vincent G.; Tong, Frank; Sack, Alexander T.

    2017-01-01

    Neuroimaging studies have demonstrated that activity patterns in early visual areas predict stimulus properties actively maintained in visual working memory. Yet, the mechanisms by which such information is represented remain largely unknown. In this study, observers remembered the orientations of 4 briefly presented gratings, one in each quadrant of the visual field. A 10Hz Transcranial Magnetic Stimulation (TMS) triplet was applied directly at stimulus offset, or midway through a 2-second delay, targeting early visual cortex corresponding retinotopically to a sample item in the lower hemifield. Memory for one of the four gratings was probed at random, and participants reported this orientation via method of adjustment. Recall errors were smaller when the visual field location targeted by TMS overlapped with that of the cued memory item, compared to errors for stimuli probed diagonally to TMS. This implied topographic storage of orientation information, and a memory-enhancing effect at the targeted location. Furthermore, early pulses impaired performance at all four locations, compared to late pulses. Next, response errors were fit empirically using a mixture model to characterize memory precision and guess rates. Memory was more precise for items proximal to the pulse location, irrespective of pulse timing. Guesses were more probable with early TMS pulses, regardless of stimulus location. Thus, while TMS administered at the offset of the stimulus array might disrupt early-phase consolidation in a non-topographic manner, TMS also boosts the precise representation of an item at its targeted retinotopic location, possibly by increasing attentional resources or by injecting a beneficial amount of noise. PMID:28384347

  19. Letters persistence after physical offset: visual word form area and left planum temporale. An fMRI study.

    PubMed

    Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A

    2013-06-01

    Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.

  20. Dataset of red light induced pupil constriction superimposed on post-illumination pupil response.

    PubMed

    Lei, Shaobo; Goltz, Herbert C; Sklar, Jaime C; Wong, Agnes M F

    2016-09-01

    We collected and analyzed pupil diameter data from 7 visually normal participants to compare the maximum pupil constriction (MPC) induced by "Red Only" vs. "Blue+Red" visual stimulation conditions. The "Red Only" condition consisted of red light (640±10 nm) stimuli of variable intensity and duration presented to dark-adapted eyes with pupils at resting state. This condition stimulates the cone-driven activity of the intrinsically photosensitive retinal ganglion cells (ipRGC). The "Blue+Red" condition consisted of the same red light stimulus presented during ongoing blue (470±17 nm) light-induced post-illumination pupil response (PIPR), representing the cone-driven ipRGC activity superimposed on the melanopsin-driven intrinsic activity of the ipRGCs ("The Absence of Attenuating Effect of Red light Exposure on Pre-existing Melanopsin-Driven Post-illumination Pupil Response" Lei et al. (2016) [1]). MPC induced by the "Red Only" condition was compared with the MPC induced by the "Blue+Red" condition by multiple paired-sample t-tests with Bonferroni correction.
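
    As a rough illustration of the comparison described above (MPC under "Red Only" versus "Blue+Red", tested with multiple paired-sample t-tests under a Bonferroni correction), the sketch below runs the corrected tests on hypothetical per-participant data; the condition labels and values are invented for the example.

      # Paired t-tests with a Bonferroni correction across stimulus conditions (hypothetical data).
      import numpy as np
      from scipy import stats

      def paired_tests_bonferroni(mpc_red_only, mpc_blue_red, alpha=0.05):
          """mpc_*: dicts mapping a condition label to an array of per-participant MPC values."""
          corrected_alpha = alpha / len(mpc_red_only)      # Bonferroni-adjusted threshold
          results = {}
          for condition in mpc_red_only:
              res = stats.ttest_rel(mpc_red_only[condition], mpc_blue_red[condition])
              results[condition] = {"t": res.statistic, "p": res.pvalue,
                                    "significant": res.pvalue < corrected_alpha}
          return results

      # Hypothetical example: 7 participants, three red-light intensities
      rng = np.random.default_rng(1)
      conditions = ["low", "medium", "high"]
      red_only = {c: rng.normal(2.0, 0.3, size=7) for c in conditions}
      blue_red = {c: rng.normal(2.1, 0.3, size=7) for c in conditions}
      print(paired_tests_bonferroni(red_only, blue_red))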

  1. The Effect of Optokinetic Stimulation on Perceptual and Postural Symptoms in Visual Vestibular Mismatch Patients.

    PubMed

    Van Ombergen, Angelique; Lubeck, Astrid J; Van Rompaey, Vincent; Maes, Leen K; Stins, John F; Van de Heyning, Paul H; Wuyts, Floris L; Bos, Jelte E

    2016-01-01

    Vestibular patients occasionally report aggravation or triggering of their symptoms by visual stimuli, which is called visual vestibular mismatch (VVM). These patients therefore experience discomfort, disorientation, dizziness and postural unsteadiness. Firstly, we aimed to gain better insight into the underlying mechanism of VVM by examining perceptual and postural symptoms. Secondly, we wanted to investigate whether roll-motion is a necessary trait to evoke these symptoms or whether a complex but stationary visual pattern equally provokes them. Nine VVM patients and a matched healthy control group were examined by exposing both groups to a stationary stimulus as well as an optokinetic stimulus rotating around the naso-occipital axis for a prolonged period of time. Subjective visual vertical (SVV) measurements, posturography and relevant questionnaires were assessed. No significant differences between both groups were found for SVV measurements. Patients always swayed more and reported more symptoms than healthy controls. Prolonged exposure to roll-motion increased postural sway and symptoms in both patients and controls. However, only VVM patients reported significantly more symptoms after prolonged exposure to the optokinetic stimulus compared to scores after exposure to a stationary stimulus. VVM patients differ from healthy controls in postural and subjective symptoms, and motion is a crucial factor in provoking these symptoms. A possible explanation could be a central visual-vestibular integration deficit, which has implications for diagnostics and clinical rehabilitation purposes. Future research should focus on the underlying central mechanism of VVM and the effectiveness of optokinetic stimulation in resolving it.

  2. Modulation of visual physiology by behavioral state in monkeys, mice, and flies.

    PubMed

    Maimon, Gaby

    2011-08-01

    When a monkey attends to a visual stimulus, neurons in visual cortex respond differently to that stimulus than when the monkey attends elsewhere. In the 25 years since the initial discovery, the study of attention in primates has been central to understanding flexible visual processing. Recent experiments demonstrate that visual neurons in mice and fruit flies are modulated by locomotor behaviors, like running and flying, in a manner that resembles attention-based modulations in primates. The similar findings across species argue for a more generalized view of state-dependent sensory processing and for a renewed dialogue among vertebrate and invertebrate research communities. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. GABA(A) receptors in visual and auditory cortex and neural activity changes during basic visual stimulation.

    PubMed

    Qin, Pengmin; Duncan, Niall W; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J; Northoff, Georg

    2012-01-01

    Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimuli; the change from closed eyes to open also represents a simple visual stimulus, however, and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABA(A) receptors, in the changes in brain activity between the eyes closed (EC) and eyes open (EO) state in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modeling of the haemodynamic response, followed by longer periods of EC and EO to allow the measuring of functional connectivity. The same subjects also underwent [¹⁸F]Flumazenil PET to measure GABA(A) receptor binding potentials. It was demonstrated that the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex predicted the degree of changes in neural activity from EC to EO. This same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABA(A) receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.

  4. Audio–visual interactions for motion perception in depth modulate activity in visual area V3A

    PubMed Central

    Ogawa, Akitoshi; Macaluso, Emiliano

    2013-01-01

    Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414

  5. Developmental lead exposure causes startle response deficits in zebrafish.

    PubMed

    Rice, Clinton; Ghorai, Jugal K; Zalewski, Kathryn; Weber, Daniel N

    2011-10-01

    Lead (Pb²⁺) exposure continues to be an important concern for fish populations. Research is required to assess the long-term behavioral effects of low-level concentrations of Pb²⁺ and the physiological mechanisms that control those behaviors. Newly fertilized zebrafish embryos (<2 h post fertilization; hpf) were exposed to one of three concentrations of lead (as PbCl₂): 0, 10, or 30 nM until 24 hpf. (1) Response to a mechanosensory stimulus: Individual larvae (168 hpf) were tested for response to a directional, mechanical stimulus. The tap frequency was adjusted to either 1 or 4 taps/s. Startle response was recorded at 1000 fps. Larvae responded in a concentration-dependent pattern for latency to reaction, maximum turn velocity, time to reach Vmax and escape time. With increasing exposure concentrations, a larger number of larvae failed to respond to even the initial tap and, for those that did respond, ceased responding earlier than control larvae. These differences were more pronounced at a frequency of 4 taps/s. (2) Response to a visual stimulus: Fish, exposed as embryos (2-24 hpf) to Pb²⁺ (0-10 μM), were tested as adults under low light conditions (≈ 60 μW/m²) for visual responses to a rotating black bar. Visual responses were significantly degraded at Pb²⁺ concentrations of 30 nM. These data suggest that zebrafish are viable models for short- and long-term sensorimotor deficits induced by acute, low-level developmental Pb²⁺ exposures. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Contextual consistency facilitates long-term memory of perceptual detail in barely seen images.

    PubMed

    Gronau, Nurit; Shachar, Meytal

    2015-08-01

    It is long known that contextual information affects memory for an object's identity (e.g., its basic level category), yet it is unclear whether schematic knowledge additionally enhances memory for the precise visual appearance of an item. Here we investigated memory for visual detail of merely glimpsed objects. Participants viewed pairs of contextually related and unrelated stimuli, presented for an extremely brief duration (24 ms, masked). They then performed a forced-choice memory-recognition test for the precise perceptual appearance of 1 of 2 objects within each pair (i.e., the "memory-target" item). In 3 experiments, we show that memory-target stimuli originally appearing within contextually related pairs are remembered better than targets appearing within unrelated pairs. These effects are obtained whether the target is presented at test with its counterpart pair object (i.e., when reiterating the original context at encoding) or whether the target is presented alone, implying that the contextual consistency effects are mediated predominantly by processes occurring during stimulus encoding, rather than during stimulus retrieval. Furthermore, visual detail encoding is improved whether object relations involve implied action or not, suggesting that, contrary to some prior suggestions, action is not a necessary component for object-to-object associative "grouping" processes. Our findings suggest that during a brief glimpse, but not under long viewing conditions, contextual associations may play a critical role in reducing stimulus competition for attention selection and in facilitating rapid encoding of sensory details. Theoretical implications with respect to classic frame theories are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  7. Roles of Aminergic Neurons in Formation and Recall of Associative Memory in Crickets

    PubMed Central

    Mizunami, Makoto; Matsumoto, Yukihisa

    2010-01-01

    We review recent progress in the study of roles of octopaminergic (OA-ergic) and dopaminergic (DA-ergic) signaling in insect classical conditioning, focusing on our studies on crickets. Studies on olfactory learning in honey bees and fruit-flies have suggested that OA-ergic and DA-ergic neurons convey reinforcing signals of appetitive unconditioned stimulus (US) and aversive US, respectively. Our work suggested that this is applicable to olfactory, visual pattern, and color learning in crickets, indicating that this feature is ubiquitous in learning of various sensory stimuli. We also showed that aversive memory decayed much faster than did appetitive memory, and we proposed that this feature is common in insects and humans. Our study also suggested that activation of OA- or DA-ergic neurons is needed for appetitive or aversive memory recall, respectively. To account for this finding, we proposed a model in which it is assumed that two types of synaptic connections are strengthened by conditioning and are activated during memory recall, one type being connections from neurons representing conditioned stimulus (CS) to neurons inducing conditioned response and the other being connections from neurons representing CS to OA- or DA-ergic neurons representing appetitive or aversive US, respectively. The former is called stimulus–response (S–R) connection and the latter is called stimulus–stimulus (S–S) connection by theorists studying classical conditioning in vertebrates. Results of our studies using a second-order conditioning procedure supported our model. We propose that insect classical conditioning involves the formation of S–S connection and its activation for memory recall, which are often called cognitive processes. PMID:21119781

  8. Fear learning circuitry is biased toward generalization of fear associations in posttraumatic stress disorder

    PubMed Central

    Morey, R A; Dunsmoor, J E; Haswell, C C; Brown, V M; Vora, A; Weiner, J; Stjepanovic, D; Wagner, H R; Brancu, Mira; Marx, Christine E; Naylor, Jennifer C; Van Voorhees, Elizabeth; Taber, Katherine H; Beckham, Jean C; Calhoun, Patrick S; Fairbank, John A; Szabo, Steven T; LaBar, K S

    2015-01-01

    Fear conditioning is an established model for investigating posttraumatic stress disorder (PTSD). However, symptom triggers may vaguely resemble the initial traumatic event, differing on a variety of sensory and affective dimensions. We extended the fear-conditioning model to assess generalization of conditioned fear on fear processing neurocircuitry in PTSD. Military veterans (n=67) consisting of PTSD (n=32) and trauma-exposed comparison (n=35) groups underwent functional magnetic resonance imaging during fear conditioning to a low fear-expressing face while a neutral face was explicitly unreinforced. Stimuli that varied along a neutral-to-fearful continuum were presented before conditioning to assess baseline responses, and after conditioning to assess experience-dependent changes in neural activity. Compared with trauma-exposed controls, PTSD patients exhibited greater post-study memory distortion of the fear-conditioned stimulus toward the stimulus expressing the highest fear intensity. PTSD patients exhibited biased neural activation toward high-intensity stimuli in fusiform gyrus (P<0.02), insula (P<0.001), primary visual cortex (P<0.05), locus coeruleus (P<0.04), thalamus (P<0.01), and at the trend level in inferior frontal gyrus (P=0.07). All regions except fusiform were moderated by childhood trauma. Amygdala–calcarine (P=0.01) and amygdala–thalamus (P=0.06) functional connectivity selectively increased in PTSD patients for high-intensity stimuli after conditioning. In contrast, amygdala–ventromedial prefrontal cortex (P=0.04) connectivity selectively increased in trauma-exposed controls compared with PTSD patients for low-intensity stimuli after conditioning, representing safety learning. In summary, fear generalization in PTSD is biased toward stimuli with higher emotional intensity than the original conditioned-fear stimulus. Functional brain differences provide a putative neurobiological model for fear generalization whereby PTSD symptoms are triggered by threat cues that merely resemble the index trauma. PMID:26670285

  9. Orienting attention in visual working memory requires central capacity: decreased retro-cue effects under dual-task conditions.

    PubMed

    Janczyk, Markus; Berryhill, Marian E

    2014-04-01

    The retro-cue effect (RCE) describes superior working memory performance for validly cued stimulus locations long after encoding has ended. Importantly, this happens with delays beyond the range of iconic memory. In general, the RCE is a stable phenomenon that emerges under varied stimulus configurations and timing parameters. We investigated its susceptibility to dual-task interference to determine the attentional requirements at the time point of cue onset and encoding. In Experiment 1, we compared single- with dual-task conditions. In Experiment 2, we borrowed from the psychological refractory period paradigm and compared conditions with high and low (dual-) task overlap. The secondary task was always binary tone discrimination requiring a manual response. Across both experiments, an RCE was found, but it was diminished in magnitude in the critical dual-task conditions. A previous study did not find evidence that sustained attention is required in the interval between cue offset and test. Our results apparently contradict these findings and point to a critical time period around cue onset and briefly thereafter during which attention is required.

  10. Orienting attention in visual working memory requires central capacity: Decreased retro-cue effects under dual-task conditions

    PubMed Central

    Berryhill, Marian E.

    2014-01-01

    The retro-cue effect (RCE) describes superior working memory performance for validly cued stimulus locations long after encoding has ended. Importantly, this happens with delays beyond the range of iconic memory. In general, the RCE is a stable phenomenon that emerges under varied stimulus configurations and timing parameters. We investigated its susceptibility to dual-task interference to determine the attentional requirements at the time point of cue onset and encoding. In Experiment 1, we compared single- with dual-task conditions. In Experiment 2, we borrowed from the psychological refractory period paradigm and compared conditions with high and low (dual-) task overlap. The secondary task was always binary tone discrimination requiring a manual response. Across both experiments, an RCE was found, but it was diminished in magnitude in the critical dual-task conditions. A previous study did not find evidence that sustained attention is required in the interval between cue offset and test. Our results apparently contradict these findings and point to a critical time period around cue onset and briefly thereafter during which attention is required. PMID:24452383

  11. Prospects for Quantitative fMRI: Investigating the Effects of Caffeine on Baseline Oxygen Metabolism and the Response to a Visual Stimulus in Humans

    PubMed Central

    Griffeth, Valerie E.M.; Perthen, Joanna E.; Buxton, Richard B.

    2011-01-01

    Functional magnetic resonance imaging (fMRI) provides an indirect reflection of neural activity change in the working brain through detection of blood oxygenation level dependent (BOLD) signal changes. Although widely used to map patterns of brain activation, fMRI has not yet met its potential for clinical and pharmacological studies due to difficulties in quantitatively interpreting the BOLD signal. This difficulty is due to the BOLD response being strongly modulated by two physiological factors in addition to the level of neural activity: the amount of deoxyhemoglobin present in the baseline state and the coupling ratio, n, of evoked changes in blood flow and oxygen metabolism. In this study, we used a quantitative fMRI approach with dual measurement of blood flow and BOLD responses to overcome these limitations and show that these two sources of modulation work in opposite directions following caffeine administration in healthy human subjects. A strong 27% reduction in baseline blood flow and a 22% increase in baseline oxygen metabolism after caffeine consumption led to a decrease in baseline blood oxygenation and was expected to increase the subsequent BOLD response to the visual stimulus. Opposing this, caffeine reduced n through a strong 61% increase in the evoked oxygen metabolism response to the visual stimulus. The combined effect was that BOLD responses pre- and post-caffeine were similar despite large underlying physiological changes, indicating that the magnitude of the BOLD response alone should not be interpreted as a direct measure of underlying neurophysiological changes. Instead, a quantitative methodology based on dual-echo measurement of blood flow and BOLD responses is a promising tool for applying fMRI to disease and drug studies in which both baseline conditions and the coupling of blood flow and oxygen metabolism responses to a stimulus may be altered. PMID:21586328
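
    The opposing influences described in the record above (baseline deoxyhemoglobin versus the flow-metabolism coupling ratio n) are often formalized with the Davis calibrated-BOLD model, in which the fractional BOLD change is M(1 - f^(alpha-beta) * r^beta) for a fractional flow change f and a fractional oxygen-metabolism change r, with M scaling with baseline deoxyhemoglobin. The sketch below uses that textbook model with hypothetical numbers to show how a larger M and a larger metabolic response can roughly cancel; it is not necessarily the exact model or the values used in the study.

      # Davis calibrated-BOLD model with hypothetical parameter values.
      def davis_bold(f, r, M, alpha=0.38, beta=1.5):
          # f = CBF / baseline CBF, r = CMRO2 / baseline CMRO2 during the stimulus
          return M * (1.0 - f ** (alpha - beta) * r ** beta)

      # Pre-caffeine: smaller M (less baseline deoxyhemoglobin), smaller CMRO2 response
      pre = davis_bold(f=1.40, r=1.10, M=0.08)
      # Post-caffeine: larger M but a larger CMRO2 response (a lower coupling ratio n),
      # so the two changes push the BOLD response in opposite directions
      post = davis_bold(f=1.40, r=1.15, M=0.10)
      print(f"pre-caffeine BOLD response  ~ {pre * 100:.2f}%")
      print(f"post-caffeine BOLD response ~ {post * 100:.2f}%")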

  12. Fear-potentiated startle processing in humans: Parallel fMRI and orbicularis EMG assessment during cue conditioning and extinction.

    PubMed

    Lindner, Katja; Neubert, Jörg; Pfannmöller, Jörg; Lotze, Martin; Hamm, Alfons O; Wendt, Julia

    2015-12-01

    Studying neural networks and behavioral indices such as potentiated startle responses during fear conditioning has a long tradition in both animal and human research. However, most of the studies in humans do not link startle potentiation and neural activity during fear acquisition and extinction. Therefore, we examined startle blink responses measured with electromyography (EMG) and brain activity measured with functional MRI simultaneously during differential conditioning. Furthermore, we combined these behavioral fear indices with brain network activity by analyzing the brain activity evoked by the startle probe stimulus presented during conditioned visual threat and safety cues as well as in the absence of visual stimulation. In line with previous research, we found a fear-induced potentiation of the startle blink responses when elicited during a conditioned threat stimulus and a rapid decline of amygdala activity after an initial differentiation of threat and safety cues in early acquisition trials. Increased activation during processing of threat cues was also found in the anterior insula, the anterior cingulate cortex (ACC), and the periaqueductal gray (PAG). More importantly, our results depict an increase of brain activity to probes presented during threatening cues in comparison to safety cues, indicating an involvement of the anterior insula, the ACC, the thalamus, and the PAG in fear-potentiated startle processing during early extinction trials. Our study underlines that parallel assessment of fear-potentiated startle in fMRI paradigms can provide a helpful method to investigate common and distinct processing pathways in humans and animals and, thus, contributes to translational research. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Increased visual sensitivity following periods of dim illumination.

    PubMed

    McKeown, Alex S; Kraft, Timothy W; Loop, Michael S

    2015-02-19

    We measured changes in the sensitivity of the human rod pathway by testing visual reaction times before and after light adaptation. We targeted a specific range of conditioning light intensities to see if a physiological adaptation recently discovered in mouse rods is observable at the perceptual level in humans. We also measured the noise spectrum of single mouse rods due to the importance of the signal-to-noise ratio in rod to rod bipolar cell signal transfer. Using the well-defined relationship between stimulus intensity and reaction time (Piéron's law), we measured the reaction times of eight human subjects (ages 24-66) to scotopic test flashes of a single intensity before and after the presentation of a 3-minute background. We also made recordings from single mouse rods and processed the cellular noise spectrum before and after similar conditioning exposures. Subject reaction times to a fixed-strength stimulus were fastest 5 seconds after conditioning background exposure (79% ± 1% of the preconditioning mean, in darkness) and were significantly faster for the first 12 seconds after background exposure (P < 0.01). During the period of increased rod sensitivity, the continuous noise spectrum of individual mouse rods was not significantly increased. A decrease in human reaction times to a dim flash after conditioning background exposure may originate in rod photoreceptors through a transient increase in the sensitivity of the phototransduction cascade. There is no accompanying increase in rod cellular noise, allowing for reliable transmission of larger rod signals after conditioning exposures and the observed increase in perceptual sensitivity. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
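
    Piéron's law, referenced in the record above, relates reaction time to stimulus intensity as RT = RT0 + k * I^(-beta). The sketch below fits that function to hypothetical scotopic reaction-time data; the numbers are illustrative and unrelated to the study's measurements.

      # Fitting Pieron's law to hypothetical intensity/reaction-time data.
      import numpy as np
      from scipy.optimize import curve_fit

      def pieron(intensity, rt0, k, beta):
          return rt0 + k * intensity ** (-beta)

      intensity = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])      # arbitrary scotopic units
      rt = np.array([520.0, 460.0, 420.0, 395.0, 380.0, 372.0])  # mean reaction times (ms)

      params, _ = curve_fit(pieron, intensity, rt, p0=[350.0, 150.0, 0.5])
      rt0, k, beta = params
      print(f"irreducible RT = {rt0:.0f} ms, scale = {k:.0f}, exponent = {beta:.2f}")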

  14. Differences in reward processing between putative cell types in primate prefrontal cortex

    PubMed Central

    Fan, Hongwei; Pan, Xiaochuan; Wang, Rubin; Sakagami, Masamichi

    2017-01-01

    Single-unit studies in monkeys have demonstrated that neurons in the prefrontal cortex predict the reward type, reward amount or reward availability associated with a stimulus. To examine contributions of pyramidal cells and interneurons in reward processing, single-unit activity was extracellularly recorded in prefrontal cortices of four monkeys performing a reward prediction task. Based on their shapes of spike waveforms, prefrontal neurons were classified into broad-spike and narrow-spike units that represented putative pyramidal cells and interneurons, respectively. We mainly observed that narrow-spike neurons showed higher firing rates but less bursty discharges than did broad-spike neurons. Both narrow-spike and broad-spike cells selectively responded to the stimulus, reward and their interaction, and the proportions of each type of selective neurons were similar between the two cell classes. Moreover, the two types of cells displayed equal reliability of reward or stimulus discrimination. Furthermore, we found that broad-spike and narrow-spike cells showed distinct mechanisms for encoding reward or stimulus information. Broad-spike neurons raised their firing rate relative to the baseline rate to represent the preferred reward or stimulus information, whereas narrow-spike neurons inhibited their firing rate lower than the baseline rate to encode the non-preferred reward or stimulus information. Our results suggest that narrow-spike and broad-spike cells were equally involved in reward and stimulus processing in the prefrontal cortex. They utilized a binary strategy to complementarily represent reward or stimulus information, which was consistent with the task structure in which the monkeys were required to remember two reward conditions and two visual stimuli. PMID:29261734

  15. Differences in reward processing between putative cell types in primate prefrontal cortex.

    PubMed

    Fan, Hongwei; Pan, Xiaochuan; Wang, Rubin; Sakagami, Masamichi

    2017-01-01

    Single-unit studies in monkeys have demonstrated that neurons in the prefrontal cortex predict the reward type, reward amount or reward availability associated with a stimulus. To examine contributions of pyramidal cells and interneurons in reward processing, single-unit activity was extracellularly recorded in prefrontal cortices of four monkeys performing a reward prediction task. Based on their shapes of spike waveforms, prefrontal neurons were classified into broad-spike and narrow-spike units that represented putative pyramidal cells and interneurons, respectively. We mainly observed that narrow-spike neurons showed higher firing rates but less bursty discharges than did broad-spike neurons. Both narrow-spike and broad-spike cells selectively responded to the stimulus, reward and their interaction, and the proportions of each type of selective neurons were similar between the two cell classes. Moreover, the two types of cells displayed equal reliability of reward or stimulus discrimination. Furthermore, we found that broad-spike and narrow-spike cells showed distinct mechanisms for encoding reward or stimulus information. Broad-spike neurons raised their firing rate relative to the baseline rate to represent the preferred reward or stimulus information, whereas narrow-spike neurons inhibited their firing rate lower than the baseline rate to encode the non-preferred reward or stimulus information. Our results suggest that narrow-spike and broad-spike cells were equally involved in reward and stimulus processing in the prefrontal cortex. They utilized a binary strategy to complementarily represent reward or stimulus information, which was consistent with the task structure in which the monkeys were required to remember two reward conditions and two visual stimuli.

  16. Independence of Movement Preparation and Movement Initiation.

    PubMed

    Haith, Adrian M; Pakpoor, Jina; Krakauer, John W

    2016-03-09

    Initiating a movement in response to a visual stimulus takes significantly longer than might be expected on the basis of neural transmission delays, but it is unclear why. In a visually guided reaching task, we forced human participants to move at lower-than-normal reaction times to test whether normal reaction times are strictly necessary for accurate movement. We found that participants were, in fact, capable of moving accurately ∼80 ms earlier than their reaction times would suggest. Reaction times thus include a seemingly unnecessary delay that accounts for approximately one-third of their duration. Close examination of participants' behavior in conventional reaction-time conditions revealed that they generated occasional, spontaneous errors in trials in which their reaction time was unusually short. The pattern of these errors could be well accounted for by a simple model in which the timing of movement initiation is independent of the timing of movement preparation. This independence provides an explanation for why reaction times are usually so sluggish: delaying the mean time of movement initiation relative to preparation reduces the risk that a movement will be initiated before it has been appropriately prepared. Our results suggest that preparation and initiation of movement are mechanistically independent and may have a distinct neural basis. The results also demonstrate that, even in strongly stimulus-driven tasks, presentation of a stimulus does not directly trigger a movement. Rather, the stimulus appears to trigger an internal decision whether to make a movement, reflecting a volitional rather than reactive mode of control. Copyright © 2016 the authors 0270-6474/16/363007-10$15.00/0.

  17. Testing a Poisson Counter Model for Visual Identification of Briefly Presented, Mutually Confusable Single Stimuli in Pure Accuracy Tasks

    ERIC Educational Resources Information Center

    Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus

    2012-01-01

    The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…
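
    The Poisson counter idea summarized in the record above can be simulated directly: a counter for each category accrues tentative categorizations at a constant rate v(i, j) during the exposure, and the category with the most counts determines the report. The sketch below uses hypothetical rates for two mutually confusable stimuli to show how accuracy falls as exposures get briefer.

      # Simulation of a Poisson counter model for brief-exposure identification (hypothetical rates).
      import numpy as np

      def simulate_trial(rates_for_stimulus, exposure_s, rng):
          """rates_for_stimulus: Poisson rates v(i, j) for one stimulus i, in counts per second.
          Returns the index of the winning category (ties broken at random)."""
          counts = rng.poisson(rates_for_stimulus * exposure_s)
          winners = np.flatnonzero(counts == counts.max())
          return rng.choice(winners)

      rng = np.random.default_rng(2)
      rates = np.array([[30.0, 20.0],    # stimulus 0: its own category has the higher rate
                        [20.0, 30.0]])   # stimulus 1
      for exposure in (0.01, 0.05, 0.2):  # seconds; briefer exposures yield more confusions
          correct = np.mean([simulate_trial(rates[i], exposure, rng) == i
                             for i in (0, 1) for _ in range(2000)])
          print(f"exposure {exposure * 1000:.0f} ms: proportion correct = {correct:.2f}")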

  18. Contralateral Cortical Organisation of Information in Visual Short-Term Memory: Evidence from Lateralized Brain Activity during Retrieval

    ERIC Educational Resources Information Center

    Fortier-Gauthier, Ulysse; Moffat, Nicolas; Dell'Acqua, Robert; McDonald, John J.; Jolicoeur, Pierre

    2012-01-01

    We studied brain activity during retention and retrieval phases of two visual short-term memory (VSTM) experiments. Experiment 1 used a balanced memory array, with one color stimulus in each hemifield, followed by a retention interval and a central probe, at the fixation point that designated the target stimulus in memory about which to make a…

  19. Modality-specificity of sensory aging in vision and audition: evidence from event-related potentials.

    PubMed

    Ceponiene, R; Westerfield, M; Torki, M; Townsend, J

    2008-06-18

    Major accounts of aging implicate changes in processing external stimulus information. Little is known about differential effects of auditory and visual sensory aging, and the mechanisms of sensory aging are still poorly understood. Using event-related potentials (ERPs) elicited by unattended stimuli in younger (M=25.5 yrs) and older (M=71.3 yrs) subjects, this study examined mechanisms of sensory aging under minimized attention conditions. Auditory and visual modalities were examined to address modality-specificity vs. generality of sensory aging. Between-modality differences were robust. The earlier-latency responses (P1, N1) were unaffected in the auditory modality but were diminished in the visual modality. The auditory N2 and early visual N2 were diminished. Two similarities between the modalities were age-related enhancements in the late P2 range and positive behavior-early N2 correlation, the latter suggesting that N2 may reflect long-latency inhibition of irrelevant stimuli. Since there is no evidence for salient differences in neuro-biological aging between the two sensory regions, the observed between-modality differences are best explained by the differential reliance of auditory and visual systems on attention. Visual sensory processing relies on facilitation by visuo-spatial attention, withdrawal of which appears to be more disadvantageous in older populations. In contrast, auditory processing is equipped with powerful inhibitory capacities. However, when the whole auditory modality is unattended, thalamo-cortical gating deficits may not manifest in the elderly. In contrast, ERP indices of longer-latency, stimulus-level inhibitory modulation appear to diminish with age.

  20. Emotion based attentional priority for storage in visual short-term memory.

    PubMed

    Simione, Luca; Calabrese, Lucia; Marucci, Francesco S; Belardinelli, Marta Olivetti; Raffone, Antonino; Maratos, Frances A

    2014-01-01

    A plethora of research demonstrates that the processing of emotional faces is prioritised over non-emotive stimuli when cognitive resources are limited (this is known as 'emotional superiority'). However, there is debate as to whether competition for processing resources results in emotional superiority per se, or more specifically, threat superiority. Therefore, to investigate prioritisation of emotional stimuli for storage in visual short-term memory (VSTM), we devised an original VSTM report procedure using schematic (angry, happy, neutral) faces in which processing competition was manipulated. In Experiment 1, display exposure time was manipulated to create competition between stimuli. Participants (n = 20) had to recall a probed stimulus from a set size of four under high (150 ms array exposure duration) and low (400 ms array exposure duration) perceptual processing competition. For the high competition condition (i.e. 150 ms exposure), results revealed an emotional superiority effect per se. In Experiment 2 (n = 20), we increased competition by manipulating set size (three versus five stimuli), whilst maintaining a constrained array exposure duration of 150 ms. Here, for the five-stimulus set size (i.e. maximal competition) only threat superiority emerged. These findings demonstrate attentional prioritisation for storage in VSTM for emotional faces. We argue that task demands modulated the availability of processing resources and consequently the relative magnitude of the emotional/threat superiority effect, with only threatening stimuli prioritised for storage in VSTM under more demanding processing conditions. Our results are discussed in light of models and theories of visual selection, and not only combine the two strands of research (i.e. visual selection and emotion), but highlight a critical factor in the processing of emotional stimuli is availability of processing resources, which is further constrained by task demands.

  1. Statistical Regularities Attract Attention when Task-Relevant.

    PubMed

    Alamia, Andrea; Zénon, Alexandre

    2016-01-01

    Visual attention seems essential for learning the statistical regularities in our environment, a process known as statistical learning. However, how attention is allocated when exploring a novel visual scene whose statistical structure is unknown remains unclear. In order to address this question, we investigated visual attention allocation during a task in which we manipulated the conditional probability of occurrence of colored stimuli, unbeknown to the subjects. Participants were instructed to detect a target colored dot among two dots moving along separate circular paths. We evaluated implicit statistical learning, i.e., the effect of color predictability on reaction times (RTs), and recorded eye position concurrently. Attention allocation was indexed by comparing the Mahalanobis distance between the position, velocity and acceleration of the eyes and the two colored dots. We found that learning the conditional probabilities occurred very early during the course of the experiment as shown by the fact that, starting already from the first block, predictable stimuli were detected with shorter RT than unpredictable ones. In terms of attentional allocation, we found that the predictive stimulus attracted gaze only when it was informative about the occurrence of the target but not when it predicted the occurrence of a task-irrelevant stimulus. This suggests that attention allocation was influenced by regularities only when they were instrumental in performing the task. Moreover, we found that the attentional bias towards task-relevant predictive stimuli occurred at a very early stage of learning, concomitantly with the first effects of learning on RT. In conclusion, these results show that statistical regularities capture visual attention only after a few occurrences, provided these regularities are instrumental to perform the task.
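
    As a rough illustration of the gaze-based attention index described above, the sketch below computes a mean Mahalanobis distance between eye-tracking samples (position, velocity, acceleration) and a dot's trajectory, assuming NumPy is available. The toy data, the choice of a shared covariance, and the function name are illustrative assumptions; this does not reproduce the authors' actual pipeline.

      import numpy as np

      def mean_mahalanobis(samples, reference, cov_inv):
          """Mean Mahalanobis distance between eye-tracking samples and a reference
          trajectory (e.g., one of the two coloured dots), given a shared inverse
          covariance of the tracked features."""
          diff = samples - reference
          d = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))
          return d.mean()

      # Toy data: 200 time points, 6 features (x, y, vx, vy, ax, ay)
      rng = np.random.default_rng(0)
      eye = rng.normal(size=(200, 6))
      dot_a = eye + rng.normal(scale=0.3, size=(200, 6))   # gaze tracks dot A closely
      dot_b = rng.normal(size=(200, 6))                    # unrelated trajectory

      # One shared covariance (here: of the eye signal) so the two distances are comparable
      cov_inv = np.linalg.pinv(np.cov(eye, rowvar=False))
      print(mean_mahalanobis(eye, dot_a, cov_inv))   # smaller: gaze biased towards dot A
      print(mean_mahalanobis(eye, dot_b, cov_inv))   # larger

    A smaller mean distance to a given dot would be read as more attention allocated to that dot.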

  2. The economics of motion perception and invariants of visual sensitivity.

    PubMed

    Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael

    2007-06-21

    Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.

  3. Sparse coding can predict primary visual cortex receptive field changes induced by abnormal visual input.

    PubMed

    Hunt, Jonathan J; Dayan, Peter; Goodhill, Geoffrey J

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.
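
    The sparsity principle invoked here can be illustrated with a minimal, generic sparse-coding inference step (ISTA-style soft thresholding on a toy patch and a random dictionary), assuming NumPy. This is only a sketch of the general technique, not the binocular models used in the study; all sizes and parameters are illustrative assumptions.

      import numpy as np

      def sparse_code(x, D, lam=0.1, n_iter=200):
          """Infer sparse coefficients a minimising 0.5*||x - D a||^2 + lam*||a||_1
          with ISTA (iterative shrinkage-thresholding)."""
          L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
          a = np.zeros(D.shape[1])
          for _ in range(n_iter):
              grad = D.T @ (D @ a - x)
              a = a - grad / L
              a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
          return a

      # Toy example: 64-dimensional "patch", overcomplete dictionary of 128 atoms
      rng = np.random.default_rng(1)
      D = rng.normal(size=(64, 128))
      D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
      x = D[:, [3, 40]] @ np.array([1.0, -0.5])    # patch built from two atoms
      a = sparse_code(x, D)
      print(np.count_nonzero(np.abs(a) > 1e-3))    # only a few active coefficients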

  4. Sparse Coding Can Predict Primary Visual Cortex Receptive Field Changes Induced by Abnormal Visual Input

    PubMed Central

    Hunt, Jonathan J.; Dayan, Peter; Goodhill, Geoffrey J.

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields. PMID:23675290

  5. Bottlenecks of Motion Processing during a Visual Glance: The Leaky Flask Model

    PubMed Central

    Öğmen, Haluk; Ekiz, Onur; Huynh, Duong; Bedell, Harold E.; Tripathy, Srimant P.

    2013-01-01

    Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing. PMID:24391806
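
    Demarcating sensory memory from VSTM via the time constant of an exponential decay can be sketched as follows, assuming a standard three-parameter decay, synthetic cue-delay data, and SciPy's curve_fit; this is not the authors' fitting code.

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(delay, baseline, amplitude, tau):
          """Performance as a function of cue delay: exponential decay from
          (baseline + amplitude) towards an asymptotic VSTM level (baseline)."""
          return baseline + amplitude * np.exp(-delay / tau)

      # Synthetic cue delays (ms) and proportion-correct data
      delays = np.array([0, 100, 250, 500, 1000, 2000, 3000], dtype=float)
      perf = np.array([0.85, 0.78, 0.69, 0.60, 0.54, 0.52, 0.51])

      p0 = (0.5, 0.35, 400.0)                      # initial guesses
      (baseline, amplitude, tau), _ = curve_fit(decay, delays, perf, p0=p0)
      print(f"tau = {tau:.0f} ms")                 # time constant marking the transition
                                                   # from iconic to VSTM-like performance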

  6. Bottlenecks of motion processing during a visual glance: the leaky flask model.

    PubMed

    Öğmen, Haluk; Ekiz, Onur; Huynh, Duong; Bedell, Harold E; Tripathy, Srimant P

    2013-01-01

    Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing.

  7. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance.

    PubMed

    Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T

    2012-05-01

    In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.

  8. Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.

    PubMed

    Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein

    2012-10-15

    Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Visual adaptation enhances action sound discrimination.

    PubMed

    Barraclough, Nick E; Page, Steve A; Keefe, Bruce D

    2017-01-01

    Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.

  10. The Emergence of Visual Awareness: Temporal Dynamics in Relation to Task and Mask Type

    PubMed Central

    Kiefer, Markus; Kammer, Thomas

    2017-01-01

    One aspect of consciousness phenomena, the temporal emergence of visual awareness, has been the subject of controversial debate. How can visual awareness, that is, the experiential quality of visual stimuli, best be characterized? Is there a sharp discontinuous or dichotomous transition between unaware and fully aware states, or does awareness emerge gradually, encompassing intermediate states? Previous studies yielded conflicting results and supported both dichotomous and gradual views. It is conceivable that these conflicting results are more than noise and instead reflect the dynamic nature of the temporal emergence of visual awareness. Using a psychophysical approach, the present research employed a temporal two-alternative forced-choice task to test whether the emergence of visual awareness is context-dependent. During backward masking of word targets, it was assessed whether the relative temporal sequence of stimulus thresholds is modulated by the task (stimulus presence, letter case, lexical decision, and semantic category) and by mask type. Four masks with different similarity to the target features were created. Psychophysical functions were then fitted to the accuracy data in the different task conditions as a function of the stimulus-mask SOA in order to determine the inflection point (conscious threshold of each feature) and the slope of the psychophysical function (the transition from unaware to aware within each feature). Depending on feature-mask similarity, thresholds in the different tasks were either highly dispersed, suggesting a graded transition from unawareness to awareness, or less differentiated, indicating that the clusters of features probed by the tasks contribute to the percept quite simultaneously. The latter observation, although not compatible with the notion of a sharp all-or-none transition between unaware and aware states, suggests a less gradual or more discontinuous emergence of awareness. Analyses of the slopes of the fitted psychophysical functions also indicated that the emergence of awareness of single features is variable and might be influenced by the continuity of the feature dimensions. The present work thus suggests that the emergence of awareness is neither purely gradual nor dichotomous, but highly dynamic depending on the task and mask type. PMID:28316583
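
    Fitting a psychophysical function to accuracy as a function of stimulus-mask SOA, and reading off its inflection point and slope, might look roughly like the sketch below (NumPy and SciPy assumed). The logistic form, the fixed guess and lapse rates, and the synthetic data are assumptions; the authors' exact parameterisation may differ.

      import numpy as np
      from scipy.optimize import curve_fit

      def psychometric(soa, threshold, slope, guess=0.5, lapse=0.02):
          """Two-alternative forced-choice accuracy as a logistic function of SOA."""
          return guess + (1 - guess - lapse) / (1 + np.exp(-slope * (soa - threshold)))

      soas = np.array([16, 33, 50, 66, 83, 100, 133], dtype=float)   # ms
      acc = np.array([0.52, 0.55, 0.64, 0.78, 0.88, 0.94, 0.97])

      (threshold, slope), _ = curve_fit(psychometric, soas, acc, p0=(60.0, 0.1))
      print(f"inflection point (conscious threshold): {threshold:.1f} ms")
      print(f"slope (steepness of the unaware-to-aware transition): {slope:.3f}")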

  11. Dissociation between Neural Signatures of Stimulus and Choice in Population Activity of Human V1 during Perceptual Decision-Making

    PubMed Central

    Choe, Kyoung Whan; Blake, Randolph

    2014-01-01

    Primary visual cortex (V1) forms the initial cortical representation of objects and events in our visual environment, and it distributes information about that representation to higher cortical areas within the visual hierarchy. Decades of work have established tight linkages between neural activity occurring in V1 and features comprising the retinal image, but it remains debatable how that activity relates to perceptual decisions. An actively debated question is the extent to which V1 responses determine, on a trial-by-trial basis, perceptual choices made by observers. By inspecting the population activity of V1 from human observers engaged in a difficult visual discrimination task, we tested one essential prediction of the deterministic view: choice-related activity, if it exists in V1, and stimulus-related activity should occur in the same neural ensemble of neurons at the same time. Our findings do not support this prediction: while cortical activity signifying the variability in choice behavior was indeed found in V1, that activity was dissociated from activity representing stimulus differences relevant to the task, being advanced in time and carried by a different neural ensemble. The spatiotemporal dynamics of population responses suggest that short-term priors, perhaps formed in higher cortical areas involved in perceptual inference, act to modulate V1 activity prior to stimulus onset without modifying subsequent activity that actually represents stimulus features within V1. PMID:24523561

  12. Decoding and reconstructing color from responses in human visual cortex.

    PubMed

    Brouwer, Gijs Joost; Heeger, David J

    2009-11-04

    How is color represented by spatially distributed patterns of activity in visual cortex? Functional magnetic resonance imaging responses to several stimulus colors were analyzed with multivariate techniques: conventional pattern classification, a forward model of idealized color tuning, and principal component analysis (PCA). Stimulus color was accurately decoded from activity in V1, V2, V3, V4, and VO1 but not LO1, LO2, V3A/B, or MT+. The conventional classifier and forward model yielded similar accuracies, but the forward model (unlike the classifier) also reliably reconstructed novel stimulus colors not used to train (specify parameters of) the model. The mean responses, averaged across voxels in each visual area, were not reliably distinguishable for the different stimulus colors. Hence, each stimulus color was associated with a unique spatially distributed pattern of activity, presumably reflecting the color selectivity of cortical neurons. Using PCA, a color space was derived from the covariation, across voxels, in the responses to different colors. In V4 and VO1, the first two principal component scores (main source of variation) of the responses revealed a progression through perceptual color space, with perceptually similar colors evoking the most similar responses. This was not the case for any of the other visual cortical areas, including V1, although decoding was most accurate in V1. This dissociation implies a transformation from the color representation in V1 to reflect perceptual color space in V4 and VO1.
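
    A generic sketch in the spirit of the forward-model analysis described above: idealised colour-tuning channels, per-voxel channel weights estimated by least squares, and reconstruction of channel responses for a colour not used in training (NumPy assumed). The channel shapes and simulated data are assumptions, not the authors' implementation.

      import numpy as np

      def channel_responses(hues_deg, n_channels=6, exponent=5):
          """Idealised colour-tuning channels: half-wave-rectified sinusoids raised
          to a power, with preferred hues spaced around the colour circle."""
          centers = np.arange(n_channels) * 360.0 / n_channels
          diff = np.deg2rad(hues_deg[:, None] - centers[None, :])
          return np.maximum(np.cos(diff), 0.0) ** exponent    # (n_stimuli, n_channels)

      rng = np.random.default_rng(2)
      train_hues = np.repeat(np.arange(0, 360, 45), 10).astype(float)   # training colours
      C_train = channel_responses(train_hues)

      # Simulated voxel data: each voxel is a random mixture of channels plus noise
      n_voxels = 50
      W_true = rng.normal(size=(C_train.shape[1], n_voxels))
      B_train = C_train @ W_true + rng.normal(scale=0.5, size=(len(train_hues), n_voxels))

      # 1) Estimate channel weights per voxel by least squares
      W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

      # 2) Invert the model: reconstruct channel responses for a novel colour
      test_hue = np.array([100.0])                            # not in the training set
      B_test = channel_responses(test_hue) @ W_true + rng.normal(scale=0.5, size=(1, n_voxels))
      C_hat, *_ = np.linalg.lstsq(W_hat.T, B_test.T, rcond=None)
      print(np.argmax(C_hat))   # should typically peak at the channel tuned nearest 100 degrees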

  13. Enhanced alpha-oscillations in visual cortex during anticipation of self-generated visual stimulation.

    PubMed

    Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray

    2014-11-01

    The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.

  14. Simple and powerful visual stimulus generator.

    PubMed

    Kremlácek, J; Kuba, M; Kubová, Z; Vít, F

    1999-02-01

    We describe a cheap, simple, portable and efficient approach to visual stimulation for neurophysiology which does not need any special hardware equipment. The method, based on an animation technique, uses the Autodesk Animator FLI format. The animation is replayed by a special program (the 'player'), which provides synchronisation pulses to the recording system via the parallel port. The 'player' runs on an IBM-compatible personal computer under the MS-DOS operating system, and the stimulus is displayed on a VGA computer monitor. Various stimuli created with this technique for visual evoked potentials (VEPs) are presented.

  15. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity

    PubMed Central

    Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.

    2017-01-01

    The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information. PMID:28163675

  16. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity.

    PubMed

    Gibney, Kyla D; Aligbe, Enimielen; Eggleston, Brady A; Nunes, Sarah R; Kerkhoff, Willa G; Dean, Cassandra L; Kwakye, Leslie D

    2017-01-01

    The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller's inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.
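
    The race-model test referred to above (Miller's inequality) compares the multisensory RT distribution against the sum of the unisensory distributions. A minimal sketch with synthetic reaction times is given below (NumPy assumed); it illustrates the inequality itself rather than the specific geometric measure used in the study.

      import numpy as np

      def ecdf(rts, t):
          """Empirical cumulative distribution of reaction times evaluated at times t."""
          return np.searchsorted(np.sort(rts), t, side='right') / len(rts)

      rng = np.random.default_rng(3)
      rt_a  = rng.normal(420, 60, 300)        # auditory-only RTs (ms), synthetic
      rt_v  = rng.normal(440, 60, 300)        # visual-only RTs
      rt_av = rng.normal(370, 55, 300)        # audiovisual RTs (faster than either)

      t = np.linspace(200, 700, 101)
      violation = ecdf(rt_av, t) - (ecdf(rt_a, t) + ecdf(rt_v, t))

      # Positive values indicate that the multisensory CDF exceeds the race-model
      # bound F_A(t) + F_V(t), i.e., evidence for integration beyond statistical facilitation.
      print(f"maximum race-model violation: {violation.max():.3f}")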

  17. Increased noise levels have different impacts on the anti-predator behaviour of two sympatric fish species.

    PubMed

    Voellmy, Irene K; Purser, Julia; Simpson, Stephen D; Radford, Andrew N

    2014-01-01

    Animals must avoid predation to survive and reproduce, and there is increasing evidence that man-made (anthropogenic) factors can influence predator-prey relationships. Anthropogenic noise has been shown to have a variety of effects on many species, but work investigating the impact on anti-predator behaviour is rare. In this laboratory study, we examined how additional noise (playback of field recordings of a ship passing through a harbour), compared with control conditions (playback of recordings from the same harbours without ship noise), affected responses to a visual predatory stimulus. We compared the anti-predator behaviour of two sympatric fish species, the three-spined stickleback (Gasterosteus aculeatus) and the European minnow (Phoxinus phoxinus), which share similar feeding and predator ecologies, but differ in their body armour. Effects of additional-noise playbacks differed between species: sticklebacks responded significantly more quickly to the visual predatory stimulus during additional-noise playbacks than during control conditions, while minnows exhibited no significant change in their response latency. Our results suggest that elevated noise levels have the potential to affect anti-predator behaviour of different species in different ways. Future field-based experiments are needed to confirm whether this effect and the interspecific difference exist in relation to real-world noise sources, and to determine survival and population consequences.

  18. Increased Noise Levels Have Different Impacts on the Anti-Predator Behaviour of Two Sympatric Fish Species

    PubMed Central

    Voellmy, Irene K.; Purser, Julia; Simpson, Stephen D.; Radford, Andrew N.

    2014-01-01

    Animals must avoid predation to survive and reproduce, and there is increasing evidence that man-made (anthropogenic) factors can influence predator−prey relationships. Anthropogenic noise has been shown to have a variety of effects on many species, but work investigating the impact on anti-predator behaviour is rare. In this laboratory study, we examined how additional noise (playback of field recordings of a ship passing through a harbour), compared with control conditions (playback of recordings from the same harbours without ship noise), affected responses to a visual predatory stimulus. We compared the anti-predator behaviour of two sympatric fish species, the three-spined stickleback (Gasterosteus aculeatus) and the European minnow (Phoxinus phoxinus), which share similar feeding and predator ecologies, but differ in their body armour. Effects of additional-noise playbacks differed between species: sticklebacks responded significantly more quickly to the visual predatory stimulus during additional-noise playbacks than during control conditions, while minnows exhibited no significant change in their response latency. Our results suggest that elevated noise levels have the potential to affect anti-predator behaviour of different species in different ways. Future field-based experiments are needed to confirm whether this effect and the interspecific difference exist in relation to real-world noise sources, and to determine survival and population consequences. PMID:25058618

  19. Examining the reinforcement-enhancement effects of phencyclidine and its interactions with nicotine on lever-pressing for a visual stimulus.

    PubMed

    Swalve, Natashia; Barrett, Scott T; Bevins, Rick A; Li, Ming

    2015-09-15

    Nicotine is a widely-abused drug, yet its primary reinforcing effect does not seem as potent as that of other stimulants such as cocaine. Recent research on the contributing factors toward chronic use of nicotine-containing products has implicated the role of reinforcement-enhancing effects of nicotine. The present study investigates whether phencyclidine (PCP) may also possess a reinforcement-enhancement effect and how this may interact with the reinforcement-enhancement effect of nicotine. PCP was tested for two reasons: (1) it produces discrepant results on overall reward, similar to those seen with nicotine, and (2) it may elucidate how other compounds may interact with the reinforcement-enhancement of nicotine. Adult male Sprague-Dawley rats were trained to lever press for brief visual stimulus presentations under fixed-ratio (FR) schedules of reinforcement and then were tested with nicotine (0.2 or 0.4 mg/kg) and/or PCP (2.0 mg/kg) over six increasing FR values. A selective increase in active lever-pressing for the visual stimulus with drug treatment was considered evidence of a reinforcement-enhancement effect. PCP and nicotine separately increased active lever pressing for a visual stimulus in a dose-dependent manner and across the different FR schedules. The addition of PCP to nicotine did not increase lever-pressing for the visual stimulus, possibly due to a ceiling effect. The effect of PCP may be driven largely by its locomotor stimulant effects, whereas the effect of nicotine was independent of locomotor stimulation. This dissociation emphasizes that distinct pharmacological properties contribute to the reinforcement-enhancement effects of substances. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Perceptual asymmetries in greyscales: object-based versus space-based influences.

    PubMed

    Thomas, Nicole A; Elias, Lorin J

    2012-05-01

    Neurologically normal individuals exhibit leftward spatial biases, resulting from object- and space-based biases; however their relative contributions to the overall bias remain unknown. Relative position within the display has not often been considered, with similar spatial conditions being collapsed across. Study 1 used the greyscales task to investigate the influence of relative position and object- and space-based contributions. One image in each greyscale pair was shifted towards the left or the right. A leftward object-based bias moderated by a bias to the centre was expected. Results confirmed this as a left object-based bias occurred in the right visual field, where the left side of the greyscale pairs was located in the centre visual field. Further, only lower visual field images exhibited a significant left bias in the left visual field. The left bias was also stronger when images were partially overlapping in the right visual field, demonstrating the importance of examining proximity. The second study examined whether object-based biases were stronger when actual objects, with directional lighting biases, were used. Direction of luminosity was congruent or incongruent with spatial location. A stronger object-based bias emerged overall; however a leftward bias was seen in congruent conditions and a rightward bias was seen in incongruent conditions. In conditions with significant biases, the lower visual field image was chosen most often. Results show that object- and space-based biases both contribute; however stimulus type allows either space- or object-based biases to be stronger. A lower visual field bias also interacts with these biases, leading the left bias to be eliminated under certain conditions. The complex interaction occurring between frame of reference and visual field makes spatial location extremely important in determining the strength of the leftward bias. Copyright © 2010 Elsevier Srl. All rights reserved.

  1. Face imagery is based on featural representations.

    PubMed

    Lobmaier, Janek S; Mast, Fred W

    2008-01-01

    The effect of imagery on featural and configural face processing was investigated using blurred and scrambled faces. By means of blurring, featural information is reduced; by scrambling a face into its constituent parts, configural information is lost. Twenty-four participants learned ten faces together with the sound of a name. In subsequent matching-to-sample tasks, participants had to decide whether an auditorily presented name belonged to a visually presented scrambled or blurred face in two experimental conditions. In the imagery condition, the name was presented prior to the visual stimulus and participants were required to imagine the corresponding face as clearly and vividly as possible. In the perception condition, the name and test face were presented simultaneously, so no facilitation via mental imagery was possible. Analyses of the hit values showed that in the imagery condition scrambled faces were recognized significantly better than blurred faces, whereas there was no such effect for the perception condition. The results suggest that mental imagery activates featural representations more than configural representations.

  2. Preschoolers' speed of locating a target symbol under different color conditions.

    PubMed

    Wilkinson, Krista M; Carlin, Michael; Jagaroo, Vinoth

    2006-06-01

    A pressing decision in AAC concerns the organization of aided visual symbols. One recent proposal suggested that basic principles of visual processing may be important determinants of how easily a symbol is found in an array, and that this, in turn will influence more functional outcomes like symbol identification or use. This study examined the role of color on accuracy and speed of symbol location by 16 preschool children without disabilities. Participants searched for a target stimulus in an array of eight stimuli. In the same-color condition, the eight stimuli were all red; in the guided search condition, four of the stimuli were red and four were yellow; in the unique-color condition, all stimuli were unique colors. Accuracy was higher and reaction time was faster when stimuli were unique colors than when they were all one color. Reaction time and accuracy did not differ under the guided search and the color-unique conditions. The implications for AAC are discussed.

  3. Implications on visual apperception: energy, duration, structure and synchronization.

    PubMed

    Bókkon, I; Vimal, Ram Lakhan Pandey

    2010-07-01

    Primary visual cortex (V1, or striate cortex) activity per se is not sufficient for visual apperception (normal conscious visual experiences and conscious functions such as detection, discrimination, and recognition); the same is also true for extrastriate visual areas (such as V2, V3, V4/V8/VO, V5/MT/MST, IT, and GF). In the absence of V1, visual signals can still reach several extrastriate parts but appear incapable of generating normal conscious visual experiences. It is scarcely emphasized in the scientific literature that conscious perceptions and representations must also meet essential energetic conditions. These energetic conditions are achieved by spatiotemporal networks of dynamic mitochondrial distributions inside neurons. However, the highest density of neurons in neocortex (number of neurons per degree of visual angle) devoted to representing the visual field is found in retinotopic V1. This means that the highest mitochondrial (energetic) activity can be achieved in mitochondrial cytochrome oxidase-rich V1 areas. Thus, V1 bears the highest energy allocation for visual representation. In addition, conscious perceptions also demand structural conditions, an adequate duration of information representation, and synchronized neural processes and/or 'interactive hierarchical structuralism.' For visual apperception, various visual areas are involved depending on context, such as stimulus characteristics (color, form/shape, motion, and other features). Here, we focus primarily on V1, where specific mitochondrial-rich retinotopic structures are found; we will concisely discuss V2, where these structures are less abundant. We also point out that residual brain states after visual perception are not fully reflected in active neural patterns. Namely, these subliminal residual states are not captured by passive neural recording techniques but require active stimulation to be revealed.

  4. Abnormalities in the Visual Processing of Viewing Complex Visual Stimuli Amongst Individuals With Body Image Concern.

    PubMed

    Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E

    2016-01-01

    Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.

  5. A Correlational Analysis of the Effects of Learner and Linear Programming Characteristics on Learning Programmed Instruction. Final Report.

    ERIC Educational Resources Information Center

    Seibert, Warren F.; Reid, Christopher J.

    Learning and retention may be influenced by subtle instructional stimulus characteristics and certain visual memory aptitudes. Ten stimulus characteristics were chosen for study; 50 sequences of programed instructional material were specially written to conform to sampled values of each stimulus characteristic. Seventy-three freshman subjects…

  6. Alerting Attention and Time Perception in Children.

    ERIC Educational Resources Information Center

    Droit-Volet, Sylvie

    2003-01-01

    Examined effects of a click signaling arrival of a visual stimulus to be timed on temporal discrimination in 3-, 5-, and 8-year-olds. Found that in all groups, the proportion of long responses increased with the stimulus duration, although the steepness of functions increased with age. Stimulus duration was judged longer with than without the…

  7. Order of Stimulus Presentation Influences Children's Acquisition in Receptive Identification Tasks

    ERIC Educational Resources Information Center

    Petursdottir, Anna Ingeborg; Aguilar, Gabriella

    2016-01-01

    Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition…

  8. Stimulus Intensity and the Perception of Duration

    ERIC Educational Resources Information Center

    Matthews, William J.; Stewart, Neil; Wearden, John H.

    2011-01-01

    This article explores the widely reported finding that the subjective duration of a stimulus is positively related to its magnitude. In Experiments 1 and 2 we show that, for both auditory and visual stimuli, the effect of stimulus magnitude on the perception of duration depends upon the background: Against a high intensity background, weak stimuli…

  9. Visual motion transforms visual space representations similarly throughout the human visual hierarchy.

    PubMed

    Harvey, Ben M; Dumoulin, Serge O

    2016-02-15

    Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine-wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
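
    A bare-bones sketch of pRF estimation by grid search: an isotropic 2D Gaussian receptive field, a binary bar aperture, and a correlation criterion, with no HRF convolution (NumPy assumed). These are simplifying assumptions relative to standard pRF pipelines and to the high-field analysis described above.

      import numpy as np

      def gaussian_prf(x0, y0, sigma, gx, gy):
          """Isotropic 2D Gaussian receptive field sampled on a visual-field grid."""
          return np.exp(-((gx - x0) ** 2 + (gy - y0) ** 2) / (2 * sigma ** 2))

      # Visual-field grid and a synthetic stimulus: a vertical bar sweeping left to right
      coords = np.linspace(-10, 10, 41)
      gx, gy = np.meshgrid(coords, coords)
      apertures = np.stack([(np.abs(gx - x) < 1.5).astype(float)
                            for x in np.linspace(-9, 9, 20)])   # (n_frames, ny, nx)
      A = apertures.reshape(len(apertures), -1)

      # Synthetic voxel time series generated from a "true" pRF plus noise
      rng = np.random.default_rng(4)
      ts = A @ gaussian_prf(3.0, -2.0, 1.5, gx, gy).ravel()
      ts += rng.normal(scale=0.1 * ts.std(), size=ts.shape)

      # Grid search: keep the (x0, y0, sigma) whose predicted time series
      # correlates best with the measured one
      best, best_r = None, -np.inf
      for x0 in coords[::4]:
          for y0 in coords[::4]:
              for sigma in (0.5, 1.0, 1.5, 2.5):
                  pred = A @ gaussian_prf(x0, y0, sigma, gx, gy).ravel()
                  r = np.corrcoef(pred, ts)[0, 1]
                  if r > best_r:
                      best, best_r = (x0, y0, sigma), r
      print("estimated pRF centre and size:", best)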

  10. On the functional order of binocular rivalry and blind spot filling-in.

    PubMed

    Qian, Cheng S; Brascamp, Jan W; Liu, Taosheng

    2017-07-01

    Binocular rivalry is an important phenomenon for understanding the mechanisms of visual awareness. Here we assessed the functional locus of binocular rivalry relative to blind spot filling-in, which is thought to transpire in V1, thus providing a reference point for assessing the locus of rivalry. We conducted two experiments to explore the functional order of binocular rivalry and blind spot filling-in. Experiment 1 examined if the information filled-in at the blind spot can engage in rivalry with a physical stimulus at the corresponding location in the fellow eye. Participants' perceptual reports showed no difference between this condition and a condition where filling-in was precluded by presenting the same stimuli away from the blind spot, suggesting that the rivalry process is not influenced by any filling-in that might occur. In Experiment 2, we presented the fellow eye's stimulus directly in rivalry with the 'inducer' stimulus that surrounds the blind spot, and compared it with two control conditions away from the blind spot: one involving a ring physically identical to the inducer, and one involving a disc that resembled the filled-in percept. Perceptual reports in the blind spot condition resembled those in the 'ring' condition, more than those in the latter, 'disc' condition, indicating that a perceptually suppressed inducer does not engender filling-in. Thus, our behavioral data suggest binocular rivalry functionally precedes blind spot filling-in. We conjecture that the neural substrate of binocular rivalry suppression includes processing stages at or before V1. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.

    PubMed

    Nava, Elena; Grassi, Massimo; Turati, Chiara

    2016-01-01

    Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly in sensory pairings other than audition and vision. In the current study, we investigated whether 4-5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that only under specific circumstances can these correspondences be observed. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children, and speculate that this could be due to immature linguistic and auditory cues that are still developing at age five.

  12. Attentional capture decreases when distractors remain visible during rapid serial visual presentations.

    PubMed

    Inukai, Tomoe; Kumada, Takatsune; Kawahara, Jun-ichiro

    2010-05-01

    The identification of a central visual target is impaired by the onset of a peripheral distractor. This impairment is said to occur because attentional focus is diverted to the peripheral distractor. We examined whether distractor offset would enhance or reduce attentional capture by manipulating the duration of the distractor. Observers identified a color singleton among a rapid stream of homogeneous nontargets. Peripheral distractors disappeared 43 or 172 msec after onset (the short- and long-duration conditions, respectively). Identification accuracy was greater in the long-duration condition than in the short-duration condition. The same pattern of results was obtained when participants identified a target of a designated color among heterogeneous nontargets when the color of the distractor was the same as that of the target. These findings suggest that attentional capture is driven by both stimulus onset and stimulus offset, and that both components are susceptible to top-down attentional set.

  13. How Configural Is the Configural Superiority Effect? A Neuroimaging Investigation of Emergent Features in Visual Cortex

    PubMed Central

    Fox, Olivia M.; Harel, Assaf; Bennett, Kevin B.

    2017-01-01

    The perception of a visual stimulus is dependent not only upon local features, but also on the arrangement of those features. When stimulus features are perceptually well organized (e.g., symmetric or parallel), a global configuration with a high degree of salience emerges from the interactions between these features, often referred to as emergent features. Emergent features can be demonstrated in the Configural Superiority Effect (CSE): presenting a stimulus within an organized context relative to its presentation in a disarranged one results in better performance. Prior neuroimaging work on the perception of emergent features regards the CSE as an “all or none” phenomenon, focusing on the contrast between configural and non-configural stimuli. However, it is still not clear how emergent features are processed between these two endpoints. The current study examined the extent to which behavioral and neuroimaging markers of emergent features are responsive to the degree of configurality in visual displays. Subjects were tasked with reporting the anomalous quadrant in a visual search task while being scanned. Degree of configurality was manipulated by incrementally varying the rotational angle of low-level features within the stimulus arrays. Behaviorally, we observed faster response times with increasing levels of configurality. These behavioral changes were accompanied by increases in response magnitude across multiple visual areas in occipito-temporal cortex, primarily early visual cortex and object-selective cortex. Our findings suggest that the neural correlates of emergent features can be observed even in response to stimuli that are not fully configural, and demonstrate that configural information is already present at early stages of the visual hierarchy. PMID:28167924

  14. Cholinergic Modulation of Visual Attention and Working Memory: Dissociable Effects of Basal Forebrain 192-IgG-Saporin Lesions and Intraprefrontal Infusions of Scopolamine

    ERIC Educational Resources Information Center

    Chudasama, Yogita; Dalley, Jeffrey W.; Nathwani, Falgyni; Bouger, Pascale; Robbins, Trevor W.

    2004-01-01

    Two experiments examined the effects of reductions in cortical cholinergic function on performance of a novel task that allowed for the simultaneous assessment of attention to a visual stimulus and memory for that stimulus over a variable delay within the same test session. In the first experiment, infusions of the muscarinic receptor antagonist…

  15. Functional significance of the emotion-related late positive potential

    PubMed Central

    Brown, Stephen B. R. E.; van Steenbergen, Henk; Band, Guido P. H.; de Rover, Mischa; Nieuwenhuis, Sander

    2012-01-01

    The late positive potential (LPP) is an event-related potential (ERP) component over visual cortical areas that is modulated by the emotional intensity of a stimulus. However, the functional significance of this neural modulation remains elusive. We conducted two experiments in which we studied the relation between LPP amplitude, subsequent perceptual sensitivity to a non-emotional stimulus (Experiment 1) and visual cortical excitability, as reflected by P1/N1 components evoked by this stimulus (Experiment 2). During the LPP modulation elicited by unpleasant stimuli, perceptual sensitivity was not affected. In contrast, we found some evidence for a decreased N1 amplitude during the LPP modulation, a decreased P1 amplitude on trials with a relatively large LPP, and consistent negative (but non-significant) across-subject correlations between the magnitudes of the LPP modulation and corresponding changes in d-prime or P1/N1 amplitude. The results provide preliminary evidence that the LPP reflects a global inhibition of activity in visual cortex, resulting in the selective survival of activity associated with the processing of the emotional stimulus. PMID:22375117
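
    Perceptual sensitivity here refers to the signal-detection measure d-prime, computed from hit and false-alarm rates as in the sketch below (SciPy assumed); the clipping rule for extreme rates is one common convention and not necessarily the one used by the authors.

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate)."""
          n_signal = hits + misses
          n_noise = false_alarms + correct_rejections
          # Clip rates away from 0 and 1 so the z-transform stays finite
          hit_rate = min(max(hits / n_signal, 0.5 / n_signal), 1 - 0.5 / n_signal)
          fa_rate = min(max(false_alarms / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      print(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38))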

  16. Revealing hidden states in visual working memory using electroencephalography

    PubMed Central

    Wolff, Michael J.; Ding, Jacqueline; Myers, Nicholas E.; Stokes, Mark G.

    2015-01-01

    It is often assumed that information in visual working memory (vWM) is maintained via persistent activity. However, recent evidence indicates that information in vWM could be maintained in an effectively “activity-silent” neural state. Silent vWM is consistent with recent cognitive and neural models, but poses an important experimental problem: how can we study these silent states using conventional measures of brain activity? We propose a novel approach that is analogous to echolocation: using a high-contrast visual stimulus, it may be possible to drive brain activity during vWM maintenance and measure the vWM-dependent impulse response. We recorded electroencephalography (EEG) while participants performed a vWM task in which a randomly oriented grating was remembered. Crucially, a high-contrast, task-irrelevant stimulus was shown in the maintenance period in half of the trials. The electrophysiological response from posterior channels was used to decode the orientations of the gratings. While orientations could be decoded during and shortly after stimulus presentation, decoding accuracy dropped back close to baseline in the delay. However, the visual evoked response from the task-irrelevant stimulus resulted in a clear re-emergence in decodability. This result provides important proof-of-concept for a promising and relatively simple approach to decode “activity-silent” vWM content using non-invasive EEG. PMID:26388748
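
    Time-resolved decoding of the remembered orientation from multichannel EEG can be sketched generically as below: orientations binned into discrete classes and a cross-validated linear classifier fit at each time point (NumPy and scikit-learn assumed). This stands in for, and does not reproduce, the authors' analysis.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      n_trials, n_channels, n_times = 200, 17, 50      # posterior channels, toy sizes
      orientations = rng.uniform(0, 180, n_trials)
      bins = np.digitize(orientations, np.arange(22.5, 180, 22.5))   # 8 orientation bins

      # Synthetic data: channels weakly encode the orientation bin at every time point
      eeg = rng.normal(size=(n_trials, n_channels, n_times))
      eeg += 0.3 * bins[:, None, None] * rng.normal(size=(1, n_channels, 1))

      # Decode the orientation bin separately at each time point
      accuracy = np.array([
          cross_val_score(LinearDiscriminantAnalysis(), eeg[:, :, t], bins, cv=5).mean()
          for t in range(n_times)])
      print("peak decoding accuracy:", accuracy.max())   # compare against 1/8 chance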

  17. Kinesthetic information facilitates saccades towards proprioceptive-tactile targets.

    PubMed

    Voudouris, Dimitris; Goettker, Alexander; Mueller, Stefanie; Fiehler, Katja

    2016-05-01

    Saccades to somatosensory targets have longer latencies and are less accurate and precise than saccades to visual targets. Here we examined how different somatosensory information influences the planning and control of saccadic eye movements. Participants fixated a central cross and initiated a saccade as fast as possible in response to a tactile stimulus that was presented to either the index or the middle fingertip of their unseen left hand. In a static condition, the hand remained at a target location for the entire block of trials and the stimulus was presented at a fixed time after an auditory tone. Therefore, the target location was derived only from proprioceptive and tactile information. In a moving condition, the hand was first actively moved to the same target location and the stimulus was then presented immediately. Thus, in the moving condition additional kinesthetic information about the target location was available. We found shorter saccade latencies in the moving compared to the static condition, but no differences in accuracy or precision of saccadic endpoints. In a second experiment, we introduced variable delays after the auditory tone (static condition) or after the end of the hand movement (moving condition) in order to reduce the predictability of the moment of the stimulation and to allow more time to process the kinesthetic information. Again, we found shorter latencies in the moving compared to the static condition but no improvement in saccade accuracy or precision. In a third experiment, we showed that the shorter saccade latencies in the moving condition cannot be explained by the temporal proximity between the relevant event (auditory tone or end of hand movement) and the moment of the stimulation. Our findings suggest that kinesthetic information facilitates planning, but not control, of saccadic eye movements to proprioceptive-tactile targets. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Decomposing mechanisms of abnormal saccade generation in schizophrenia patients: Contributions of volitional initiation, motor preparation, and fixation release.

    PubMed

    Reuter, Benedikt; Elsner, Björn; Möllers, David; Kathmann, Norbert

    2016-11-01

    Clinical and theoretical models suggest deficient volitional initiation of action in schizophrenia patients. Recent research provided an experimental model for testing this assumption using saccade tasks. However, inconsistent findings necessitate a specification of the conditions under which the deficit may occur. The present study sought to detect mechanisms that may contribute to poor performance. Sixteen schizophrenia patients and 16 healthy control participants performed visually guided and two types of volitional saccade tasks. All tasks varied as to whether the initial fixation stimulus disappeared (fixation stimulus offset) or remained present during saccade initiation, and whether a direction cue allowed motor preparation of the specific saccade. Saccade latencies of the two groups were differentially affected by task type, fixation stimulus offset, and cueing, suggesting abnormal volitional saccade generation, fixation release, and motor preparation in schizophrenia. However, substantial performance deficits may only occur if all affected processes are required in a task. © 2016 Society for Psychophysiological Research.

  19. A Role for Mouse Primary Visual Cortex in Motion Perception.

    PubMed

    Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo

    2018-06-04

    Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
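    The random dot kinematogram (RDK) coherence manipulation described above can be summarized in a short, hedged sketch (not the study's stimulus code): on each frame, a 'coherence' fraction of dots steps in the signal direction while the remaining dots step in random directions. Dot count, step size, and frame rate below are arbitrary illustrative choices.

    ```python
    import numpy as np

    def update_rdk(positions, coherence, signal_direction, step=0.02, rng=None):
        """Advance dot positions by one frame; positions are (n_dots, 2) in [0, 1)."""
        rng = rng or np.random.default_rng()
        n_dots = positions.shape[0]
        coherent = rng.random(n_dots) < coherence            # which dots carry the signal this frame
        angles = np.where(coherent, signal_direction,
                          rng.uniform(0, 2 * np.pi, n_dots))  # others move in random directions
        positions = positions + step * np.column_stack((np.cos(angles), np.sin(angles)))
        return np.mod(positions, 1.0)                          # wrap dots around the field

    rng = np.random.default_rng(2)
    dots = rng.random((100, 2))
    for _ in range(60):  # e.g., one second of stimulus at 60 frames per second
        dots = update_rdk(dots, coherence=0.16, signal_direction=0.0, rng=rng)
    ```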

  20. The Effect of Optokinetic Stimulation on Perceptual and Postural Symptoms in Visual Vestibular Mismatch Patients

    PubMed Central

    Van Rompaey, Vincent; Maes, Leen K.; Stins, John F.; Van de Heyning, Paul H.

    2016-01-01

    Background Vestibular patients occasionally report aggravation or triggering of their symptoms by visual stimuli, which is called visual vestibular mismatch (VVM). These patients therefore experience discomfort, disorientation, dizziness and postural unsteadiness. Objective Firstly, we aimed to gain better insight into the underlying mechanism of VVM by examining perceptual and postural symptoms. Secondly, we wanted to investigate whether roll-motion is necessary to evoke these symptoms or whether a complex but stationary visual pattern provokes them equally. Methods Nine VVM patients and a matched group of healthy controls were examined by exposing both groups to a stationary stimulus as well as an optokinetic stimulus rotating around the naso-occipital axis for a prolonged period of time. Subjective visual vertical (SVV) measurements, posturography and relevant questionnaires were assessed. Results No significant differences between the groups were found for SVV measurements. Patients always swayed more and reported more symptoms than healthy controls. Prolonged exposure to roll-motion increased postural sway and symptoms in both patients and controls. However, only VVM patients reported significantly more symptoms after prolonged exposure to the optokinetic stimulus compared with scores after exposure to a stationary stimulus. Conclusions VVM patients differ from healthy controls in postural and subjective symptoms, and motion is a crucial factor in provoking these symptoms. A possible explanation could be a central visual-vestibular integration deficit, which has implications for diagnostics and clinical rehabilitation purposes. Future research should focus on the underlying central mechanism of VVM and the effectiveness of optokinetic stimulation in resolving it. PMID:27128970
