Keihani, Ahmadreza; Shirzhiyan, Zahra; Farahi, Morteza; Shamsi, Elham; Mahnam, Amin; Makkiabadi, Bahador; Haidari, Mohsen R.; Jafari, Amir H.
2018-01-01
Background: Recent EEG-SSVEP-based BCI studies have used high-frequency square-pulse visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared with CRT and LCD monitors, LEDs can display high-frequency waveforms with a better refresh rate. In this study, we presented simple and rhythmic high-frequency sine-wave patterns with a low THD rate on an LED to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We designed our visual stimuli as 3-element sequences of high-frequency sine waves (25, 30, and 35 Hz). Nine stimulus patterns were chosen: 3 simple (repetition of one of the above frequencies, e.g., P25-25-25) and 6 rhythmic (all three frequencies in 6 different orders, e.g., P25-30-35). A hardware setup with a low THD rate (<0.1%) was designed to present these patterns on the LED. Twenty-two normal subjects (aged 23–30 (25 ± 2.1) years) were enrolled. A visual analog scale (VAS) was used for subjective fatigue evaluation after the presentation of each stimulus pattern. PSD, CCA, and LASSO methods were employed to analyze SSVEP responses. The data, including SSVEP features and fatigue rates for the different visual stimulus patterns, were statistically evaluated. Results: All 9 visual stimulus patterns elicited SSVEP responses. Overall, the obtained accuracy rates were 88.35% for PSD and >90% for CCA and LASSO (for TWs > 1 s). The high-frequency rhythmic pattern group with a low THD rate showed a higher accuracy rate (99.24%) than the simple pattern group (98.48%). Repeated-measures ANOVA showed a significant difference between rhythmic pattern features (P < 0.0005). Overall, there was no significant difference between the VAS of the rhythmic group [3.85 ± 2.13] and the simple pattern group [3.96 ± 2.21] (P = 0.63). The rhythmic group had lower within-group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as the lowest individual-pattern VAS (P25-30-35). Discussion and Conclusion: Overall, the rhythmic and simple pattern groups had high and similar accuracy rates. Rhythmic stimulus patterns showed a nonsignificantly lower fatigue rate than simple patterns. We conclude that both rhythmic and simple high-frequency sine-wave visual stimuli merit further research for human-subject SSVEP-BCI studies. PMID:29892219
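The abstract names PSD, CCA, and LASSO as the analysis methods. Below is a minimal sketch of the CCA approach commonly used for SSVEP frequency detection, scoring an EEG epoch against sine/cosine references at each candidate frequency; the sampling rate, channel layout, and harmonic count are illustrative assumptions, not the authors' settings.

```python
# Hypothetical CCA-based SSVEP classifier (a sketch, not the paper's code).
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, freq, fs, n_harmonics=2):
    """Canonical correlation between an EEG epoch (samples x channels)
    and sine/cosine references at freq and its harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack(
        [f(2 * np.pi * k * freq * t)
         for k in range(1, n_harmonics + 1)
         for f in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit_transform(eeg, refs)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

def classify_epoch(eeg, fs=250.0, candidates=(25.0, 30.0, 35.0)):
    # Report the stimulation frequency whose references correlate best.
    return max(candidates, key=lambda f: cca_score(eeg, f, fs))
```

Longer time windows (TWs) raise the canonical correlations and hence accuracy, consistent with the abstract's observation that CCA exceeded 90% for TWs > 1 s.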
Marini, Francesco; Marzi, Carlo A.
2016-01-01
The visual system leverages organizational regularities of perceptual elements to create meaningful representations of the world. One clear example of such a function, which has been formalized in the Gestalt psychology principles, is the perceptual grouping of simple visual elements (e.g., lines and arcs) into unitary objects (e.g., forms and shapes). The present study sought to characterize automatic attentional capture and related cognitive processing of Gestalt-like visual stimuli at the psychophysiological level by using event-related potentials (ERPs). We measured ERPs during a simple visual reaction time task with bilateral presentations of physically matched elements with or without a Gestalt organization. Results showed that Gestalt (vs. non-Gestalt) stimuli are characterized by a larger N2pc together with enhanced ERP amplitudes of non-lateralized components (N1, N2, P3) starting around 150 ms post-stimulus onset. Thus, we conclude that Gestalt stimuli capture attention automatically and entail characteristic psychophysiological signatures at both early and late processing stages. Highlights: We studied the neural signatures of the automatic processes of visual attention elicited by Gestalt stimuli. We found that a reliable early correlate of attentional capture is the N2pc component. Perceptual and cognitive processing of Gestalt stimuli is associated with larger N1, N2, and P3 components. PMID:27630555
Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.
2013-01-01
The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939
Do You "See'" What I "See"? Differentiation of Visual Action Words
ERIC Educational Resources Information Center
Dickinson, Joël; Cirelli, Laura; Szeligo, Frank
2014-01-01
Dickinson and Szeligo ("Can J Exp Psychol" 62(4):211-222, 2008) found that processing time for simple visual stimuli was affected by the visual action participants had been instructed to perform on these stimuli (e.g., see, distinguish). It was concluded that these effects reflected the differences in the durations of these various…
Evidence for unlimited capacity processing of simple features in visual cortex
White, Alex L.; Runeson, Erik; Palmer, John; Ernst, Zachary R.; Boynton, Geoffrey M.
2017-01-01
Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level–dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity. PMID:28654964
The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study
Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.
2008-01-01
Background: Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings: We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance: These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use. PMID:18958150
The Emergence of Contrast-Invariant Orientation Tuning in Simple Cells of Cat Visual Cortex
Finn, Ian M.; Priebe, Nicholas J.; Ferster, David
2007-01-01
Simple cells in primary visual cortex exhibit contrast-invariant orientation tuning, in seeming contradiction to feed-forward models relying on lateral geniculate nucleus (LGN) input alone. Contrast invariance has therefore been thought to depend on the presence of intracortical lateral inhibition. In vivo intracellular recordings instead suggest that contrast invariance can be explained by three properties of the excitatory pathway. 1) Depolarizations evoked by orthogonal stimuli are determined by the amount of excitation a cell receives from the LGN, relative to the excitation it receives from other cortical cells. 2) Depolarizations evoked by preferred stimuli saturate at lower contrasts than the spike output of LGN relay cells. 3) Visual stimuli evoke contrast-dependent changes in trial-to-trial variability, which lead to contrast-dependent changes in the relationship between membrane potential and spike rate. Thus, high-contrast, orthogonally-oriented stimuli that evoke significant depolarizations evoke few spikes. Together these mechanisms, without lateral inhibition, can account for contrast-invariant stimulus selectivity. PMID:17408583
Cell-assembly coding in several memory processes.
Sakurai, Y
1998-01-01
The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are overlaps of neurons among different assemblies and connection dynamics within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information. Individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information. Correlations of activity among some of the recorded neurons appear to change in multiple information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.
Interobject grouping facilitates visual awareness.
Stein, Timo; Kaiser, Daniel; Peelen, Marius V
2015-01-01
In organizing perception, the human visual system takes advantage of regularities in the visual input to perceptually group related image elements. Simple stimuli that can be perceptually grouped based on physical regularities, for example by forming an illusory contour, have a competitive advantage in entering visual awareness. Here, we show that regularities that arise from the relative positioning of complex, meaningful objects in the visual environment also modulate visual awareness. Using continuous flash suppression, we found that pairs of objects that were positioned according to real-world spatial regularities (e.g., a lamp above a table) accessed awareness more quickly than the same object pairs shown in irregular configurations (e.g., a table above a lamp). This advantage was specific to upright stimuli and abolished by stimulus inversion, meaning that it did not reflect physical stimulus confounds or the grouping of simple image elements. Thus, knowledge of the spatial configuration of objects in the environment shapes the contents of conscious perception.
ERIC Educational Resources Information Center
Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus
2012-01-01
The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…
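The Poisson-rate categorization model described in this abstract lends itself to a compact simulation. The sketch below assumes a winner-take-all read-out with guessing on ties and an invented rate matrix v(i, j); both are illustrative choices, not details taken from the paper.

```python
# Toy simulation of tentative categorizations accruing at constant Poisson rates.
import numpy as np

rng = np.random.default_rng(0)
V = np.array([[8.0, 2.0],   # v(i, j): rows = stimuli i,
              [2.0, 8.0]])  # columns = categories j (counts per second)

def one_trial(i, exposure_s):
    counts = rng.poisson(V[i] * exposure_s)        # tentative categorizations
    winners = np.flatnonzero(counts == counts.max())
    return rng.choice(winners)                     # guess among ties

# Accuracy grows with exposure duration, as in pure accuracy tasks.
for dur in (0.01, 0.05, 0.2):
    acc = np.mean([one_trial(0, dur) == 0 for _ in range(5000)])
    print(f"{dur * 1000:.0f} ms exposure: accuracy {acc:.2f}")
```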
Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2013-02-16
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.
Phillips, M L; David, A S
1997-11-01
Left hemi-face (LHF) perceptual bias of chimeric faces in normal right-handers is well-documented. We investigated the mechanisms underlying this by measuring visual scan paths in right-handed normal controls (n = 9) and schizophrenics (n = 8) for simple, full-face photographs and schematic, happy-sad chimeric faces over 5 s. Normals viewed the left side/LHF first, more so than the right, for all stimuli. Schizophrenics viewed the LHF first more than the right for stimuli for which there was a LHF choice of predominant affect. Neither group demonstrated an overall LHF perceptual bias for the chimeric stimuli. Readjustment of the initial LHF bias in controls was probably a result of increased attention to stimulus detail with scanning, whereas the schizophrenics demonstrated difficulty in redirecting the initial focus of attention. The study highlights the role of visual scan paths as a marker of normal and abnormal attentional processes. Copyright 1997 Academic Press.
A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.
Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei
2014-09-19
Visually-induced near-infrared spectroscopy (NIRS) responses were utilized to design a brain computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering segments and resting segments. The flickering segments had a fixed duration of 3 s, whereas the resting segments were chosen randomly within 15-20 s to create mutual independence among the different flickering sequences. Six subjects were recruited in this study, and the subjects were requested to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli, and the flicker sequences of distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by the user's gazed target can be discerned from non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
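The detection principle described here (time-locked averaging against mutually independent flicker sequences) can be sketched as follows; the epoch length, the scoring rule (peak-to-peak amplitude of the average), and the data are assumptions for illustration.

```python
# Sketch: pick the gazed stimulus by averaging NIRS epochs time-locked
# to each stimulus's flicker onsets; only the gazed target's onsets
# align with the evoked response, so its average is largest.
import numpy as np

def detect_target(signal, fs, onsets_per_stim, epoch_s=15.0):
    """signal: 1-D NIRS trace; onsets_per_stim: per-stimulus onset times (s)."""
    n = int(epoch_s * fs)
    scores = []
    for onsets in onsets_per_stim:
        epochs = [signal[int(t * fs):int(t * fs) + n]
                  for t in onsets if int(t * fs) + n <= len(signal)]
        scores.append(np.ptp(np.mean(epochs, axis=0)))
    return int(np.argmax(scores))  # index of the presumed gazed stimulus
```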
Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.
Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A
2004-11-09
Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. We tested the hypotheses that FXS individuals (1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and (2) have deficits in integrative neocortical mechanisms necessary for the perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception for first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe is affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli, with normal processing for simple (i.e., first-order) form stimuli, suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in the neurologic mechanisms necessary for integrating all early visual input.
Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury
NASA Astrophysics Data System (ADS)
Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.
2008-02-01
Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases such as AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects were evaluated on spatial contrast sensitivity and on a task of stimulus detection and aiming. The modeled occluding scotomas, of different sizes, were superimposed on the stimuli presented on the computer display and were stabilized on the retina using a mono Purkinje eye-tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple-conjunctive visual search display of size (in visual angle), contrast, and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as RT measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect was dependent on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection time was slowed more in the complex-background search situation than in the simple background. Detection speed was dependent on scotoma size and stimulus size. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple-background search situation than in the complex background. Both stimulus-aiming RT and accuracy (precision targeting) were impaired as a function of scotoma size and stimulus size. The data can be explained by models distinguishing between saliency-based, parallel and serial search processes guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.
Visual cortex in dementia with Lewy bodies: magnetic resonance imaging study
Taylor, John-Paul; Firbank, Michael J.; He, Jiabao; Barnett, Nicola; Pearce, Sarah; Livingstone, Anthea; Vuong, Quoc; McKeith, Ian G.; O’Brien, John T.
2012-01-01
Background: Visual hallucinations and visuoperceptual deficits are common in dementia with Lewy bodies, suggesting that cortical visual function may be abnormal. Aims: To investigate: (1) cortical visual function using functional magnetic resonance imaging (fMRI); and (2) the nature and severity of perfusion deficits in visual areas using arterial spin labelling (ASL)-MRI. Method: In total, 17 participants with dementia with Lewy bodies (DLB group) and 19 similarly aged controls were presented with simple visual stimuli (checkerboard, moving dots, and objects) during fMRI and subsequently underwent ASL-MRI (DLB group n = 15, control group n = 19). Results: Functional activations were evident in visual areas in both the DLB and control groups in response to checkerboard and objects stimuli but reduced visual area V5/MT (middle temporal) activation occurred in the DLB group in response to motion stimuli. Posterior cortical perfusion deficits occurred in the DLB group, particularly in higher visual areas. Conclusions: Higher visual areas, particularly occipito-parietal, appear abnormal in dementia with Lewy bodies, while there is a preservation of function in lower visual areas (V1 and V2/3). PMID:22500014
Distractor devaluation requires visual working memory.
Goolsby, Brian A; Shapiro, Kimron L; Raymond, Jane E
2009-02-01
Visual stimuli seen previously as distractors in a visual search task are subsequently evaluated more negatively than those seen as targets. An attentional inhibition account for this distractor-devaluation effect posits that associative links between attentional inhibition and to-be-ignored stimuli are established during search, stored, and then later reinstantiated, implying that distractor devaluation may require visual working memory (WM) resources. To assess this, we measured distractor devaluation with and without a concurrent visual WM load. Participants viewed a memory array, performed a simple search task, evaluated one of the search items (or a novel item), and then viewed a memory test array. Although distractor devaluation was observed with low (and no) WM load, it was absent when WM load was increased. This result supports the notions that active association of current attentional states with stimuli requires WM and that memory for these associations plays a role in affective response.
Brain State Effects on Layer 4 of the Awake Visual Cortex
Zhuang, Jun; Bereshpolova, Yulia; Stoelzel, Carl R.; Huff, Joseph M.; Hei, Xiaojuan; Alonso, Jose-Manuel
2014-01-01
Awake mammals can switch between alert and nonalert brain states hundreds of times per day. Here, we study the effects of alertness on two cell classes in layer 4 of primary visual cortex of awake rabbits: presumptive excitatory “simple” cells and presumptive fast-spike inhibitory neurons (suspected inhibitory interneurons). We show that in both cell classes, alertness increases the strength and greatly enhances the reliability of visual responses. In simple cells, alertness also increases the temporal frequency bandwidth, but preserves contrast sensitivity, orientation tuning, and selectivity for direction and spatial frequency. Finally, alertness selectively suppresses the simple cell responses to high-contrast stimuli and stimuli moving orthogonal to the preferred direction, effectively enhancing mid-contrast borders. Using a population coding model, we show that these effects of alertness in simple cells—enhanced reliability, higher gain, and increased suppression in orthogonal orientation—could play a major role at increasing the speed of cortical feature detection. PMID:24623767
Residual attention guidance in blindsight monkeys watching complex natural scenes.
Yoshida, Masatoshi; Itti, Laurent; Berg, David J; Ikeda, Takuro; Kato, Rikako; Takaura, Kana; White, Brian J; Munoz, Douglas P; Isa, Tadashi
2012-08-07
Patients with damage to primary visual cortex (V1) demonstrate residual performance on laboratory visual tasks despite denial of conscious seeing (blindsight) [1]. After a period of recovery, which suggests a role for plasticity [2], visual sensitivity higher than chance is observed in humans and monkeys for simple luminance-defined stimuli, grating stimuli, moving gratings, and other stimuli [3-7]. Some residual cognitive processes including bottom-up attention and spatial memory have also been demonstrated [8-10]. To date, little is known about blindsight with natural stimuli and spontaneous visual behavior. In particular, is orienting attention toward salient stimuli during free viewing still possible? We used a computational saliency map model to analyze spontaneous eye movements of monkeys with blindsight from unilateral ablation of V1. Despite general deficits in gaze allocation, monkeys were significantly attracted to salient stimuli. The contribution of orientation features to salience was nearly abolished, whereas contributions of motion, intensity, and color features were preserved. Control experiments employing laboratory stimuli confirmed the free-viewing finding that lesioned monkeys retained color sensitivity. Our results show that attention guidance over complex natural scenes is preserved in the absence of V1, thereby directly challenging theories and models that crucially depend on V1 to compute the low-level visual features that guide attention. Copyright © 2012 Elsevier Ltd. All rights reserved.
Fusion Prevents the Redundant Signals Effect: Evidence from Stereoscopically Presented Stimuli
ERIC Educational Resources Information Center
Schroter, Hannes; Fiedler, Anja; Miller, Jeff; Ulrich, Rolf
2011-01-01
In a simple reaction time (RT) experiment, visual stimuli were stereoscopically presented either to one eye (single stimulation) or to both eyes (redundant stimulation), with brightness matched for single and redundant stimulations. Redundant stimulation resulted in two separate percepts when noncorresponding retinal areas were stimulated, whereas…
Lee, Kyung J.; Park, Seong-Beom; Lee, Inah
2014-01-01
Learning theories categorize learning systems into elemental and contextual systems, the former being processed by non-hippocampal regions and the latter being processed in the hippocampus. A set of complex stimuli such as a visual background is often considered a contextual stimulus, and simple sensory stimuli such as pure tones and light are considered elemental stimuli. However, this elemental-contextual categorization scheme has only been tested in limited behavioral paradigms, and it is largely unknown whether it can be generalized across different learning situations. By requiring rats to respond differently to a common object in association with various types of sensory cues including contextual and elemental stimuli, we tested whether different types of elemental and contextual sensory stimuli depended on the hippocampus to different degrees. In most rats, a surrounding visual background and a tactile stimulus served as contextual (hippocampal-dependent) and elemental (non-hippocampal-dependent) stimuli, respectively. However, simple tone and light stimuli frequently used as elemental cues in traditional experiments required the hippocampus to varying degrees among rats. Specifically, one group of rats showed a normal contextual bias when both contextual and elemental cues were present. These rats effectively switched to using elemental cues when the hippocampus was inactivated. The other group showed a strong contextual bias (and hippocampal dependence) because these rats were not able to use elemental cues when the hippocampus was unavailable. It is possible that the latter group of rats might have interpreted the elemental cues (light and tone) as background stimuli and depended more on the hippocampus in associating the cues with choice responses. Although the exact mechanisms underlying this individual variability are unclear, our findings recommend caution in adopting a simple sensory stimulus as a non-hippocampal sensory cue based solely on the literature. PMID:24982624
Stimulus novelty, task relevance and the visual evoked potential in man
NASA Technical Reports Server (NTRS)
Courchesne, E.; Hillyard, S. A.; Galambos, R.
1975-01-01
The effect of task relevance on P3 (a waveform of the human evoked potential) waves and the methodologies used to deal with them are outlined. Visual evoked potentials (VEPs) were recorded from normal adult subjects performing in a visual discrimination task. Subjects counted the number of presentations of the numeral 4, which was interposed rarely and randomly within a sequence of tachistoscopically flashed background stimuli (the numeral 2). Intrusive, task-irrelevant (not counted) stimuli were also interspersed rarely and randomly in the sequence of 2s; these stimuli were of two types: simples, which were easily recognizable, and novels, which were completely unrecognizable. It was found that the simples and the counted 4s evoked posteriorly distributed P3 waves, while the irrelevant novels evoked large, frontally distributed P3 waves. These large, frontal P3 waves to novels were also found to be preceded by large N2 waves. These findings indicate that the P3 wave is not a unitary phenomenon but should be considered in terms of a family of waves, differing in their brain generators and in their psychological correlates.
Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).
Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen
2018-06-06
Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools the responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in the MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
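The population read-out reported for MVL can be illustrated with a linear decoder over a trials-by-neurons firing-rate matrix. The classifier choice, the cross-validation scheme, and the placeholder data below are assumptions, not the study's pipeline.

```python
# Sketch: linear decoding of animate vs. inanimate from population activity.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
rates = rng.poisson(5.0, size=(200, 60)).astype(float)  # trials x neurons
labels = rng.integers(0, 2, size=200)   # 0 = inanimate, 1 = animate

clf = LinearSVC(max_iter=10000)
acc = cross_val_score(clf, rates, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.5 for this random data)")
```

With real recordings, above-chance accuracy in MVL but not in entopallium would reproduce the paper's contrast between the two areas.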
Effects of set-size and lateral masking in visual search.
Põder, Endel
2004-01-01
In the present research, the roles of lateral masking and central processing limitations in visual search were studied. Two search conditions were used: (1) target differed from distractors by presence/absence of a simple feature; (2) target differed by relative position of the same components only. The number of displayed stimuli (set-size) and the distance between neighbouring stimuli were varied as independently as possible in order to measure the effect of both. The effect of distance between stimuli (lateral masking) was found to be similar in both conditions. The effect of set-size was much larger for relative position stimuli. The results support the view that perception of relative position of stimulus components is limited mainly by the capacity of central processing.
Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.
Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J
2007-06-01
The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.
[Sound improves distinction of low intensities of light in the visual cortex of a rabbit].
Polianskiĭ, V B; Alymkulov, D E; Evtikhin, D V; Chernyshev, B V
2011-01-01
Electrodes were implanted into the cranium above the primary visual cortex of four rabbits (Oryctolagus cuniculus). At the first stage, visual evoked potentials (VEPs) were recorded in response to the substitution of threshold visual stimuli (0.28 and 0.31 cd/m2). Then a sound (2000 Hz, 84 dB, duration 40 ms) was added simultaneously to every visual stimulus. Single sounds (without visual stimuli) did not produce a VEP response. It was found that the amplitude of VEP component N1 (85-110 ms) in response to complex stimuli (visual and sound) increased 1.6-fold as compared to "simple" visual stimulation. At the second stage, paired substitutions of 8 different visual stimuli (range 0.38-20.2 cd/m2) by each other were performed. Sensory spaces of intensity were reconstructed on the basis of factor analysis. Sensory spaces of complexes were reconstructed in a similar way for simultaneous visual and sound stimulation. Comparison of the vectors representing the stimuli in the spaces showed that the addition of a sound led to a 1.4-fold expansion of the space occupied by the smaller intensities (0.28; 1.02; 3.05; 6.35 cd/m2). The addition of the sound also led to an arrangement of the intensities in ascending order. At the same time, the sound narrowed the space of the larger intensities (8.48; 13.7; 16.8; 20.2 cd/m2) 1.33-fold. It is suggested that the addition of a sound improves the distinction of smaller intensities and impairs the distinction of larger intensities. The sensory spaces revealed by complex stimuli were two-dimensional. This fact can be a consequence of the integration of sound and light into a unified complex during simultaneous stimulation.
Black–white asymmetry in visual perception
Lu, Zhong-Lin; Sperling, George
2012-01-01
With eleven different types of stimuli that exercise a wide gamut of spatial and temporal visual processes, negative perturbations from mean luminance are found to be typically 25% more effective visually than positive perturbations of the same magnitude (range 8–67%). In Experiment 12, the magnitude of the black–white asymmetry is shown to be a saturating function of stimulus contrast. Experiment 13 shows black–white asymmetry primarily involves a nonlinearity in the visual representation of decrements. Black–white asymmetry in early visual processing produces even-harmonic distortion frequencies in all ordinary stimuli and in illusions such as the perceived asymmetry of optically perfect sine wave gratings. In stimuli intended to stimulate exclusively second-order processing in which motion or shape are defined not by luminance differences but by differences in texture contrast, the black–white asymmetry typically generates artifactual luminance (first-order) motion and shape components. Because black–white asymmetry pervades psychophysical and neurophysiological procedures that utilize spatial or temporal variations of luminance, it frequently needs to be considered in the design and evaluation of experiments that involve visual stimuli. Simple procedures to compensate for black–white asymmetry are proposed. PMID:22984221
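The claim that black-white asymmetry produces even-harmonic distortion can be checked numerically: passing a pure sine through a pointwise nonlinearity that weights decrements more strongly creates a component at twice the input frequency. The 1.25 decrement gain below reflects the abstract's ~25% figure; everything else is illustrative.

```python
# Sketch: even harmonics from an asymmetric weighting of decrements.
import numpy as np

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
s = np.sin(2 * np.pi * 4 * t)             # 4-cycle sine "grating"
r = np.where(s < 0, 1.25 * s, s)          # decrements 25% more effective
amps = np.abs(np.fft.rfft(r)) / len(t)
print(f"1st harmonic: {amps[4]:.3f}, 2nd: {amps[8]:.3f}, 3rd: {amps[12]:.3f}")
# The 2nd (even) harmonic is nonzero; for the symmetric sine it is zero.
```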
Binocular coordination in response to stereoscopic stimuli
NASA Astrophysics Data System (ADS)
Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.
2009-02-01
Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment, participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment, we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment, we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.
Physical Features of Visual Images Affect Macaque Monkey’s Preference for These Images
Funahashi, Shintaro
2016-01-01
Animals exhibit different degrees of preference toward various visual stimuli. In addition, it has been shown that strongly preferred stimuli can often act as a reward. The aim of the present study was to determine what features determine the strength of the preference for visual stimuli, in order to examine the neural mechanisms of preference judgment. We used 50 color photographs obtained from the Flickr Material Database (FMD) as original stimuli. Four macaque monkeys performed a simple choice task, in which two stimuli selected randomly from among the 50 stimuli were simultaneously presented on a monitor and the monkeys were required to choose either stimulus by eye movements. We considered that a monkey preferred the chosen stimulus if it continued to look at that stimulus for an additional 6 s, and we calculated a choice ratio for each stimulus. Each monkey exhibited a different choice ratio for each of the original 50 stimuli. They tended to select clear, colorful and in-focus stimuli. Complexity and clarity were stronger determinants of preference than colorfulness. Images that included greater amounts of spatial frequency components were selected more frequently. These results indicate that particular physical features of the stimulus can affect the strength of a monkey's preference and that the complexity, clarity and colorfulness of the stimulus are important determinants of this preference. Neurophysiological studies would be needed to examine whether these features of visual stimuli produce more activation in neurons that participate in this preference judgment. PMID:27853424
Attention Priority Map of Face Images in Human Early Visual Cortex.
Mo, Ce; He, Dongjun; Fang, Fang
2018-01-03
Attention priority maps are topographic representations that are used for attention selection and guidance of task-related behavior during visual processing. Previous studies have identified attention priority maps of simple artificial stimuli in multiple cortical and subcortical areas, but investigating the neural correlates of priority maps of natural stimuli is complicated by the complexity of their spatial structure and the difficulty of behaviorally characterizing their priority map. To overcome these challenges, we reconstructed the topographic representations of upright/inverted face images from fMRI BOLD signals in human early visual areas, primary visual cortex (V1) and extrastriate cortex (V2 and V3), based on a voxelwise population receptive field model. We characterized the priority map behaviorally as the first saccadic eye movement pattern when subjects performed a face-matching task relative to the condition in which subjects performed a phase-scrambled face-matching task. We found that the differential first saccadic eye movement pattern between upright/inverted and scrambled faces could be predicted from the reconstructed topographic representations in V1-V3 in humans of either sex. The coupling between the reconstructed representation and the eye movement pattern increased from V1 to V2/3 for upright faces, whereas no such effect was found for inverted faces. Moreover, face inversion modulated the coupling in V2/3, but not in V1. Our findings provide new evidence for priority maps of natural stimuli in early visual areas and extend traditional attention priority map theories by revealing another critical factor that affects priority maps in extrastriate cortex in addition to physical salience and task-goal relevance: image configuration. SIGNIFICANCE STATEMENT: Prominent theories of attention posit that attention sampling of visual information is mediated by a series of interacting topographic representations of visual space known as attention priority maps. Until now, neural evidence of attention priority maps has been limited to studies involving simple artificial stimuli, and much remains unknown about the neural correlates of priority maps of natural stimuli. Here, we show that attention priority maps of face stimuli could be found in primary visual cortex (V1) and extrastriate cortex (V2 and V3). Moreover, representations in extrastriate visual areas are strongly modulated by image configuration. These findings extend our understanding of attention priority maps significantly by showing that they are modulated not only by physical salience and task-goal relevance, but also by the configuration of stimulus images. Copyright © 2018 the authors.
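The voxelwise pRF modeling that underlies the reconstruction can be sketched in a few lines: each voxel is modeled as a 2D Gaussian over visual space, its predicted response is the stimulus-Gaussian overlap, and reconstruction back-projects voxel responses through the same Gaussians. Grid size and parameters below are illustrative assumptions, not the study's estimation pipeline.

```python
# Sketch of the population receptive field (pRF) forward model and
# a simple back-projection into visual space.
import numpy as np

ny = nx = 64
y, x = np.mgrid[0:ny, 0:nx]

def prf(x0, y0, sigma):
    g = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

voxels = [prf(20, 20, 5.0), prf(40, 30, 8.0), prf(32, 50, 6.0)]
stim = np.zeros((ny, nx))
stim[16:28, 14:30] = 1.0                      # binary stimulus aperture

responses = np.array([(g * stim).sum() for g in voxels])  # forward model
recon = sum(r * g for r, g in zip(responses, voxels))     # back-projection
```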
White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J
2017-09-01
Much of what we know about human colour perception has come from psychophysical studies conducted in tightly controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminance) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectation based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour. © 2017 The Author(s).
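The best-supported model in this abstract, detection as an additive function of luminance and saturation contrast, maps naturally onto a two-predictor GLM. The sketch below fits a logistic version on synthetic trials; the coefficients and data are invented for illustration.

```python
# Sketch: additive luminance + saturation model of stimulus detection.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
lum = rng.uniform(0.0, 1.0, 500)   # luminance contrast per trial
sat = rng.uniform(0.0, 1.0, 500)   # saturation contrast per trial
p = 1.0 / (1.0 + np.exp(-(-2.0 + 2.5 * lum + 1.5 * sat)))  # ground truth
detected = rng.random(500) < p

model = LogisticRegression().fit(np.column_stack([lum, sat]), detected)
print("additive weights (luminance, saturation):", model.coef_[0])
```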
Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.
Bigelow, James; Poremba, Amy
2014-01-01
Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.
ERIC Educational Resources Information Center
Maekawa, Toshihiko; Tobimatsu, Shozo; Inada, Naoko; Oribe, Naoya; Onitsuka, Toshiaki; Kanba, Shigenobu; Kamio, Yoko
2011-01-01
Individuals with high-functioning autism spectrum disorder (HF-ASD) often show superior performance in simple visual tasks, despite difficulties in the perception of socially important information such as facial expression. The neural basis of visual perception abnormalities associated with HF-ASD is currently unclear. We sought to elucidate the…
Simple and powerful visual stimulus generator.
Kremlácek, J; Kuba, M; Kubová, Z; Vít, F
1999-02-01
We describe a cheap, simple, portable and efficient approach to visual stimulation for neurophysiology which does not need any special hardware equipment. The method, based on an animation technique, uses the Autodesk Animator FLI format. The animation is replayed by a special program ('player') that provides synchronisation pulses to the recording system via the parallel port. The 'player' runs on an IBM-compatible personal computer under the MS-DOS operating system, and the stimulus is displayed on a VGA computer monitor. Various stimuli created with this technique for visual evoked potentials (VEPs) are presented.
Crowding with conjunctions of simple features.
Põder, Endel; Wagemans, Johan
2007-11-20
Several recent studies have related crowding with the feature integration stage in visual processing. In order to understand the mechanisms involved in this stage, it is important to use stimuli that have several features to integrate, and these features should be clearly defined and measurable. In this study, Gabor patches were used as target and distractor stimuli. The stimuli differed in three dimensions: spatial frequency, orientation, and color. A group of 3, 5, or 7 objects was presented briefly at 4 deg eccentricity of the visual field. The observers' task was to identify the object located in the center of the group. A strong effect of the number of distractors was observed, consistent with various spatial pooling models. The analysis of incorrect responses revealed that these were a mix of feature errors and mislocalizations of the target object. Feature errors were not purely random, but biased by the features of distractors. We propose a simple feature integration model that predicts most of the observed regularities.
Neuronal Response Gain Enhancement prior to Microsaccades.
Chen, Chih-Yang; Ignashchenkova, Alla; Thier, Peter; Hafed, Ziad M
2015-08-17
Neuronal response gain enhancement is a classic signature of the allocation of covert visual attention without eye movements. However, microsaccades continuously occur during gaze fixation. Because these tiny eye movements are preceded by motor preparatory signals well before they are triggered, it may be the case that a corollary of such signals may cause enhancement, even without attentional cueing. In six different macaque monkeys and two different brain areas previously implicated in covert visual attention (superior colliculus and frontal eye fields), we show neuronal response gain enhancement for peripheral stimuli appearing immediately before microsaccades. This enhancement occurs both during simple fixation with behaviorally irrelevant peripheral stimuli and when the stimuli are relevant for the subsequent allocation of covert visual attention. Moreover, this enhancement occurs in both purely visual neurons and visual-motor neurons, and it is replaced by suppression for stimuli appearing immediately after microsaccades. Our results suggest that there may be an obligatory link between microsaccade occurrence and peripheral selective processing, even though microsaccades can be orders of magnitude smaller than the eccentricities of peripheral stimuli. Because microsaccades occur in a repetitive manner during fixation, and because these eye movements reset neurophysiological rhythms every time they occur, our results highlight a possible mechanism through which oculomotor events may aid periodic sampling of the visual environment for the benefit of perception, even when gaze is prevented from overtly shifting. One functional consequence of such periodic sampling could be the magnification of rhythmic fluctuations of peripheral covert visual attention. Copyright © 2015 Elsevier Ltd. All rights reserved.
Raymond, Jane E; O'Brien, Jennifer L
2009-08-01
Learning to associate the probability and value of behavioral outcomes with specific stimuli (value learning) is essential for rational decision making. However, in demanding cognitive conditions, access to learned values might be constrained by limited attentional capacity. We measured recognition of briefly presented faces seen previously in a value-learning task involving monetary wins and losses; the recognition task was performed both with and without constraints on available attention. Regardless of available attention, recognition was substantially enhanced for motivationally salient stimuli (i.e., stimuli highly predictive of outcomes), compared with equally familiar stimuli that had weak or no motivational salience, and this effect was found regardless of valence (win or loss). However, when attention was constrained (because stimuli were presented during an attentional blink, AB), valence determined recognition; win-associated faces showed no AB, but all other faces showed large ABs. Motivational salience acts independently of attention to modulate simple perceptual decisions, but when attention is limited, visual processing is biased in favor of reward-associated stimuli.
Multisensory integration across the senses in young and old adults
Mahoney, Jeannette R.; Li, Po Ching Clara; Oh-Park, Mooyeon; Verghese, Joe; Holtzer, Roee
2011-01-01
Stimuli are processed concurrently and across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M=19.17 yrs) and eighteen old (M=76.44 yrs) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS), auditory-visual (AV), and visual-somatosensory (VS) stimuli. Results revealed that reaction time (RT) to all multisensory pairings was significantly faster than RT to the constituent unisensory conditions across age groups, findings that could not be accounted for by simple probability summation. Both young and old participants responded fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides the first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults. PMID:22024545
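The probability-summation benchmark mentioned above is commonly tested with Miller's race-model inequality, F_VS(t) ≤ F_V(t) + F_S(t). The hedged sketch below, with synthetic reaction times standing in for real data, shows one way such a test can be run.

```python
# Race-model (probability-summation) test: positive values indicate that
# the bimodal CDF exceeds the race-model bound, i.e., co-activation.
# RT arrays here are synthetic stand-ins, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
rt_v  = rng.normal(320, 40, 500)   # visual-only RTs (ms), illustrative
rt_s  = rng.normal(300, 40, 500)   # somatosensory-only RTs
rt_vs = rng.normal(255, 35, 500)   # bimodal RTs, faster than either

def ecdf(sample, t):
    """Empirical cumulative probability of responding by time t."""
    return np.mean(sample <= t)

ts = np.percentile(rt_vs, np.arange(5, 100, 5))  # probe quantiles
violation = [ecdf(rt_vs, t) - min(1.0, ecdf(rt_v, t) + ecdf(rt_s, t))
             for t in ts]
print([round(v, 3) for v in violation])
```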
Griffeth, Valerie E M; Simon, Aaron B; Buxton, Richard B
2015-01-01
Quantitative functional MRI (fMRI) experiments to measure blood flow and oxygen metabolism coupling in the brain typically rely on simple repetitive stimuli. Here we compared such stimuli with a more naturalistic stimulus. Previous work on the primary visual cortex showed that direct attentional modulation evokes a blood flow (CBF) response with a relatively large oxygen metabolism (CMRO2) response in comparison to an unattended stimulus, which evokes a much smaller metabolic response relative to the flow response. We hypothesized that a similar effect would be associated with a more engaging stimulus, and tested this by measuring the primary human visual cortex response to two contrast levels of a radial flickering checkerboard in comparison to the response to free viewing of brief movie clips. We did not find a significant difference in the blood flow-metabolism coupling (n=%ΔCBF/%ΔCMRO2) between the movie stimulus and the flickering checkerboards employing two different analysis methods: a standard analysis using the Davis model and a new analysis using a heuristic model dependent only on measured quantities. This finding suggests that, in the primary visual cortex, either a naturalistic stimulus (in comparison to a simple repetitive stimulus) is not sufficient to provoke a change in flow-metabolism coupling through attentional modulation as hypothesized, or the experimental design disrupted the cognitive processes underlying the response to a more natural stimulus, or the technique used is not sensitive enough to detect a small difference. Copyright © 2014 Elsevier Inc. All rights reserved.
Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex
Singer, Wolf; Maass, Wolfgang
2009-01-01
It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (≤∼20 ms), and it persisted for several hundred milliseconds beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs. PMID:20027205
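As an illustration of the decoding approach described, the hedged sketch below trains a simple linear classifier on simulated ensemble spike counts; the tuning structure and trial counts are invented for demonstration, not taken from the recordings.

```python
# Linear readout of stimulus identity from simulated spike counts of
# ~100 neurons responding to 3 'letter' stimuli.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_neurons = 300, 100
labels = rng.integers(0, 3, n_trials)            # which of 3 stimuli
tuning = rng.normal(0, 1, (3, n_neurons))        # per-stimulus rate shifts
counts = rng.poisson(5.0, (n_trials, n_neurons)) + \
         np.maximum(tuning[labels] * 2, 0)       # noisy counts + tuning

clf = LogisticRegression(max_iter=1000)          # a simple linear readout
scores = cross_val_score(clf, counts, labels, cv=5)
print(f"linear decoding accuracy: {scores.mean():.2f}")
```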
Miller, Jeff; Van Nes, Fenna
2007-01-01
Two experiments tested predictions of the hemispheric coactivation model for redundancy gain (J. O. Miller, 2004). Simple reaction time was measured in divided attention tasks with visual stimuli presented to the left or right of fixation or redundantly to both sides. Experiment 1 tested the prediction that redundancy gain--the decrease in…
Truppa, Valentina; Carducci, Paola; Trapanese, Cinzia; Hanus, Daniel
2015-01-01
Most experimental paradigms to study visual cognition in humans and non-human species are based on discrimination tasks involving the choice between two or more visual stimuli. To this end, different types of stimuli and procedures for stimuli presentation are used, which highlights the necessity to compare data obtained with different methods. The present study assessed whether, and to what extent, capuchin monkeys’ ability to solve a size discrimination problem is influenced by the type of procedure used to present the problem. Capuchins’ ability to generalise knowledge across different tasks was also evaluated. We trained eight adult tufted capuchin monkeys to select the larger of two stimuli of the same shape and different sizes by using pairs of food items (Experiment 1), computer images (Experiment 1) and objects (Experiment 2). Our results indicated that monkeys achieved the learning criterion faster with food stimuli compared to both images and objects. They also required consistently fewer trials with objects than with images. Moreover, female capuchins had higher levels of acquisition accuracy with food stimuli than with images. Finally, capuchins did not immediately transfer the solution of the problem acquired in one task condition to the other conditions. Overall, these findings suggest that – even in relatively simple visual discrimination problems where a single perceptual dimension (i.e., size) has to be judged – learning speed strongly depends on the mode of presentation. PMID:25927363
The mere exposure effect in the domain of haptics.
Jakesch, Martina; Carbon, Claus-Christian
2012-01-01
Zajonc showed that the attitude towards stimuli that one had been previously exposed to is more positive than towards novel stimuli. This mere exposure effect (MEE) has been tested extensively using various visual stimuli. Research on the MEE is sparse, however, for other sensory modalities. We used objects of two material categories (stone and wood) and two complexity levels (simple and complex) to test the influence of exposure frequency (F0 = novel stimuli, F2 = stimuli exposed twice, F10 = stimuli exposed ten times) under two sensory modalities (haptics only and haptics & vision). Effects of exposure frequency were found for highly complex stimuli, with liking increasing significantly from F0 to F2 and F10, but only for the stone category. Analysis of "Need for Touch" data showed the MEE in participants with a high need for touch, which suggests different sensitivity or saturation levels of the MEE. These differing sensitivity or saturation levels might also reflect effects of expertise on the haptic evaluation of objects. It seems that haptic and cross-modal MEEs are influenced by factors similar to those in the visual domain, indicating a common cognitive basis.
Young Skilled Deaf Readers Have an Enhanced Perceptual Span in Reading.
Bélanger, Nathalie N; Lee, Michelle; Schotter, Elizabeth R
2017-04-27
Recently, Bélanger, Slattery, Mayberry and Rayner (2012) showed, using the moving window paradigm, that profoundly deaf adults have a wider perceptual span during reading relative to hearing adults matched on reading level. This difference might be related to the fact that deaf adults allocate more visual attention to simple stimuli in the parafovea (Bavelier, Dye & Hauser, 2006). Importantly, this reorganization of visual attention in deaf individuals is already manifesting in deaf children (Dye, Hauser & Bavelier, 2009). This leads to questions about the time course of the emergence of an enhanced perceptual span (which is under attentional control; Rayner, 2014; Miellet, O'Donnell, & Sereno, 2009) in young deaf readers. The present research addressed this question by comparing the perceptual spans of young deaf readers (age 7-15) and young hearing children (age 7-15). Young deaf readers, like deaf adults, were found to have a wider perceptual span relative to their hearing peers matched on reading level, suggesting that strong and early reorganization of visual attention in deaf individuals goes beyond the processing of simple visual stimuli and emerges into more cognitively complex tasks, such as reading.
Mirror me: Imitative responses in adults with autism.
Schunke, Odette; Schöttle, Daniel; Vettorazzi, Eik; Brandt, Valerie; Kahl, Ursula; Bäumer, Tobias; Ganos, Christos; David, Nicole; Peiker, Ina; Engel, Andreas K; Brass, Marcel; Münchau, Alexander
2016-02-01
Dysfunctions of the human mirror neuron system have been postulated to underlie some deficits in autism spectrum disorders including poor imitative performance and impaired social skills. Using three reaction time experiments addressing mirror neuron system functions under simple and complex conditions, we examined 20 adult autism spectrum disorder participants and 20 healthy controls matched for age, gender and education. Participants performed simple finger-lifting movements in response to (1) biological finger and non-biological dot movement stimuli, (2) acoustic stimuli and (3) combined visual-acoustic stimuli with different contextual (compatible/incompatible) and temporal (simultaneous/asynchronous) relation. Mixed model analyses revealed slower reaction times in autism spectrum disorder. Both groups responded faster to biological compared to non-biological stimuli (Experiment 1) implying intact processing advantage for biological stimuli in autism spectrum disorder. In Experiment 3, both groups had similar 'interference effects' when stimuli were presented simultaneously. However, autism spectrum disorder participants had abnormally slow responses particularly when incompatible stimuli were presented consecutively. Our results suggest imitative control deficits rather than global imitative system impairments. © The Author(s) 2015.
Effect of negative emotions evoked by light, noise and taste on trigeminal thermal sensitivity.
Yang, Guangju; Baad-Hansen, Lene; Wang, Kelun; Xie, Qiu-Fei; Svensson, Peter
2014-11-07
Patients with migraine often have impaired somatosensory function and experience headache attacks triggered by exogenous stimuli, such as light, sound, or taste. This study aimed to assess the influence of three controlled conditioning stimuli (visual, auditory, and gustatory) and their combination on affective state and thermal sensitivity in healthy human participants. All participants attended four experimental sessions, with visual, auditory, and gustatory conditioning stimuli and the combination of all stimuli, in a randomized sequence. In each session, somatosensory sensitivity was tested in the perioral region using thermal stimuli with and without the conditioning stimuli. Positive and Negative Affect States (PANAS) were assessed before and after the tests. Subject-based ratings of the conditioning and test stimuli, in addition to skin temperature and heart rate as indicators of arousal responses, were collected in real time during the tests. The three conditioning stimuli all induced significant increases in negative PANAS scores (paired t-test, P ≤ 0.016). Compared with baseline, the increases were near dose-dependent during visual and auditory conditioning stimulation. No significant effects of any single conditioning stimulus were observed on trigeminal thermal sensitivity (P ≥ 0.051) or arousal parameters (P ≥ 0.057). The effects of combined conditioning stimuli on subjective ratings (P ≤ 0.038) and negative affect (P = 0.011) were stronger than those of single stimuli. All three conditioning stimuli provided a simple way to evoke a negative affective state without physical arousal or influence on trigeminal thermal sensitivity. Multisensory conditioning had stronger effects but also failed to modulate thermal sensitivity, suggesting that so-called exogenous trigger stimuli (e.g., bright light, noise, or unpleasant taste) in patients with migraine may require a predisposed or sensitized nervous system.
Semantic congruency and the (reversed) Colavita effect in children and adults.
Wille, Claudia; Ebersbach, Mirjam
2016-01-01
When presented with auditory, visual, or bimodal audiovisual stimuli in a discrimination task, adults tend to ignore the auditory component in bimodal stimuli and respond to the visual component only (i.e., Colavita visual dominance effect). The same is true for older children, whereas young children are dominated by the auditory component of bimodal audiovisual stimuli. This suggests a change of sensory dominance during childhood. The aim of the current study was to investigate, in three experimental conditions, whether children and adults show sensory dominance when presented with complex semantic stimuli and whether this dominance can be modulated by stimulus characteristics such as semantic (in)congruency, frequency of bimodal trials, and color information. Semantic (in)congruency did not affect the magnitude of the auditory dominance effect in 6-year-olds or the visual dominance effect in adults, but it was a modulating factor of the visual dominance in 9-year-olds (Conditions 1 and 2). Furthermore, the absence of color information (Condition 3) did not affect auditory dominance in 6-year-olds and hardly affected visual dominance in adults, whereas the visual dominance in 9-year-olds disappeared. Our results suggest that (a) sensory dominance in children and adults is not restricted to simple lights and sounds, as used in previous research, but can be extended to semantically meaningful stimuli and that (b) sensory dominance is more robust in 6-year-olds and adults than in 9-year-olds, implying a transitional stage around this age. Copyright © 2015 Elsevier Inc. All rights reserved.
Maheux, Manon; Jolicœur, Pierre
2017-04-01
We examined the role of attention and visual working memory in the evaluation of the number of target stimuli as well as their relative spatial position using the N2pc and the SPCN. Participants performed two tasks: a simple counting task in which they had to determine if a visual display contained one or two coloured items among grey fillers and one in which they had to identify a specific relation between two coloured items. The same stimuli were used for both tasks. Each task was designed to permit an easier evaluation of either the same-coloured or differently-coloured stimuli. We predicted a greater involvement of attention and visual working memory for more difficult stimulus-task pairings. The results confirmed these predictions and suggest that visuospatial configurations that require more time to evaluate induce a greater (and presumably longer) involvement of attention and visual working memory. Copyright © 2017 Elsevier B.V. All rights reserved.
Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L
2012-04-01
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.
Generating Stimuli for Neuroscience Using PsychoPy.
Peirce, Jonathan W
2008-01-01
PsychoPy is a software library written in Python, using OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible, with the least effort. By writing scripts in standard Python syntax users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG etc.). The structure of scripts is simple and intuitive. As a result, new experiments can be written very quickly, and trying to understand a previously written script is easy, even with minimal code comments. PsychoPy can also generate movies and image sequences to be used in demos or simulated neuroscience experiments. This paper describes the range of tools and stimuli that it provides and the environment in which experiments are conducted.
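For concreteness, here is a minimal script of the kind PsychoPy supports, drawing a drifting Gabor frame-locked to the monitor refresh; all parameter values are illustrative.

```python
# A minimal PsychoPy demo: a drifting Gabor rendered via OpenGL.
from psychopy import visual, core

win = visual.Window(size=(800, 600), color=[0, 0, 0], units='pix')
gabor = visual.GratingStim(win, tex='sin', mask='gauss',
                           size=256, sf=0.02, ori=45)

for frame in range(120):        # ~2 s at a 60 Hz refresh rate
    gabor.phase += 1 / 60.0     # drift the carrier one cycle per second
    gabor.draw()
    win.flip()                  # synchronised to the monitor refresh

win.close()
core.quit()
```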
Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.
Putzar, Lisa; Gondan, Matthias; Röder, Brigitte
2012-01-01
People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on visual and/or crossmodal input since birth.
Using complex auditory-visual samples to produce emergent relations in children with autism.
Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P
2010-03-01
Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually and emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.
Strength and coherence of binocular rivalry depends on shared stimulus complexity.
Alais, David; Melcher, David
2007-01-01
Presenting incompatible images to the eyes results in alternations of conscious perception, a phenomenon known as binocular rivalry. We examined rivalry using either simple stimuli (oriented gratings) or coherent visual objects (faces, houses, etc.). Two rivalry characteristics were measured: depth of rivalry suppression and coherence of alternations. Rivalry between coherent visual objects exhibits deep suppression and coherent rivalry, whereas rivalry between gratings exhibits shallow suppression and piecemeal rivalry. Interestingly, rivalry between a simple and a complex stimulus displays the same characteristics (shallow and piecemeal) as rivalry between two simple stimuli. Thus, complex stimuli fail to rival globally unless the fellow stimulus is also global. We also conducted a face adaptation experiment. Adaptation to rivaling faces improved subsequent face discrimination (as expected), but adaptation to a rivaling face/grating pair did not. To explain this, we suggest rivalry must be an early and local process (at least initially), instigated by the failure of binocular fusion, which can then become globally organized by feedback from higher-level areas when both rivalry stimuli are global, so that rivalry tends to oscillate coherently. These globally assembled images then flow through object processing areas, with the dominant image gaining in relative strength in a form of 'biased competition', therefore accounting for the deeper suppression of global images. In contrast, when only one eye receives a global image, local piecemeal suppression from the fellow eye overrides the organizing effects of global feedback to prevent coherent image formation. This indicates the primacy of local over global processes in rivalry.
Neural basis of hierarchical visual form processing of Japanese Kanji characters.
Higuchi, Hiroki; Moriguchi, Yoshiya; Murakami, Hiroki; Katsunuma, Ruri; Mishima, Kazuo; Uno, Akira
2015-12-01
We investigated the neural processing of reading Japanese Kanji characters, which involves unique hierarchical visual processing, including the recognition of visual components specific to Kanji, such as "radicals." We performed functional MRI to measure brain activity in response to hierarchical visual stimuli containing (1) real Kanji characters (complete structure with semantic information), (2) pseudo Kanji characters (subcomponents without complete character structure), (3) artificial characters (character fragments), and (4) checkerboard (simple photic stimuli). As we expected, the peaks of the activation in response to different stimulus types were aligned within the left occipitotemporal visual region along the posterior-anterior axis in order of the structural complexity of the stimuli, from fragments (3) to complete characters (1). Moreover, only the real Kanji characters produced functional connectivity between the left inferotemporal area and the language area (left inferior frontal triangularis), while pseudo Kanji characters induced connectivity between the left inferotemporal area and the bilateral cerebellum and left putamen. Visual processing of Japanese Kanji takes place in the left occipitotemporal cortex, with a clear hierarchy within the region such that the neural activation differentiates the elements in Kanji characters' fragments, subcomponents, and semantics, with different patterns of connectivity to remote regions among the elements.
Visual Short-Term Memory Capacity for Simple and Complex Objects
Luria, Roy; Sessa, Paola; Gotler, Alex; Jolicoeur, Pierre; Dell'Acqua, Roberto
2010-01-01
Does the capacity of visual short-term memory (VSTM) depend on the complexity of the objects represented in memory? Although some previous findings indicated lower capacity for more complex stimuli, other results suggest that complexity effects arise during retrieval (due to errors in the comparison process with what is in memory) that is not…
Neural theory for the perception of causal actions.
Fleischer, Falk; Christensen, Andrea; Caggiano, Vittorio; Thier, Peter; Giese, Martin A
2012-07-01
The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.
Tucker, Thomas R; Katz, Lawrence C
2003-01-01
To investigate how neurons in cortical layer 2/3 integrate horizontal inputs arising from widely distributed sites, we combined intracellular recording and voltage-sensitive dye imaging to visualize the spatiotemporal dynamics of neuronal activity evoked by electrical stimulation of multiple sites in visual cortex. Individual stimuli evoked characteristic patterns of optical activity, while delivering stimuli at multiple sites generated interacting patterns in the regions of overlap. We observed that neurons in overlapping regions received convergent horizontal activation that generated nonlinear responses due to the emergence of large inhibitory potentials. The results indicate that co-activation of multiple sets of horizontal connections recruit strong inhibition from local inhibitory networks, causing marked deviations from simple linear integration.
Visual response time to colored stimuli in peripheral retina - Evidence for binocular summation
Haines, R. F.
1977-01-01
Simple onset response time (RT) experiments, previously shown to exhibit binocular summation effects for white stimuli along the horizontal meridian, were performed for red and green stimuli along 5 oblique meridians. Binocular RT was significantly shorter than monocular RT for a 45-min-diameter spot of red, green, or white light within eccentricities of about 50 deg from the fovea. Relatively large meridian differences were noted that appear to be due to the degree to which the images fall on corresponding retinal areas.
Shi, Lijuan; Zhou, Yuanyue; Ou, Jianjun; Gong, Jingbo; Wang, Suhong; Cui, Xilong; Lyu, Hailong; Zhao, Jingping; Luo, Xuerong
2015-01-01
Eye-tracking studies in young children with autism spectrum disorder (ASD) have shown a visual attention preference for geometric patterns when viewing paired dynamic social images (DSIs) and dynamic geometric images (DGIs). In the present study, eye-tracking of two different paired presentations of DSIs and DGIs was monitored in a group of 13 children aged 4 to 6 years with ASD and 20 chronologically age-matched typically developing children (TDC). The results indicated that compared with the control group, children with ASD attended significantly less to DSIs showing two or more children playing than to similar DSIs showing a single child. Visual attention preference in 4- to 6-year-old children with ASDs, therefore, appears to be modulated by the type of visual stimuli. PMID:25781170
Sanfratello, Lori; Aine, Cheryl; Stephen, Julia
2018-05-25
Impairments in auditory and visual processing are common in schizophrenia (SP). In the unisensory realm, visual deficits are primarily noted for the dorsal visual stream. In addition, insensitivity to timing offsets between stimuli is widely reported for SP. The aim of the present study was to test, at the physiological level, differences in dorsal/ventral stream visual processing and timing sensitivity between SP and healthy controls (HC) using MEG and a simple auditory/visual task utilizing a variety of multisensory conditions. The paradigm included all combinations of synchronous/asynchronous and central/peripheral stimuli, yielding 4 task conditions. Both HC and SP groups showed activation in parietal areas (dorsal visual stream) during all multisensory conditions, with parietal areas showing decreased activation for SP relative to HC, and a significantly delayed peak of activation for SP in the intraparietal sulcus (IPS). We also observed a differential effect of stimulus synchrony on HC and SP parietal responses. Furthermore, a negative correlation was found between SP positive symptoms and activity in IPS. Taken together, our results provide evidence of impairment of the dorsal visual stream in SP during a multisensory task, along with an altered response to timing offsets between presented multisensory stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.
Serial recall of colors: Two models of memory for serial order applied to continuous visual stimuli.
Peteranderl, Sonja; Oberauer, Klaus
2018-01-01
This study investigated the effects of serial position and temporal distinctiveness on serial recall of simple visual stimuli. Participants observed lists of five colors presented at varying, unpredictably ordered interitem intervals, and their task was to reproduce the colors in their order of presentation by selecting colors on a continuous-response scale. To control for the possibility of verbal labeling, articulatory suppression was required in one of two experimental sessions. The predictions were derived through simulation from two computational models of serial recall: SIMPLE represents the class of temporal-distinctiveness models, whereas SOB-CS represents event-based models. According to temporal-distinctiveness models, items that are temporally isolated within a list are recalled more accurately than items that are temporally crowded. In contrast, event-based models assume that the time intervals between items do not affect recall performance per se, although free time following an item can improve memory for that item because of the extended time available for encoding. The experimental and the simulated data were fit to an interference measurement model to measure the tendency to confuse items with other items nearby on the list (the locality constraint) in people as well as in the models. The continuous-reproduction performance showed a pronounced primacy effect with no recency, as well as some evidence for transpositions obeying the locality constraint. Though not entirely conclusive, this evidence favors event-based models over a role for temporal distinctiveness. There was also a strong detrimental effect of articulatory suppression, suggesting that verbal codes can be used to support serial-order memory for simple visual stimuli.
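To make the contrast between the model classes concrete, here is a hedged sketch of SIMPLE's core temporal-distinctiveness computation: items are located at the log of their retention intervals, and recallability falls with summed similarity to neighbours. The constant c and the timing values are illustrative, not fitted.

```python
# Temporal distinctiveness in the spirit of SIMPLE.
import numpy as np

def simple_distinctiveness(retention_s, c=10.0):
    """retention_s: time from each item's presentation to recall (s)."""
    logt = np.log(np.asarray(retention_s, dtype=float))
    sim = np.exp(-c * np.abs(logt[:, None] - logt[None, :]))
    return 1.0 / sim.sum(axis=1)   # higher = more temporally distinct

# Five colors; item index 2 is temporally isolated by long gaps.
retention = [14.0, 13.0, 8.0, 3.0, 2.5]   # seconds until recall
print(simple_distinctiveness(retention).round(3))
# A temporal-distinctiveness model predicts better recall for the isolated
# item (index 2); an event-based model like SOB-CS predicts no benefit of
# isolation per se, only of free time after an item.
```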
Stobbe, Nina; Westphal-Fitch, Gesche; Aust, Ulrike; Fitch, W. Tecumseh
2012-01-01
Artificial grammar learning (AGL) provides a useful tool for exploring rule learning strategies linked to general purpose pattern perception. To be able to directly compare performance of humans with other species with different memory capacities, we developed an AGL task in the visual domain. Presenting entire visual patterns simultaneously instead of sequentially minimizes the amount of required working memory. This approach allowed us to evaluate performance levels of two bird species, kea (Nestor notabilis) and pigeons (Columba livia), in direct comparison to human participants. After being trained to discriminate between two types of visual patterns generated by rules at different levels of computational complexity and presented on a computer screen, birds and humans received further training with a series of novel stimuli that followed the same rules, but differed in various visual features from the training stimuli. Most avian and all human subjects continued to perform well above chance during this initial generalization phase, suggesting that they were able to generalize learned rules to novel stimuli. However, detailed testing with stimuli that violated the intended rules regarding the exact number of stimulus elements indicates that neither bird species was able to successfully acquire the intended pattern rule. Our data suggest that, in contrast to humans, these birds were unable to master a simple rule above the finite-state level, even with simultaneous item presentation and despite intensive training. PMID:22688635
Microcontroller based fibre-optic visual presentation system for multisensory neuroimaging.
Kurniawan, Veldri; Klemen, Jane; Chambers, Christopher D
2011-10-30
Presenting visual stimuli in physical 3D space during fMRI experiments carries significant technical challenges. Certain types of multisensory visuotactile experiments and visuomotor tasks require presentation of visual stimuli in peripersonal space, which cannot be accommodated by ordinary projection screens or binocular goggles. However, light points produced by a group of LEDs can be transmitted through fibre-optic cables and positioned anywhere inside the MRI scanner. Here we describe the design and implementation of a microcontroller-based programmable digital device for controlling fibre-optically transmitted LED lights from a PC. The main feature of this device is the ability to independently control the colour, brightness, and timing of each LED. Moreover, the device was designed in a modular and extensible way, which enables easy adaptation for various experimental paradigms. The device was tested and validated in three fMRI experiments involving basic visual perception, a simple colour discrimination task, and a blocked multisensory visuo-tactile task. The results revealed significant lateralized activation in occipital cortex of all participants, a reliable response in ventral occipital areas to colour stimuli elicited by the device, and strong activations in multisensory brain regions in the multisensory task. Overall, these findings confirm the suitability of this device for presenting complex fibre-optic visual and cross-modal stimuli inside the scanner. Copyright © 2011 Elsevier B.V. All rights reserved.
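The paper's firmware and command protocol are not given here, so the following host-side sketch is purely illustrative: it assumes the pyserial package and a hypothetical ASCII command format for setting one LED's colour and brightness.

```python
# Hypothetical host-side control of one fibre-optically coupled LED.
import serial  # pyserial

ser = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1)

def set_led(index, r, g, b, brightness):
    """Send a made-up 'L,<index>,<r>,<g>,<b>,<brightness>' command;
    the real device's protocol is not specified in the abstract."""
    cmd = f"L,{index},{r},{g},{b},{brightness}\n"
    ser.write(cmd.encode('ascii'))

set_led(3, 255, 0, 0, 128)   # LED 3: red at half brightness
ser.close()
```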
Contrast discrimination, non-uniform patterns and change blindness.
Scott-Brown, K C; Orbach, H S
1998-01-01
Change blindness--our inability to detect large changes in natural scenes when saccades, blinks and other transients interrupt visual input--seems to contradict psychophysical evidence for our exquisite sensitivity to contrast changes. Can the type of effects described as 'change blindness' be observed with simple, multi-element stimuli, amenable to psychophysical analysis? Such stimuli, composed of five mixed contrast elements, elicited a striking increase in contrast increment thresholds compared to those for an isolated element. Cue presentation prior to the stimulus substantially reduced thresholds, as for change blindness with natural scenes. On one hand, explanations for change blindness based on abstract and sketchy representations in short-term visual memory seem inappropriate for this low-level image property of contrast where there is ample evidence for exquisite performance on memory tasks. On the other hand, the highly increased thresholds for mixed contrast elements, and the decreased thresholds when a cue is present, argue against any simple early attentional or sensory explanation for change blindness. Thus, psychophysical results for very simple patterns cannot straightforwardly predict results even for the slightly more complicated patterns studied here. PMID:9872004
Measuring the effect of attention on simple visual search.
Palmer, J; Ames, C T; Lindsey, D T
1993-02-01
Set-size effects in visual search may be due to one or more of three factors: sensory processes such as lateral masking between stimuli, attentional processes limiting the perception of individual stimuli, or attentional processes affecting the decision rules for combining information from multiple stimuli. These possibilities were evaluated in tasks such as searching for a longer line among shorter lines. To evaluate sensory contributions, display set-size effects were compared with cuing conditions that held sensory phenomena constant. Similar effects for the display and cue manipulations suggested that sensory processes contributed little under the conditions of this experiment. To evaluate the contribution of decision processes, the set-size effects were modeled with signal detection theory. In these models, a decision effect alone was sufficient to predict the set-size effects without any attentional limitation due to perception.
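A hedged sketch of the signal-detection account referred to above: if the observer bases the decision on the maximum of N independent noisy samples, accuracy falls with set size even without any perceptual capacity limit. The d' value and trial counts are illustrative.

```python
# Set-size effects from the decision rule alone (max rule).
import numpy as np

rng = np.random.default_rng(3)

def percent_correct(set_size, d_prime=2.0, n_trials=20_000):
    """2AFC-style search: one display holds a target (mean d'), the other
    only distractors (mean 0); choose the display with the larger max."""
    noise = rng.normal(0, 1, (n_trials, set_size))
    signal = noise.copy()
    signal[:, 0] += d_prime                      # one element is the target
    other = rng.normal(0, 1, (n_trials, set_size))
    return np.mean(signal.max(axis=1) > other.max(axis=1))

for n in (1, 2, 4, 8):
    print(n, round(percent_correct(n), 3))       # accuracy declines with n
```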
The Attentional Drift Diffusion Model of Simple Perceptual Decision-Making.
Tavares, Gabriela; Perona, Pietro; Rangel, Antonio
2017-01-01
Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM) provides a good computational description of how attention affects the underlying decision processes. We find evidence for sizable attentional choice biases and that the aDDM provides a reasonable quantitative description of the relationship between fluctuations in visual attention, choices and reaction times. We also find that exogenous manipulations of attention induce choice biases consistent with the predictions of the model.
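A hedged simulation sketch of the aDDM's core mechanism: the relative decision value drifts toward the fixated option, with the unattended option's evidence discounted by a parameter theta. All parameter values are illustrative, not the fitted ones.

```python
# Attentional drift diffusion: fixations gate which option drives drift.
import numpy as np

rng = np.random.default_rng(4)

def addm_trial(v_left, v_right, d=0.002, theta=0.3, sigma=0.02,
               bound=1.0, fix_dur=300):
    rdv, t = 0.0, 0
    look_left = rng.random() < 0.5
    first = 'left' if look_left else 'right'
    while abs(rdv) < bound:
        if t > 0 and t % fix_dur == 0:
            look_left = not look_left           # alternate fixations
        if look_left:
            drift = d * (v_left - theta * v_right)
        else:
            drift = d * (theta * v_left - v_right)
        rdv += drift + rng.normal(0, sigma)
        t += 1                                   # one step = 1 ms
    choice = 'left' if rdv > 0 else 'right'
    return choice, t, first

wins = [c == f for c, _, f in (addm_trial(1.0, 1.0) for _ in range(500))]
print('chose first-fixated option:', sum(wins) / len(wins))
# With equal stimulus values, choices are biased toward the first-fixated
# option: an attentional choice bias of the kind reported above.
```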
A comparative study of visual reaction time in table tennis players and healthy controls.
Bhabhor, Mahesh K; Vidja, Kalpesh; Bhanderi, Priti; Dodhia, Shital; Kathrotia, Rajesh; Joshi, Varsha
2013-01-01
Visual reaction time is the time required to respond to a visual stimulus. The present study measured visual reaction time in 209 subjects: 50 table tennis (TT) players and 159 healthy controls. Visual reaction time was measured with the direct RT computerized software in both groups. Simple visual reaction time was tested: the visual stimulus was presented eighteen times and the average reaction time was taken as the final value. The study shows that table tennis players had faster reaction times than healthy controls. On multivariate analysis, TT players had reaction times 74.121 ms faster (95% CI: 49.4 to 98.8 ms) than non-TT players of the same age and BMI. Playing TT also had a more profound influence on visual reaction time than BMI. Our study concludes that persons involved in sports have faster reaction times than controls. These results support the view that playing table tennis benefits eye-hand reaction time and improves concentration and alertness.
Synaptic Mechanisms Generating Orientation Selectivity in the ON Pathway of the Rabbit Retina.
Venkataramani, Sowmya; Taylor, W Rowland
2016-03-16
Neurons that signal the orientation of edges within the visual field have been widely studied in primary visual cortex. Much less is known about the mechanisms of orientation selectivity that arise earlier in the visual stream. Here we examine the synaptic and morphological properties of a subtype of orientation-selective ganglion cell in the rabbit retina. The receptive field has an excitatory ON center, flanked by excitatory OFF regions, a structure similar to simple cell receptive fields in primary visual cortex. Examination of the light-evoked postsynaptic currents in these ON-type orientation-selective ganglion cells (ON-OSGCs) reveals that synaptic input is mediated almost exclusively through the ON pathway. Orientation selectivity is generated by larger excitation for preferred relative to orthogonal stimuli, and conversely larger inhibition for orthogonal relative to preferred stimuli. Excitatory orientation selectivity arises in part from the morphology of the dendritic arbors. Blocking GABAA receptors reduces orientation selectivity of the inhibitory synaptic inputs and the spiking responses. Negative contrast stimuli in the flanking regions produce orientation-selective excitation in part by disinhibition of a tonic NMDA receptor-mediated input arising from ON bipolar cells. Comparison with earlier studies of OFF-type OSGCs indicates that diverse synaptic circuits have evolved in the retina to detect the orientation of edges in the visual input. A core goal for visual neuroscientists is to understand how neural circuits at each stage of the visual system extract and encode features from the visual scene. This study documents a novel type of orientation-selective ganglion cell in the retina and shows that the receptive field structure is remarkably similar to that of simple cells in primary visual cortex. However, the data indicate that, unlike in the cortex, orientation selectivity in the retina depends on the activity of inhibitory interneurons. The results further reveal the physiological basis for feature detection in the visual system, elucidate the synaptic mechanisms that generate orientation selectivity at an early stage of visual processing, and illustrate a novel role for NMDA receptors in retinal processing.
Multisensory brand search: How the meaning of sounds guides consumers' visual attention.
Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles
2016-06-01
Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load.
Attention Determines Contextual Enhancement versus Suppression in Human Primary Visual Cortex.
Flevaris, Anastasia V; Murray, Scott O
2015-09-02
Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus the exact same contextual stimulus arrangement can have completely opposite effects on neural responses, in some cases leading to orientation-tuned suppression and in other cases to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depend on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and the underlying neural mechanisms are debated. Here we show that the V1 response to a stimulus in the same context can be either suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well established properties of visual cortical responses: response normalization and feature-based enhancement.
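A hedged sketch of the kind of model described: feature-based attention multiplies the drive of every stimulus sharing the attended feature, and the response to the center stimulus is divisively normalized by a suppressive pool. All constants are illustrative.

```python
# Feature-based gain combined with divisive normalization.
import numpy as np

def v1_response(center=10.0, flanks=(10.0, 10.0), feature_gain=1.0,
                center_shares_feature=True, sigma=10.0):
    """Normalized response to the center stimulus given its flankers."""
    g_center = feature_gain if center_shares_feature else 1.0
    excitation = g_center * center
    pool = excitation + feature_gain * np.sum(flanks) + sigma
    return excitation / pool

print(v1_response())                                   # neutral: 0.250
print(v1_response(feature_gain=2.0))                   # attend iso flanks: ~0.286
print(v1_response(feature_gain=2.0,
                  center_shares_feature=False))        # orthogonal: ~0.167
# The same flanker arrangement yields enhancement or suppression depending
# on whether attention is directed to the center's orientation.
```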
Liu, Baolin; Meng, Xianyao; Wang, Zhongning; Wu, Guangning
2011-11-14
In the present study, we used event-related potentials (ERPs) to examine whether semantic integration occurs for ecologically unrelated audio-visual information. Videos with synchronous audio-visual information were used as stimuli, where the auditory stimuli were sine wave sounds with different sound levels, and the visual stimuli were simple geometric figures with different areas. In the experiment, participants were shown an initial display containing a single shape (drawn from a set of 6 shapes) with a fixed size (14 cm²) simultaneously with a 3500 Hz tone of a fixed intensity (80 dB). Following a short delay, another shape/tone pair was presented and the relationship between the size of the shape and the intensity of the tone varied across trials: in the V+A- condition, a large shape was paired with a soft tone; in the V+A+ condition, a large shape was paired with a loud tone, and so forth. The ERP results revealed that an N400 effect was elicited in the VA- conditions (V+A- and V-A+) as compared to the VA+ conditions (V+A+ and V-A-). This shows that semantic integration occurs when simultaneous, ecologically unrelated auditory and visual stimuli enter the human brain. We propose that this semantic integration is based on semantic constraints of the audio-visual information, which might derive from long-term learned associations stored in the human brain and short-term experience of incoming information. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Real-world spatial regularities affect visual working memory for objects.
Kaiser, Daniel; Stein, Timo; Peelen, Marius V
2015-12-01
Traditional memory research has focused on measuring and modeling the capacity of visual working memory for simple stimuli such as geometric shapes or colored disks. Although these studies have provided important insights, it is unclear how their findings apply to memory for more naturalistic stimuli. An important aspect of real-world scenes is that they contain a high degree of regularity: For instance, lamps appear above tables, not below them. In the present study, we tested whether such real-world spatial regularities affect working memory capacity for individual objects. Using a delayed change-detection task with concurrent verbal suppression, we found enhanced visual working memory performance for objects positioned according to real-world regularities, as compared to irregularly positioned objects. This effect was specific to upright stimuli, indicating that it did not reflect low-level grouping, because low-level grouping would be expected to equally affect memory for upright and inverted displays. These results suggest that objects can be held in visual working memory more efficiently when they are positioned according to frequently experienced real-world regularities. We interpret this effect as the grouping of single objects into larger representational units.
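As a hedged aside, performance in single-probe change-detection tasks of this kind is often summarized with Cowan's K = N × (hit rate − false-alarm rate); the numbers below are invented for illustration, not taken from the study.

```python
# Cowan's K: a common capacity estimate for change detection.
def cowans_k(set_size, hit_rate, false_alarm_rate):
    return set_size * (hit_rate - false_alarm_rate)

print(cowans_k(6, 0.80, 0.20))   # e.g., regularly positioned objects
print(cowans_k(6, 0.72, 0.20))   # e.g., irregularly positioned objects
```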
Kyllingsbæk, Søren; Sy, Jocelyn L; Giesbrecht, Barry
2011-05-01
The allocation of visual processing capacity is a key topic in studies and theories of visual attention. The load theory of Lavie (1995) proposes that allocation happens in two steps where processing resources are first allocated to task-relevant stimuli and secondly remaining capacity 'spills over' to task-irrelevant distractors. In contrast, the Theory of Visual Attention (TVA) proposed by Bundesen (1990) assumes that allocation happens in a single step where processing capacity is allocated to all stimuli, both task-relevant and task-irrelevant, in proportion to their relative attentional weight. Here we present data from two partial report experiments where we varied the number and discriminability of the task-irrelevant stimuli (Experiment 1) and perceptual load (Experiment 2). The TVA fitted the data of the two experiments well thus favoring the simple explanation with a single step of capacity allocation. We also show that the effects of varying perceptual load can only be explained by a combined effect of allocation of processing capacity as well as limits in visual working memory. Finally, we link the results to processing capacity understood at the neural level based on the neural theory of visual attention by Bundesen et al. (2005). Copyright © 2010 Elsevier Ltd. All rights reserved.
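To make TVA's single-step allocation concrete, here is a hedged sketch of its rate equation, v(x, i) = η(x, i) · β(i) · w(x) / Σw, with β simplified to a scalar; the weights and evidence values are illustrative.

```python
# TVA-style single-step capacity allocation across all stimuli.
import numpy as np

def tva_rates(eta, beta, weights):
    """eta: sensory evidence per item; beta: decision bias (scalar here);
    weights: attentional weights per item."""
    w = np.asarray(weights, dtype=float)
    return np.asarray(eta, dtype=float) * beta * (w / w.sum())

# Two targets (weight 1.0) and four distractors (weight 0.2 each):
rates = tva_rates(eta=[40, 40, 40, 40, 40, 40], beta=0.8,
                  weights=[1.0, 1.0, 0.2, 0.2, 0.2, 0.2])
print(rates.round(2))
# Distractors receive capacity in proportion to their relative weight in
# one step, rather than via a separate 'spill-over' stage.
```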
Galashan, Daniela; Fehr, Thorsten; Kreiter, Andreas K; Herrmann, Manfred
2014-07-11
Initially, human area MT+ was considered a visual area solely processing motion information, but further research has shown that it is also involved in various other cognitive operations, such as working memory tasks requiring motion-related information to be maintained or cognitive tasks with implied or expected motion. In the present fMRI study in humans, we focused on MT+ modulation during working memory maintenance using a dynamic shape-tracking working memory task with no motion-related working memory content. Working memory load was systematically varied using complex and simple stimulus material and parametrically increasing retention periods. Activation patterns for the difference between retention of complex and simple memorized stimuli were examined in order to preclude that the reported effects are caused by differences in retrieval. Conjunction analysis over all delay durations for the maintenance of complex versus simple stimuli demonstrated a wide-spread activation pattern. Percent signal change (PSC) in area MT+ revealed a pattern with higher values for the maintenance of complex shapes compared to the retention of a simple circle and with higher values for increasing delay durations. The present data extend previous knowledge by demonstrating that visual area MT+ presents a brain activity pattern usually found in brain regions that are actively involved in working memory maintenance.
Combining local and global limitations of visual search.
Põder, Endel
2017-04-01
There are different opinions about the roles of local interactions and central processing capacity in visual search. This study attempts to clarify the problem using a new version of relevant set cueing. A central precue indicates two symmetrical segments (that may contain a target object) within a circular array of objects presented briefly around the fixation point. The number of objects in the relevant segments and the density of objects in the array were varied independently. Three types of search experiments were run: (a) search for a simple visual feature (color, size, and orientation); (b) conjunctions of simple features; and (c) spatial configuration of simple features (rotated Ts). For spatial configuration stimuli, the results were consistent with a fixed global processing capacity and standard crowding zones. For simple features and their conjunctions, the results were different, depending on the features involved. While color search exhibited virtually no capacity limits or crowding, search for an orientation target was limited by both. Results for conjunctions of features can be partly explained by the results from the respective features. This study shows that visual search is limited by both local interference and global capacity, and the limitations are different for different visual features.
Matsumoto, Narihisa; Eldridge, Mark A G; Saunders, Richard C; Reoli, Rachel; Richmond, Barry J
2016-01-06
In primates, visual recognition of complex objects depends on the inferior temporal lobe. By extension, categorizing visual stimuli based on similarity ought to depend on the integrity of the same area. We tested three monkeys before and after bilateral anterior inferior temporal cortex (area TE) removal. Although mildly impaired after the removals, they retained the ability to assign stimuli to previously learned categories, e.g., cats versus dogs, and human versus monkey faces, even with trial-unique exemplars. After the TE removals, they learned in one session to classify members from a new pair of categories, cars versus trucks, as quickly as they had learned the cats versus dogs before the removals. As with the dogs and cats, they generalized across trial-unique exemplars of cars and trucks. However, as seen in earlier studies, these monkeys with TE removals had difficulty learning to discriminate between two simple black and white stimuli. These results raise the possibility that TE is needed for memory of simple conjunctions of basic features, but that it plays only a small role in generalizing overall configural similarity across a large set of stimuli, such as would be needed for perceptual categorical assignment. The process of seeing and recognizing objects is attributed to a set of sequentially connected brain regions stretching forward from the primary visual cortex through the temporal lobe to the anterior inferior temporal cortex, a region designated area TE. Area TE is considered the final stage for recognizing complex visual objects, e.g., faces. It has been assumed, but not tested directly, that this area would be critical for visual generalization, i.e., the ability to place objects such as cats and dogs into their correct categories. Here, we demonstrate that monkeys rapidly and seemingly effortlessly categorize large sets of complex images (cats vs dogs, cars vs trucks), surprisingly, even after removal of area TE, leaving a puzzle about how this generalization is done. Copyright © 2016 the authors.
A simple automated system for appetitive conditioning of zebrafish in their home tanks.
Doyle, Jillian M; Merovitch, Neil; Wyeth, Russell C; Stoyek, Matthew R; Schmidt, Michael; Wilfart, Florentin; Fine, Alan; Croll, Roger P
2017-01-15
We describe here an automated apparatus that permits rapid conditioning paradigms for zebrafish. Arduino microprocessors were used to control the delivery of auditory or visual stimuli to groups of adult or juvenile zebrafish in their home tanks in a conventional zebrafish facility. An automatic feeder dispensed precise amounts of food immediately after the conditioned stimuli, or at variable delays for controls. Responses were recorded using inexpensive cameras, with the video sequences analysed with ImageJ or Matlab. Fish showed significant conditioned responses in as few as 5 trials, learning that the conditioned stimulus was a predictor of food presentation at the water surface and at the end of the tank where the food was dispensed. Memories of these conditioned associations persisted for at least 2 days after training when fish were tested either as groups or as individuals. Control fish, for which the auditory or visual stimuli were specifically unpaired with food, showed no comparable responses. This simple, low-cost, automated system permits scalable conditioning of zebrafish with minimal human intervention, greatly reducing both variability and labour-intensiveness. It will be useful for studies of the neural basis of learning and memory, and for high-throughput screening of compounds modifying those processes. Copyright © 2016 Elsevier B.V. All rights reserved.
SSVEP-based BCI for manipulating three-dimensional contents and devices
NASA Astrophysics Data System (ADS)
Mun, Sungchul; Cho, Sungjin; Whang, Mincheol; Ju, Byeong-Kwon; Park, Min-Chul
2012-06-01
Brain Computer Interface (BCI) studies have been done to help people manipulate electronic devices in a 2D space but less has been done for a vigorous 3D environment. The purpose of this study was to investigate the possibility of applying Steady State Visual Evoked Potentials (SSVEPs) to a 3D LCD display. Eight subjects (4 females) ranging in age from 20 to 26 years participated in the experiment. They performed simple navigation tasks on a simple 2D space and virtual environment with/without 3D flickers generated by a Film-Type Patterned Retarder (FPR). The experiments were conducted in a counterbalanced order. The results showed that 3D stimuli enhanced BCI performance, but no significant effects were found due to the small number of subjects. Visual fatigue that might be evoked by 3D stimuli was negligible in this study. The proposed SSVEP BCI combined with 3D flickers can allow people to control home appliances and other equipment such as wheelchairs, prosthetics, and orthotics without encountering dangerous situations that may happen when using BCIs in the real world. 3D stimuli-based SSVEP BCI would motivate people to use 3D displays and vitalize the 3D-related industry due to its entertainment value and high performance.
Age-related changes in event-cued visual and auditory prospective memory proper.
Uttl, Bob
2006-06-01
We rely upon prospective memory proper (ProMP) to bring back to awareness previously formed plans and intentions at the right place and time, and to enable us to act upon those plans and intentions. To examine age-related changes in ProMP, younger and older participants made decisions about simple stimuli (ongoing task) and at the same time were required to respond to a ProM cue, either a picture (visually cued ProM test) or a sound (auditorily cued ProM test), embedded in a simultaneously presented series of similar stimuli (either pictures or sounds). The cue display size or loudness increased across trials until a response was made. The cue size and cue loudness at the time of response indexed ProMP. The main results showed that both visual and auditory ProMP declined with age, and that such declines were mediated by age declines in sensory functions (visual acuity and hearing level), processing resources, working memory, intelligence, and ongoing task resource allocation.
Synaptic physiology of the flow of information in the cat's visual cortex in vivo
Hirsch, Judith A; Martinez, Luis M; Alonso, José-Manuel; Desai, Komal; Pillai, Cinthi; Pierre, Carhine
2002-01-01
Each stage of the striate cortical circuit extracts novel information about the visual environment. We asked if this analytic process reflected laminar variations in synaptic physiology by making whole-cell recording with dye-filled electrodes from the cat's visual cortex and thalamus; the stimuli were flashed spots. Thalamic afferents terminate in layer 4, which contains two types of cell, simple and complex, distinguished by the spatial structure of the receptive field. Previously, we had found that the postsynaptic and spike responses of simple cells reliably followed the time course of flash-evoked thalamic activity. Here we report that complex cells in layer 4 (or cells intermediate between simple and complex) similarly reprised thalamic activity (response/trial, 99 ± 1.9 %; response duration 159 ± 57 ms; latency 25 ± 4 ms; average ± standard deviation; n = 7). Thus, all cells in layer 4 share a common synaptic physiology that allows secure integration of thalamic input. By contrast, at the second cortical stage (layer 2+3), where layer 4 directs its output, postsynaptic responses did not track simple patterns of antecedent activity. Typical responses to the static stimulus were intermittent and brief (response/trial, 31 ± 40 %; response duration 72 ± 60 ms, latency 39 ± 7 ms; n = 11). Only richer stimuli like those including motion evoked reliable responses. All told, the second level of cortical processing differs markedly from the first. At that later stage, ascending information seems strongly gated by connections between cortical neurons. Inputs must be combined in newly specified patterns to influence intracortical stages of processing. PMID:11927691
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature-integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap
Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin’ya
2013-01-01
It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap. PMID:23658549
de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier
2016-11-21
Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.
Parés, Narcís; Carreras, Anna; Durany, Jaume; Ferrer, Jaume; Freixa, Pere; Gómez, David; Kruglanski, Orit; Parés, Roc; Ribas, J Ignasi; Soler, Miquel; Sanjurjo, Alex
2006-04-01
On starting to think about interaction design for low-functioning persons in the autistic spectrum (PAS), especially children, one finds a number of questions that are difficult to answer: Can we typify the PAS user? Can we engage the user in interactive communication without generating frustrating or obsessive situations? What sort of visual stimuli can we provide? Will they prefer representational or abstract visual stimuli? Will they understand three-dimensional (3D) graphic representation? What sort of interfaces will they accept? Can we set ambitious goals such as education or therapy? Unfortunately, most of these questions have no answer yet. Hence, we decided to set an apparently simple goal: to design a "fun application," with no intention to reach the level of education or therapy. The goal was to be attained by giving the users a sense of agency, first by providing a sense of control in the interaction dialogue. Our approach to visual stimuli design has been based on the use of geometric, abstract, two-dimensional (2D), real-time computer graphics in a full-body, non-invasive, interactive space. The results obtained within the European-funded project MultiSensory Environment Design for an Interface between Autistic and Typical Expressiveness (MEDIATE) have been extremely encouraging.
Audiovisual correspondence between musical timbre and visual shapes
Adeli, Mohammad; Rouat, Jean; Molotchnikoff, Stéphane
2014-01-01
This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, such features as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, in most studies, simple stimuli, e.g., simple tones, have been utilized. In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound, i.e., its shape, color (or grayscale), and vertical position. This scenario permitted studying the associations between normalized timbre and visual shapes as well as some of the previous findings for more complex stimuli. One hundred and nineteen subjects (31 females and 88 males) participated in the online experiment. Subjects included 36 self-reported professional musicians, 47 self-reported amateur musicians, and 36 self-reported non-musicians. Thirty-one subjects also reported synesthesia-like experiences. A strong association between timbre of envelope-normalized sounds and visual shapes was observed. Subjects strongly associated soft timbres with blue, green or light gray rounded shapes, harsh timbres with red, yellow or dark gray sharp angular shapes, and timbres having elements of softness and harshness together with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale or color. The significant correspondence between timbre and shape revealed by the present work allows designing substitution systems which might help the blind to perceive shapes through timbre. PMID:24910604
Coactivation of response initiation processes with redundant signals.
Maslovat, Dana; Hajj, Joëlle; Carlsen, Anthony N
2018-05-14
During reaction time (RT) tasks, participants respond faster to multiple stimuli from different modalities as compared to a single stimulus, a phenomenon known as the redundant signal effect (RSE). Explanations for this effect typically include coactivation arising from the multiple stimuli, which results in enhanced processing of one or more response production stages. The current study compared empirical RT data with the predictions of a model in which initiation-related activation arising from each stimulus is additive. Participants performed a simple wrist extension RT task following either a visual go-signal, an auditory go-signal, or both stimuli with the auditory stimulus delayed between 0 and 125 ms relative to the visual stimulus. Results showed statistical equivalence between the predictions of an additive initiation model and the observed RT data, providing novel evidence that the RSE can be explained via a coactivation of initiation-related processes. It is speculated that activation summation occurs at the thalamus, leading to the observed facilitation of response initiation. Copyright © 2018 Elsevier B.V. All rights reserved.
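The additive-initiation account being tested here can be caricatured in a few lines: activation triggered by each go-signal accumulates after a modality-specific delay, and a response is initiated when the summed activation crosses a threshold, so redundant trials cross sooner and the benefit shrinks as the auditory delay grows. A toy simulation under invented rates, delays, and threshold (not the authors' model code):

```python
import numpy as np

# Toy additive-initiation model of the redundant signal effect: activation
# from each stimulus grows linearly after a modality-specific delay, and a
# response is initiated when the SUMMED activation crosses a threshold.
# Every parameter here is hypothetical.
rng = np.random.default_rng(1)

def mean_rt(visual=True, auditory=True, soa=0, n=2000, thresh=100.0):
    t = np.arange(2000)[:, None]                       # time base (ms)
    act = np.zeros((2000, n))
    if visual:
        act += rng.normal(0.50, 0.08, n) * np.clip(t - 60, 0, None)
    if auditory:
        act += rng.normal(0.55, 0.08, n) * np.clip(t - 80 - soa, 0, None)
    return (act >= thresh).argmax(axis=0).mean()       # first crossing time

print(mean_rt(auditory=False))           # visual alone
print(mean_rt(visual=False))             # auditory alone
print(mean_rt(soa=0), mean_rt(soa=125))  # redundant, auditory delayed
```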
The neural basis of precise visual short-term memory for complex recognisable objects.
Veldsman, Michele; Mitchell, Daniel J; Cusack, Rhodri
2017-10-01
Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained compared to simple objects. It is not yet known if it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions supporting maintenance of simple objects. We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli, to investigate the impact of recognisability on VSTM. We adapted the widely used change detection and continuous report paradigms for use with complex, photographic images. Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable, complex objects. We therefore propose that a richer range of neural representations support VSTM for complex recognisable objects. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Medina, José M.; Díaz, José A.
2006-05-01
Simple visual-reaction times (VRT) were measured for a variety of stimuli selected along red-green (L-M axis) and blue-yellow [S-(L+M) axis] directions in the isoluminant plane under different adaptation stimuli. Data were plotted in terms of the RMS cone contrast in contrast-threshold units. For each opponent system, a modified Piéron function was fitted in each experimental configuration and on all adaptation stimuli. A single function did not account for all the data, confirming the existence of separate postreceptoral adaptation mechanisms in each opponent system under suprathreshold conditions. The analysis of the VRT-hazard functions suggested that both color-opponent mechanisms present a well-defined, transient-sustained structure at marked suprathreshold conditions. The influence of signal polarity and chromatic adaptation on each color axis proves the existence of asymmetries in the integrated hazard functions, suggesting separate detection mechanisms for each pole (red, green, blue, and yellow detectors).
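A Piéron function relates response time to stimulus intensity as RT = t0 + k·C^(−β), with C here the RMS cone contrast in threshold units. A minimal curve fit of that form, with fabricated contrast and VRT values standing in for the real measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of fitting a Piéron function RT = t0 + k * C**(-beta) to simple
# visual reaction times, where C is contrast in threshold units.
# The data values below are made up for illustration.
def pieron(C, t0, k, beta):
    return t0 + k * C**(-beta)

C = np.array([2, 4, 8, 16, 32], dtype=float)            # contrast (threshold units)
RT = np.array([420, 360, 325, 305, 295], dtype=float)   # mean VRT (ms)

params, _ = curve_fit(pieron, C, RT, p0=(250, 200, 1.0))
t0, k, beta = params
print(f"t0={t0:.0f} ms, k={k:.0f}, beta={beta:.2f}")
```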
Brief Report: Face-Specific Recognition Deficits in Young Children with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Bradshaw, Jessica; Shic, Frederick; Chawarska, Katarzyna
2011-01-01
This study used eyetracking to investigate the ability of young children with autism spectrum disorders (ASD) to recognize social (faces) and nonsocial (simple objects and complex block patterns) stimuli using the visual paired comparison (VPC) paradigm. Typically developing (TD) children showed evidence for recognition of faces and simple…
Victor, Jonathan D; Mechler, Ferenc; Ohiorhenuan, Ifije; Schmid, Anita M; Purpura, Keith P
2009-12-01
A full understanding of the computations performed in primary visual cortex is an important yet elusive goal. Receptive field models consisting of cascades of linear filters and static nonlinearities may be adequate to account for responses to simple stimuli such as gratings and random checkerboards, but their predictions of responses to complex stimuli such as natural scenes are only approximately correct. It is unclear whether these discrepancies are limited to quantitative inaccuracies that reflect well-recognized mechanisms such as response normalization, gain controls, and cross-orientation suppression or, alternatively, imply additional qualitative features of the underlying computations. To address this question, we examined responses of V1 and V2 neurons in the monkey and area 17 neurons in the cat to two-dimensional Hermite functions (TDHs). TDHs are intermediate in complexity between traditional analytic stimuli and natural scenes and have mathematical properties that facilitate their use to test candidate models. By exploiting these properties, along with the laminar organization of V1, we identify qualitative aspects of neural computations beyond those anticipated from the above-cited model framework. Specifically, we find that V1 neurons receive signals from orientation-selective mechanisms that are highly nonlinear: they are sensitive to phase correlations, not just spatial frequency content. That is, the behavior of V1 neurons departs from that of linear-nonlinear cascades with standard modulatory mechanisms in a qualitative manner: even relatively simple stimuli evoke responses that imply complex spatial nonlinearities. The presence of these findings in the input layers suggests that these nonlinearities act in a feedback fashion.
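For concreteness, a Cartesian two-dimensional Hermite function is a product of Hermite polynomials along x and y under a shared Gaussian envelope, h_{j,k}(x, y) ∝ H_j(x/σ) H_k(y/σ) exp(−(x² + y²)/(2σ²)). A minimal generator, with ranks, grid size, and envelope width chosen arbitrarily rather than taken from the paper:

```python
import numpy as np
from scipy.special import eval_hermite

# Sketch of a Cartesian two-dimensional Hermite function (TDH) stimulus:
# Hermite polynomials along x and y under a shared Gaussian envelope.
# Ranks and sigma are arbitrary illustrative choices.
def tdh(j, k, size=256, sigma=0.15):
    ax = np.linspace(-1, 1, size)
    x, y = np.meshgrid(ax, ax)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    img = eval_hermite(j, x / sigma) * eval_hermite(k, y / sigma) * g
    return img / np.abs(img).max()   # normalize to [-1, 1] for display

stimulus = tdh(j=3, k=2)             # a rank-5 TDH (j + k = 5)
```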
Exploration of complex visual feature spaces for object perception
Leeds, Daniel D.; Pyles, John A.; Tarr, Michael J.
2014-01-01
The mid- and high-level visual properties supporting object perception in the ventral visual pathway are poorly understood. In the absence of well-specified theory, many groups have adopted a data-driven approach in which they progressively interrogate neural units to establish each unit's selectivity. Such methods are challenging in that they require search through a wide space of feature models and stimuli using a limited number of samples. To more rapidly identify higher-level features underlying human cortical object perception, we implemented a novel functional magnetic resonance imaging method in which visual stimuli are selected in real-time based on BOLD responses to recently shown stimuli. This work was inspired by earlier primate physiology work, in which neural selectivity for mid-level features in IT was characterized using a simple parametric approach (Hung et al., 2012). To extend such work to human neuroimaging, we used natural and synthetic object stimuli embedded in feature spaces constructed on the basis of the complex visual properties of the objects themselves. During fMRI scanning, we employed a real-time search method to control continuous stimulus selection within each image space. This search was designed to maximize neural responses across a pre-determined 1 cm3 brain region within ventral cortex. To assess the value of this method for understanding object encoding, we examined both the behavior of the method itself and the complex visual properties the method identified as reliably activating selected brain regions. We observed: (1) Regions selective for both holistic and component object features and for a variety of surface properties; (2) Object stimulus pairs near one another in feature space that produce responses at the opposite extremes of the measured activity range. Together, these results suggest that real-time fMRI methods may yield more widely informative measures of selectivity within the broad classes of visual features associated with cortical object representation. PMID:25309408
Almeida, Inês; van Asselen, Marieke; Castelo-Branco, Miguel
2013-09-01
In human cognition, most relevant stimuli, such as faces, are processed in central vision. However, it is widely believed that recognition of relevant stimuli (e.g. threatening animal faces) at peripheral locations is also important due to their survival value. Moreover, task instructions have been shown to modulate brain regions involved in threat recognition (e.g. the amygdala). In this respect it is also controversial whether tasks requiring explicit focus on stimulus threat content vs. implicit processing differently engage primitive subcortical structures involved in emotional appraisal. Here we have addressed the role of central vs. peripheral processing in the human amygdala using animal threatening vs. non-threatening face stimuli. First, a simple animal face recognition task with threatening and non-threatening animal faces, as well as non-face control stimuli, was employed in naïve subjects (implicit task). A subsequent task was then performed with the same stimulus categories (but different stimuli) in which subjects were told to explicitly detect threat signals. We found lateralized amygdala responses both to the spatial location of stimuli and to the threatening content of faces depending on the task performed: the right amygdala showed increased responses to central compared to left presented stimuli specifically during the threat detection task, while the left amygdala was better able to discriminate threatening faces from non-facial displays during the animal face recognition task. Additionally, the right amygdala responded to faces during the threat detection task but only when centrally presented. Moreover, we have found no evidence for superior responses of the amygdala to peripheral stimuli. Importantly, we have found that striatal regions activate differentially depending on peripheral vs. central processing of threatening faces. Accordingly, peripheral processing of these stimuli more strongly activated the putaminal region, while central processing engaged mainly the caudate nucleus. We conclude that the human amygdala has a central bias for face stimuli, and that visual processing recruits different striatal regions, putaminal or caudate based, depending on the task and on whether peripheral or central visual processing is involved. © 2013 Elsevier Ltd. All rights reserved.
Prey Capture Behavior Evoked by Simple Visual Stimuli in Larval Zebrafish
Bianco, Isaac H.; Kampff, Adam R.; Engert, Florian
2011-01-01
Understanding how the nervous system recognizes salient stimuli in the environment and selects and executes the appropriate behavioral responses is a fundamental question in systems neuroscience. To facilitate the neuroethological study of visually guided behavior in larval zebrafish, we developed “virtual reality” assays in which precisely controlled visual cues can be presented to larvae whilst their behavior is automatically monitored using machine vision algorithms. Freely swimming larvae responded to moving stimuli in a size-dependent manner: they directed multiple low amplitude orienting turns (∼20°) toward small moving spots (1°) but reacted to larger spots (10°) with high-amplitude aversive turns (∼60°). The tracking of small spots led us to examine how larvae respond to prey during hunting routines. By analyzing movie sequences of larvae hunting paramecia, we discovered that all prey capture routines commence with eye convergence and larvae maintain their eyes in a highly converged position for the duration of the prey-tracking and capture swim phases. We adapted our virtual reality assay to deliver artificial visual cues to partially restrained larvae and found that small moving spots evoked convergent eye movements and J-turns of the tail, which are defining features of natural hunting. We propose that eye convergence represents the engagement of a predatory mode of behavior in larval fish and serves to increase the region of binocular visual space to enable stereoscopic targeting of prey. PMID:22203793
The Modality Shift Experiment in Adults and Children with High Functioning Autism
ERIC Educational Resources Information Center
Williams, Diane L.; Goldstein, Gerald; Minshew, Nancy J.
2013-01-01
This study used the modality shift experiment, a relatively simple reaction time measure to visual and auditory stimuli, to examine attentional shifting within and across modalities in 33 children and 42 adults with high-functioning autism as compared to matched numbers of age- and ability-matched typical controls. An exaggerated "modality shift…
Aging and the interaction of sensory cortical function and structure.
Peiffer, Ann M; Hugenschmidt, Christina E; Maldjian, Joseph A; Casanova, Ramon; Srikanth, Ryali; Hayasaka, Satoru; Burdette, Jonathan H; Kraft, Robert A; Laurienti, Paul J
2009-01-01
Even the healthiest older adults experience changes in cognitive and sensory function. Studies show that older adults have reduced neural responses to sensory information. However, it is well known that sensory systems do not act in isolation but function cooperatively to either enhance or suppress neural responses to individual environmental stimuli. Very little research has been dedicated to understanding how aging affects the interactions between sensory systems, especially cross-modal deactivations or the ability of one sensory system (e.g., audition) to suppress the neural responses in another sensory system cortex (e.g., vision). Such cross-modal interactions have been implicated in attentional shifts between sensory modalities and could account for increased distractibility in older adults. To assess age-related changes in cross-modal deactivations, functional MRI studies were performed in 61 adults between 18 and 80 years old during simple auditory and visual discrimination tasks. Results within visual cortex confirmed previous findings of decreased responses to visual stimuli for older adults. Age-related changes in the visual cortical response to auditory stimuli were, however, much more complex and suggested an alteration with age in the functional interactions between the senses. Ventral visual cortical regions exhibited cross-modal deactivations in younger but not older adults, whereas more dorsal aspects of visual cortex were suppressed in older but not younger adults. These differences in deactivation also remained after adjusting for age-related reductions in brain volume of sensory cortex. Thus, functional differences in cortical activity between older and younger adults cannot solely be accounted for by differences in gray matter volume. (c) 2007 Wiley-Liss, Inc.
Enhanced visual short-term memory in action video game players.
Blacker, Kara J; Curby, Kim M
2013-08-01
Visual short-term memory (VSTM) is critical for acquiring visual knowledge and shows marked individual variability. Previous work has illustrated a VSTM advantage among action video game players (Boot et al. Acta Psychologica 129:387-398, 2008). A growing body of literature has suggested that action video game playing can bolster visual cognitive abilities in a domain-general manner, including abilities related to visual attention and the speed of processing, providing some potential bases for this VSTM advantage. In the present study, we investigated the VSTM advantage among video game players and assessed whether enhanced processing speed can account for this advantage. Experiment 1, using simple colored stimuli, revealed that action video game players demonstrate a similar VSTM advantage over nongamers, regardless of whether they are given limited or ample time to encode items into memory. Experiment 2, using complex shapes as the stimuli to increase the processing demands of the task, replicated this VSTM advantage, irrespective of encoding duration. These findings are inconsistent with a speed-of-processing account of this advantage. An alternative, attentional account, grounded in the existing literature on the visuo-cognitive consequences of video game play, is discussed.
Executive and Perceptual Distraction in Visual Working Memory
2017-01-01
The contents of visual working memory are likely to reflect the influence of both executive control resources and information present in the environment. We investigated whether executive attention is critical in the ability to exclude unwanted stimuli by introducing concurrent potentially distracting irrelevant items to a visual working memory paradigm, and manipulating executive load using simple or more demanding secondary verbal tasks. Across 7 experiments varying in presentation format, timing, stimulus set, and distractor number, we observed clear disruptive effects of executive load and visual distraction, but relatively minimal evidence supporting an interactive relationship between these factors. These findings are in line with recent evidence using delay-based interference, and suggest that different forms of attentional selection operate relatively independently in visual working memory. PMID:28414499
Neocortical Rebound Depolarization Enhances Visual Perception
Funayama, Kenta; Ban, Hiroshi; Chan, Allen W.; Matsuki, Norio; Murphy, Timothy H.; Ikegaya, Yuji
2015-01-01
Animals are constantly exposed to the time-varying visual world. Because visual perception is modulated by immediately prior visual experience, visual cortical neurons may register recent visual history into a specific form of offline activity and link it to later visual input. To examine how preceding visual inputs interact with upcoming information at the single neuron level, we designed a simple stimulation protocol in which a brief, orientated flashing stimulus was subsequently coupled to visual stimuli with identical or different features. Using in vivo whole-cell patch-clamp recording and functional two-photon calcium imaging from the primary visual cortex (V1) of awake mice, we discovered that a flash of sinusoidal grating per se induces an early, transient activation as well as a long-delayed reactivation in V1 neurons. This late response, which started hundreds of milliseconds after the flash and persisted for approximately 2 s, was also observed in human V1 electroencephalogram. When another drifting grating stimulus arrived during the late response, the V1 neurons exhibited a sublinear, but apparently increased response, especially to the same grating orientation. In behavioral tests of mice and humans, the flashing stimulation enhanced the detection power of the identically orientated visual stimulation only when the second stimulation was presented during the time window of the late response. Therefore, V1 late responses likely provide a neural basis for admixing temporally separated stimuli and extracting identical features in time-varying visual environments. PMID:26274866
Enhanced pure-tone pitch discrimination among persons with autism but not Asperger syndrome.
Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A; Mottron, Laurent
2010-07-01
Persons with Autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis. Based on these findings, Samson, Mottron, Jemel, Belin, and Ciocca (2006) proposed to extend the neural complexity hypothesis to the auditory modality. They hypothesized that persons with ASD should display enhanced performance for simple tones that are processed in primary auditory cortical regions, but diminished performance for complex tones that require additional processing in associative auditory regions, in comparison to typically developing individuals. To assess this hypothesis, we designed four auditory discrimination experiments targeting pitch, non-vocal and vocal timbre, and loudness. Stimuli consisted of spectro-temporally simple and complex tones. The participants were adolescents and young adults with autism, Asperger syndrome, and typical developmental histories, all with IQs in the normal range. Consistent with the neural complexity hypothesis and enhanced perceptual functioning model of ASD (Mottron, Dawson, Soulières, Hubert, & Burack, 2006), the participants with autism, but not with Asperger syndrome, displayed enhanced pitch discrimination for simple tones. However, no discrimination-threshold differences were found between the participants with ASD and the typically developing persons across spectrally and temporally complex conditions. These findings indicate that enhanced pure-tone pitch discrimination may be a cognitive correlate of speech delay among persons with ASD. However, auditory discrimination among this group does not appear to be directly contingent on the spectro-temporal complexity of the stimuli. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
How previous experience shapes perception in different sensory modalities
Snyder, Joel S.; Schwiedrzik, Caspar M.; Vitela, A. Davi; Melloni, Lucia
2015-01-01
What has transpired immediately before has a strong influence on how sensory stimuli are processed and perceived. In particular, temporal context effects (TCEs) can be contrastive, repelling perception away from the interpretation of the context stimulus, or attractive, whereby perception repeats upon successive presentations of the same stimulus. For decades, scientists have documented contrastive and attractive TCEs mostly with simple visual stimuli. But both types of effects also occur in other modalities, e.g., audition and touch, and for stimuli of varying complexity, raising the possibility that context effects reflect general computational principles of sensory systems. Neuroimaging shows that contrastive and attractive context effects arise from neural processes in different areas of the cerebral cortex, suggesting two separate operations with distinct functional roles. Bayesian models can provide a functional account of both context effects, whereby prior experience adjusts sensory systems to optimize perception of future stimuli. PMID:26582982
Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity
Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.
2017-01-01
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information. PMID:28163675
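Miller's race-model inequality used here bounds the redundant-target CDF by the sum of the unisensory CDFs at every time point, F_AV(t) ≤ F_A(t) + F_V(t). A self-contained check on fabricated RT distributions (these numbers are invented, not the study's data):

```python
import numpy as np

# Race-model (Miller's inequality) check on synthetic reaction times.
# The RT distributions below are fabricated for illustration only.
rng = np.random.default_rng(2)
rt_a  = rng.normal(330, 45, 1000)        # auditory-only RTs (ms)
rt_v  = rng.normal(350, 50, 1000)        # visual-only RTs (ms)
rt_av = rng.normal(290, 40, 1000)        # redundant audiovisual RTs (ms)

def cdf(rts, t):
    # empirical cumulative distribution evaluated at each time in t
    return np.searchsorted(np.sort(rts), t, side="right") / len(rts)

t_grid = np.arange(150, 601, 10)
bound = np.minimum(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 1.0)
violation = cdf(rt_av, t_grid) - bound   # positive values violate the race model
print("max violation:", violation.max().round(3))
```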
The effect of changing the secondary task in dual-task paradigms for measuring listening effort.
Picou, Erin M; Ricketts, Todd A
2014-01-01
The purpose of this study was to evaluate the effect of changing the secondary task in dual-task paradigms that measure listening effort. Specifically, the effects of increasing the secondary task complexity or the depth of processing on a paradigm's sensitivity to changes in listening effort were quantified in a series of two experiments. Specific factors investigated within each experiment were background noise and visual cues. Participants in Experiment 1 were adults with normal hearing (mean age 23 years) and participants in Experiment 2 were adults with mild sloping to moderately severe sensorineural hearing loss (mean age 60.1 years). In both experiments, participants were tested using three dual-task paradigms. These paradigms had identical primary tasks, which were always monosyllable word recognition. The secondary tasks were all physical reaction time measures. The stimulus for the secondary task varied by paradigm and was (1) a simple visual probe, (2) a complex visual probe, or (3) the category of the word presented. In this way, the secondary tasks mainly varied from the simple paradigm by either complexity or depth of speech processing. Using all three paradigms, participants were tested in four conditions: (1) auditory-only stimuli in quiet, (2) auditory-only stimuli in noise, (3) auditory-visual stimuli in quiet, and (4) auditory-visual stimuli in noise. During auditory-visual conditions, the talker's face was visible. Signal-to-noise ratios used during conditions with background noise were set individually so word recognition performance was matched in auditory-only and auditory-visual conditions. In noise, word recognition performance was approximately 80% and 65% for Experiments 1 and 2, respectively. For both experiments, word recognition performance was stable across the three paradigms, confirming that none of the secondary tasks interfered with the primary task. In Experiment 1 (listeners with normal hearing), analysis of median reaction times revealed a significant main effect of background noise on listening effort only with the paradigm that required deep processing. Visual cues did not change listening effort as measured with any of the three dual-task paradigms. In Experiment 2 (listeners with hearing loss), analysis of median reaction times revealed expected significant effects of background noise using all three paradigms, but no significant effects of visual cues. None of the dual-task paradigms were sensitive to the effects of visual cues. Furthermore, changing the complexity of the secondary task did not change dual-task paradigm sensitivity to the effects of background noise on listening effort for either group of listeners. However, the paradigm whose secondary task involved deeper processing was more sensitive to the effects of background noise for both groups of listeners. While this paradigm differed from the others in several respects, depth of processing may be partially responsible for the increased sensitivity. Therefore, this paradigm may be a valuable tool for evaluating other factors that affect listening effort.
Zhu, Hai-Feng; Zele, Andrew J; Suheimat, Marwan; Lambert, Andrew J; Atchison, David A
2016-08-01
This study compared neural resolution and detection limits of the human mid-/long-wavelength and short-wavelength cone systems with anatomical estimates of photoreceptor and retinal ganglion cell spacings and sizes. Detection and resolution limits were measured from central fixation out to 35° eccentricity across the horizontal visual field using a modified Lotmar interferometer. The mid-/long-wavelength cone system was studied using a green (550 nm) test stimulus to which S-cones have low sensitivity. To bias resolution and detection to the short-wavelength cone system, a blue (450 nm) test stimulus was presented against a bright yellow background that desensitized the M- and L-cones. Participants were three trichromatic males with normal visual functions. With green stimuli, resolution showed a steep central-peripheral gradient that was similar between participants, whereas the detection gradient was shallower and patterns were different between participants. Detection and resolution with blue stimuli were poorer than for green stimuli. The detection of blue stimuli was superior to resolution across the horizontal visual field and the patterns were different between participants. The mid-/long-wavelength cone system's resolution is limited by midget ganglion cell spacing and its detection is limited by the size of the M- and L-cone photoreceptors, consistent with previous observations. We found that no such simple relationships occur for the short-wavelength cone system between resolution and the bistratified ganglion cell spacing, nor between detection and the S-cone photoreceptor sizes.
The influence of surround suppression on adaptation effects in primary visual cortex
Wissig, Stephanie C.
2012-01-01
Adaptation, the prolonged presentation of stimuli, has been used to probe mechanisms of visual processing in physiological, imaging, and perceptual studies. Previous neurophysiological studies have measured adaptation effects by using stimuli tailored to evoke robust responses in individual neurons. This approach provides an incomplete view of how an adapter alters the representation of sensory stimuli by a population of neurons with diverse functional properties. We implanted microelectrode arrays in primary visual cortex (V1) of macaque monkeys and measured orientation tuning and contrast sensitivity in populations of neurons before and after prolonged adaptation. Whereas previous studies in V1 have reported that adaptation causes stimulus-specific suppression of responsivity and repulsive shifts in tuning preference, we have found that adaptation can also lead to response facilitation and shifts in tuning toward the adapter. To explain this range of effects, we have proposed and tested a simple model that employs stimulus-specific suppression in both the receptive field and the spatial surround. The predicted effects on tuning depend on the relative drive provided by the adapter to these two receptive field components. Our data reveal that adaptation can have a much richer repertoire of effects on neuronal responsivity and tuning than previously considered and suggest an intimate mechanistic relationship between spatial and temporal contextual effects. PMID:22423001
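The proposed model can be caricatured as a ratio of an orientation-tuned receptive-field drive to a broader divisive surround, with the adapter suppressing each component in a stimulus-specific way; whether tuning shifts away from or toward the adapter then depends on which component the adapter drove more. A toy sketch, with all tuning widths and adaptation strengths invented for illustration rather than taken from the paper:

```python
import numpy as np

# Toy RF-plus-surround model of V1 adaptation. The adapter suppresses each
# component in a stimulus-specific (orientation-tuned) way; tuning can shift
# away from or toward the adapter depending on which component adapts more.
def gauss(theta, pref, bw):
    return np.exp(-0.5 * ((theta - pref) / bw) ** 2)

thetas = np.linspace(-90, 90, 181)      # test orientations (deg)
pref, adapter, sigma = 0.0, 30.0, 0.05  # all values hypothetical

def response(a_rf, a_surr):
    rf   = gauss(thetas, pref, 25) * (1 - a_rf   * gauss(thetas, adapter, 25))
    surr = gauss(thetas, pref, 60) * (1 - a_surr * gauss(thetas, adapter, 60))
    return rf / (sigma + surr)

for a_rf, a_surr in [(0.0, 0.0), (0.5, 0.0), (0.0, 0.9)]:
    r = response(a_rf, a_surr)
    print(f"a_rf={a_rf}, a_surr={a_surr}: peak at {thetas[r.argmax()]:+.0f} deg")
```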
Rychkova, Svetlana; Ninio, Jacques
2011-01-01
When stereoscopic images are presented alternately to the two eyes, stereopsis occurs at F ≥ 1 Hz full-cycle frequencies for very simple stimuli, and F ≥ 3 Hz full-cycle frequencies for random-dot stereograms (e.g., Ludwig, I., Pieper, W., Lachnit, H., 2007 “Temporal integration of monocular images separated in time: stereopsis, stereoacuity, and binocular luster” Perception & Psychophysics 69 92–102). Using twenty different stereograms presented through liquid crystal shutters, we studied the transition to stereopsis with fifteen subjects. The onset of stereopsis was observed during a stepwise increase of the alternation frequency, and its disappearance was observed during a stepwise decrease in frequency. The lowest F values (around 2.5 Hz) were observed with stimuli involving two to four simple disjoint elements (circles, arcs, rectangles). Higher F values were needed for stimuli containing slanted elements or curved surfaces (about 1 Hz increment), overlapping elements at two different depths (about 2.5 Hz increment), or camouflaged overlapping surfaces (> 7 Hz increment). A textured cylindrical surface with a horizontal axis appeared easier to interpret (5.7 Hz) than a pair of slanted segments separated in depth but forming a cross in projection (8 Hz). Training effects were minimal, and F usually increased as disparities were reduced. The hierarchy of difficulties revealed in the study may shed light on various problems that the brain needs to solve during stereoscopic interpretation. During the construction of the three-dimensional percept, the loss of information due to natural decay of the stimulus traces must be compensated by refreshes of visual input. In the discussion an attempt is made to link our results with recent advances in the comprehension of visual scene memory. PMID:23145225
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Begum, Tahamina; Reza, Faruque; Ahmed, Izmer; Abdullah, Jafri Malin
2014-03-01
Simple geometric and organic shapes and their arrangements are used in various neuropsychological tests to assess cognitive function and spatial memory, and for therapeutic purposes in different patient groups. Until now, there has been no electrophysiological evidence of how cognitive function is engaged by simple geometric shapes, organic shapes, and their arrangements. The main objective of this study was therefore to characterize the cortical processing, amplitude, and latency of the visually induced N170 and P300 event-related potential (ERP) components for different geometric and organic shapes and their arrangements, and the influence of education on these components, knowledge that is worthwhile for earlier and better treatment of those patient groups. Although education has been shown to influence cognitive function in auditory oddball tasks, little is known about its influence on cognitive function in visual attention tasks involving choices among geometric shapes, organic shapes, and their arrangements. Using a 128-electrode sensor net, we studied responses to randomly presented geometric and organic shapes in Experiment 1 and to their arrangements in Experiment 2, in high, medium, and low education groups. In both experiments, subjects pressed button "1" or "2" to indicate like or dislike, respectively. In total, 45 healthy subjects (15 in each group) were recruited. ERPs were measured from 11 electrode sites and analyzed for the evoked N170/N240 and P300 components. There were no differences between like and dislike responses in amplitude or latency for any stimulus in either experiment, so analyses were based on stimulus type (geometric versus organic shapes) rather than preference. Across stimulus types, an N170 component was found instead of an N240, at occipito-temporal locations (T5, T6, O1 and O2) with the highest amplitude at O2, and the P300 was distributed over central locations (Cz and Pz) in both experiments in all groups. In Experiment 1, the low education group showed significantly lower N170 amplitude, and non-significantly longer N170 latency, at O1 for both stimulus types compared with the medium education group; in Experiment 2, there were no significant group differences in amplitude or latency for either stimulus type. In both experiments, the P300 component was found at Cz and Pz, with higher amplitudes at Cz than at Pz. In Experiment 1, the medium education group evoked significantly higher P300 amplitude than the low education group at Cz (geometric shape stimuli, P = 0.05; organic shape stimuli, P = 0.02), whereas in Experiment 2 there were no significant amplitude differences among groups across stimuli at Cz or Pz. Latencies did not differ significantly among groups in either experiment, although the low education group showed longer, non-significant, latencies at Cz than the medium education group. We conclude that simple geometric shapes, organic shapes, and their arrangements evoke a visual N170 component over temporo-occipital areas with right lateralization, and a P300 component over centro-parietal areas. The significantly lower N170 and P300 amplitudes and longer latencies for shape stimuli in the low education group indicate that low education significantly influences visual cognitive function.
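For readers unfamiliar with how such peak measures are obtained, a minimal sketch of extracting N170 amplitude and latency from an averaged waveform; the sampling rate, search window, and data are illustrative assumptions, since the authors' pipeline is not specified in the abstract:

```python
import numpy as np

fs = 500                                  # samples per second (assumed)
t = np.arange(-0.1, 0.5, 1.0 / fs)        # epoch: -100 to +500 ms
erp = np.random.randn(t.size) * 0.2       # stand-in for an averaged waveform

win = (t >= 0.13) & (t <= 0.20)           # typical N170 search window
idx = np.argmin(erp[win])                 # N170 is a negativity: take minimum
n170_amp = erp[win][idx]                  # peak amplitude (microvolts)
n170_lat_ms = t[win][idx] * 1000.0        # peak latency in milliseconds
```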
A simple and sensitive method to measure timing accuracy.
De Clercq, Armand; Crombez, Geert; Buysse, Ann; Roeyers, Herbert
2003-02-01
Timing accuracy in presenting experimental stimuli (visual information on a PC or on a TV) and responding (keyboard presses and mouse signals) is of importance in several experimental paradigms. In this article, a simple system for measuring timing accuracy is described. The system uses two PCs (at least Pentium II, 200 MHz), a photocell, and an amplifier. No additional boards and timing hardware are needed. The first PC, a SlavePC, monitors the keyboard presses or mouse signals from the PC under test and uses a photocell that is placed in front of the screen to detect the appearance of visual stimuli on the display. The software consists of a small program running on the SlavePC. The SlavePC is connected through a serial line with a second PC. This MasterPC controls the SlavePC through an ActiveX control, which is used in a Visual Basic program. The accuracy of our system was investigated by using a similar setup of a SlavePC and a MasterPC to generate pulses and by using a pulse generator card. These tests revealed that our system has a 0.01-msec accuracy. As an illustration, the reaction time accuracy of INQUISIT for a few applications was tested using our system. It was found that in those applications that we investigated, INQUISIT measures reaction times from keyboard presses with millisecond accuracy.
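The system's purpose is to compare commanded event times against physically detected ones (photocell onsets, key closures). A minimal sketch of the kind of latency summary such logs support; the timestamps below are invented for illustration:

```python
import numpy as np

# Hypothetical log: times (ms) at which the PC under test issued a "draw
# stimulus" command, and times at which the photocell on the SlavePC
# detected the stimulus actually appearing on the display.
issue_t = np.array([1000.0, 2000.0, 3000.0, 4000.0])
detect_t = np.array([1012.3, 2011.9, 3012.1, 4028.7])

lag = detect_t - issue_t                  # per-trial display latency
print(f"mean lag {lag.mean():.2f} ms, sd {lag.std(ddof=1):.2f} ms, "
      f"worst case {lag.max():.2f} ms")
```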
Honeybees in a virtual reality environment learn unique combinations of colour and shape.
Rusch, Claire; Roth, Eatai; Vinauger, Clément; Riffell, Jeffrey A
2017-10-01
Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain. © 2017. Published by The Company of Biologists Ltd.
NASA Technical Reports Server (NTRS)
Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)
1976-01-01
An automated visual examination apparatus is described for measuring visual sensitivity and mapping blind spot location. It includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system thereby provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.
Brady, Timothy F; Störmer, Viola S; Alvarez, George A
2016-07-05
Visual working memory is the cognitive system that holds visual information active to make it resistant to interference from new perceptual input. Information about simple stimuli (colors and orientations) is encoded into working memory rapidly: In under 100 ms, working memory "fills up," revealing a stark capacity limit. However, for real-world objects, the same behavioral limits do not hold: With increasing encoding time, people store more real-world objects and do so with more detail. This boost in performance for real-world objects is generally assumed to reflect the use of a separate episodic long-term memory system, rather than working memory. Here we show that this behavioral increase in capacity with real-world objects is not solely due to the use of separate episodic long-term memory systems. In particular, we show that this increase is a result of active storage in working memory, as shown by directly measuring neural activity during the delay period of a working memory task using EEG. These data challenge fixed-capacity working memory models and demonstrate that working memory and its capacity limitations are dependent upon our existing knowledge.
Touch Interaction with 3D Geographical Visualization on Web: Selected Technological and User Issues
NASA Astrophysics Data System (ADS)
Herman, L.; Stachoň, Z.; Stuchlík, R.; Hladík, J.; Kubíček, P.
2016-10-01
The use of both 3D visualization and devices with touch displays is increasing. In this paper, we focus on Web technologies for 3D visualization of spatial data and on interaction with it via touch-screen gestures. In the first stage, we compared the support for touch interaction in selected JavaScript libraries on different hardware (desktop PCs with touch screens, tablets, and smartphones) and software platforms. Afterward, we conducted a simple empirical test (within-subject design, 6 participants, 2 simple tasks, an Acer LCD touch monitor, and digital terrain models as stimuli) focusing on the ability of users to solve simple spatial tasks via touch screens. An in-house testing web tool was developed for this purpose, based on JavaScript, PHP, X3DOM, and the Hammer.js library. The correctness of answers, the speed of users' performance, the gestures used, and a simple gesture metric were recorded and analysed. Preliminary results revealed that the pan gesture is used most frequently by test participants and is also supported by the majority of 3D libraries. Possible gesture metrics and future developments, including interpersonal differences, are discussed in the conclusion.
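The abstract does not define its gesture metric. One plausible, deliberately simple reading, sketched in Python; the gesture names and log format are hypothetical:

```python
from collections import Counter

# Hypothetical gesture log for one participant/task: (gesture, duration_s)
log = [("pan", 1.2), ("pan", 0.8), ("pinch", 0.5), ("pan", 2.1), ("rotate", 0.9)]

counts = Counter(g for g, _ in log)          # how often each gesture occurred
total_time = sum(d for _, d in log)
# one simple metric: share of interaction time spent in each gesture type
shares = {g: sum(d for gg, d in log if gg == g) / total_time for g in counts}
print(counts, shares)
```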
Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T
2012-05-01
In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.
Neuronal responses to face-like stimuli in the monkey pulvinar.
Nguyen, Minh Nui; Hori, Etsuro; Matsumoto, Jumpei; Tran, Anh Hai; Ono, Taketoshi; Nishijo, Hisao
2013-01-01
The pulvinar nuclei appear to function as the subcortical visual pathway that bypasses the striate cortex, rapidly processing coarse facial information. We investigated responses from monkey pulvinar neurons during a delayed non-matching-to-sample task, in which monkeys were required to discriminate five categories of visual stimuli [photos of faces with different gaze directions, line drawings of faces, face-like patterns (three dark blobs on a bright oval), eye-like patterns and simple geometric patterns]. Of 401 neurons recorded, 165 neurons responded differentially to the visual stimuli. These visual responses were suppressed by scrambling the images. Although these neurons exhibited a broad response latency distribution, face-like patterns elicited responses with the shortest latencies (approximately 50 ms). Multidimensional scaling analysis indicated that the pulvinar neurons could specifically encode face-like patterns during the first 50-ms period after stimulus onset and classify the stimuli into one of the five different categories during the next 50-ms period. The amount of stimulus information conveyed by the pulvinar neurons and the number of stimulus-differentiating neurons were consistently higher during the second 50-ms period than during the first 50-ms period. These results suggest that responsiveness to face-like patterns during the first 50-ms period might be attributed to ascending inputs from the superior colliculus or the retina, while responsiveness to the five different stimulus categories during the second 50-ms period might be mediated by descending inputs from cortical regions. These findings provide neurophysiological evidence for pulvinar involvement in social cognition and, specifically, rapid coarse facial information processing. © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong
2013-01-01
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side. PMID:23799097
NASA Technical Reports Server (NTRS)
Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)
1973-01-01
An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location is described. The apparatus includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.
A case of epilepsy induced by eating or by visual stimuli of food made of minced meat.
Mimura, Naoya; Inoue, Takeshi; Shimotake, Akihiro; Matsumoto, Riki; Ikeda, Akio; Takahashi, Ryosuke
2017-08-31
We report a 34-year-old woman with eating epilepsy induced not only by eating but also by seeing foods made of minced meat. In her early 20s, she started having simple partial seizures (SPS), experienced as flashbacks and epigastric discomfort, induced by particular foods. When she was 33 years old, she developed SPS followed by a secondarily generalized tonic-clonic seizure (sGTCS) provoked by eating a hot dog and, 6 months later, by merely seeing a video of dumplings. We performed video electroencephalogram (EEG) monitoring while she watched the video of soup dumplings, which had most likely caused the sGTCS. Ictal EEG showed rhythmic theta activity in the left frontal to mid-temporal area, followed by a generalized seizure pattern. In this patient, seizures were provoked not only by eating particular foods but also by seeing them. This suggests a form of eating epilepsy in which visual stimuli act as a trigger.
A sLORETA study for gaze-independent BCI speller.
Xingwei An; Jinwen Wei; Shuang Liu; Dong Ming
2017-07-01
The EEG-based BCI (brain-computer interface) speller, especially the gaze-independent BCI speller, has become a hot topic in recent years. It provides a direct, non-muscular spelling device for people with severe motor impairments and limited gaze movement. In such BCI speller applications, the brain must sustain both stimulus-driven and stimulus-related attention to rapidly presented stimuli. Few researchers have studied the mechanism of the brain's response to such rapidly presented BCI paradigms. In this study, we compared the distribution of brain activation in visual, auditory, and audio-visual combined stimulus paradigms using sLORETA (standardized low-resolution brain electromagnetic tomography). Between-group comparisons showed the importance of both visual and auditory stimuli in the audio-visual combined paradigm. Both contribute to the activation of brain regions, with visual stimuli being the predominant stimuli. Brain regions related to visual stimuli were mainly located in the parietal and occipital lobes, whereas responses in the frontal-temporal lobes might be caused by auditory stimuli. These regions played an important role in audio-visual bimodal paradigms. These new findings are important for future studies of ERP spellers as well as of the mechanisms underlying rapidly presented stimuli.
Auditory and visual spatial impression: Recent studies of three auditoria
NASA Astrophysics Data System (ADS)
Nguyen, Andy; Cabrera, Densil
2004-10-01
Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
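The auditory stimuli were produced by convolving a dry recording with measured binaural impulse responses. A minimal sketch of that operation; the signals below are placeholders, where real stimuli would be loaded from the calibrated recordings:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
dry = np.random.randn(fs * 2)                  # 2 s stand-in for anechoic music
h_left = np.zeros(fs); h_left[[0, 4000]] = [1.0, 0.5]    # toy impulse responses
h_right = np.zeros(fs); h_right[[0, 5000]] = [1.0, 0.4]  # (one per ear)

left = fftconvolve(dry, h_left)                # signal reaching the left ear
right = fftconvolve(dry, h_right)              # signal reaching the right ear
binaural = np.stack([left, right], axis=1)     # two-channel stimulus
```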
Corney, David; Haynes, John-Dylan; Rees, Geraint; Lotto, R. Beau
2009-01-01
Background The perception of brightness depends on spatial context: the same stimulus can appear light or dark depending on what surrounds it. A less well-known but equally important contextual phenomenon is that the colour of a stimulus can also alter its brightness. Specifically, stimuli that are more saturated (i.e. purer in colour) appear brighter than stimuli that are less saturated at the same luminance. Similarly, stimuli that are red or blue appear brighter than equiluminant yellow and green stimuli. This non-linear relationship between stimulus intensity and brightness, called the Helmholtz-Kohlrausch (HK) effect, was first described in the nineteenth century but has never been explained. Here, we take advantage of the relative simplicity of this ‘illusion’ to explain it and contextual effects more generally, by using a simple Bayesian ideal observer model of the human visual ecology. We also use fMRI brain scans to identify the neural correlates of brightness without changing the spatial context of the stimulus, which has complicated the interpretation of related fMRI studies. Results Rather than modelling human vision directly, we use a Bayesian ideal observer to model human visual ecology. We show that the HK effect is a result of encoding the non-linear statistical relationship between retinal images and natural scenes that would have been experienced by the human visual system in the past. We further show that the complexity of this relationship is due to the response functions of the cone photoreceptors, which themselves are thought to represent an efficient solution to encoding the statistics of images. Finally, we show that the locus of the response to the relationship between images and scenes lies in the primary visual cortex (V1), if not earlier in the visual system, since the brightness of colours (as opposed to their luminance) accords with activity in V1 as measured with fMRI. Conclusions The data suggest that perceptions of brightness represent a robust visual response to the likely sources of stimuli, as determined, in this instance, by the known statistical relationship between scenes and their retinal responses. While the responses of the early visual system (receptors in this case) may represent specifically the statistics of images, post receptor responses are more likely represent the statistical relationship between images and scenes. A corollary of this suggestion is that the visual cortex is adapted to relate the retinal image to behaviour given the statistics of its past interactions with the sources of retinal images: the visual cortex is adapted to the signals it receives from the eyes, and not directly to the world beyond. PMID:19333398
Fort, Alexandra; Delpuech, Claude; Pernier, Jacques; Giard, Marie-Hélène
2002-10-01
Very recently, a number of neuroimaging studies in humans have begun to investigate the question of how the brain integrates information from different sensory modalities to form unified percepts. Already, intermodal neural processing appears to depend on the modalities of inputs or the nature (speech/non-speech) of information to be combined. Yet, the variety of paradigms, stimuli and techniques used makes it difficult to understand the relationships between the factors operating at the perceptual level and the underlying physiological processes. In a previous experiment, we used event-related potentials to describe the spatio-temporal organization of audio-visual interactions during a bimodal object recognition task. Here we examined the network of cross-modal interactions involved in simple detection of the same objects. The objects were defined either by unimodal auditory or visual features alone, or by the combination of the two features. As expected, subjects detected bimodal stimuli more rapidly than either unimodal stimulus. Combined analysis of potentials, scalp current densities and dipole modeling revealed several interaction patterns within the first 200 ms post-stimulus: in occipito-parietal visual areas (45-85 ms), in deep brain structures, possibly the superior colliculus (105-140 ms), and in right temporo-frontal regions (170-185 ms). These interactions differed from those found during object identification in sensory-specific areas and possibly in the superior colliculus, indicating that the neural operations governing multisensory integration depend crucially on the nature of the perceptual processes involved.
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here at optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
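The two models are said to differ only in their superposition rule. A minimal sketch of that contrast, using a pointwise maximum as a common stand-in for occlusion; the paper's exact combination rule may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((3, 64))        # 3 image components over 64-pixel patches
s = np.array([1.0, 0.0, 0.7])  # which components are "on", and how strongly

linear_image = s @ W                               # linear sparse-coding assumption
occlusive_image = np.max(s[:, None] * W, axis=0)   # nearer component wins per pixel
```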
Effect of a synesthete's photisms on name recall.
Mills, Carol Bergfeld; Innis, Joanne; Westendorf, Taryn; Owsianiecki, Lauren; McDonald, Angela
2006-02-01
A multilingual, colored-letter synesthete professor (MLS), 9 nonsynesthete multilingual professors and 4 nonsynesthete art professors learned 30 names of individuals (first and last name pairs) in three trials. They recalled the names after each trial and six months later, as well as performed cued recall trials initially and after six months. As hypothesized, MLS recalled significantly more names than control groups on all free recall tests (except after the first trial) and on cued recall tests. In addition, MLS gave qualitatively different reasons for remembering names than any individual control participant. MLS gave mostly color reasons for remembering the names, whereas nonsynesthetes gave reasons based on familiarity or language or art knowledge. Results on standardized memory tests showed that MLS had average performance on non-language visual memory tests (the Benton Visual Retention Test-Revised--BVRT-R, and the Rey-Osterrieth Complex Figure Test--CFT), but had superior memory performance on a verbal test consisting of lists of nouns (Rey Auditory-Verbal Learning Test--RAVLT). MLS's synesthesia seems to aid memory for visually or auditorily presented language stimuli (names and nouns), but not for non-language visual stimuli (simple and complex figures).
Exploring the parahippocampal cortex response to high and low spatial frequency spaces.
Zeidman, Peter; Mullally, Sinéad L; Schwarzkopf, Dietrich Samuel; Maguire, Eleanor A
2012-05-30
The posterior parahippocampal cortex (PHC) supports a range of cognitive functions, in particular scene processing. However, it has recently been suggested that PHC engagement during functional MRI simply reflects the representation of three-dimensional local space. If so, PHC should respond to space in the absence of scenes, geometric layout, objects or contextual associations. It has also been reported that PHC activation may be influenced by low-level visual properties of stimuli such as spatial frequency. Here, we tested whether PHC was responsive to the mere sense of space in highly simplified stimuli, and whether this was affected by their spatial frequency distribution. Participants were scanned using functional MRI while viewing depictions of simple three-dimensional space, and matched control stimuli that did not depict a space. Half the stimuli were low-pass filtered to ascertain the impact of spatial frequency. We observed a significant interaction between space and spatial frequency in bilateral PHC. Specifically, stimuli depicting space (more than nonspatial stimuli) engaged the right PHC when they featured high spatial frequencies. In contrast, the interaction in the left PHC did not show a preferential response to space. We conclude that a simple depiction of three-dimensional space that is devoid of objects, scene layouts or contextual associations is sufficient to robustly engage the right PHC, at least when high spatial frequencies are present. We suggest that coding for the presence of space may be a core function of PHC, and could explain its engagement in a range of tasks, including scene processing, where space is always present.
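A minimal sketch of the low-pass manipulation described above; the Gaussian filter and cutoff are chosen for illustration, since the authors' filtering parameters are not given in the abstract:

```python
import numpy as np
from scipy import ndimage

# Remove high spatial frequencies from a stimulus image with a Gaussian
# low-pass filter. The image and sigma are stand-ins.
img = np.random.rand(256, 256)                    # placeholder stimulus image
low_passed = ndimage.gaussian_filter(img, sigma=4.0)
```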
Johnson, Matthew R; Johnson, Marcia K
2009-12-01
Recent research has demonstrated top-down attentional modulation of activity in extrastriate category-selective visual areas while stimuli are in view (perceptual attention) and after they are removed from view (reflective attention). Perceptual attention is capable of both enhancing and suppressing activity in category-selective areas relative to a passive viewing baseline. In this study, we demonstrate that a brief, simple act of reflective attention ("refreshing") is also capable of both enhancing and suppressing activity in some scene-selective areas (the parahippocampal place area [PPA]) but not others (refreshing resulted in enhancement but not in suppression in the middle occipital gyrus [MOG]). This suggests that different category-selective extrastriate areas preferring the same class of stimuli may contribute differentially to reflective processing of one's internal representations of such stimuli.
Decreased visual detection during subliminal stimulation.
Bareither, Isabelle; Villringer, Arno; Busch, Niko A
2014-10-17
What is the perceptual fate of invisible stimuli: are they processed at all and does their processing have consequences for the perception of other stimuli? As has been shown previously in the somatosensory system, even stimuli that are too weak to be consciously detected can influence our perception: Subliminal stimulation impairs perception of near-threshold stimuli and causes a functional deactivation in the somatosensory cortex. In a recent study, we showed that subliminal visual stimuli lead to similar responses, indicated by an increase in alpha-band power as measured with electroencephalography (EEG). In the current study, we investigated whether a behavioral inhibitory mechanism also exists within the visual system. We tested the detection of peripheral visual target stimuli under three different conditions: Target stimuli were presented alone or embedded in a concurrent train of subliminal stimuli either at the same location as the target or in the opposite hemifield. Subliminal stimuli were invisible due to their low contrast, not due to a masking procedure. We demonstrate that target detection was impaired by the subliminal stimuli, but only when they were presented at the same location as the target. This finding indicates that subliminal, low-intensity stimuli induce a similar inhibitory effect in the visual system as has been observed in the somatosensory system. In line with previous reports, we propose that the function underlying this effect is the inhibition of spurious noise by the visual system. © 2014 ARVO.
Eberhardt, Silvio P.; Auer Jr., Edward T.; Bernstein, Lynne E.
2014-01-01
In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee’s primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee’s lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566
Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C
2009-01-01
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
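A minimal sketch of the model's core idea, with words as points in a feature space and recognition as Bayesian inference from noisy auditory and visual evidence; the word locations, noise levels, and flat prior are illustrative assumptions:

```python
import numpy as np

words = np.array([[0.0, 0.0], [1.0, 0.2], [0.3, 1.0]])  # 3 words, 2 features
sigma_a, sigma_v = 0.8, 0.5                              # auditory/visual noise

def posterior(x_a, x_v):
    # p(word | x_a, x_v) ~ p(x_a | word) p(x_v | word), Gaussian likelihoods,
    # flat prior over words
    log_like = (-np.sum((x_a - words) ** 2, axis=1) / (2 * sigma_a ** 2)
                - np.sum((x_v - words) ** 2, axis=1) / (2 * sigma_v ** 2))
    p = np.exp(log_like - log_like.max())
    return p / p.sum()

print(posterior(x_a=np.array([0.9, 0.1]), x_v=np.array([0.2, 0.9])))
```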
Willmore, Ben D.B.; Bulstrode, Harry; Tolhurst, David J.
2012-01-01
Neuronal populations in the primary visual cortex (V1) of mammals exhibit contrast normalization. Neurons that respond strongly to simple visual stimuli – such as sinusoidal gratings – respond less well to the same stimuli when they are presented as part of a more complex stimulus which also excites other, neighboring neurons. This phenomenon is generally attributed to generalized patterns of inhibitory connections between nearby V1 neurons. The Bienenstock, Cooper and Munro (BCM) rule is a neural network learning rule that, when trained on natural images, produces model neurons which, individually, have many tuning properties in common with real V1 neurons. However, when viewed as a population, a BCM network is very different from V1 – each member of the BCM population tends to respond to the same dominant features of visual input, producing an incomplete, highly redundant code for visual information. Here, we demonstrate that, by adding contrast normalization into the BCM rule, we arrive at a neurally-plausible Hebbian learning rule that can learn an efficient sparse, overcomplete representation that is a better model for stimulus selectivity in V1. This suggests that one role of contrast normalization in V1 is to guide the neonatal development of receptive fields, so that neurons respond to different features of visual input. PMID:22230381
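A minimal sketch of a BCM update augmented with divisive contrast normalization; the learning rates, the exact normalization form, and the random inputs are assumptions (the authors trained on natural images):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_units, eta, tau = 64, 10, 0.005, 0.99
W = rng.normal(0.0, 0.1, (n_units, n_in))
theta = np.ones(n_units)                     # sliding modification thresholds

for _ in range(10000):
    x = rng.normal(0.0, 1.0, n_in)           # stand-in for an image patch
    y = W @ x
    y = y / (1.0 + np.linalg.norm(y))        # divisive contrast normalization
    W += eta * np.outer(y * (y - theta), x)  # BCM update: dw ~ y(y - theta)x
    theta = tau * theta + (1 - tau) * y**2   # threshold tracks E[y^2]
```

Normalizing the population response before the BCM update penalizes units that respond to the same dominant feature, which is the mechanism the abstract proposes for decorrelating the learned receptive fields.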
Auditory and visual interhemispheric communication in musicians and non-musicians.
Woelfle, Rebecca; Grahn, Jessica A
2013-01-01
The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer.
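The crossed-uncrossed difference is simple arithmetic on mean reaction times. A minimal sketch with invented RT values:

```python
import numpy as np

# CUD: mean crossed RT minus mean uncrossed RT, taken as an estimate of
# interhemispheric transmission time. Values (ms) are invented.
rt_uncrossed = np.array([238.0, 242.0, 240.0, 245.0])  # stimulus & hand same side
rt_crossed = np.array([241.0, 246.0, 244.0, 249.0])    # opposite sides

cud = rt_crossed.mean() - rt_uncrossed.mean()
print(f"CUD = {cud:.1f} ms")   # positive values suggest a transfer cost
```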
Does bimodal stimulus presentation increase ERP components usable in BCIs?
NASA Astrophysics Data System (ADS)
Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Blankertz, Benjamin; Werkhoven, Peter J.
2012-08-01
Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, BCI performance (classification accuracies and bitrates) and participants’ task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile or visual-tactile stimuli. We observe that attending to visual-tactile (compared to either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared to visual stimuli, which is consistent with the nonsignificant negative trend of participants’ task performance. We discuss these findings in the light of affected spatial attention at high-level compared to low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.
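The abstract reports bitrates without defining them; the Wolpaw information transfer rate is the standard choice for ERP spellers, sketched here under that assumption:

```python
import math

def wolpaw_itr(p, n, selections_per_min):
    # Information transfer rate (bits/min) for an n-class speller with
    # accuracy p, after Wolpaw et al.; errors assumed evenly distributed
    # over the n - 1 wrong classes.
    if p >= 1.0:
        bits = math.log2(n)
    elif p <= 1.0 / n:
        bits = 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

print(wolpaw_itr(0.9, 4, 10.0))   # ~13.7 bits/min
```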
Beauty and the beholder: the role of visual sensitivity in visual preference
Spehar, Branka; Wong, Solomon; van de Klundert, Sarah; Lui, Jessie; Clifford, Colin W. G.; Taylor, Richard P.
2015-01-01
For centuries, the essence of aesthetic experience has remained one of the most intriguing mysteries for philosophers, artists, art historians and scientists alike. Recently, views emphasizing the link between aesthetics, perception and brain function have become increasingly prevalent (Ramachandran and Hirstein, 1999; Zeki, 1999; Livingstone, 2002; Ishizu and Zeki, 2013). The link between art and the fractal-like structure of natural images has also been highlighted (Spehar et al., 2003; Graham and Field, 2007; Graham and Redies, 2010). Motivated by these claims and our previous findings that humans display a consistent preference across various images with fractal-like statistics, here we explore the possibility that observers’ preference for visual patterns might be related to their sensitivity for such patterns. We measure sensitivity to simple visual patterns (sine-wave gratings varying in spatial frequency and random textures with varying scaling exponent) and find that they are highly correlated with visual preferences exhibited by the same observers. Although we do not attempt to offer a comprehensive neural model of aesthetic experience, we demonstrate a strong relationship between visual sensitivity and preference for simple visual patterns. Broadly speaking, our results support assertions that there is a close relationship between aesthetic experience and the sensory coding of natural stimuli. PMID:26441611
Elevated audiovisual temporal interaction in patients with migraine without aura
2014-01-01
Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
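The abstract names a CDF-based analysis without specifying it; Miller's race-model inequality is the common CDF test for multisensory RT facilitation, sketched here under that assumption with invented reaction times:

```python
import numpy as np

rng = np.random.default_rng(0)

def ecdf(rts, t):
    # empirical P(RT <= t) evaluated at each time point in t
    return np.searchsorted(np.sort(rts), t, side="right") / len(rts)

rt_a = rng.normal(350, 40, 200)    # auditory-only RTs (ms), invented
rt_v = rng.normal(340, 40, 200)    # visual-only
rt_av = rng.normal(300, 35, 200)   # audiovisual

t = np.arange(200, 501, 10)
# Race-model inequality: integration is inferred wherever
# F_AV(t) exceeds min(1, F_A(t) + F_V(t)).
violation = ecdf(rt_av, t) - np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
print(t[violation > 0])            # time points showing a violation
```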
Auditory emotional cues enhance visual perception.
Zeelenberg, René; Bocanegra, Bruno R
2010-04-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually we replicated the emotion-induced impairment found in other studies. Our results suggest emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.
Murphy, Kathleen M; Saunders, Muriel D; Saunders, Richard R; Olswang, Lesley B
2004-01-01
The effects of different types and amounts of environmental stimuli (visual and auditory) on microswitch use and behavioral states of three individuals with profound multiple impairments were examined. Each individual's switch use and behavioral states were measured under three setting conditions: natural stimuli (typical visual and auditory stimuli in a recreational situation), reduced visual stimuli, and reduced visual and auditory stimuli. Results demonstrated differential switch use in all participants with the varying environmental setting conditions. No consistent effects of environmental condition were observed on behavioral state. Predominant behavioral state scores and switch use did not systematically covary for any participant. Results suggest the importance of considering environmental stimuli in relationship to switch use when working with individuals with profound multiple impairments.
Gender differences in identifying emotions from auditory and visual stimuli.
Waaramaa, Teija
2017-12-01
The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. We also examined whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to gain a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or by a shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual than from auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.
Kotchoubey, Boris; Pavlov, Yuri G.; Kleber, Boris
2015-01-01
According to a prevailing view, the visual system works by dissecting stimuli into primitives, whereas the auditory system processes simple and complex stimuli with their corresponding features in parallel. This makes musical stimulation particularly suitable for patients with disorders of consciousness (DoC), because the processing pathways related to complex stimulus features can be preserved even when those related to simple features are no longer available. An additional factor speaking in favor of musical stimulation in DoC is the low efficiency of visual stimulation due to prevalent maladies of vision or gaze fixation in DoC patients. Hearing disorders, in contrast, are much less frequent in DoC, which allows us to use auditory stimulation at various levels of complexity. The current paper overviews empirical data concerning the four main domains of brain functioning in DoC patients that musical stimulation can address: perception (e.g., pitch, timbre, and harmony), cognition (e.g., musical syntax and meaning), emotions, and motor functions. Music can approach basic levels of patients’ self-consciousness, which may even exist when all higher-level cognitions are lost, whereas music induced emotions and rhythmic stimulation can affect the dopaminergic reward-system and activity in the motor system respectively, thus serving as a starting point for rehabilitation. PMID:26640445
2017-03-01
Reports an error in "Multisensory brand search: How the meaning of sounds guides consumers' visual attention" by Klemens M. Knoeferle, Pia Knoeferle, Carlos Velasco and Charles Spence (Journal of Experimental Psychology: Applied, 2016[Jun], Vol 22[2], 196-210). In the article, under Experiment 2, Design and Stimuli, the set number of target products and visual distractors reported in the second paragraph should be 20 and 13, respectively: "On each trial, the 16 products shown in the display were randomly selected from a set of 20 products belonging to different categories. Out of the set of 20 products, seven were potential targets, whereas the other 13 were used as visual distractors only throughout the experiment (since they were not linked to specific usage or consumption sounds)." Consequently, Appendix A in the supplemental materials has been updated. (The following abstract of the original article appeared in record 2016-28876-002.) Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
The threshold for conscious report: Signal loss and response bias in visual and frontal cortex.
van Vugt, Bram; Dagnino, Bruno; Vartak, Devavrat; Safaai, Houman; Panzeri, Stefano; Dehaene, Stanislas; Roelfsema, Pieter R
2018-05-04
Why are some visual stimuli consciously detected, whereas others remain subliminal? We investigated the fate of weak visual stimuli in the visual and frontal cortex of awake monkeys trained to report stimulus presence. Reported stimuli were associated with strong sustained activity in the frontal cortex, and frontal activity was weaker and quickly decayed for unreported stimuli. Information about weak stimuli could be lost at successive stages en route from the visual to the frontal cortex, and these propagation failures were confirmed through microstimulation of area V1. Fluctuations in response bias and sensitivity during perception of identical stimuli were traced back to prestimulus brain-state markers. A model in which stimuli become consciously reportable when they elicit a nonlinear ignition process in higher cortical areas explained our results. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Emotional arousal amplifies the effects of biased competition in the brain
Lee, Tae-Ho; Sakaki, Michiko; Cheng, Ruth; Velasco, Ricardo
2014-01-01
The arousal-biased competition model predicts that arousal increases the gain on neural competition between stimulus representations. Thus, the model predicts that arousal simultaneously enhances processing of salient stimuli and impairs processing of relatively less-salient stimuli. We tested this model with a simple dot-probe task. On each trial, participants were simultaneously exposed to one face image as a salient cue stimulus and one place image as a non-salient stimulus. A border around the face cue location further increased its bottom-up saliency. Before these visual stimuli were shown, one of two tones played: one that predicted a shock (increasing arousal) or one that did not. An arousal-by-saliency interaction in category-specific brain regions (fusiform face area for salient faces and parahippocampal place area for non-salient places) indicated that brain activation associated with processing the salient stimulus was enhanced under arousal whereas activation associated with processing the non-salient stimulus was suppressed under arousal. This is the first functional magnetic resonance imaging study to demonstrate that arousal can enhance information processing for prioritized stimuli while simultaneously impairing processing of non-prioritized stimuli. Thus, it goes beyond previous research to show that arousal does not uniformly enhance perceptual processing, but instead does so selectively in ways that optimize attention to highly salient stimuli. PMID:24532703
Simulator of human visual perception
NASA Astrophysics Data System (ADS)
Bezzubik, Vitalii V.; Belashenkov, Nickolai R.
2016-04-01
A Difference of Circs (DoC) model that simulates the responses of retinal ganglion cells to stimuli is presented and studied in relation to the representation of receptive fields in the human retina. According to this model, the response of neurons reduces to simple arithmetic operations, and the results of these calculations correlate well with experimental data over a wide range of stimulus parameters. The simplicity of the model and the reliability with which it reproduces responses suggest the concept of a device that can simulate the signals generated by ganglion cells in reaction to presented stimuli. The signals produced according to the DoC model are considered a result of primary processing of information received from receptors, independently of their type, and may be sent to higher levels of the nervous system for subsequent processing. Such a device might be used as a prosthesis for a disabled organ.
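The abstract reduces the ganglion-cell response to simple arithmetic but does not spell the operations out, so the following is only a hedged stand-in: a minimal sketch assuming the DoC model behaves like the familiar center-surround difference of two blurred copies of the stimulus (implemented here with Gaussian blurs via scipy, an assumption rather than the authors' exact formulation). All parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_response(stimulus, sigma_c=1.0, sigma_s=3.0, k_c=1.0, k_s=0.9):
    """Excitatory center minus inhibitory surround, each a blurred copy of the stimulus."""
    center = k_c * gaussian_filter(stimulus, sigma_c)
    surround = k_s * gaussian_filter(stimulus, sigma_s)
    return center - surround

# A small spot that fills only the center drives a stronger peak response
# than a large spot that also fills the inhibitory surround.
grid = np.zeros((64, 64))
grid[28:36, 28:36] = 1.0
small_peak = center_surround_response(grid).max()
grid[20:44, 20:44] = 1.0
large_peak = center_surround_response(grid).max()
print(f"small spot: {small_peak:.3f}, large spot: {large_peak:.3f}")
```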
Computing local edge probability in natural scenes from a population of oriented simple cells
Ramachandra, Chaithanya A.; Mel, Bartlett W.
2013-01-01
A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell—an oriented linear filter followed by a divisive normalization—fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼ 100 filters, we culled out a subset that were maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295
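As a minimal illustration of the decoding step described above, the sketch below applies Bayes's rule under the factorized (conditionally independent) likelihoods that the authors engineer their filter subset to permit. The Gaussian per-filter likelihoods and all numbers are placeholder assumptions, not the customized parametric model of the paper.

```python
import numpy as np

def edge_probability(responses, mu_on, sd_on, mu_off, sd_off, prior_edge=0.1):
    """P(edge | filter responses) under a naive-Bayes factorization."""
    def log_gauss(x, mu, sd):
        return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))
    # factorized likelihoods reduce Bayes's rule to a sum of log-likelihood ratios
    log_odds = (np.log(prior_edge) - np.log(1 - prior_edge)
                + np.sum(log_gauss(responses, mu_on, sd_on)
                         - log_gauss(responses, mu_off, sd_off)))
    return 1.0 / (1.0 + np.exp(-log_odds))  # logistic of the log odds

# Toy usage: three filters whose responses tend to be larger on edges.
r = np.array([0.9, 0.7, 0.8])
p = edge_probability(r, mu_on=0.8, sd_on=0.2, mu_off=0.2, sd_off=0.2)
print(f"edge probability: {p:.3f}")
```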
Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B
2012-07-16
Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another are less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
Temporal Influence on Awareness
1995-12-01
[List-of-figures fragment from the report's front matter; only the figure captions are recoverable: Test Setup Timing: Measured vs Expected Modal Delays (in ms); Experiment I: visual and auditory stimuli presented simultaneously (visual-auditory delay = 0 ms, visual-visual delay = 0 ms); Experiment II: visual and auditory stimuli presented in order (visual-auditory delay = 0 ms, visual-visual delay = variable).]
The audiovisual structure of onomatopoeias: An intrusion of real-world physics in lexical creation.
Taitz, Alan; Assaneo, M Florencia; Elisei, Natalia; Trípodi, Mónica; Cohen, Laurent; Sitt, Jacobo D; Trevisan, Marcos A
2018-01-01
Sound-symbolic word classes are found in different cultures and languages worldwide. These words are continuously produced to code complex information about events. Here we explore the capacity of creative language to convey complex multisensory information in a controlled experiment, in which our participants improvised onomatopoeias for noisy moving objects presented in audio, visual and audiovisual formats. We found that consonants communicate movement types (slide, hit or ring) mainly through the manner of articulation in the vocal tract. Vowels communicate shapes in visual stimuli (spiky or rounded) and sound frequencies in auditory stimuli through the configuration of the lips and tongue. A machine learning model was trained to classify movement types and used to validate generalizations of our results across formats. When we applied the classifier to a list of cross-linguistic onomatopoeias, simple actions were correctly classified, while different aspects were selected to build onomatopoeias of complex actions. These results show how the different aspects of complex sensory information are coded and how they interact in the creation of novel onomatopoeias.
Mangun, G R; Buck, L A
1998-03-01
This study investigated the simple reaction time (RT) and event-related potential (ERP) correlates of biasing attention towards a location in the visual field. RTs and ERPs were recorded to stimuli flashed randomly and with equal probability to the left and right visual hemifields in the three blocked, covert attention conditions: (i) attention divided equally to left and right hemifield locations; (ii) attention biased towards the left location; or (iii) attention biased towards the right location. Attention was biased towards left or right by instructions to the subjects, and responses were required to all stimuli. Relative to the divided attention condition, RTs were significantly faster for targets occurring where more attention was allocated (benefits), and slower to targets where less attention was allocated (costs). The early P1 (100-140 msec) component over the lateral occipital scalp regions showed attentional benefits. There were no amplitude modulations of the occipital N1 (125-180 msec) component with attention. Between 200 and 500 msec latency, a late positive deflection (LPD) showed both attentional costs and benefits. The behavioral findings show that when sufficiently induced to bias attention, human observers demonstrate RT benefits as well as costs. The corresponding P1 benefits suggest that the RT benefits of spatial attention may arise as the result of modulations of visual information processing in the extrastriate visual cortex.
Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.
Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas
2017-06-01
Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.
Papera, Massimiliano; Richards, Anne
2016-05-01
Exogenous allocation of attentional resources allows the visual system to encode and maintain representations of stimuli in visual working memory (VWM). However, limits in the processing capacity to allocate resources can prevent unexpected visual stimuli from gaining access to VWM and thereby to consciousness. Using a novel approach to create unbiased stimuli of increasing saliency, we investigated visual processing during a visual search task in individuals who show a high or low propensity to neglect unexpected stimuli. When propensity to inattention is high, ERP recordings show a diminished amplification concomitantly with a decrease in theta band power during the N1 latency, followed by a poor target enhancement during the N2 latency. Furthermore, a later modulation in the P3 latency was also found in individuals showing propensity to visual neglect, suggesting that more effort is required for conscious maintenance of visual information in VWM. Effects during early stages of processing (N80 and P1) were also observed, suggesting that sensitivity to contrasts and medium-to-high spatial frequencies may be modulated by low-level saliency (albeit no statistical group differences were found). In accordance with the Global Workspace Model, our data indicate that a lack of resources in low-level processors and visual attention may be responsible for the failure to "ignite" a state of high-level activity spread across several brain areas that is necessary for stimuli to access awareness. These findings may aid in the development of diagnostic tests and interventions to detect/reduce the propensity to visual neglect of unexpected stimuli. © 2016 Society for Psychophysiological Research.
Auditory and Visual Interhemispheric Communication in Musicians and Non-Musicians
Woelfle, Rebecca; Grahn, Jessica A.
2013-01-01
The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer. PMID:24386382
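For readers unfamiliar with the measure, the crossed-uncrossed difference is simple arithmetic: mean crossed RT minus mean uncrossed RT, taken as an estimate of interhemispheric transfer time. A tiny worked example with hypothetical reaction times (the values below are illustrative, not data from the study):

```python
import numpy as np

# Hypothetical simple-RT data (ms); crossed = stimulus hemifield opposite
# the responding hand's hemisphere, uncrossed = same hemisphere.
crossed = np.array([262.0, 258.0, 265.0, 260.0])
uncrossed = np.array([258.0, 255.0, 261.0, 257.0])

cud = crossed.mean() - uncrossed.mean()  # estimate of interhemispheric transfer time
print(f"CUD = {cud:.2f} ms")             # values of a few ms are typical in this paradigm
```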
Visual short-term memory capacity for simple and complex objects.
Luria, Roy; Sessa, Paola; Gotler, Alex; Jolicoeur, Pierre; Dell'Acqua, Roberto
2010-03-01
Does the capacity of visual short-term memory (VSTM) depend on the complexity of the objects represented in memory? Although some previous findings indicated lower capacity for more complex stimuli, other results suggest that complexity effects arise during retrieval (due to errors in the comparison process with what is in memory) that is not related to storage limitations of VSTM, per se. We used ERPs to track neuronal activity specifically related to retention in VSTM by measuring the sustained posterior contralateral negativity during a change detection task (which required detecting if an item was changed between a memory and a test array). The sustained posterior contralateral negativity, during the retention interval, was larger for complex objects than for simple objects, suggesting that neurons mediating VSTM needed to work harder to maintain more complex objects. This, in turn, is consistent with the view that VSTM capacity depends on complexity.
Eighteen-month-olds' memory for short movies of simple stories.
Kingo, Osman S; Krøjgaard, Peter
2015-04-01
This study investigated twenty-four 18-month-olds' memory for dynamic visual stimuli. During the first visit participants saw one of two brief movies (30 seconds) with a simple storyline displayed in four iterations. After 2 weeks, memory was tested in the visual paired comparison paradigm in which the familiar and the novel movie were contrasted simultaneously and displayed in two iterations for a total of 60 seconds. Eye-tracking revealed that participants fixated the familiar movie significantly more than the novel movie, thus indicating memory for the familiar movie. Furthermore, time-dependent analysis of the data revealed that individual differences in the looking-patterns for the first and second iteration of the movies were related to individual differences in productive vocabulary. We suggest that infants' vocabulary may be indicative of their ability to understand and remember the storyline of the movies, thereby affecting their subsequent memory. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Dissociating emotion-induced blindness and hypervision.
Bocanegra, Bruno R; Zeelenberg, René
2009-12-01
Previous findings suggest that emotional stimuli sometimes improve (emotion-induced hypervision) and sometimes impair (emotion-induced blindness) the visual perception of subsequent neutral stimuli. We hypothesized that these differential carryover effects might be due to 2 distinct emotional influences in visual processing. On the one hand, emotional stimuli trigger a general enhancement in the efficiency of visual processing that can carry over onto other stimuli. On the other hand, emotional stimuli benefit from a stimulus-specific enhancement in later attentional processing at the expense of competing visual stimuli. We investigated whether detrimental (blindness) and beneficial (hypervision) carryover effects of emotion in perception can be dissociated within a single experimental paradigm. In 2 experiments, we manipulated the temporal competition for attention between an emotional cue word and a subsequent neutral target word by varying cue-target interstimulus interval (ISI) and cue visibility. Interestingly, emotional cues impaired target identification at short ISIs but improved target identification when competition was diminished by either increasing ISI or reducing cue visibility, suggesting that emotional significance of stimuli can improve and impair visual performance through distinct perceptual mechanisms.
Brodeur, Mathieu B.; Dionne-Dostie, Emmanuelle; Montreuil, Tina; Lepage, Martin
2010-01-01
There are currently stimuli with published norms available to study several psychological aspects of language and visual cognitions. Norms represent valuable information that can be used as experimental variables or systematically controlled to limit their potential influence on another experimental manipulation. The present work proposes 480 photo stimuli that have been normalized for name, category, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. Stimuli are also available in grayscale, blurred, scrambled, and line-drawn version. This set of objects, the Bank Of Standardized Stimuli (BOSS), was created specifically to meet the needs of scientists in cognition, vision and psycholinguistics who work with photo stimuli. PMID:20532245
The perception of isoluminant coloured stimuli of amblyopic eye and defocused eye
NASA Astrophysics Data System (ADS)
Krumina, Gunta; Ozolinsh, Maris; Ikaunieks, Gatis
2008-09-01
In routine eye examinations, visual acuity is usually determined using standard charts with black letters on a white background; however, contrast and colour are important characteristics of visual perception. The purpose of this research was to study the perception of isoluminant coloured stimuli in cases of true and simulated amblyopia. We estimated the difference in visual acuity measured with isoluminant coloured stimuli compared with high-contrast black-white stimuli, for both true and simulated amblyopia. Tests were generated on a computer screen. Visual acuity was measured using different charts in two ways: with standard achromatic stimuli (black symbols on a white background) and with isoluminant coloured stimuli (white symbols on a yellow background; grey symbols on a blue, green or red background). The isoluminant tests thus had colour contrast only, with no luminance contrast. Visual acuity with the standard method and with the colour tests was studied in subjects with good visual acuity, using the best vision correction where necessary. The same was done for subjects with a defocused eye and subjects with true amblyopia. Defocus was produced with optical lenses placed in front of the normal eye. The results obtained with the isoluminant colour charts revealed worse visual acuity than that estimated with the standard high-contrast method (black symbols on a white background).
Schwartzman, José Salomão; Velloso, Renata de Lima; D'Antino, Maria Eloísa Famá; Santos, Silvana
2015-05-01
To compare visual fixation at social stimuli in Rett syndrome (RS) and autism spectrum disorder (ASD) patients. Visual fixation at social stimuli was analyzed in 14 RS female patients (age range 4-30 years), 11 ASD male patients (age range 4-20 years), and 17 children with typical development (TD). Patients were exposed to three different pictures (two of human faces and one with social and non-social stimuli), each presented for 8 seconds on the screen of a computer attached to eye-tracking equipment. The percentage of visual fixation at social stimuli was significantly higher in the RS group than in the ASD and even the TD groups. Visual fixation at social stimuli thus seems to be one more endophenotype making RS very different from ASD.
Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli
Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.
2009-01-01
The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778
Hekkert, Paul; Thurgood, Clementine; Whitfield, T W Allan
2013-10-01
The finding that repeated exposure to a stimulus enhances attitudes directed towards it is a well-established phenomenon. Despite this, the effects of exposure to products are difficult to determine given that they could have previously been exposed to participants any number of times. Furthermore, factors other than simple repeated exposure can influence affective evaluations for stimuli that are meaningful. In our first study, we examined the influence of existing familiarity with common objects and showed that the attractiveness of shapes representing common objects increases with their rated commonness. In our second study, we eliminated the effects of prior exposure by creating fictitious yet plausible products; thus, exposure frequency was under complete experimental control. We also manipulated the attention to be drawn to the products' designs by placing them in contexts where their visual appearance was stressed to be important versus contexts in which it was indicated that little attention had been paid to their design. Following mere exposure, attractiveness ratings increased linearly with exposure frequency, with the slope of the function being steeper for stimuli presented in an inconspicuous context, indicating that individuals engage in more deliberate processing of the stimuli when attention is drawn to their visual appearance. © 2013 Elsevier B.V. All rights reserved.
Kremkow, Jens; Perrinet, Laurent U.; Monier, Cyril; Alonso, Jose-Manuel; Aertsen, Ad; Frégnac, Yves; Masson, Guillaume S.
2016-01-01
Neurons in the primary visual cortex are known for responding vigorously but with high variability to classical stimuli such as drifting bars or gratings. By contrast, natural scenes are encoded more efficiently by sparse and temporally precise spiking responses. We used a conductance-based model of the visual system in higher mammals to investigate how two specific features of the thalamo-cortical pathway, namely push-pull receptive field organization and fast synaptic depression, can contribute to this contextual reshaping of V1 responses. By comparing cortical dynamics evoked respectively by natural vs. artificial stimuli in a comprehensive parametric space analysis, we demonstrate that the reliability and sparseness of the spiking responses during natural vision is not a mere consequence of the increased bandwidth in the sensory input spectrum. Rather, it results from the combined impacts of fast synaptic depression and push-pull inhibition, the latter acting for natural scenes as a form of "effective" feed-forward inhibition as demonstrated in other sensory systems. Thus, the combination of feedforward-like inhibition with fast thalamo-cortical synaptic depression by simple cells receiving a direct structured input from thalamus composes a generic computational mechanism for generating a sparse and reliable encoding of natural sensory events. PMID:27242445
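The fast synaptic depression invoked above can be illustrated with the standard resource-depletion update (Tsodyks-Markram style). This is a hedged sketch of the generic mechanism, not the conductance-based implementation of the paper, and the parameter values are illustrative.

```python
import numpy as np

def depressed_efficacy(spike_times, tau_rec=0.2, use=0.5):
    """Effective synaptic drive of each presynaptic spike under depression."""
    r, last_t, out = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            # resources recover exponentially between spikes
            r = 1.0 - (1.0 - r) * np.exp(-(t - last_t) / tau_rec)
        out.append(r * use)   # drive delivered by this spike
        r -= r * use          # each spike consumes a fraction of resources
        last_t = t
    return np.array(out)

# Dense, regular drive (grating-like) depresses the synapse more than
# sparse, irregular drive (natural-scene-like).
regular = depressed_efficacy(np.arange(0, 1, 0.02))   # 50 Hz train (s)
sparse = depressed_efficacy(np.array([0.0, 0.3, 0.7]))
print(regular[-1], sparse[-1])  # the sparse train retains more efficacy
```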
Modulation of V1 Spike Response by Temporal Interval of Spatiotemporal Stimulus Sequence
Kim, Taekjun; Kim, HyungGoo R.; Kim, Kayeon; Lee, Choongkil
2012-01-01
The spike activity of single neurons of the primary visual cortex (V1) becomes more selective and reliable in response to wide-field natural scenes compared to smaller stimuli confined to the classical receptive field (RF). However, it is largely unknown what aspects of natural scenes increase the selectivity of V1 neurons. One hypothesis is that modulation by surround interaction is highly sensitive to small changes in spatiotemporal aspects of RF surround. Such a fine-tuned modulation would enable single neurons to hold information about spatiotemporal sequences of oriented stimuli, which extends the role of V1 neurons as a simple spatiotemporal filter confined to the RF. In the current study, we examined the hypothesis in the V1 of awake behaving monkeys, by testing whether the spike response of single V1 neurons is modulated by temporal interval of spatiotemporal stimulus sequence encompassing inside and outside the RF. We used two identical Gabor stimuli that were sequentially presented with a variable stimulus onset asynchrony (SOA): the preceding one (S1) outside the RF and the following one (S2) in the RF. This stimulus configuration enabled us to examine the spatiotemporal selectivity of response modulation from a focal surround region. Although S1 alone did not evoke spike responses, visual response to S2 was modulated for SOA in the range of tens of milliseconds. These results suggest that V1 neurons participate in processing spatiotemporal sequences of oriented stimuli extending outside the RF. PMID:23091631
Representation of visual symbols in the visual word processing network.
Muayqil, Taim; Davies-Thompson, Jodie; Barton, Jason J S
2015-03-01
Previous studies have shown that word processing involves a predominantly left-sided occipitotemporal network. Words are a form of symbolic representation, in that they are arbitrary perceptual stimuli that represent other objects, actions or concepts. Lesions of parts of the visual word processing network can cause alexia, which can be associated with difficulty processing other types of symbols such as musical notation or road signs. We investigated whether components of the visual word processing network were also activated by other types of symbols. In 16 music-literate subjects, we defined the visual word network using fMRI and examined responses to four symbolic categories: visual words, musical notation, instructive symbols (e.g. traffic signs), and flags and logos. For each category we compared responses not only to scrambled stimuli, but also to similar stimuli that lacked symbolic meaning. The left visual word form area and a homologous right fusiform region responded similarly to all four categories, but equally to both symbolic and non-symbolic equivalents. Greater response to symbolic than non-symbolic stimuli occurred only in the left inferior frontal and middle temporal gyri, but only for words, and in the case of the left inferior frontal gyri, also for musical notation. A whole-brain analysis comparing symbolic versus non-symbolic stimuli revealed a distributed network of inferior temporooccipital and parietal regions that differed for different symbols. The fusiform gyri are involved in processing the form of many symbolic stimuli, but not specifically for stimuli with symbolic content. Selectivity for stimuli with symbolic content only emerges in the visual word network at the level of the middle temporal and inferior frontal gyri, but is specific for words and musical notation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Domanski, Cezary W
2004-01-01
This article unveils some previously unknown facts about the short life of Max Friedrich (1856-1887), the author in 1881 of the first PhD dissertation on experimental psychology, written under the supervision of Wilhelm Wundt: "On the Duration of Apperception for Simple and Complex Visual Stimuli." The article describes Friedrich's family background and life, professional career as a teacher, and works in psychology and mathematics. Copyright 2004 Wiley Periodicals, Inc.
Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond
2015-01-01
Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.
Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.
Reimers, Stian; Stewart, Neil
2016-09-01
Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
Vautin, R G; Berkley, M A
1977-09-01
1. The activity of single cortical cells in area 17 of anesthetized and unanesthetized cats was recorded in response to prolonged stimulation with moving stimuli. 2. Under the appropriate conditions, all cells observed showed a progressive response decrement during the stimulation period, regardless of cell classification, i.e., simple, complex, or hypercomplex. 3. The observed response decrement was shown to be largely cortical in origin and could be adequately described by an exponential function of the form R(t) = R_f + (R_1 - R_f)·e^(-t/τ), where R_1 is the initial response, R_f the final (adapted) response, and τ the time constant. Time constants derived from such fits ranged from 1.92 to 12.45 s under conditions of optimal stimulation. 4. Most cells showed poststimulation effects, usually a brief period of reduced responsiveness that recovered exponentially. Recovery was essentially complete in about 5-35 s. 5. The degree to which stimuli were effective at inducing response was shown to have significant effects on the magnitude of the response decrement. 6. Several cells showed neural patterns of response and recovery that suggested the operation of intracortical inhibitory mechanisms. 7. A simple two-process model that adequately describes the behavior of all the studied cells is presented. 8. Because the properties of the cells studied correlate well with human psychophysical measures of contour and movement adaptation and recovery, a causal relationship to similar neural mechanisms in humans is suggested.
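The adaptation equation can be transcribed directly. The short sketch below simply evaluates it; R_1, R_f and τ are chosen for illustration, with τ inside the reported 1.92-12.45 s range.

```python
import numpy as np

def adapted_response(t, r1, rf, tau):
    """Firing rate at time t under single-exponential adaptation:
    R(t) = R_f + (R_1 - R_f) * exp(-t / tau)."""
    return rf + (r1 - rf) * np.exp(-t / tau)

# Illustrative values: rate decays from 60 to 15 spikes/s with tau = 5 s.
t = np.linspace(0, 30, 7)
print(np.round(adapted_response(t, r1=60.0, rf=15.0, tau=5.0), 1))
# After ~3 time constants (15 s), the remaining decrement above the
# floor has decayed to ~5% of its initial value.
```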
Grundy, John G; Shedden, Judith M
2014-05-01
In the present study, we examine electrophysiological correlates of factors influencing an adjustment in cognitive control known as the bivalency effect. During task-switching, the occasional presence of bivalent stimuli in a block of univalent trials is enough to elicit a response slowing on all subsequent univalent trials. Bivalent stimuli can be congruent or incongruent with respect to the response afforded by the irrelevant stimulus feature. Here we show that the incongruent bivalency effect, the congruent bivalency effect, and an effect of a simple violation of expectancy are captured at a frontal ERP component (between 300 and 550 ms) associated with ACC activity, and that the unexpectedness effect is distinguished from both congruent and incongruent bivalency effects at an earlier component (100-120 ms) associated with the temporal parietal junction. We suggest that the frontal component reflects the dACC's role in predicting future cognitive load based on recent history. In contrast, the posterior component may index early visual feature extraction in response to bivalent stimuli that cue currently ongoing tasks; dACC activity may trigger the temporal parietal activity only when specific task cueing is involved and not for simple violations of expectancy. Copyright © 2014 Elsevier Ltd. All rights reserved.
Compatibility of motion facilitates visuomotor synchronization.
Hove, Michael J; Spivey, Michael J; Krumhansl, Carol L
2010-12-01
Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1, synchronization success rates increased dramatically for spatiotemporal sequences of both geometric and biological forms over flashing sequences. In Experiment 2, synchronization performance was best when target sequences and movements were directionally compatible (i.e., simultaneously down), followed by orthogonal stimuli, and was poorest for incompatible moving stimuli and flashing stimuli. In Experiment 3, synchronization performance was best with auditory sequences, followed by compatible moving stimuli, and was worst for flashing and fading stimuli. Results indicate that visuomotor synchronization improves dramatically with compatible spatial information. However, an auditory advantage in sensorimotor synchronization persists.
Attentional load modulates responses of human primary visual cortex to invisible stimuli.
Bahrami, Bahador; Lavie, Nilli; Rees, Geraint
2007-03-20
Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.
Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.
Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H
2013-07-01
Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.
Haider, Bilal; Krause, Matthew R.; Duque, Alvaro; Yu, Yuguo; Touryan, Jonathan; Mazer, James A.; McCormick, David A.
2011-01-01
During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RSC) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RSC neuron spiking is associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses. PMID:20152117
ERIC Educational Resources Information Center
Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vierhaus, Marc; Spangler, Sibylle; Borchert, Sonja; Freitag, Claudia; Goertz, Claudia; Graf, Frauke; Gudi, Helene; Kolling, Thorsten; Lamm, Bettina; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun
2012-01-01
This longitudinal study examined the influence of stimulus material on attention and expectation learning in the visual expectation paradigm. Female faces were used as attention-attracting stimuli, and non-meaningful visual stimuli of comparable complexity (Greebles) were used as low attention-attracting stimuli. Expectation learning performance…
NASA Astrophysics Data System (ADS)
Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.
2014-11-01
The objective of this multiple-subject functional magnetic resonance imaging (fMRI) study was to identify the common brain areas activated when viewing black-and-white checkerboard stimuli of various shapes, patterns and sizes, and to investigate the specific brain areas involved in processing static and moving visual stimuli. Sixteen participants viewed moving (expanding ring, rotating wedge, flipping hourglass and bowtie, and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli had a black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used to generate brain activation maps. Differential analyses were implemented to search separately for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, patterns and sizes activated multiple brain areas, mostly in the left hemisphere. Activation in the right middle temporal gyrus (MTG) was significantly higher for moving visual stimuli than for the static stimulus. In contrast, activation in the left calcarine sulcus and left lingual gyrus was significantly higher for the static stimulus than for moving stimuli. Visual stimulation of various shapes, patterns and sizes thus showed left-lateralized activation. The involvement of the right MTG in processing moving visual information was evident from the differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas involved in processing the static visual stimulus.
Expectancy and surprise predict neural and behavioral measures of attention to threatening stimuli
Browning, Michael; Harmer, Catherine J.
2012-01-01
Attention is preferentially deployed toward those stimuli which are threatening and those which are surprising. The current paper examines the intersection of these phenomena; how do expectations about the threatening nature of stimuli influence the deployment of attention? The predictions tested were that individuals would direct attention toward stimuli which were expected to be threatening (regardless of whether they were or not) and toward stimuli which were surprising. As anxiety has been associated with deficient control of attention to threat, it was additionally predicted that high levels of trait anxiety would be associated with deficits in the use of threat-expectation to guide attention. During fMRI scanning, 29 healthy volunteers completed a simple task in which threat-expectation was manipulated by altering the frequency with which fearful or neutral faces were presented. Individual estimates of threat-expectation and surprise were created using a Bayesian computational model. The degree to which the model derived estimates of threat-expectation and surprise were able to explain both a behavioral measure of attention to the faces and activity in the visual cortex and anterior attentional control areas was then tested. As predicted, increased threat-expectation and surprise were associated with increases in both the behavioral and neuroimaging measures of attention to the faces. Additionally, regions of the orbitofrontal cortex and left amygdala were found to covary with threat-expectation whereas anterior cingulate and lateral prefrontal cortices covaried with surprise. Individuals with higher levels of trait anxiety were less able to modify neuroimaging measures of attention in response to threat-expectation. These results suggest that continuously calculated estimates of the probability of threat may plausibly be used to influence the deployment of visual attention and that use of this information is perturbed in anxious individuals. PMID:21945791
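The abstract does not specify the computational model, so the following is a minimal sketch of the kind of Bayesian estimator described: a Beta-Bernoulli learner whose posterior mean serves as threat-expectation and whose negative log outcome probability serves as surprise. This is an assumed structure for illustration, not the authors' implementation.

```python
import numpy as np

def run_learner(outcomes, a=1.0, b=1.0):
    """outcomes: 1 = fearful face, 0 = neutral face; Beta(a, b) prior."""
    expectations, surprises = [], []
    for o in outcomes:
        p_threat = a / (a + b)                  # current threat-expectation
        p_outcome = p_threat if o == 1 else 1 - p_threat
        expectations.append(p_threat)
        surprises.append(-np.log(p_outcome))    # large when the outcome is unexpected
        a, b = a + o, b + (1 - o)               # conjugate posterior update
    return np.array(expectations), np.array(surprises)

# A mostly neutral block ending with a fearful face: expectation stays low,
# so the final fearful face is highly surprising.
exp_, sur_ = run_learner([0, 0, 0, 0, 0, 0, 1])
print(np.round(exp_, 2), np.round(sur_, 2))
```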
A computational visual saliency model based on statistics and machine learning.
Lin, Ru-Je; Lin, Wei-Song
2014-08-01
Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.
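The combination step named in the abstract, intersecting the three property maps, can be sketched as an elementwise product of maps normalized to [0, 1]. The property maps below are random placeholders standing in for the SVR- and statistics-derived estimates of the paper.

```python
import numpy as np

def combine_maps(*maps):
    """Intersect property maps: a pixel is salient only if all maps agree."""
    saliency = np.ones_like(maps[0])
    for m in maps:
        lo, hi = m.min(), m.max()
        saliency *= (m - lo) / (hi - lo + 1e-12)  # normalize, then intersect
    return saliency

# Placeholder maps for the three properties (Feature-Prior, Position-Prior,
# Feature-Distribution); in the model these come from learned estimators.
rng = np.random.default_rng(0)
feature_prior = rng.random((48, 64))
position_prior = rng.random((48, 64))
feature_distribution = rng.random((48, 64))
saliency_map = combine_maps(feature_prior, position_prior, feature_distribution)
print(saliency_map.shape, round(float(saliency_map.max()), 3))
```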
Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.
Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro
2017-08-01
Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues with visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from the visual and auditory sensory channels affect pupil size. We aimed to investigate the effects of perceiving audiovisual emotional information on physiological and affective responses, and to determine the impact of mismatched emotional cues on these physiological indexes. Pupil size, electrodermal activity and affective subjective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three different experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent) and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with the skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are able to trigger physiological responses regardless of valence, sensory modality or level of emotional congruence.
Slow Feature Analysis on Retinal Waves Leads to V1 Complex Cells
Dähne, Sven; Wilbert, Niko; Wiskott, Laurenz
2014-01-01
The developing visual system of many mammalian species is partially structured and organized even before the onset of vision. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes. Recently, it has been shown that when employing an efficient coding strategy, such as sparse coding, these retinal activity patterns lead to basis functions that resemble optimal stimuli of simple cells in primary visual cortex (V1). Here we present the results of applying a coding strategy that optimizes for temporal slowness, namely Slow Feature Analysis (SFA), to a biologically plausible model of retinal waves. Previously, SFA has been successfully applied to model parts of the visual system, most notably in reproducing a rich set of complex-cell features by training SFA with quasi-natural image sequences. In the present work, we obtain SFA units that share a number of properties with cortical complex-cells by training on simulated retinal waves. The emergence of two distinct properties of the SFA units (phase invariance and orientation tuning) is thoroughly investigated via control experiments and mathematical analysis of the input-output functions found by SFA. The results support the idea that retinal waves share relevant temporal and spatial properties with natural visual input. Hence, retinal waves seem suitable training stimuli to learn invariances and thereby shape the developing early visual system such that it is best prepared for coding input from the natural world. PMID:24810948
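For orientation, here is a minimal linear SFA under the usual formulation: whiten the input, then take the directions whose temporal derivative has the least variance. The studies summarized above use quadratically expanded SFA on image or retinal-wave sequences; this toy version only illustrates the optimization principle, recovering a slow sine mixed with fast noise.

```python
import numpy as np

def linear_sfa(x, n_out=1):
    """x: (T, n) multivariate signal. Returns the n_out slowest linear features."""
    x = x - x.mean(axis=0)
    # whiten: rotate to decorrelated, unit-variance coordinates
    evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
    white = x @ evecs @ np.diag(evals ** -0.5)
    # slowness: minimize the variance of the temporal derivative
    devals, devecs = np.linalg.eigh(np.cov(np.diff(white, axis=0), rowvar=False))
    return white @ devecs[:, :n_out]  # eigh is ascending, so slowest come first

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 500)
slow = np.sin(t)
x = np.column_stack([slow + 0.1 * rng.standard_normal(500),
                     rng.standard_normal(500)])
y = linear_sfa(x)[:, 0]
print(abs(np.corrcoef(y, slow)[0, 1]))  # close to 1: the slow sine is recovered
```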
Martínez-Cañada, Pablo; Halnes, Geir; Fyhn, Marianne
2018-01-01
Despite half-a-century of research since the seminal work of Hubel and Wiesel, the role of the dorsal lateral geniculate nucleus (dLGN) in shaping the visual signals is not properly understood. Placed on route from retina to primary visual cortex in the early visual pathway, a striking feature of the dLGN circuit is that both the relay cells (RCs) and interneurons (INs) not only receive feedforward input from retinal ganglion cells, but also a prominent feedback from cells in layer 6 of visual cortex. This feedback has been proposed to affect synchronicity and other temporal properties of the RC firing. It has also been seen to affect spatial properties such as the center-surround antagonism of thalamic receptive fields, i.e., the suppression of the response to very large stimuli compared to smaller, more optimal stimuli. Here we explore the spatial effects of cortical feedback on the RC response by means of a comprehensive network model with biophysically detailed, single-compartment and multicompartment neuron models of RCs, INs and a population of orientation-selective layer 6 simple cells, consisting of pyramidal cells (PY). We have considered two different arrangements of synaptic feedback from the ON and OFF zones in the visual cortex to the dLGN: phase-reversed ('push-pull') and phase-matched ('push-push'), as well as different spatial extents of the corticothalamic projection pattern. Our simulation results support that a phase-reversed arrangement provides a more effective way for cortical feedback to provide the increased center-surround antagonism seen in experiments both for flashing spots and, even more prominently, for patch gratings. This implies that ON-center RCs receive direct excitation from OFF-dominated cortical cells and indirect inhibitory feedback from ON-dominated cortical cells. The increased center-surround antagonism in the model is accompanied by spatial focusing, i.e., the maximum RC response occurs for smaller stimuli when feedback is present. PMID:29377888
Scene and human face recognition in the central vision of patients with glaucoma
Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole
2018-01-01
Primary open-angle glaucoma (POAG) initially affects mainly peripheral vision. Current behavioral studies support the idea that visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry analysis. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test). Patients with both peripheral and central defects and patients with peripheral but no central defect, as well as age-matched controls, participated in the experiment. All participants had to perform two visual tasks where low-contrast stimuli were presented in the central 6° of the visual field. A categorization task of scene images and human face images assessed high-level visual recognition abilities. In contrast, a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images. This is consistent with the abnormal retinal sensitivity as assessed by perimetry. However, the deficit was greater for categorization than detection. Patients without a central defect showed similar performances to the controls concerning the detection and categorization of faces. However, while the detection of scene images was well-maintained, these patients showed a deficit in their categorization. This suggests that the simple loss of peripheral vision could be detrimental to scene recognition, even when the information is displayed in central vision. This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using the Humphrey 24–2 SITA-Standard test. PMID:29481572
1987-03-01
3/4 hours. Performance tests evaluated simple and choice reaction time to visual stimuli, vigilance, and processing of symbolic, numerical, verbal ... minimize the adverse consequences of these stressors. Tyrosine enhanced performance (e.g., complex information processing, vigilance, and reaction time ... processes inherent in many real-world tasks. For example, Map Compass requires association of direction and degree.
Visual memories for perceived length are well preserved in older adults.
Norman, J Farley; Holmin, Jessica S; Bartholomew, Ashley N
2011-09-15
Three experiments compared younger (mean age was 23.7 years) and older (mean age was 72.1 years) observers' ability to visually discriminate line length using both explicit and implicit standard stimuli. In Experiment 1, the method of constant stimuli (with an explicit standard) was used to determine difference thresholds, whereas the method of single stimuli (where the knowledge of the standard length was only implicit and learned from previous test stimuli) was used in Experiments 2 and 3. The study evaluated whether increases in age affect older observers' ability to learn, retain, and utilize effective implicit visual standards. Overall, the observers' length difference thresholds were 5.85% of the standard when the method of constant stimuli was used and improved to 4.39% of the standard for the method of single stimuli (a decrease of 25%). Both age groups performed similarly in all conditions. The results demonstrate that older observers retain the ability to create, remember, and utilize effective implicit standards from a series of visual stimuli. Copyright © 2011 Elsevier Ltd. All rights reserved.
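Difference thresholds of this kind are read off a fitted psychometric function. A minimal sketch of how such a threshold can be estimated from method-of-constant-stimuli data (the data below are hypothetical, and taking the fitted sigma, i.e., the 50%-to-84% distance, as the threshold is one common convention, not necessarily the paper's):

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Hypothetical data: test line lengths (mm) vs. proportion of trials on which
    # the test was judged longer than a 100 mm standard.
    test = np.array([90., 94., 98., 102., 106., 110.])
    p_longer = np.array([0.05, 0.20, 0.42, 0.61, 0.83, 0.96])

    def cum_gauss(x, mu, sigma):
        return norm.cdf(x, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(cum_gauss, test, p_longer, p0=[100., 5.])
    weber = sigma / 100.0  # threshold expressed as a fraction of the standard
    print(f"PSE = {mu:.1f} mm, sigma = {sigma:.1f} mm, threshold = {100*weber:.1f}% of standard")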
Rapid figure-ground responses to stereograms reveal an advantage for a convex foreground.
Bertamini, Marco; Lawson, Rebecca
2008-01-01
Convexity has long been recognised as a factor that affects figure-ground segmentation, even when pitted against other factors such as symmetry [Kanizsa and Gerbino, 1976 Art and Artefacts Ed. M Henle (New York: Springer) pp 25-32]. It is accepted in the literature that the difference between concave and convex contours is important for the visual system, and that there is a prior expectation favouring convexities as figure. We used bipartite stimuli and a simple task in which observers had to report whether the foreground was on the left or the right. We report objective evidence that supports the idea that convexity affects figure-ground assignment, even though our stimuli were not pictorial in that depth order was specified unambiguously by binocular disparity.
Cross-modal extinction in a boy with severely autistic behaviour and high verbal intelligence.
Bonneh, Yoram S; Belmonte, Matthew K; Pei, Francesca; Iversen, Portia E; Kenet, Tal; Akshoomoff, Natacha; Adini, Yael; Simon, Helen J; Moore, Christopher I; Houde, John F; Merzenich, Michael M
2008-07-01
Anecdotal reports from individuals with autism suggest a loss of awareness to stimuli from one modality in the presence of stimuli from another. Here we document such a case in a detailed study of A.M., a 13-year-old boy with autism in whom significant autistic behaviours are combined with an uneven IQ profile of superior verbal and low performance abilities. Although A.M.'s speech is often unintelligible, and his behaviour is dominated by motor stereotypies and impulsivity, he can communicate by typing or pointing independently within a letter board. A series of experiments using simple and highly salient visual, auditory, and tactile stimuli demonstrated a hierarchy of cross-modal extinction, in which auditory information extinguished other modalities at various levels of processing. A.M. also showed deficits in shifting and sustaining attention. These results provide evidence for monochannel perception in autism and suggest a general pattern of winner-takes-all processing in which a stronger stimulus-driven representation dominates behaviour, extinguishing weaker representations.
Ohyanagi, Toshio; Sengoku, Yasuhito
2010-02-01
This article presents a new solution for measuring accurate reaction time (SMART) to visual stimuli. The SMART is a USB device realized with a Cypress Programmable System-on-Chip (PSoC) mixed-signal array programmable microcontroller. A brief overview of the hardware and firmware of the PSoC is provided, together with the results of three experiments. In Experiment 1, we investigated the timing accuracy of the SMART in measuring reaction time (RT) under different conditions of operating systems (OSs; Windows XP or Vista) and monitor displays (a CRT or an LCD). The results indicated that the timing error in measuring RT by the SMART was less than 2 msec, on average, under all combinations of OS and display and that the SMART was tolerant to jitter and noise. In Experiment 2, we tested the SMART with 8 participants. The results indicated that there was no significant difference among RTs obtained with the SMART under the different conditions of OS and display. In Experiment 3, we used Microsoft (MS) PowerPoint to present visual stimuli on the display. We found no significant difference in RTs obtained using MS DirectX technology versus using the PowerPoint file with the SMART. We are certain that the SMART is a simple and practical solution for measuring RTs accurately. Although there are some restrictions in using the SMART with RT paradigms, the SMART is capable of providing both researchers and health professionals working in clinical settings with new ways of using RT paradigms in their work.
Evaluating the operations underlying multisensory integration in the cat superior colliculus.
Stanford, Terrence R; Quessy, Stephan; Stein, Barry E
2005-07-13
It is well established that superior colliculus (SC) multisensory neurons integrate cues from different senses; however, the mechanisms responsible for producing multisensory responses are poorly understood. Previous studies have shown that spatially congruent cues from different modalities (e.g., auditory and visual) yield enhanced responses and that the greatest relative enhancements occur for combinations of the least effective modality-specific stimuli. Although these phenomena are well documented, little is known about the mechanisms that underlie them, because no study has systematically examined the operation that multisensory neurons perform on their modality-specific inputs. The goal of this study was to evaluate the computations that multisensory neurons perform in combining the influences of stimuli from two modalities. The extracellular activities of single neurons in the SC of the cat were recorded in response to visual, auditory, and bimodal visual-auditory stimulation. Each neuron was tested across a range of stimulus intensities and multisensory responses evaluated against the null hypothesis of simple summation of unisensory influences. We found that the multisensory response could be superadditive, additive, or subadditive but that the computation was strongly dictated by the efficacies of the modality-specific stimulus components. Superadditivity was most common within a restricted range of near-threshold stimulus efficacies, whereas for the majority of stimuli, response magnitudes were consistent with the linear summation of modality-specific influences. In addition to providing a constraint for developing models of multisensory integration, the relationship between response mode and stimulus efficacy emphasizes the importance of considering stimulus parameters when inducing or interpreting multisensory phenomena.
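The null hypothesis of simple summation can be stated directly: the bimodal response should equal the sum of the two unisensory responses, with superadditivity and subadditivity defined as significant departures above or below that sum. A sketch with hypothetical spike counts (an illustration of the logic, not the authors' exact statistical procedure):

    import numpy as np
    from scipy.stats import ttest_ind

    def classify_integration(visual, auditory, bimodal, alpha=0.05):
        """Compare bimodal spike counts to the sum of unisensory counts.
        Each argument: array of counts across repeated trials."""
        # Predicted additive response, resampled from the unisensory trials.
        rng = np.random.default_rng(0)
        predicted = rng.choice(visual, 1000) + rng.choice(auditory, 1000)
        t, p = ttest_ind(bimodal, predicted, equal_var=False)
        if p >= alpha:
            return "additive"
        return "superadditive" if bimodal.mean() > predicted.mean() else "subadditive"

    rng = np.random.default_rng(1)
    v = rng.poisson(3, 30)   # weakly effective visual stimulus (hypothetical)
    a = rng.poisson(2, 30)   # weakly effective auditory stimulus (hypothetical)
    va = rng.poisson(9, 30)  # bimodal response exceeding the additive prediction of 5
    print(classify_integration(v, a, va))  # -> "superadditive"

Consistent with the abstract, such near-threshold unisensory efficacies are where the superadditive outcome is most likely; at higher efficacies the bimodal counts typically fall near the additive prediction.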
Natural images dominate in binocular rivalry
Baker, Daniel H.; Graf, Erich W.
2009-01-01
Ecological approaches to perception have demonstrated that information encoding by the visual system is informed by the natural environment, both in terms of simple image attributes like luminance and contrast, and more complex relationships corresponding to Gestalt principles of perceptual organization. Here, we ask if this optimization biases perception of visual inputs that are perceptually bistable. Using the binocular rivalry paradigm, we designed stimuli that varied in either their spatiotemporal amplitude spectra or their phase spectra. We found that noise stimuli with “natural” amplitude spectra (i.e., amplitude content proportional to 1/f, where f is spatial or temporal frequency) dominate over those with any other systematic spectral slope, along both spatial and temporal dimensions. This could not be explained by perceived contrast measurements, and occurred even though all stimuli had equal energy. Calculating the effective contrast following attenuation by a model contrast sensitivity function suggested that the strong contrast dependency of rivalry provides the mechanism by which binocular vision is optimized for viewing natural images. We also compared rivalry between natural and phase-scrambled images and found a strong preference for natural phase spectra that could not be accounted for by observer biases in a control task. We propose that this phase specificity relates to contour information, and arises either from the activity of V1 complex cells, or from later visual areas, consistent with recent neuroimaging and single-cell work. Our findings demonstrate that human vision integrates information across space, time, and phase to select the input most likely to hold behavioral relevance. PMID:19289828
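Noise stimuli with amplitude content proportional to 1/f^alpha are conveniently synthesized in the Fourier domain: impose the desired amplitude falloff on random phases and invert. A minimal sketch (illustrative only; the study additionally varied temporal spectra and equated stimulus energy across conditions):

    import numpy as np

    def noise_with_slope(n=256, alpha=1.0, seed=0):
        """n x n noise image whose amplitude spectrum falls off as 1/f**alpha."""
        rng = np.random.default_rng(seed)
        fx = np.fft.fftfreq(n)
        f = np.hypot(*np.meshgrid(fx, fx))   # radial spatial frequency
        f[0, 0] = f[f > 0].min()             # avoid division by zero at DC
        amplitude = f ** (-alpha)
        phase = rng.uniform(0, 2 * np.pi, (n, n))
        # Taking the real part is a common shortcut in place of enforcing
        # Hermitian symmetry; the resulting spectrum is approximately 1/f**alpha.
        img = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
        img -= img.mean()
        return img / img.std()               # equate RMS contrast across slopes

    natural = noise_with_slope(alpha=1.0)    # the spectrally "natural" 1/f case
    steep = noise_with_slope(alpha=2.0)      # over-smooth comparison stimulus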
Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.
Vicente, Natalin S; Halloy, Monique
2017-12-01
Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.
Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry
2017-07-01
Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated if, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, to facilitate the processing of visual stimuli occurring in the same side of space as the limb on which the nociceptive stimulus was applied. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to either hand in either side of space, had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally on one single hand, or bilaterally, on both hands simultaneously. Results show that, as compared to the bilateral condition, participants' judgments were biased to the advantage of the visual stimuli that occurred in the same side of space as the hand on which a unilateral, nociceptive stimulus was applied. This effect was present in a time window ranging from 200 to 600 ms, but importantly, biases increased with decreasing time interval. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.
Duration estimates within a modality are integrated sub-optimally
Cai, Ming Bo; Eagleman, David M.
2015-01-01
Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in different ways? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provided a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task. PMID:26321965
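The statistically optimal benchmark against which the observed weighting falls short is the standard inverse-variance (maximum-likelihood) cue-combination rule. A short sketch with hypothetical single-stimulus duration estimates makes the prediction concrete:

    import numpy as np

    def optimal_integration(mu1, var1, mu2, var2):
        """Maximum-likelihood fusion of two independent duration estimates."""
        w1 = (1 / var1) / (1 / var1 + 1 / var2)    # inverse-variance weight
        mu = w1 * mu1 + (1 - w1) * mu2
        var = (var1 * var2) / (var1 + var2)        # never exceeds the smaller variance
        return mu, var, w1

    # Hypothetical: low-frequency stimulus seems to last 480 ms (sd 60 ms),
    # high-frequency stimulus 560 ms (sd 90 ms).
    mu, var, w1 = optimal_integration(480., 60.**2, 560., 90.**2)
    print(f"predicted percept {mu:.0f} ms, sd {np.sqrt(var):.0f} ms, weight on cue 1 = {w1:.2f}")

A sub-optimal integrator, as reported here, shows a combined standard deviation above this prediction, or empirical weights that deviate from the inverse-variance values.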
Qian, Ning; Dayan, Peter
2013-01-01
A wealth of studies has found that adapting to second-order visual stimuli has little effect on the perception of first-order stimuli. This is physiologically and psychologically troubling, since many cells show similar tuning to both classes of stimuli, and since adapting to first-order stimuli leads to aftereffects that do generalize to second-order stimuli. Focusing on high-level visual stimuli, we recently proposed the novel explanation that the lack of transfer arises partially from the characteristically different backgrounds of the two stimulus classes. Here, we consider the effect of stimulus backgrounds in the far more prevalent, lower-level, case of the orientation tilt aftereffect. Using a variety of first- and second-order oriented stimuli, we show that we could increase or decrease both within- and cross-class adaptation aftereffects by increasing or decreasing the similarity of the otherwise apparently uninteresting or irrelevant backgrounds of adapting and test patterns. Our results suggest that similarity between background statistics of the adapting and test stimuli contributes to low-level visual adaptation, and that these backgrounds are thus not discarded by visual processing but provide contextual modulation of adaptation. Null cross-adaptation aftereffects must also be interpreted cautiously. These findings reduce the apparent inconsistency between psychophysical and neurophysiological data about first- and second-order stimuli. PMID:23732217
Factors influencing the latency of simple reaction time
Woods, David L.; Wyma, John M.; Yund, E. William; Herron, Timothy J.; Reed, Bruce
2015-01-01
Simple reaction time (SRT), the minimal time needed to respond to a stimulus, is a basic measure of processing speed. SRTs were first measured by Francis Galton in the 19th century, who reported visual SRT latencies below 190 ms in young subjects. However, recent large-scale studies have reported substantially increased SRT latencies that differ markedly in different laboratories, in part due to timing delays introduced by the computer hardware and software used for SRT measurement. We developed a calibrated and temporally precise SRT test to analyze the factors that influence SRT latencies in a paradigm where visual stimuli were presented to the left or right hemifield at varying stimulus onset asynchronies (SOAs). Experiment 1 examined a community sample of 1469 subjects ranging in age from 18 to 65. Mean SRT latencies were short (231 ms, or 213 ms when corrected for hardware delays) and increased significantly with age (0.55 ms/year), but were unaffected by sex or education. As in previous studies, SRTs were prolonged at shorter SOAs and were slightly faster for stimuli presented in the visual field contralateral to the responding hand. Stimulus detection time (SDT) was estimated by subtracting movement initiation time, measured in a speeded finger tapping test, from SRTs. SDT latencies averaged 131 ms and were unaffected by age. Experiment 2 tested 189 subjects ranging in age from 18 to 82 years in a different laboratory using a larger range of SOAs. Both SRTs and SDTs were slightly prolonged (by 7 ms). SRT latencies increased with age while SDT latencies remained stable. Precise computer-based measurements of SRT latencies show that processing speed is as fast in contemporary populations as in the Victorian era, and that age-related increases in SRT latencies are due primarily to slowed motor output. PMID:25859198
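The decomposition used here is a per-subject subtraction; purely as a worked illustration with the group means reported above (the study itself computed this per subject, not from group means):

    # Stimulus detection time = simple reaction time - movement initiation time.
    srt = 213.0           # ms, hardware-corrected mean SRT from the abstract
    sdt = 131.0           # ms, reported mean stimulus detection time
    movement = srt - sdt  # ms attributable to initiating the key press
    print(f"movement initiation ~ {movement:.0f} ms")

Since SDT was flat across age while SRT rose by 0.55 ms/year, the age-related slowing falls almost entirely on the movement term of this subtraction.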
Cell structure and function in the visual cortex of the cat
Kelly, J. P.; Van Essen, D. C.
1974-01-01
1. The organization of the visual cortex was studied with a technique that allows one to determine the physiology and morphology of individual cells. Micro-electrodes filled with the fluorescent dye Procion yellow were used to record intracellularly from cells in area 17 of the cat. The visual receptive field of each neurone was classified as simple, complex, or hypercomplex, and the cell was then stained by the iontophoretic injection of dye. 2. Fifty neurones were successfully examined in this way, and their structural features were compared to the varieties of cell types seen in Golgi preparations of area 17. The majority of simple units were stellate cells, whereas the majority of complex and hypercomplex units were pyramidal cells. Several neurones belonged to less common morphological types, such as double bouquet cells. Simple cells were concentrated in layer IV, hypercomplex cells in layer II + III, and complex cells in layers II + III, V and VI. 3. Electrically inexcitable cells that had high resting potentials but no impulse activity were stained and identified as glial cells. Glial cells responded to visual stimuli with slow graded depolarizations, and many of them showed a preference for a stimulus orientation similar to the optimal orientation for adjacent neurones. 4. The results show that there is a clear, but not absolute correlation between the major structural and functional classes of cells in the visual cortex. This approach, linking the physiological properties of a single cell to a given morphological type, will help in furthering our understanding of the cerebral cortex. PMID:4136579
Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A
2013-06-01
Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.
Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka
2008-01-01
In this paper, we focus on alpha band modulation during visual spatial attention in the absence of visual stimuli. Visual spatial attention has been expected to provide a new channel for non-invasive independent brain-computer interfaces (BCIs), but little work has been done on this interfacing method. The flickering stimuli used in previous work reduce independence and are difficult to use in practice. We therefore investigated whether visual spatial attention could be detected without such stimuli. Further, common spatial patterns (CSP) were applied for the first time to brain states during visual spatial attention. The performance evaluation was based on three brain states: attention to the left, right, and center directions. The 30-channel scalp electroencephalographic (EEG) signals over occipital cortex were recorded for five subjects. Without CSP, the analyses yielded an average classification accuracy of 66.44% (range 55.42 to 72.27%) in discriminating the left and right attention classes. With CSP, the average classification accuracy was 75.39% (range 63.75 to 86.13%). It is suggested that CSP is useful in the context of visual spatial attention, and that alpha band modulation during visual spatial attention without flickering stimuli offers the possibility of a new channel for independent BCIs, alongside motor imagery.
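CSP, the method credited here with the accuracy gain, finds spatial filters that maximize the variance of one class while minimizing that of the other, via a generalized eigendecomposition of the two class covariance matrices. A minimal sketch (illustrative only; the study's channel selection, band-pass settings, and classifier are not reproduced, and the array shapes are assumptions):

    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a, trials_b, n_pairs=3):
        """trials_*: (n_trials, n_channels, n_samples) band-passed EEG epochs.
        Returns 2*n_pairs spatial filters, one per row."""
        def mean_cov(trials):
            return np.mean([np.cov(t) for t in trials], axis=0)
        Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
        # Generalized eigenproblem: Ca w = lambda (Ca + Cb) w.
        evals, evecs = eigh(Ca, Ca + Cb)
        order = np.argsort(evals)                 # both extremes discriminate best
        pick = np.r_[order[:n_pairs], order[-n_pairs:]]
        return evecs[:, pick].T

    def csp_features(trial, W):
        """Normalized log-variance of filtered signals, the usual CSP feature."""
        var = np.var(W @ trial, axis=1)
        return np.log(var / var.sum())

    # Usage: W = csp_filters(left_trials, right_trials); feed csp_features(...)
    # per trial into any simple classifier (e.g., LDA).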
Zold, Camila L.
2015-01-01
The primary visual cortex (V1) is widely regarded as faithfully conveying the physical properties of visual stimuli. Thus, experience-induced changes in V1 are often interpreted as improving visual perception (i.e., perceptual learning). Here we describe how, with experience, cue-evoked oscillations emerge in V1 to convey expected reward time as well as to relate experienced reward rate. We show, in chronic multisite local field potential recordings from rat V1, that repeated presentation of visual cues induces the emergence of visually evoked oscillatory activity. Early in training, the visually evoked oscillations relate to the physical parameters of the stimuli. However, with training, the oscillations evolve to relate the time in which those stimuli foretell expected reward. Moreover, the oscillation prevalence reflects the reward rate recently experienced by the animal. Thus, training induces experience-dependent changes in V1 activity that relate to what those stimuli have come to signify behaviorally: when to expect future reward and at what rate. PMID:26134643
Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne
2017-04-01
We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.
Adaptive optics without altering visual perception.
Koenig, D E; Hart, N W; Hofer, H J
2014-04-01
Adaptive optics combined with visual psychophysics creates the potential to study the relationship between visual function and the retina at the cellular scale. This potential is hampered, however, by visual interference from the wavefront-sensing beacon used during correction. For example, we have previously shown that even a dim, visible beacon can alter stimulus perception (Hofer et al., 2012). Here we describe a simple strategy employing a longer wavelength (980 nm) beacon that, in conjunction with appropriate restriction on timing and placement, allowed us to perform psychophysics when dark adapted without altering visual perception. The method was verified by comparing detection and color appearance of foveally presented small spot stimuli with and without the wavefront beacon present in 5 subjects. As an important caution, we found that significant perceptual interference can occur even with a subliminal beacon when additional measures are not taken to limit exposure. Consequently, the lack of perceptual interference should be verified for a given system, and not assumed based on invisibility of the beacon. Copyright © 2014 Elsevier B.V. All rights reserved.
Relativistic compression and expansion of experiential time in the left and right space.
Vicario, Carmelo Mario; Pecoraro, Patrizia; Turriziani, Patrizia; Koch, Giacomo; Caltagirone, Carlo; Oliveri, Massimiliano
2008-03-05
Time, space and numbers are closely linked in the physical world. However, the relativistic-like effects on time perception of spatial and magnitude factors remain poorly investigated. Here we wanted to investigate whether duration judgments of digit visual stimuli are biased depending on the side of space where the stimuli are presented and on the magnitude of the stimulus itself. Different groups of healthy subjects performed duration judgment tasks on various types of visual stimuli. In the first two experiments visual stimuli were constituted by digit pairs (1 and 9), presented in the centre of the screen or in the right and left space. In a third experiment visual stimuli were constituted by black circles. The duration of the reference stimulus was fixed at 300 ms. Subjects had to indicate the relative duration of the test stimulus compared with the reference one. The main results showed that, regardless of digit magnitude, duration of stimuli presented in the left hemispace is underestimated and that of stimuli presented in the right hemispace is overestimated. On the other hand, in midline position, duration judgments are affected by the numerical magnitude of the presented stimulus, with time underestimation of stimuli of low magnitude and time overestimation of stimuli of high magnitude. These results argue for the presence of strict interactions between space, time and magnitude representation on the human brain.
Neural oscillatory deficits in schizophrenia predict behavioral and neurocognitive impairments
Martínez, Antígona; Gaspar, Pablo A.; Hillyard, Steven A.; Bickel, Stephan; Lakatos, Peter; Dias, Elisa C.; Javitt, Daniel C.
2015-01-01
Paying attention to visual stimuli is typically accompanied by event-related desynchronizations (ERD) of ongoing alpha (7–14 Hz) activity in visual cortex. The present study used time-frequency based analyses to investigate the role of impaired alpha ERD in visual processing deficits in schizophrenia (Sz). Subjects viewed sinusoidal gratings of high (HSF) and low (LSF) spatial frequency (SF) designed to test functioning of the parvo- vs. magnocellular pathways, respectively. Patients with Sz and healthy controls paid attention selectively to either the LSF or HSF gratings which were presented in random order. Event-related brain potentials (ERPs) were recorded to all stimuli. As in our previous study, it was found that Sz patients were selectively impaired at detecting LSF target stimuli and that ERP amplitudes to LSF stimuli were diminished, both for the early sensory-evoked components and for the attend minus unattend difference component (the Selection Negativity), which is generally regarded as a specific index of feature-selective attention. In the time-frequency domain, the differential ERP deficits to LSF stimuli were echoed in a virtually absent theta-band phase locked response to both unattended and attended LSF stimuli (along with relatively intact theta-band activity for HSF stimuli). In contrast to the theta-band evoked responses which were tightly stimulus locked, stimulus-induced desynchronizations of ongoing alpha activity were not tightly stimulus locked and were apparent only in induced power analyses. Sz patients were significantly impaired in the attention-related modulation of ongoing alpha activity for both HSF and LSF stimuli. These deficits correlated with patients’ behavioral deficits in visual information processing as well as with visually based neurocognitive deficits. These findings suggest an additional, pathway-independent, mechanism by which deficits in early visual processing contribute to overall cognitive impairment in Sz. PMID:26190988
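The evoked/induced distinction relied on here corresponds to a concrete order of operations: evoked (phase-locked) power is the power of the trial average, while induced power is the average power after subtracting that average from each single trial. A sketch using a Hilbert amplitude envelope on alpha-band-filtered data (hypothetical arrays and sampling rate; the study used full time-frequency decompositions rather than a single band-pass):

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def alpha_power(trials, fs=500.0, band=(7.0, 14.0)):
        """trials: (n_trials, n_samples) single-channel epochs.
        Returns (evoked_power, induced_power), each of length n_samples."""
        b, a = butter(4, np.array(band) / (fs / 2), btype="band")
        filt = filtfilt(b, a, trials, axis=1)
        erp = filt.mean(axis=0)                             # phase-locked part
        evoked = np.abs(hilbert(erp)) ** 2
        residual = filt - erp                               # non-phase-locked part
        induced = (np.abs(hilbert(residual, axis=1)) ** 2).mean(axis=0)
        return evoked, induced

An alpha ERD then appears as a drop in induced power relative to a prestimulus baseline, which is why it is visible only in induced-power analyses when the desynchronization is not tightly stimulus locked.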
Examining the cognitive demands of analogy instructions compared to explicit instructions.
Tse, Choi Yeung Andy; Wong, Andus; Whitehill, Tara; Ma, Estella; Masters, Rich
2016-10-01
In many learning domains, instructions are presented explicitly despite high cognitive demands associated with their processing. This study examined cognitive demands imposed on working memory by different types of instruction to speak with maximum pitch variation: visual analogy, verbal analogy and explicit verbal instruction. Forty participants were asked to memorise a set of 16 visual and verbal stimuli while reading aloud a Cantonese paragraph with maximum pitch variation. Instructions about how to achieve maximum pitch variation were presented via visual analogy, verbal analogy, explicit rules or no instruction. Pitch variation was assessed off-line, using standard deviation of fundamental frequency. Immediately after reading, participants recalled as many stimuli as possible. Analogy instructions resulted in significantly increased pitch variation compared to explicit instructions or no instructions. Explicit instructions resulted in poorest recall of stimuli. Visual analogy instructions resulted in significantly poorer recall of visual stimuli than verbal stimuli. The findings suggest that non-propositional instructions presented via analogy may be less cognitively demanding than instructions that are presented explicitly. Processing analogy instructions that are presented as a visual representation is likely to load primarily visuospatial components of working memory rather than phonological components. The findings are discussed with reference to speech therapy and human cognition.
Postural time-to-contact as a precursor of visually induced motion sickness.
Li, Ruixuan; Walter, Hannah; Curry, Christopher; Rath, Ruth; Peterson, Nicolette; Stoffregen, Thomas A
2018-06-01
The postural instability theory of motion sickness predicts that subjective symptoms of motion sickness will be preceded by unstable control of posture. In previous studies, this prediction has been confirmed with measures of the spatial magnitude and the temporal dynamics of postural activity. In the present study, we examine whether precursors of visually induced motion sickness might exist in postural time-to-contact, a measure of postural activity that is related to the risk of falling. Standing participants were exposed to oscillating visual motion stimuli in a standard laboratory protocol. Both before and during exposure to visual motion stimuli, we monitored the kinematics of the body's center of pressure. We predicted that postural activity would differ between participants who reported motion sickness and those who did not, and that these differences would exist before participants experienced subjective symptoms of motion sickness. During exposure to visual motion stimuli, the multifractality of sway differed between the Well and Sick groups. Postural time-to-contact differed between the Well and Sick groups during exposure to visual motion stimuli, but also before exposure to any motion stimuli. The results provide a qualitatively new type of support for the postural instability theory of motion sickness.
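Postural time-to-contact is the extrapolated time for the center of pressure (COP) to reach the boundary of the base of support, given its instantaneous position, velocity, and acceleration. A one-dimensional sketch (an illustration under a ballistic-extrapolation assumption; published implementations typically work in 2-D against a polygonal boundary, and all numbers here are hypothetical):

    import numpy as np

    def time_to_contact(x, v, a, boundary):
        """Smallest positive time for a COP at position x (cm), with velocity v
        (cm/s) and acceleration a (cm/s^2), to reach the stability boundary."""
        roots = np.roots([0.5 * a, v, x - boundary])
        real = roots[np.isreal(roots)].real
        positive = real[real > 0]
        return positive.min() if positive.size else np.inf

    # Hypothetical sample: COP 2 cm from the anterior boundary, drifting toward it.
    print(time_to_contact(x=8.0, v=3.0, a=1.0, boundary=10.0))  # ~0.6 s

Shorter and more variable time-to-contact values indicate that sway is carrying the COP toward the stability boundary more quickly, the sense in which the measure relates to the risk of falling.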
Spatial Scaling of the Profile of Selective Attention in the Visual Field.
Gannon, Matthew A; Knapp, Ashley A; Adams, Thomas G; Long, Stephanie M; Parks, Nathan A
2016-01-01
Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provides insights into the attentional mechanisms associated with such spatial scaling.
Visual field asymmetries in visual evoked responses
Hagler, Donald J.
2014-01-01
Behavioral responses to visual stimuli exhibit visual field asymmetries, but cortical folding and the close proximity of visual cortical areas make electrophysiological comparisons between different stimulus locations problematic. Retinotopy-constrained source estimation (RCSE) uses distributed dipole models simultaneously constrained by multiple stimulus locations to provide separation between individual visual areas that is not possible with conventional source estimation methods. Magnetoencephalography and RCSE were used to estimate time courses of activity in V1, V2, V3, and V3A. Responses to left and right hemifield stimuli were not significantly different. Peak latencies for peripheral stimuli were significantly shorter than those for perifoveal stimuli in V1, V2, and V3A, likely related to the greater proportion of magnocellular input to V1 in the periphery. Consistent with previous results, sensor magnitudes for lower field stimuli were about twice as large as for upper field, which is only partially explained by the proximity to sensors for lower field cortical sources in V1, V2, and V3. V3A exhibited both latency and amplitude differences for upper and lower field responses. There were no differences for V3, consistent with previous suggestions that dorsal and ventral V3 are two halves of a single visual area, rather than distinct areas V3 and VP. PMID:25527151
Correa-Jaraba, Kenia S.; Cid-Fernández, Susana; Lindín, Mónica; Díaz, Fernando
2016-01-01
The main aim of this study was to examine the effects of aging on event-related brain potentials (ERPs) associated with the automatic detection of unattended infrequent deviant and novel auditory stimuli (Mismatch Negativity, MMN) and with the orienting to these stimuli (P3a component), as well as the effects on ERPs associated with reorienting to relevant visual stimuli (Reorienting Negativity, RON). Participants were divided into three age groups: (1) Young: 21–29 years old; (2) Middle-aged: 51–64 years old; and (3) Old: 65–84 years old. They performed an auditory-visual distraction-attention task in which they were asked to attend to visual stimuli (Go, NoGo) and to ignore auditory stimuli (S: standard, D: deviant, N: novel). Reaction times (RTs) to Go visual stimuli were longer in old and middle-aged than in young participants. In addition, in all three age groups, longer RTs were found when Go visual stimuli were preceded by novel relative to deviant and standard auditory stimuli, indicating a distraction effect provoked by novel stimuli. ERP components were identified in the Novel minus Standard (N-S) and Deviant minus Standard (D-S) difference waveforms. In the N-S condition, MMN latency was significantly longer in middle-aged and old participants than in young participants, indicating a slowing of automatic detection of changes. The following results were observed in both difference waveforms: (1) the P3a component comprised two consecutive phases in all three age groups—an early-P3a (e-P3a) that may reflect the orienting response toward the irrelevant stimulation and a late-P3a (l-P3a) that may be a correlate of subsequent evaluation of the infrequent unexpected novel or deviant stimuli; (2) the e-P3a, l-P3a, and RON latencies were significantly longer in the Middle-aged and Old groups than in the Young group, indicating delay in the orienting response to and the subsequent evaluation of unattended auditory stimuli, and in the reorienting of attention to relevant (Go) visual stimuli, respectively; and (3) a significantly smaller e-P3a amplitude in Middle-aged and Old groups, indicating a deficit in the orienting response to irrelevant novel and deviant auditory stimuli. PMID:27065004
Endogenous Sequential Cortical Activity Evoked by Visual Stimuli
Miller, Jae-eun Kang; Hamm, Jordan P.; Jackson, Jesse; Yuste, Rafael
2015-01-01
Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. PMID:26063915
Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.
Stone, Scott A; Tata, Matthew S
2017-01-01
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible. PMID:28792518
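The event-generation rule of such neuromorphic sensors is simple to state: a pixel emits an event whenever its log intensity has changed by more than a fixed contrast threshold since the last event at that pixel. A sketch that emulates this on ordinary video frames (an approximation; the DAVIS hardware is asynchronous and operates at microsecond timescales, and the threshold value here is a made-up example):

    import numpy as np

    def frame_to_events(prev_log, frame, threshold=0.2, eps=1e-6):
        """Emit DVS-style ON/OFF events from one new video frame.
        prev_log: per-pixel log intensity at the time of each pixel's last event."""
        log_i = np.log(frame.astype(float) + eps)
        diff = log_i - prev_log
        on = np.argwhere(diff >= threshold)     # pixels whose brightness increased
        off = np.argwhere(diff <= -threshold)   # pixels whose brightness decreased
        # Reset the reference only where an event fired, as in the hardware.
        fired = np.abs(diff) >= threshold
        prev_log = np.where(fired, log_i, prev_log)
        return on, off, prev_log

    # Usage: initialize prev_log from the first frame, then loop over frames;
    # event locations can then be mapped to spatialized audio cues.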
Compatibility of Motion Facilitates Visuomotor Synchronization
ERIC Educational Resources Information Center
Hove, Michael J.; Spivey, Michael J.; Krumhansl, Carol L.
2010-01-01
Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1,…
Bidelman, Gavin M
2016-10-01
Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., stimuli in which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility for responding to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows (<100 ms) were ~2-3× shorter than nonmusicians' (~200 ms), suggesting more refined multisensory integration and audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.
Cross-modal links among vision, audition, and touch in complex environments.
Ferris, Thomas K; Sarter, Nadine B
2008-02-01
This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.
Sex differences in adults' relative visual interest in female and male faces, toys, and play styles.
Alexander, Gerianne M; Charles, Nora
2009-06-01
An individual's reproductive potential appears to influence response to attractive faces of the opposite sex. Otherwise, relatively little is known about the characteristics of the adult observer that may influence his or her affective evaluation of male and female faces. An untested hypothesis (based on the proposed role of attractive faces in mate selection) is that most women would show greater interest in male faces whereas most men would show greater interest in female faces. Further, evidence from individuals with preferences for same-sex sexual partners suggests that response to attractive male and female faces may be influenced by gender-linked play preferences. To test these hypotheses, visual attention directed to sex-linked stimuli (faces, toys, play styles) was measured in 39 men and 44 women using eye tracking technology. Consistent with our predictions, men directed greater visual attention to all male-typical stimuli and visual attention to male and female faces was associated with visual attention to gender conforming or nonconforming stimuli in a manner consistent with previous research on sexual orientation. In contrast, women showed a visual preference for female-typical toys, but no visual preference for male faces or female-typical play styles. These findings indicate that sex differences in visual processing extend beyond stimuli associated with adult sexual behavior. We speculate that sex differences in visual processing are a component of the expression of gender phenotypes across the lifespan that may reflect sex differences in the motivational properties of gender-linked stimuli.
2011-01-01
Background: Anecdotal reports and a few scientific publications suggest that flyovers of helicopters at low altitude may elicit fear- or anxiety-related behavioral reactions in grazing feral and farm animals. We investigated the behavioral and physiological stress reactions of five individually housed dairy goats to different acoustic and visual stimuli from helicopters and to combinations of these stimuli under controlled environmental (indoor) conditions. The visual stimuli were helicopter animations projected on a large screen in front of the enclosures of the goats. Acoustic and visual stimuli of a tractor were also presented. On the final day of the study the goats were exposed to two flyovers (altitude 50 m and 75 m) of a Chinook helicopter while grazing in a pasture. Salivary cortisol, behavior, and heart rate of the goats were registered before, during and after stimulus presentations. Results: The goats reacted alertly to the visual and/or acoustic stimuli that were presented in their room. They raised their heads and turned their ears forward in the direction of the stimuli. There was no statistically reliable rise in the goats' average movement velocity within their enclosure, and no increase in the duration of movement, during presentation of the stimuli. Also, there was no increase in heart rate or salivary cortisol concentration during the indoor test sessions. Surprisingly, no physiological and behavioral stress responses were observed during the flyover of a Chinook at 50 m, which produced a peak noise of 110 dB. Conclusions: We conclude that the behavior and physiology of goats are unaffected by brief episodes of intense, adverse visual and acoustic stimulation such as the sight and noise of overflying helicopters. The absence of a physiological stress response and of elevated emotional reactivity of goats subjected to helicopter stimuli is discussed in relation to the design and testing schedule of this study. PMID:21496239
Auditory Emotional Cues Enhance Visual Perception
ERIC Educational Resources Information Center
Zeelenberg, Rene; Bocanegra, Bruno R.
2010-01-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…
Reduced Perceptual Exclusivity during Object and Grating Rivalry in Autism
Freyberg, J.; Robertson, C.E.; Baron-Cohen, S.
2015-01-01
Background: The dynamics of binocular rivalry may be a behavioural footprint of excitatory and inhibitory neural transmission in visual cortex. Given the presence of atypical visual features in Autism Spectrum Conditions (ASC), and evidence in support of the idea of an imbalance in excitatory/inhibitory neural transmission in ASC, we hypothesized that binocular rivalry might prove a simple behavioural marker of such a transmission imbalance in the autistic brain. In support of this hypothesis, we previously reported a slower rate of rivalry in ASC, driven by reduced perceptual exclusivity. Methods: We tested whether atypical dynamics of binocular rivalry in ASC are specific to certain stimulus features. 53 participants (26 with ASC, matched for age, sex and IQ) participated in binocular rivalry experiments in which the dynamics of rivalry were measured at two levels of stimulus complexity, low (grayscale gratings) and high (coloured objects). Results: Individuals with ASC experienced a slower rate of rivalry, driven by longer transitional states between dominant percepts. These exaggerated transitional states were present at both low and high levels of stimulus complexity, suggesting that atypical rivalry dynamics in autism are robust with respect to stimulus choice. Interactions between stimulus properties and rivalry dynamics in autism indicate that achromatic grating stimuli produce stronger group differences. Conclusion: These results confirm the finding of atypical dynamics of binocular rivalry in ASC. These dynamics were present for stimuli of both low and high levels of visual complexity, suggesting an imbalance in competitive interactions throughout the visual system of individuals with ASC. PMID:26382002
Cortical Integration of Audio-Visual Information
Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.
2013-01-01
We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442
Schneider, Till R; Hipp, Joerg F; Domnick, Claudia; Carl, Christine; Büchel, Christian; Engel, Andreas K
2018-05-26
Human faces are among the most salient visual stimuli and act both as socially and emotionally relevant signals. Faces, and especially faces with emotional expressions, receive prioritized processing in the human brain and activate a distributed network of brain areas, reflected, e.g., in enhanced oscillatory neuronal activity. However, an inconsistent picture has so far emerged regarding neuronal oscillatory activity across different frequency bands modulated by emotionally and socially relevant stimuli. The individual level of anxiety among healthy populations might be one explanation for these inconsistent findings. Therefore, we tested whether oscillatory neuronal activity is associated with individual anxiety levels during perception of faces with neutral and fearful facial expressions. We recorded neuronal activity using magnetoencephalography (MEG) in 27 healthy participants and determined their individual state anxiety levels. Images of human faces with neutral and fearful expressions, and physically matched visual control stimuli, were presented while participants performed a simple color detection task. Spectral analyses revealed that face processing, and in particular processing of fearful faces, was characterized by enhanced neuronal activity in the theta and gamma bands and decreased activity in the beta band in early visual cortex and the fusiform gyrus (FFG). Moreover, the individuals' state anxiety levels correlated positively with the gamma-band response and negatively with the beta-band response in the FFG and the amygdala. Our results suggest that oscillatory neuronal activity plays an important role in affective face processing and depends on the individual level of state anxiety. Our work provides new insights into the role of oscillatory neuronal activity underlying the processing of faces. Copyright © 2018. Published by Elsevier Inc.
The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns
Duarte, Fabiola; Lemus, Luis
2017-01-01
The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are as follows: first, subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406
Sequential Ideal-Observer Analysis of Visual Discriminations.
ERIC Educational Resources Information Center
Geisler, Wilson S.
1989-01-01
A new analysis, based on the concept of the ideal observer in signal detection theory, is described. It allows: tracing of the flow of discrimination information through the initial physiological stages of visual processing for arbitrary spatio-chromatic stimuli, and measurement of the information content of said visual stimuli. (TJH)
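A quick toy instance may help fix ideas: in the signal-detection sense used here, an ideal observer discriminating two stimuli that produce Poisson photon catches decides by log-likelihood ratio, and d' summarizes the discrimination information available. The single-receptor setup and intensities below are illustrative assumptions, not Geisler's sequential analysis itself.

```python
# Toy ideal-observer discrimination of two stimuli from Poisson photon
# catches. Intensities are illustrative, not drawn from the paper.
import numpy as np
from scipy.stats import poisson

mu_a, mu_b = 100.0, 120.0          # mean photon catches for stimuli A and B
rng = np.random.default_rng(0)
n = rng.poisson(mu_b, 10000)       # observations actually drawn from B

# Ideal decision rule: choose B whenever the log-likelihood ratio favors B.
llr = poisson.logpmf(n, mu_b) - poisson.logpmf(n, mu_a)
print((llr > 0).mean())            # ideal observer's proportion correct on B trials
print((mu_b - mu_a) / np.sqrt((mu_a + mu_b) / 2))   # approximate d'
```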
ERIC Educational Resources Information Center
Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn
2011-01-01
Several studies, using eye tracking methodology, suggest that different visual strategies in persons with autism spectrum conditions, compared with controls, are applied when viewing facial stimuli. Most eye tracking studies are, however, made in laboratory settings with either static (photos) or non-interactive dynamic stimuli, such as video…
Sex Differences in Response to Visual Sexual Stimuli: A Review
Rupp, Heather A.; Wallen, Kim
2009-01-01
This article reviews what is currently known about how men and women respond to the presentation of visual sexual stimuli. While the assumption that men respond more to visual sexual stimuli is generally empirically supported, previous reports of sex differences are confounded by the variable content of the stimuli presented and measurement techniques. We propose that the cognitive processing stage of responding to sexual stimuli is the first stage in which sex differences occur. The divergence between men and women is proposed to occur at this time, reflected in differences in neural activation, and contribute to previously reported sex differences in downstream peripheral physiological responses and subjective reports of sexual arousal. Additionally, this review discusses factors that may contribute to the variability in sex differences observed in response to visual sexual stimuli. Factors include participant variables, such as hormonal state and socialized sexual attitudes, as well as variables specific to the content presented in the stimuli. Based on the literature reviewed, we conclude that content characteristics may differentially produce higher levels of sexual arousal in men and women. Specifically, men appear more influenced by the sex of the actors depicted in the stimuli while women’s response may differ with the context presented. Sexual motivation, perceived gender role expectations, and sexual attitudes are possible influences. These differences are of practical importance to future research on sexual arousal that aims to use experimental stimuli comparably appealing to men and women and also for general understanding of cognitive sex differences. PMID:17668311
Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel
2012-01-01
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479
Differences in apparent straightness of dot and line stimuli.
NASA Technical Reports Server (NTRS)
Parlee, M. B.
1972-01-01
An investigation has been made of anisotropic responses to contoured and noncontoured stimuli to obtain an insight into the way these stimuli are processed. For this purpose, eight subjects judged the alignment of minimally contoured (3 dot) and contoured (line) stimuli. Stimuli, presented to each eye separately, vertically subtended either 8 or 32 deg visual angle and were located 10 deg left, center, or 10 deg right in the visual field. Location-dependent deviations from physical straightness were larger for dot stimuli than for lines. The results were the same for the two eyes. In a second experiment, subjects judged the alignment of stimuli composed of different densities of dots. Apparent straightness for these stimuli was the same as for lines. The results are discussed in terms of alternative mechanisms for analysis of contoured and minimally contoured stimuli.
Evolutionary relevance facilitates visual information processing.
Jackson, Russell E; Calvillo, Dusti P
2013-11-03
Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.
Marschall-Lévesque, Shawn; Rouleau, Joanne-Lucine; Renaud, Patrice
2018-02-01
Penile plethysmography (PPG) is a measure of sexual interests that relies heavily on the stimuli it uses to generate valid results. Ethical considerations surrounding the use of real images in PPG have further limited the content admissible for these stimuli. To palliate this limitation, the current study aimed to combine audio and visual stimuli by incorporating computer-generated characters to create new stimuli capable of accurately classifying sex offenders with child victims, while also increasing the number of valid profiles. Three modalities (audio, visual, and audiovisual) were compared using two groups (15 sex offenders with child victims and 15 non-offenders). Both the new visual and audiovisual stimuli resulted in a 13% increase in the number of valid profiles at 2.5 mm, when compared to the standard audio stimuli. Furthermore, the new audiovisual stimuli generated a 34% increase in penile responses. All three modalities were able to discriminate between the two groups by their responses to the adult and child stimuli. Lastly, sexual interest indices for all three modalities could accurately classify participants in their appropriate groups, as demonstrated by ROC curve analysis (i.e., audio AUC = .81, 95% CI [.60, 1.00]; visual AUC = .84, 95% CI [.66, 1.00], and audiovisual AUC = .83, 95% CI [.63, 1.00]). Results suggest that computer-generated characters allow accurate discrimination of sex offenders with child victims and can be added to already validated stimuli to increase the number of valid profiles. The implications of audiovisual stimuli using computer-generated characters and their possible use in PPG evaluations are also discussed.
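For readers who want to reproduce this style of analysis, the ROC classification step sketches easily in Python. The group labels and interest-index scores below are invented placeholders, and the percentile bootstrap is an assumption standing in for whatever interval method the study used.

```python
# Sketch of classifying groups from a sexual-interest index via ROC analysis.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = np.array([1] * 15 + [0] * 15)               # 1 = offender group, 0 = comparison
scores = np.concatenate([rng.normal(1.0, 1.0, 15),   # hypothetical interest indices
                         rng.normal(0.0, 1.0, 15)])

auc = roc_auc_score(labels, scores)

# Percentile bootstrap for a 95% CI, comparable in spirit to the reported intervals.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(labels), len(labels))
    if labels[idx].min() != labels[idx].max():       # need both classes in the resample
        boot.append(roc_auc_score(labels[idx], scores[idx]))
print(f"AUC = {auc:.2f}, 95% CI [{np.percentile(boot, 2.5):.2f}, "
      f"{np.percentile(boot, 97.5):.2f}]")
```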
Spiegel, Daniel P; Reynaud, Alexandre; Ruiz, Tatiana; Laguë-Beauvais, Maude; Hess, Robert; Farivar, Reza
2016-05-01
Vision is disrupted by traumatic brain injury (TBI), with vision-related complaints being amongst the most common in this population. Based on the neural responses of early visual cortical areas, injury to the visual cortex would be predicted to affect both first-order and second-order contrast sensitivity functions (CSFs): the height and/or the cut-off of the CSF are expected to be affected by TBI. Previous studies have reported disruptions only in second-order contrast sensitivity, but using a narrow range of parameters and divergent methodologies; no study has characterized the effect of TBI on the full CSF for both first- and second-order stimuli. Such information is needed to properly understand the effect of TBI on contrast perception, which underlies all visual processing. Using a unified framework based on the quick contrast sensitivity function, we measured full CSFs for static and dynamic first- and second-order stimuli. Our results provide a unique dataset showing alterations in sensitivity for both first- and second-order visual stimuli. In particular, we show that TBI patients have increased sensitivity for first-order motion stimuli and decreased sensitivity to orientation-defined and contrast-defined second-order stimuli. In addition, our data suggest that TBI patients' sensitivity for both first-order stimuli and second-order contrast-defined stimuli is shifted towards higher spatial frequencies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
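The quick CSF approach referenced here typically models the CSF as a truncated log-parabola with four parameters (peak gain, peak frequency, bandwidth, and low-frequency truncation). Below is a minimal sketch of that functional form; the parameter values are illustrative defaults, not fits from this study.

```python
# Truncated log-parabola CSF commonly used by the quick CSF (qCSF) framework.
import numpy as np

def log_csf(f, log_gain=2.0, log_f_peak=0.5, octave_bw=3.0, trunc=0.5):
    """log10 contrast sensitivity at spatial frequency f (cycles/deg).

    log_gain: log10 peak sensitivity; log_f_peak: log10 peak frequency;
    octave_bw: full bandwidth at half height, in octaves;
    trunc: low-frequency truncation depth, in log10 units.
    """
    kappa = np.log10(2.0)
    beta = octave_bw * np.log10(2.0)          # bandwidth in log10 units
    s = log_gain - kappa * ((np.log10(f) - log_f_peak) / (beta / 2.0)) ** 2
    # Below the peak, sensitivity plateaus rather than falling without bound.
    low = np.log10(f) < log_f_peak
    return np.where(low, np.maximum(s, log_gain - trunc), s)

freqs = np.logspace(-0.5, 1.5, 9)             # ~0.3 to ~30 cycles/deg
print(np.round(log_csf(freqs), 2))
```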
Fixating at far distance shortens reaction time to peripheral visual stimuli at specific locations.
Kokubu, Masahiro; Ando, Soichi; Oda, Shingo
2018-01-18
The purpose of the present study was to examine whether the fixation distance in real three-dimensional space affects manual reaction time to peripheral visual stimuli. Light-emitting diodes were used for presenting a fixation point and four peripheral visual stimuli. The visual stimuli were located at a distance of 45 cm and at 25° in the left, right, upper, and lower directions from the sagittal axis including the fixation point. Near (30 cm), Middle (45 cm), Far (90 cm), and Very Far (300 cm) fixation distance conditions were used. When one of the four visual stimuli was randomly illuminated, the participants released a button as quickly as possible. Results showed that overall peripheral reaction time decreased as the fixation distance increased. The significant interaction between fixation distance and stimulus location indicated that the effect of fixation distance on reaction time was observed at the left, right, and upper locations but not at the lower location. These results suggest that fixating at far distance would contribute to faster reaction and that the effect is specific to locations in the peripheral visual field. The present findings are discussed in terms of viewer-centered representation, the focus of attention in depth, and visual field asymmetry related to neurological and psychological aspects. Copyright © 2017 Elsevier B.V. All rights reserved.
Dores, A R; Almeida, I; Barbosa, F; Castelo-Branco, M; Monteiro, L; Reis, M; de Sousa, L; Caldas, A Castro
2013-01-01
Examining changes in brain activation linked with emotion-inducing stimuli is essential to the study of emotions. Due to the ecological potential of techniques such as virtual reality (VR), it is important to examine whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images. The current study sought to test whether the activation of brain areas involved in the emotional processing of scenarios of different valences can be modulated by 3D. Therefore, the focus was on the interaction effect between emotion-inducing stimuli of different emotional valences (pleasant, unpleasant and neutral valences) and visualization types (2D, 3D). However, main effects were also analyzed. The effects of emotional valence and visualization type and their interaction were analyzed through a 3 × 2 repeated measures ANOVA. Post-hoc t-tests were performed under a ROI-analysis approach. The results show increased brain activation for the 3D affective-inducing stimuli in comparison with the same stimuli in 2D scenarios, mostly in cortical and subcortical regions that are related to emotional processing, in addition to visual processing regions. This study has the potential to clarify brain mechanisms involved in the processing of emotional stimuli (scenarios' valence) and their interaction with three-dimensionality.
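The 3 × 2 repeated-measures design described here maps directly onto statsmodels' AnovaRM. A minimal sketch with made-up activation values and illustrative column names:

```python
# 3 x 2 repeated-measures ANOVA (valence x visualization type) in statsmodels.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for subj in range(12):
    for valence in ("pleasant", "unpleasant", "neutral"):
        for view in ("2D", "3D"):
            rows.append({"subject": subj, "valence": valence, "view": view,
                         "activation": rng.normal(1.0 if view == "3D" else 0.8, 0.3)})
df = pd.DataFrame(rows)

# F tests for the two main effects and the valence x view interaction.
print(AnovaRM(df, depvar="activation", subject="subject",
              within=["valence", "view"]).fit())
```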
Chernyshev, Boris V; Bryzgalov, Dmitri V; Lazarev, Ivan E; Chernysheva, Elena G
2016-08-03
Current understanding of feature binding remains controversial. Studies involving mismatch negativity (MMN) measurement show a low level of binding, whereas behavioral experiments suggest a higher level. We examined the possibility that the two levels of feature binding coexist and may be shown within one experiment. The electroencephalogram was recorded while participants were engaged in an auditory two-alternative choice task, which was a combination of the oddball and the condensation tasks. Two types of deviant target stimuli were used: complex stimuli, which required feature conjunction to be identified, and simple stimuli, which differed from standard stimuli in a single feature. Two behavioral outcomes, correct responses and errors, were analyzed separately. Responses to complex stimuli were slower and less accurate than responses to simple stimuli. MMN was prominent and its amplitude was similar for simple and complex stimuli, even though these stimuli differed from standards in one and two features, respectively. Errors in response to complex stimuli only were associated with decreased MMN amplitude. P300 amplitude was greater for complex stimuli than for simple stimuli. Our data are compatible with the explanation that feature binding in the auditory modality depends on two concurrent levels of processing. We speculate that the earlier level, related to MMN generation, is an essential and critical stage. Yet a later analysis is also carried out, affecting P300 amplitude and response time. The current findings provide resolution to conflicting views on the nature of feature binding and show that feature binding is a distributed multilevel process.
Is improved contrast sensitivity a natural consequence of visual training?
Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.
2015-01-01
Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736
Shades of yellow: interactive effects of visual and odour cues in a pest beetle
Stevenson, Philip C.; Belmain, Steven R.
2016-01-01
Background: The visual ecology of pest insects is poorly studied compared to the role of odour cues in determining their behaviour. Furthermore, the combined effects of both odour and vision on insect orientation are frequently ignored, but could impact behavioural responses. Methods: A locomotion compensator was used to evaluate use of different visual stimuli by a major coleopteran pest of stored grains (Sitophilus zeamais), with and without the presence of host odours (known to be attractive to this species), in an open-loop setup. Results: Some visual stimuli—in particular, one shade of yellow, solid black and high-contrast black-against-white stimuli—elicited positive orientation behaviour from the beetles in the absence of odour stimuli. When host odours were also present, at 90° to the source of the visual stimulus, the beetles presented with yellow and vertical black-on-white grating patterns changed their walking course and typically adopted a path intermediate between the two stimuli. The beetles presented with a solid black-on-white target continued to orient more strongly towards the visual than the odour stimulus. Discussion: Visual stimuli can strongly influence orientation behaviour, even in species where use of visual cues is sometimes assumed to be unimportant, while the outcomes from exposure to multimodal stimuli are unpredictable and need to be determined under differing conditions. The importance of the two modalities of stimulus (visual and olfactory) in food location is likely to depend upon relative stimulus intensity and motivational state of the insect. PMID:27478707
Virtual reality stimuli for force platform posturography.
Tossavainen, Timo; Juhola, Martti; Ilmari, Pyykö; Aalto, Heikki; Toppila, Esko
2002-01-01
People who rely heavily on vision for postural control are known to have an elevated risk of falling. Dependence on visual control is an important parameter in the diagnosis of balance disorders. We have previously shown that virtual reality methods can be used to produce visual stimuli that affect balance, but suitable stimuli need to be found. In this study, the effect of six different virtual reality stimuli on the balance of 22 healthy test subjects was evaluated using force platform posturography. According to the tests, two of the stimuli have a significant effect on balance.
Neural Basis of Visual Attentional Orienting in Childhood Autism Spectrum Disorders.
Murphy, Eric R; Norr, Megan; Strang, John F; Kenworthy, Lauren; Gaillard, William D; Vaidya, Chandan J
2017-01-01
We examined spontaneous attention orienting to visual salience in stimuli without social significance using a modified Dot-Probe task during functional magnetic resonance imaging in high-functioning preadolescent children with Autism Spectrum Disorder (ASD) and age- and IQ-matched control children. While the magnitude of attentional bias (faster response to probes in the location of solid color patch) to visually salient stimuli was similar in the groups, activation differences in frontal and temporoparietal regions suggested hyper-sensitivity to visual salience or to sameness in ASD children. Further, activation in a subset of those regions was associated with symptoms of restricted and repetitive behavior. Thus, atypicalities in response to visual properties of stimuli may drive attentional orienting problems associated with ASD.
Moors, Pieter; Wagemans, Johan; de-Wit, Lee
2014-01-01
Continuous flash suppression (CFS) is a powerful interocular suppression technique, which is often described as an effective means to reliably suppress stimuli from visual awareness. Suppression through CFS has been assumed to depend upon a reduction in (retinotopically specific) neural adaptation caused by the continual updating of the contents of the visual input to one eye. In this study, we started from the observation that suppressing a moving stimulus through CFS appeared to be more effective when using a mask that was actually more prone to retinotopically specific neural adaptation, but in which the properties of the mask were more similar to those of the to-be-suppressed stimulus. In two experiments, we find that using a moving Mondrian mask (i.e., one that includes motion) is more effective in suppressing a moving stimulus than a regular CFS mask. The observed pattern of results cannot be explained by a simple simulation that computes the degree of retinotopically specific neural adaptation over time, suggesting that this kind of neural adaptation does not play a large role in predicting the differences between conditions in this context. We also find some evidence consistent with the idea that the most effective CFS mask is the one that matches the properties (speed) of the suppressed stimulus. These results question the general importance of retinotopically specific neural adaptation in CFS, and potentially help to explain an implicit trend in the literature to adapt one's CFS mask to match one's to-be-suppressed stimuli. Finally, the results should help to guide the methodological development of future research where continuous suppression of moving stimuli is desired.
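One plausible way to formalize the simulation described above is a leaky accumulator of retinotopically local adaptation that builds while a pixel's content is unchanged and resets when the mask updates it. The constants and frame streams below are illustrative assumptions, not the authors' implementation.

```python
# Toy accumulator of retinotopically specific adaptation under a CFS mask.
import numpy as np

def adaptation_index(frames, charge=0.1, decay=0.02):
    """Mean accumulated adaptation over a sequence of mask frames.

    Adaptation grows at pixels whose content is unchanged between frames
    and collapses wherever the mask is updated.
    """
    a = np.zeros_like(frames[0])
    prev = frames[0]
    for f in frames[1:]:
        unchanged = np.isclose(f, prev)
        a = np.where(unchanged, a + charge * (1 - a), 0.0)  # reset where updated
        a -= decay * a                                      # slow recovery everywhere
        prev = f
    return a.mean()

rng = np.random.default_rng(0)
shape = (32, 32)
static_mask = [rng.random(shape)] * 100                 # frame never updates
refreshing_mask = [rng.random(shape) for _ in range(100)]  # fully new each frame
# Continual updating keeps retinotopic adaptation low, as the abstract assumes.
print(adaptation_index(static_mask), adaptation_index(refreshing_mask))
```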
Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano
2017-01-01
The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have higher spatial accuracy at the central azimuthal coordinate and lower accuracy at the periphery. Moreover, their prior probability is higher at the center, and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in case of spatially disparate stimuli. Moreover, the ventriloquism decreases with eccentricity. PMID:29046631
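The Bayesian estimate that the network is shown to approximate has a simple closed form for Gaussian likelihoods and a Gaussian prior: the posterior mean is an inverse-variance-weighted combination of the cues and the prior. A minimal sketch, with illustrative noise values (visual noise small, auditory noise larger, a central prior):

```python
# Closed-form Bayesian cue combination with a Gaussian central prior.
def bayes_estimate(cues, sigmas, prior_mu=0.0, prior_sigma=20.0):
    """cues/sigmas: per-modality position estimates (deg) and noise SDs."""
    w = [1 / s**2 for s in sigmas] + [1 / prior_sigma**2]
    x = list(cues) + [prior_mu]
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

# Congruent audio-visual stimulus at 10 deg: the fused estimate is pulled
# slightly toward the central prior (the "bias toward the fovea").
print(bayes_estimate([10.0, 10.0], sigmas=[2.0, 8.0]))
# Spatially disparate cues: the reliable visual cue dominates (ventriloquism).
print(bayes_estimate([10.0, 20.0], sigmas=[2.0, 8.0]))
```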
ERIC Educational Resources Information Center
Baeken, Chris; Van Schuerbeek, Peter; De Raedt, Rudi; Vanderhasselt, Marie-Anne; De Mey, Johan; Bossuyt, Axel; Luypaert, Robert
2012-01-01
The amygdalae are key players in the processing of a variety of emotional stimuli. Especially aversive visual stimuli have been reported to attract attention and activate the amygdalae. However, as it has been argued that passively viewing withdrawal-related images could attenuate instead of activate amygdalae neuronal responses, its role under…
ERIC Educational Resources Information Center
Guo, Jing; McLeod, Poppy Lauretta
2014-01-01
Drawing upon the Search for Ideas in Associative Memory (SIAM) model as the theoretical framework, the impact of heterogeneity and topic relevance of visual stimuli on ideation performance was examined. Results from a laboratory experiment showed that visual stimuli increased productivity and diversity of idea generation, that relevance to the…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-25
.... Acoustic and visual stimuli generated by: (1) Helicopter landings/takeoffs; (2) noise generated during... minimize acoustic and visual disturbances) as described in NMFS' December 22, 2010 (75 FR 80471) notice of... Activity on Marine Mammals Acoustic and visual stimuli generated by: (1) Helicopter landings/ takeoffs; (2...
Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan
2017-06-01
The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.
Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam
2013-01-01
Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period degraded memory accuracy in both the visual and auditory domains. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. PMID:23791629
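Post-stimulus alpha suppression of the kind analyzed above is typically quantified as a drop in alpha-band (8-12 Hz) power relative to a pre-stimulus baseline. A minimal sketch on synthetic single-channel data, using scipy's Welch PSD:

```python
# Alpha-band (8-12 Hz) power before vs. after stimulus onset, synthetic data.
import numpy as np
from scipy.signal import welch

fs = 250
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1 / fs)
pre = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)         # strong alpha
post = 0.3 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)  # suppressed

def alpha_power(x):
    f, p = welch(x, fs=fs, nperseg=fs)
    return p[(f >= 8) & (f <= 12)].mean()

print(alpha_power(post) / alpha_power(pre))   # ratio < 1 indicates suppression
```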
Iconic-Memory Processing of Unfamiliar Stimuli by Retarded and Nonretarded Individuals.
ERIC Educational Resources Information Center
Hornstein, Henry A.; Mosley, James L.
1979-01-01
The iconic-memory processing of unfamiliar stimuli by 11 mentally retarded males (mean age 22 years) was undertaken employing a visually cued partial-report procedure and a visual masking procedure. (Author/CL)
Parsons, Thomas D.
2015-01-01
An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target’s internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences. PMID:26696869
Asymmetric neighborhood functions accelerate ordering process of self-organizing maps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ota, Kaiichiro; Aoki, Takaaki; Kurata, Koji
2011-02-15
A self-organizing map (SOM) algorithm can generate a topographic map from a high-dimensional stimulus space to a low-dimensional array of units. Because a topographic map preserves neighborhood relationships between the stimuli, the SOM can be applied to certain types of information processing such as data visualization. During the learning process, however, topological defects frequently emerge in the map. The presence of defects tends to drastically slow down the formation of a globally ordered topographic map. To remove such topological defects, it has been reported that an asymmetric neighborhood function is effective, but only in the simple case of mapping one-dimensional stimuli to a chain of units. In this paper, we demonstrate that even when high-dimensional stimuli are used, the asymmetric neighborhood function is effective for both artificial and real-world data. Our results suggest that applying the asymmetric neighborhood function to the SOM algorithm improves the reliability of the algorithm. In addition, it enables processing of complicated, high-dimensional data by using this algorithm.
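A minimal sketch of the idea on a 1D map: the usual Gaussian neighborhood is centered beside, rather than on, the best-matching unit, which is the kind of asymmetry reported to accelerate ordering. The shift form and constants are illustrative, not the paper's exact scheme.

```python
# 1D SOM update step with an asymmetric (shifted-Gaussian) neighborhood.
import numpy as np

rng = np.random.default_rng(0)
n_units, dim = 50, 3
W = rng.random((n_units, dim))                 # weight vectors of the chain
pos = np.arange(n_units, dtype=float)          # unit positions along the chain

def train_step(x, lr=0.1, sigma=5.0, shift=2.0):
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
    # Asymmetric neighborhood: Gaussian centered *beside* the BMU.
    h = np.exp(-((pos - (pos[bmu] + shift)) ** 2) / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)

for _ in range(5000):
    train_step(rng.random(dim))                # random high-dimensional stimuli
```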
Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems
NASA Astrophysics Data System (ADS)
Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; Del Giudice, Paolo
2015-10-01
Neuromorphic chips embody computational principles operating in the nervous system into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
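The attractor-memory primitive described here is easiest to see in its simplest software form, a Hopfield network: prototypes are stored in the synaptic matrix, and relaxation from a corrupted initial state retrieves the nearest stored pattern. This illustrates the computational principle, not the chip's circuitry.

```python
# Hopfield-style associative memory: store prototypes, relax to an attractor.
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 100))       # stored prototype patterns
W = (patterns.T @ patterns) / 100.0                 # Hebbian synaptic matrix
np.fill_diagonal(W, 0.0)                            # no self-connections

state = patterns[0].copy()
state[rng.choice(100, 20, replace=False)] *= -1     # corrupt the stimulus
for _ in range(10):                                 # relax toward the attractor
    state = np.where(W @ state >= 0, 1, -1)
print((state == patterns[0]).mean())                # typically recovers 1.0
```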
Immediate effects of anticipatory coarticulation in spoken-word recognition
Salverda, Anne Pier; Kleinschmidt, Dave; Tanenhaus, Michael K.
2014-01-01
Two visual-world experiments examined listeners’ use of pre-word-onset anticipatory coarticulation in spoken-word recognition. Experiment 1 established the shortest lag with which information in the speech signal influences eye-movement control, using stimuli such as “The … ladder is the target”. With a neutral token of the definite article preceding the target word, saccades to the referent were not more likely than saccades to an unrelated distractor until 200–240 ms after the onset of the target word. In Experiment 2, utterances contained definite articles carrying natural anticipatory coarticulation pertaining to the onset of the target word (“The ladder … is the target”). A simple Gaussian classifier was able to predict the initial sound of the upcoming target word from formant information from the first few pitch periods of the article’s vowel. With these stimuli, effects of speech on eye-movement control began about 70 ms earlier than in Experiment 1, suggesting rapid use of anticipatory coarticulation. The results are interpreted as support for “data explanation” approaches to spoken-word recognition. Methodological implications for visual-world studies are also discussed. PMID:24511179
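The "simple Gaussian classifier" can be sketched as per-class Gaussians over formant measurements. The (F1, F2) values below are invented stand-ins for measurements from the article's vowel, not the study's data.

```python
# Gaussian classifier predicting the upcoming word's onset from formants.
import numpy as np
from sklearn.naive_bayes import GaussianNB   # fits one Gaussian per class

rng = np.random.default_rng(0)
# Hypothetical (F1, F2) values in Hz from "the" tokens preceding /l/- vs.
# /b/-initial words; coarticulation shifts the formants slightly by class.
X_l = rng.normal([500, 1600], [40, 90], size=(40, 2))
X_b = rng.normal([520, 1450], [40, 90], size=(40, 2))
X = np.vstack([X_l, X_b])
y = np.array(["l"] * 40 + ["b"] * 40)

clf = GaussianNB().fit(X, y)
print(clf.predict([[505, 1580]]))                 # predicted upcoming onset
print(clf.predict_proba([[505, 1580]]).round(2))  # classifier confidence
```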
Visual-auditory integration during speech imitation in autism.
Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas
2004-01-01
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.
Axonal Conduction Delays, Brain State, and Corticogeniculate Communication
2017-01-01
Thalamocortical conduction times are short, but layer 6 corticothalamic axons display an enormous range of conduction times, some exceeding 40–50 ms. Here, we investigate (1) how axonal conduction times of corticogeniculate (CG) neurons are related to the visual information conveyed to the thalamus, and (2) how alert versus nonalert awake brain states affect visual processing across the spectrum of CG conduction times. In awake female Dutch-Belted rabbits, we found 58% of CG neurons to be visually responsive, and 42% to be unresponsive. All responsive CG neurons had simple, orientation-selective receptive fields, and generated sustained responses to stationary stimuli. CG axonal conduction times were strongly related to modulated firing rates (F1 values) generated by drifting grating stimuli, and their associated interspike interval distributions, suggesting a continuum of visual responsiveness spanning the spectrum of axonal conduction times. CG conduction times were also significantly related to visual response latency, contrast sensitivity (C50 values), directional selectivity, and optimal stimulus velocity. Increasing alertness did not cause visually unresponsive CG neurons to become responsive and did not change the response linearity (F1/F0 ratios) of visually responsive CG neurons. However, for visually responsive CG neurons, increased alertness nearly doubled the modulated response amplitude to optimal visual stimulation (F1 values), significantly shortened response latency, and dramatically increased response reliability. These effects of alertness were uniform across the broad spectrum of CG axonal conduction times. SIGNIFICANCE STATEMENT Corticothalamic neurons of layer 6 send a dense feedback projection to thalamic nuclei that provide input to sensory neocortex. While sensory information reaches the cortex after brief thalamocortical axonal delays, corticothalamic axons can exhibit conduction delays of <2 ms to 40–50 ms. Here, in the corticogeniculate visual system of awake rabbits, we investigate the functional significance of this axonal diversity, and the effects of shifting alert/nonalert brain states on corticogeniculate processing. We show that axonal conduction times are strongly related to multiple visual response properties, suggesting a continuum of visual responsiveness spanning the spectrum of corticogeniculate axonal conduction times. We also show that transitions between awake brain states powerfully affect corticogeniculate processing, in some ways more strongly than in layer 4. PMID:28559382
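The F0 (mean rate) and F1 (response modulation at the grating's drift frequency) measures used above can be computed directly from spike times as the Fourier component of the spike train at the stimulus frequency. The spike data and drift frequency below are synthetic.

```python
# F0/F1 analysis of a spike train responding to a drifting grating.
import numpy as np

def f0_f1(spike_times, drift_hz, duration):
    f0 = len(spike_times) / duration                       # mean firing rate
    phase = 2 * np.pi * drift_hz * np.asarray(spike_times)
    # Fourier component at the stimulus frequency (factor 2 gives amplitude).
    f1 = 2 * np.abs(np.sum(np.exp(-1j * phase))) / duration
    return f0, f1

# Synthetic spikes whose rate is fully modulated at 2 Hz (rejection sampling).
rng = np.random.default_rng(0)
t = rng.uniform(0, 4.0, 2000)
keep = rng.random(t.size) < 0.2 * 0.5 * (1 + np.sin(2 * np.pi * 2.0 * t))
spikes = np.sort(t[keep])

f0, f1 = f0_f1(spikes, drift_hz=2.0, duration=4.0)
print(f0, f1, f1 / f0)   # F1/F0 near 1, as expected for a fully modulated rate
```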
Influences of selective adaptation on perception of audiovisual speech
Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.
2016-01-01
Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781
Visual search and contextual cueing: differential effects in 10-year-old children and adults.
Couperus, Jane W; Hunt, Ruskin H; Nelson, Charles A; Thomas, Kathleen M
2011-02-01
The development of contextual cueing specifically in relation to attention was examined in two experiments. Adult and 10-year-old participants completed a contextual cueing visual search task (Jiang & Chun, The Quarterly Journal of Experimental Psychology, 54A(4), 1105-1124, 2001) containing stimuli presented in an attended (e.g., red) and unattended (e.g., green) color. When the spatial configuration of stimuli in the attended and unattended color was invariant and consistently paired with the target location, adult reaction times improved, demonstrating learning. Learning also occurred if only the attended stimuli's configuration remained fixed. In contrast, while 10-year-olds, like adults, showed incrementally slower reaction times as the number of attended stimuli increased, they did not show learning in the standard paradigm. However, they did show learning when the ratio of attended to unattended stimuli was high, irrespective of the total number of attended stimuli. Findings suggest children show efficient attentional guidance by color in visual search but differences in contextual cueing.
Song, Jae-Jin; Lee, Hyo-Jeong; Kang, Hyejin; Lee, Dong Soo; Chang, Sun O; Oh, Seung Ha
2015-03-01
While deafness-induced plasticity has been investigated in the visual and auditory domains, not much is known about language processing in audiovisual multimodal environments for patients with restored hearing via cochlear implant (CI) devices. Here, we examined the effect of agreeing or conflicting visual inputs on auditory processing in deaf patients equipped with degraded artificial hearing. Ten post-lingually deafened CI users with good performance, along with matched control subjects, underwent H2(15)O positron emission tomography scans while carrying out a behavioral task requiring the extraction of speech information from unimodal auditory stimuli, bimodal audiovisual congruent stimuli, and incongruent stimuli. Regardless of congruency, the control subjects demonstrated activation of the auditory and visual sensory cortices, as well as the superior temporal sulcus, the classical multisensory integration area, indicating a bottom-up multisensory processing strategy. Compared to CI users, the control subjects exhibited activation of the right ventral premotor-supramarginal pathway. In contrast, CI users activated primarily the visual cortices more in the congruent audiovisual condition than in the null condition. In addition, compared to controls, CI users displayed an activation focus in the right amygdala for congruent audiovisual stimuli. The most notable difference between the two groups was an activation focus in the left inferior frontal gyrus in CI users confronted with incongruent audiovisual stimuli, suggesting top-down cognitive modulation for audiovisual conflict. Correlation analysis revealed that good speech performance was positively correlated with right amygdala activity for the congruent condition, but negatively correlated with activity in bilateral visual cortices regardless of congruency. Taken together, these results suggest that for multimodal inputs, cochlear implant users are more vision-reliant when processing congruent stimuli and are disturbed more by visual distractors when confronted with incongruent audiovisual stimuli. To cope with this multimodal conflict, CI users activate the left inferior frontal gyrus to adopt a top-down cognitive modulation pathway, whereas normal hearing individuals primarily adopt a bottom-up strategy.
Raymond, J L; Lisberger, S G
1996-12-01
We characterized the dependence of motor learning in the monkey vestibulo-ocular reflex (VOR) on the duration, frequency, and relative timing of the visual and vestibular stimuli used to induce learning. The amplitude of the VOR was decreased or increased through training with paired head and visual stimulus motion in the same or opposite directions, respectively. For training stimuli that consisted of simultaneous pulses of head and target velocity 80-1000 msec in duration, brief stimuli caused small changes in the amplitude of the VOR, whereas long stimuli caused larger changes in amplitude as well as changes in the dynamics of the reflex. When the relative timing of the visual and vestibular stimuli was varied, brief image motion paired with the beginning of a longer vestibular stimulus caused changes in the amplitude of the reflex alone, but the same image motion paired with a later time in the vestibular stimulus caused changes in the dynamics as well as the amplitude of the VOR. For training stimuli that consisted of sinusoidal head and visual stimulus motion, low-frequency training stimuli induced frequency-selective changes in the VOR, as reported previously, whereas high-frequency training stimuli induced changes in the amplitude of the VOR that were more similar across test frequency. The results suggest that there are at least two distinguishable components of motor learning in the VOR. One component is induced by short-duration or high-frequency stimuli and involves changes in only the amplitude of the reflex. A second component is induced by long-duration or low-frequency stimuli and involves changes in the amplitude and dynamics of the VOR.
Computational model for perception of objects and motions.
Yang, WenLu; Zhang, LiQing; Ma, LiBo
2008-06-01
Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former perceives object properties such as form, color, and texture, and the latter perceives 'where', for example, the velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and a computational mechanism for training the perceptual model. The computational model is a three-layer network. The first layer is the input layer, which receives the stimuli from natural environments. The second layer is designed for representing the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on the principle of sparse neural representation. To this end, we introduce Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm by minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpass. The resultant receptive fields of neurons in the second layer have characteristics resembling those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.
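A sketch in the same family as the learning scheme described: basis functions (receptive fields) adapt to inputs under a sparseness pressure, Olshausen-Field style. The paper's specific Kullback-Leibler cost is not reproduced here; the random "patches" and constants are illustrative.

```python
# Sparse-coding-style learning of basis functions (receptive fields).
import numpy as np

rng = np.random.default_rng(0)
patch_dim, n_basis = 64, 32                   # 8x8 patches, 32 basis functions
A = rng.normal(0, 0.1, (patch_dim, n_basis))  # basis functions as columns

def infer(x, A, lam=0.1, steps=50, lr=0.05):
    """Sparse coefficients via gradient descent on ||x - A s||^2 + lam*|s|_1."""
    s = np.zeros(A.shape[1])
    for _ in range(steps):
        s += lr * (A.T @ (x - A @ s) - lam * np.sign(s))
    return s

for _ in range(2000):                         # learning loop over input patches
    x = rng.normal(0, 1, patch_dim)           # stand-in for a natural image patch
    s = infer(x, A)
    A += 0.01 * np.outer(x - A @ s, s)        # Hebbian-like residual update
    A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-9   # keep columns bounded
```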
Multimodal emotion perception after anterior temporal lobectomy (ATL)
Milesi, Valérie; Cekic, Sezen; Péron, Julie; Frühholz, Sascha; Cristinzio, Chiara; Seeck, Margitta; Grandjean, Didier
2014-01-01
In the context of emotion information processing, several studies have demonstrated the involvement of the amygdala in emotion perception, for unimodal and multimodal stimuli. However, it seems that not only the amygdala, but several regions around it, may also play a major role in multimodal emotional integration. In order to investigate the contribution of these regions to multimodal emotion perception, five patients who had undergone unilateral anterior temporal lobe resection were exposed to both unimodal (vocal or visual) and audiovisual emotional and neutral stimuli. In a classic paradigm, participants were asked to rate the emotional intensity of angry, fearful, joyful, and neutral stimuli on visual analog scales. Compared with matched controls, patients exhibited impaired categorization of joyful expressions, whether the stimuli were auditory, visual, or audiovisual. Patients confused joyful faces with neutral faces, and joyful prosody with surprise. In the case of fear, unlike matched controls, patients provided lower intensity ratings for visual stimuli than for vocal and audiovisual ones. Fearful faces were frequently confused with surprised ones. When we controlled for lesion size, we no longer observed any overall difference between patients and controls in their ratings of emotional intensity on the target scales. Lesion size had the greatest effect on intensity perceptions and accuracy in the visual modality, irrespective of the type of emotion. These new findings suggest that a damaged amygdala, or a disrupted bundle between the amygdala and the ventral part of the occipital lobe, has a greater impact on emotion perception in the visual modality than it does in either the vocal or audiovisual one. We can surmise that patients are able to use the auditory information contained in multimodal stimuli to compensate for difficulty processing visually conveyed emotion. PMID:24839437
The measurement and treatment of suppression in amblyopia.
Black, Joanna M; Hess, Robert F; Cooperstock, Jeremy R; To, Long; Thompson, Benjamin
2012-12-14
Amblyopia, a developmental disorder of the visual cortex, is one of the leading causes of visual dysfunction in the working age population. Current estimates put the prevalence of amblyopia at approximately 1-3%(1-3), the majority of cases being monocular(2). Amblyopia is most frequently caused by ocular misalignment (strabismus), blur induced by unequal refractive error (anisometropia), and in some cases by form deprivation. Although amblyopia is initially caused by abnormal visual input in infancy, once established, the visual deficit often remains when normal visual input has been restored using surgery and/or refractive correction. This is because amblyopia is the result of abnormal visual cortex development rather than a problem with the amblyopic eye itself(4,5). Amblyopia is characterized by both monocular and binocular deficits(6,7), which include impaired visual acuity and poor or absent stereopsis, respectively. The visual dysfunction in amblyopia is often associated with a strong suppression of the inputs from the amblyopic eye under binocular viewing conditions(8). Recent work has indicated that suppression may play a central role in both the monocular and binocular deficits associated with amblyopia(9,10). Current clinical tests for suppression tend to verify the presence or absence of suppression rather than giving a quantitative measurement of the degree of suppression. Here we describe a technique for measuring amblyopic suppression with a compact, portable device(11,12). The device consists of a laptop computer connected to a pair of virtual reality goggles. The novelty of the technique lies in the way we present visual stimuli to measure suppression. Stimuli are shown to the amblyopic eye at high contrast while the contrast of the stimuli shown to the non-amblyopic eye is varied. Patients perform a simple signal/noise task that allows for a precise measurement of the strength of excitatory binocular interactions. The contrast offset at which neither eye has a performance advantage is a measure of the "balance point" and is a direct measure of suppression. This technique has been validated psychophysically both in control(13,14) and patient(6,9,11) populations. In addition to measuring suppression, this technique also forms the basis of a novel form of treatment to decrease suppression over time and improve binocular and often monocular function in adult patients with amblyopia(12,15,16). This new treatment approach can be deployed either on the goggle system described above or on a specially modified iPod touch device(15).
Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.
2018-01-01
Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292
Cognitive workload modulation through degraded visual stimuli: a single-trial EEG study
NASA Astrophysics Data System (ADS)
Yu, K.; Prasad, I.; Mir, H.; Thakor, N.; Al-Nashash, H.
2015-08-01
Objective. Our experiments explored the effect of visual stimulus degradation on cognitive workload. Approach. We investigated subjective assessment, event-related potentials (ERPs), and the electroencephalogram (EEG) as measures of cognitive workload. Main results. These experiments confirm that degradation of visual stimuli increases cognitive workload, as assessed by the subjective NASA task load index and confirmed by the observed P300 amplitude attenuation. Furthermore, single-trial multi-level classification using features extracted from ERPs and EEG is found to be promising. Specifically, the adopted single-trial oscillatory EEG/ERP detection method achieved an average accuracy of 85% for discriminating 4 workload levels. Additionally, we found from the spatial patterns obtained from EEG signals that the frontal regions carry information that can be used for differentiating workload levels. Significance. Our results show that visual stimuli can modulate cognitive workload, and the modulation can be measured by the single-trial EEG/ERP detection method.
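The abstract names the single-trial detection pipeline without specifying it. As a rough sketch of how an ERP feature and oscillatory band power might be combined for multi-level workload classification, under assumed epoch shapes, band choices, and an LDA classifier (none of which are confirmed by the source):

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 256  # sampling rate in Hz (hypothetical)

def trial_features(epoch):
    """epoch: (channels, samples) single-trial EEG time-locked to the stimulus.
    Combines an ERP feature (mean amplitude in a P300-like window) with
    oscillatory band power, in the spirit of the ERP/EEG fusion above."""
    p300 = epoch[:, int(0.25 * FS):int(0.50 * FS)].mean(axis=1)  # 250-500 ms
    f, psd = welch(epoch, fs=FS, nperseg=FS)
    theta = np.log(psd[:, (f >= 4) & (f < 8)].mean(axis=1))      # rises with load
    alpha = np.log(psd[:, (f >= 8) & (f < 13)].mean(axis=1))     # drops with load
    return np.concatenate([p300, theta, alpha])

# Hypothetical data: 200 trials, 32 channels, 1-s epochs, 4 workload levels
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 32, FS))
labels = rng.integers(0, 4, 200)
X = np.array([trial_features(e) for e in epochs])
print(cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean())
```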
Lateral eye-movement responses to visual stimuli.
Wilbur, M P; Roberts-Wilbur, J
1985-08-01
The association of left lateral eye-movement with emotionality or arousal of affect, and of right lateral eye-movement with cognitive/interpretive operations and functions, was investigated. Participants were junior and senior students enrolled in an undergraduate course in developmental psychology: 37 women and 13 men, ranging from 19 to 45 yr. of age. Using videotaped lateral eye-movements of the 50 participants' responses to 15 visually presented stimuli (precategorized as neutral, emotional, or intellectual), content and statistical analyses supported the association between left lateral eye-movement and emotional arousal and between right lateral eye-movement and cognitive functions. Precategorized visual stimuli included items such as a ball (neutral), gun (emotional), and calculator (intellectual). The findings are congruent with the existing lateral eye-movement literature and extend it by using visual stimuli that do not require the explicit response or implicit processing of verbal questioning.
Braun, Doris I; Schütz, Alexander C; Gegenfurtner, Karl R
2017-07-01
Visual sensitivity is dynamically modulated by eye movements. During saccadic eye movements, sensitivity is reduced selectively for low-spatial frequency luminance stimuli and largely unaffected for high-spatial frequency luminance and chromatic stimuli (Nature 371 (1994), 511-513). During smooth pursuit eye movements, sensitivity for low-spatial frequency luminance stimuli is moderately reduced while sensitivity for chromatic and high-spatial frequency luminance stimuli is even increased (Nature Neuroscience, 11 (2008), 1211-1216). Since these effects are at least partly of different polarity, we investigated the combined effects of saccades and smooth pursuit on visual sensitivity. For the time course of chromatic sensitivity, we found that detection rates increased slightly around pursuit onset. During saccades to static and moving targets, detection rates dropped briefly before the saccade and reached a minimum at saccade onset. This reduction of chromatic sensitivity was present whenever a saccade was executed and it was not modified by subsequent pursuit. We also measured contrast sensitivity for flashed high- and low-spatial frequency luminance and chromatic stimuli during saccades and pursuit. During saccades, the reduction of contrast sensitivity was strongest for low-spatial frequency luminance stimuli (about 90%). However, a significant reduction was also present for chromatic stimuli (about 58%). Chromatic sensitivity was increased during smooth pursuit (about 12%). These results suggest that the modulation of visual sensitivity during saccades and smooth pursuit is more complex than previously assumed.
Visual Presentation Effects on Identification of Multiple Environmental Sounds
Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio
2016-01-01
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478
Casual Video Games as Training Tools for Attentional Processes in Everyday Life.
Stroud, Michael J; Whitbourne, Susan Krauss
2015-11-01
Three experiments examined the attentional components of the popular match-3 casual video game, Bejeweled Blitz (BJB). Attentionally demanding, BJB is highly popular among adults, particularly those in middle and later adulthood. In experiment 1, 54 older adults (mean age = 70.57 years) and 33 younger adults (mean age = 19.82 years) played 20 rounds of BJB, and completed online tasks measuring reaction time, simple visual search, and conjunction visual search. Prior experience significantly predicted BJB scores for younger adults, but for older adults, both prior experience and simple visual search task scores predicted BJB performance. Experiment 2 tested whether BJB practice alone would result in a carryover benefit to a visual search task in a sample of 58 young adults (mean age = 19.57 years) who completed 0, 10, or 30 rounds of BJB followed by a BJB-like visual search task with targets present or absent. Reaction times were significantly faster for participants who completed 30 but not 10 rounds of BJB compared with the search task only. This benefit was evident when targets were both present and absent, suggesting that playing BJB improves not only target detection, but also the ability to quit search effectively. Experiment 3 tested whether the attentional benefit in experiment 2 would apply to non-BJB stimuli. The results revealed a similar numerical but not significant trend. Taken together, the findings suggest there are benefits of casual video game playing to attention and relevant everyday skills, and that these games may have potential value as training tools.
Information from multiple modalities helps 5-month-olds learn abstract rules.
Frank, Michael C; Slemmer, Jonathan A; Marcus, Gary F; Johnson, Scott P
2009-07-01
By 7 months of age, infants are able to learn rules based on the abstract relationships between stimuli (Marcus et al., 1999), but they are better able to do so when exposed to speech than to some other classes of stimuli. In the current experiments we ask whether multimodal stimulus information will aid younger infants in identifying abstract rules. We habituated 5-month-olds to simple abstract patterns (ABA or ABB) instantiated in coordinated looming visual shapes and speech sounds (Experiment 1), shapes alone (Experiment 2), and speech sounds accompanied by uninformative but coordinated shapes (Experiment 3). Infants showed evidence of rule learning only in the presence of the informative multimodal cues. We hypothesize that the additional evidence present in these multimodal displays was responsible for the success of younger infants in learning rules, congruent with both a Bayesian account and with the Intersensory Redundancy Hypothesis.
Methodological issues in measures of imitative reaction times.
Aicken, Michael D; Wilson, Andrew D; Williams, Justin H G; Mon-Williams, Mark
2007-04-01
Ideomotor (IM) theory suggests that observing someone else perform an action activates an internal motor representation of that behaviour within the observer. Evidence supporting the case for an ideomotor theory of imitation has come from studies that show imitative responses to be faster than the same behavioural measures performed in response to spatial cues. In an attempt to replicate these findings, we manipulated the salience of the visual cue and found that we could reverse the advantage of the imitative cue over the spatial cue. We suggest that participants utilised a simple visuomotor mechanism to perform all aspects of this task, with performance being driven by the relative visual salience of the stimuli. Imitation is a more complex motor skill that would constitute an inefficient strategy for rapid performance.
Skilled Deaf Readers have an Enhanced Perceptual Span in Reading
Bélanger, Nathalie N.; Slattery, Timothy J.; Mayberry, Rachel I.; Rayner, Keith
2013-01-01
Recent evidence suggests that deaf people have enhanced visual attention to simple stimuli in the parafovea in comparison to hearing people. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to pre-process upcoming words and decide where to look next. We investigated whether auditory deprivation affects low-level visual processing during reading, and compared the perceptual span of deaf signers who were skilled and less skilled readers to that of skilled hearing readers. Compared to hearing readers, deaf readers had a larger perceptual span than would be expected by their reading ability. These results provide the first evidence that deaf readers’ enhanced attentional allocation to the parafovea is used during a complex cognitive task such as reading. PMID:22683830
Ball, Felix; Elzemann, Anne; Busch, Niko A
2014-09-01
The change blindness paradigm, in which participants often fail to notice substantial changes in a scene, is a popular tool for studying scene perception, visual memory, and the link between awareness and attention. Some of the most striking and popular examples of change blindness have been demonstrated with digital photographs of natural scenes; in most studies, however, much simpler displays, such as abstract stimuli or "free-floating" objects, are typically used. Although simple displays have undeniable advantages, natural scenes remain a very useful and attractive stimulus for change blindness research. To assist researchers interested in using natural-scene stimuli in change blindness experiments, we provide here a step-by-step tutorial on how to produce changes in natural-scene images with a freely available image-processing tool (GIMP). We explain how changes in a scene can be made by deleting objects or relocating them within the scene or by changing the color of an object, in just a few simple steps. We also explain how the physical properties of such changes can be analyzed using GIMP and MATLAB (a high-level scientific programming tool). Finally, we present an experiment confirming that scenes manipulated according to our guidelines are effective in inducing change blindness and demonstrating the relationship between change blindness and the physical properties of the change and inter-individual differences in performance measures. We expect that this tutorial will be useful for researchers interested in studying the mechanisms of change blindness, attention, or visual memory using natural scenes.
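For readers who prefer a scriptable stand-in for the MATLAB analysis step, the following sketch quantifies the physical properties of a change (area, magnitude, location) from the original and manipulated scene images. The file paths and the noise threshold are assumptions for illustration, not the authors' values:

```python
import numpy as np
from PIL import Image

def change_properties(original_path, modified_path, threshold=10.0):
    """Quantify a scene change from the original and manipulated images.
    Paths and the noise threshold are hypothetical."""
    a = np.asarray(Image.open(original_path).convert("RGB"), dtype=float)
    b = np.asarray(Image.open(modified_path).convert("RGB"), dtype=float)
    diff = np.abs(a - b).mean(axis=2)        # per-pixel mean absolute RGB change
    changed = diff > threshold               # ignore compression/rendering noise
    if not changed.any():
        return {"area_pct": 0.0, "mean_change": 0.0, "centroid_xy": None}
    ys, xs = np.nonzero(changed)
    return {
        "area_pct": 100.0 * changed.mean(),          # % of image area changed
        "mean_change": float(diff[changed].mean()),  # mean magnitude of change
        "centroid_xy": (float(xs.mean()), float(ys.mean())),  # change location
    }
```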
Visual stimuli and written production of deaf signers.
Jacinto, Laís Alves; Ribeiro, Karen Barros; Soares, Aparecido José Couto; Cárnio, Maria Silvia
2012-01-01
This study aimed to verify the interference of visual stimuli in the written production of deaf signers with no complaints regarding reading and writing. The research group consisted of 12 students, with education between the 4th and 5th grade of elementary school, with severe or profound sensorineural hearing loss, users of LIBRAS, and with an alphabetical writing level. The evaluation was performed with pictures in a logical sequence and an action picture. The analysis used communicative competence criteria. There were no differences in the subjects' written production between the two stimuli. All texts lacked titles and punctuation, used verbs in the infinitive, showed a lack of cohesive links, and included created words. The different visual stimuli did not affect the production of texts.
Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age
ERIC Educational Resources Information Center
Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.
2013-01-01
The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…
Heightened attentional capture by visual food stimuli in anorexia nervosa.
Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J
2017-08-01
The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity of the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed.
Positive mood broadens visual attention to positive stimuli.
Wadlinger, Heather A; Isaacowitz, Derek M
2006-03-01
In an attempt to investigate the impact of positive emotions on visual attention within the context of Fredrickson's (1998) broaden-and-build model, eye tracking was used in two studies to measure visual attentional preferences of college students (n=58, n=26) to emotional pictures. Half of each sample experienced induced positive mood immediately before viewing slides of three similarly-valenced images, in varying central-peripheral arrays. Attentional breadth was determined by measuring the percentage viewing time to peripheral images as well as by the number of visual saccades participants made per slide. Consistent with Fredrickson's theory, the first study showed that individuals induced into positive mood fixated more on peripheral stimuli than did control participants; however, this only held true for highly-valenced positive stimuli. Participants under induced positive mood also made more frequent saccades for slides of neutral and positive valence. A second study showed that these effects were not simply due to differences in emotional arousal between stimuli. Selective attentional broadening to positive stimuli may act both to facilitate later building of resources as well as to maintain current positive affective states.
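The attentional-breadth measure (percentage of total viewing time on peripheral images) reduces to a simple aggregation over fixations. A minimal sketch, assuming a hypothetical fixation and ROI format rather than the authors' actual eye-tracker output:

```python
def attentional_breadth(fixations, peripheral_rois):
    """fixations: iterable of (x, y, duration_ms); peripheral_rois: iterable of
    (x0, y0, x1, y1) rectangles around the peripheral images (hypothetical
    format). Returns the percentage of total viewing time on peripheral images."""
    def in_roi(x, y, roi):
        x0, y0, x1, y1 = roi
        return x0 <= x <= x1 and y0 <= y <= y1

    total = sum(d for _, _, d in fixations)
    peripheral = sum(d for x, y, d in fixations
                     if any(in_roi(x, y, r) for r in peripheral_rois))
    return 100.0 * peripheral / total if total else 0.0

# Toy usage: two fixations, one inside a peripheral ROI
print(attentional_breadth([(100, 100, 300), (500, 500, 200)],
                          [(450, 450, 600, 600)]))  # -> 40.0
```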
Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.
Barbosa, Sara; Pires, Gabriel; Nunes, Urbano
2016-03-01
Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been suggested as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural meaningful spoken words, increasing stimuli discrimination and decreasing the user's mental effort in associating stimuli to the symbols. The visual part of the interface is covertly controlled, ensuring gaze-independency. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% over the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides with an almost perfect sum of the P300s evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state-of-the-art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs.
Dorsal hippocampus is necessary for visual categorization in rats.
Kim, Jangjin; Castro, Leyre; Wasserman, Edward A; Freeman, John H
2018-02-23
The hippocampus may play a role in categorization because of the need to differentiate stimulus categories (pattern separation) and to recognize category membership of stimuli from partial information (pattern completion). We hypothesized that the hippocampus would be more crucial for categorization of low-density (few relevant features) stimuli, due to the higher demand on pattern separation and pattern completion, than for categorization of high-density (many relevant features) stimuli. Using a touchscreen apparatus, rats were trained to categorize multiple abstract stimuli into two different categories. Each stimulus was a pentagonal configuration of five visual features; some of the visual features were relevant for defining the category whereas others were irrelevant. Two groups of rats were trained with either a high (dense, n = 8) or low (sparse, n = 8) number of category-relevant features. Upon reaching criterion discrimination (≥75% correct, on 2 consecutive days), bilateral cannulas were implanted in the dorsal hippocampus. The rats were then given either vehicle or muscimol infusions into the hippocampus just prior to various testing sessions. They were tested with: the previously trained stimuli (trained), novel stimuli involving new irrelevant features (novel), stimuli involving relocated features (relocation), and a single relevant feature (singleton). In training, the dense group reached criterion faster than the sparse group, indicating that the sparse task was more difficult than the dense task. In testing, accuracy of both groups was equally high for trained and novel stimuli. However, both groups showed impaired accuracy in the relocation and singleton conditions, with a greater deficit in the sparse group. The testing data indicate that rats encode both the relevant features and the spatial locations of the features. Hippocampal inactivation impaired visual categorization regardless of the density of the category-relevant features for the trained, novel, relocation, and singleton stimuli. Hippocampus-mediated pattern completion and pattern separation mechanisms may be necessary for visual categorization involving overlapping irrelevant features.
Weyand, T G; Gafka, A C
2001-01-01
We studied the visuomotor activity of corticotectal (CT) cells in two visual cortical areas [area 17 and the posteromedial lateral suprasylvian cortex (PMLS)] of the cat. The cats were trained in simple oculomotor tasks, and head position was fixed. Most CT cells in both cortical areas gave a vigorous discharge to a small stimulus used to control gaze when it fell within the retinotopically defined visual field. However, the vigor of the visual response did not predict latency to initiate a saccade, saccade velocity, amplitude, or even if a saccade would be made, minimizing any potential role these cells might have in premotor or attentional processes. Most CT cells in both areas were selective for direction of stimulus motion, and cells in PMLS showed a direction preference favoring motion away from points of central gaze. CT cells did not discharge with eye movements in the dark. During eye movements in the light, many CT cells in area 17 increased their activity. In contrast, cells in PMLS, including CT cells, were generally unresponsive during saccades. Paradoxically, cells in PMLS responded vigorously to stimuli moving at saccadic velocities, indicating that the oculomotor system suppresses visual activity elicited by moving the retina across an illuminated scene. Nearly all CT cells showed oscillatory activity in the frequency range of 20-90 Hz, especially in response to visual stimuli. However, this activity was capricious; strong oscillations in one trial could disappear in the next despite identical stimulus conditions. Although the CT cells in both of these regions share many characteristics, the direction anisotropy and the suppression of activity during eye movements which characterize the neurons in PMLS suggests that these two areas have different roles in facilitating perceptual/motor processes at the level of the superior colliculus.
Submillisecond unmasked subliminal visual stimuli evoke electrical brain responses.
Sperdin, Holger F; Spierer, Lucas; Becker, Robert; Michel, Christoph M; Landis, Theodor
2015-04-01
Subliminal perception is strongly associated with the processing of meaningful or emotional information and has mostly been studied using visual masking. In this study, we used high density 256-channel EEG coupled with a liquid crystal display (LCD) tachistoscope to characterize the spatio-temporal dynamics of the brain response to visual checkerboard stimuli (Experiment 1) or blank stimuli (Experiment 2) presented without a mask for 1 ms (visible), 500 µs (partially visible), and 250 µs (subliminal) by applying time-wise, assumption-free nonparametric randomization statistics on the strength and on the topography of the high-density scalp-recorded electric field. Stimulus visibility was assessed in a third separate behavioral experiment. Results revealed that unmasked checkerboards presented subliminally for 250 µs evoked weak but detectable visual evoked potential (VEP) responses. When the checkerboards were replaced by blank stimuli, there was no longer any evidence for the presence of an evoked response. Furthermore, the checkerboard VEPs were modulated topographically between 243 and 296 ms post-stimulus onset as a function of stimulus duration, indicative of the engagement of distinct configurations of active brain networks. A distributed electrical source analysis localized this modulation within the right superior parietal lobule near the precuneus. These results show the presence of a brain response to submillisecond unmasked subliminal visual stimuli independently of their emotional saliency or meaningfulness and open an avenue for new investigations of subliminal stimulation without using visual masking.
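The time-wise, assumption-free randomization statistics are not detailed in the abstract. A generic sketch of a time-point-wise permutation test on global field power, under assumed data shapes, might look like this:

```python
import numpy as np

def timewise_permutation_test(cond_a, cond_b, n_perm=1000, seed=0):
    """Time-point-wise randomization test on global field power (GFP).
    cond_a, cond_b: (subjects, channels, timepoints) arrays, one average per
    subject and condition (shapes are assumptions)."""
    rng = np.random.default_rng(seed)
    gfp_a = cond_a.std(axis=1)                       # GFP = SD across channels
    gfp_b = cond_b.std(axis=1)                       # -> (subjects, timepoints)
    observed = np.abs(gfp_a.mean(0) - gfp_b.mean(0))
    pooled = np.concatenate([gfp_a, gfp_b])
    n = gfp_a.shape[0]
    null = np.empty((n_perm, pooled.shape[1]))
    for i in range(n_perm):
        idx = rng.permutation(pooled.shape[0])       # shuffle condition labels
        null[i] = np.abs(pooled[idx[:n]].mean(0) - pooled[idx[n:]].mean(0))
    return (null >= observed).mean(axis=0)           # p-value per timepoint
```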
Matsuzaki, Naoyuki; Schwarzlose, Rebecca F.; Nishida, Masaaki; Ofen, Noa; Asano, Eishi
2015-01-01
Behavioral studies demonstrate that a face presented in the upright orientation attracts attention more rapidly than an inverted face. Saccades toward an upright face take place in 100-140 ms following presentation. The present study using electrocorticography determined whether upright face-preferential neural activation, as reflected by augmentation of high-gamma activity at 80-150 Hz, involved the lower-order visual cortex within the first 100 ms post-stimulus presentation. Sampled lower-order visual areas were verified by the induction of phosphenes upon electrical stimulation. These areas resided in the lateral-occipital, lingual, and cuneus gyri along the calcarine sulcus, roughly corresponding to V1 and V2. Measurement of high-gamma augmentation during central (circular) and peripheral (annular) checkerboard reversal pattern stimulation indicated that central-field stimuli were processed by the more polar surface whereas peripheral-field stimuli by the more anterior medial surface. Upright face stimuli, compared to inverted ones, elicited up to 23% larger augmentation of high-gamma activity in the lower-order visual regions at 40-90 ms. Upright face-preferential high-gamma augmentation was more highly correlated with high-gamma augmentation for central than peripheral stimuli. Our observations are consistent with the hypothesis that lower-order visual regions, especially those for the central field, are involved in visual cues for rapid detection of upright face stimuli. PMID:25579446
Yahata, Izumi; Kawase, Tetsuaki; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2017-01-01
The effects of visual speech (the moving image of the speaker's face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions: the moving image of the same speaker's face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from the speaker's face using a strong Gaussian filter; control condition). On average, the latency of the N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres, but were not significantly correlated between the right and left hemispheres. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed in the N100m is a fundamental process that does not depend on the congruency of the visual information.
The role of prestimulus activity in visual extinction
Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl
2013-01-01
Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398
Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier
2013-10-28
The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in the reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that, despite a strong decrease in auditory and visual unisensory localization abilities in the periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimulus eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in the periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. Altogether, these results show that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eye or head position) influence how this information is merged, and therefore determine the perceptual outcome.
Allon, Ayala S.; Balaban, Halely; Luria, Roy
2014-01-01
In three experiments we manipulated the resolution of novel complex objects in visual working memory (WM) by changing task demands. Previous studies that investigated the trade-off between quantity and resolution in visual WM yielded mixed results for simple familiar stimuli. We used the contralateral delay activity as an electrophysiological marker to directly track the deployment of visual WM resources while participants performed a change-detection task. Across three experiments we presented the same novel complex items but changed the task demands. In Experiment 1 we induced a medium resolution task by using change trials in which a random polygon changed to a different type of polygon and replicated previous findings showing that novel complex objects are represented with higher resolution relative to simple familiar objects. In Experiment 2 we induced a low resolution task that required distinguishing between polygons and other types of stimulus categories, but we failed to find a corresponding decrease in the resolution of the represented item. Finally, in Experiment 3 we induced a high resolution task that required discriminating between highly similar polygons with somewhat different contours. This time, we observed an increase in the item's resolution. Our findings indicate that the resolution for novel complex objects can be increased but not decreased according to task demands, suggesting that minimal resolution is required in order to maintain these items in visual WM. These findings support studies claiming that capacity and resolution in visual WM reflect different mechanisms. PMID:24734026
Neuronal Representation of Ultraviolet Visual Stimuli in Mouse Primary Visual Cortex
Tan, Zhongchao; Sun, Wenzhi; Chen, Tsai-Wen; Kim, Douglas; Ji, Na
2015-01-01
The mouse has become an important model for understanding the neural basis of visual perception. Although it has long been known that mouse lens transmits ultraviolet (UV) light and mouse opsins have absorption in the UV band, little is known about how UV visual information is processed in the mouse brain. Using a custom UV stimulation system and in vivo calcium imaging, we characterized the feature selectivity of layer 2/3 neurons in mouse primary visual cortex (V1). In adult mice, a comparable percentage of the neuronal population responds to UV and visible stimuli, with similar pattern selectivity and receptive field properties. In young mice, the orientation selectivity for UV stimuli increased steadily during development, but not direction selectivity. Our results suggest that, by expanding the spectral window through which the mouse can acquire visual information, UV sensitivity provides an important component for mouse vision. PMID:26219604
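Orientation and direction selectivity are commonly quantified with circular-variance-style indices computed from responses to drifting gratings. The sketch below shows one standard formulation; it is an assumption for illustration, not the paper's documented metric:

```python
import numpy as np

def osi_dsi(responses, directions_deg):
    """Circular-variance-style selectivity indices from mean responses (e.g.,
    dF/F) to gratings drifting in the given directions; responses should be
    non-negative. This is a generic formulation, not necessarily the paper's."""
    r = np.asarray(responses, dtype=float)
    th = np.deg2rad(np.asarray(directions_deg, dtype=float))
    osi = np.abs(np.sum(r * np.exp(2j * th))) / r.sum()  # orientation: 180° period
    dsi = np.abs(np.sum(r * np.exp(1j * th))) / r.sum()  # direction: 360° period
    return osi, dsi

# Usage: 8 directions in 45° steps; this cell responds to 0° and (less) to 180°
print(osi_dsi([1.0, 0.2, 0.1, 0.2, 0.6, 0.2, 0.1, 0.2], np.arange(0, 360, 45)))
```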
Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.
Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin
2016-01-01
Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon (e.g.) its visual properties and location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβ-posterior Kenyon cells.
NASA Technical Reports Server (NTRS)
Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.
1978-01-01
Subjects can correct their own errors of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic feedback, such as from muscle spindles and joint receptors; and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedback sources on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified by changing the sign of the visual display, to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. Central processing was disrupted by giving the subjects moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increased both the simple and choice reaction times but not the error correction time.
Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis
ERIC Educational Resources Information Center
Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.
2010-01-01
We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…
Explaining the Colavita visual dominance effect.
Spence, Charles
2009-01-01
The last couple of years have seen a resurgence of interest in the Colavita visual dominance effect. In the basic experimental paradigm, a random series of auditory, visual, and audiovisual stimuli are presented to participants who are instructed to make one response whenever they see a visual target and another response whenever they hear an auditory target. Many studies have now shown that participants sometimes fail to respond to auditory targets when they are presented at the same time as visual targets (i.e., on the bimodal trials), despite the fact that they have no problems in responding to the auditory and visual stimuli when they are presented individually. The existence of the Colavita visual dominance effect provides an intriguing contrast with the results of the many other recent studies showing the superiority of multisensory (over unisensory) information processing in humans. Various accounts have been put forward over the years in order to try and explain the effect, including the suggestion that it reflects nothing more than an underlying bias to attend to the visual modality. Here, the empirical literature on the Colavita visual dominance effect is reviewed and some of the key factors modulating the effect highlighted. The available research has now provided evidence against all previous accounts of the Colavita effect. A novel explanation of the Colavita effect is therefore put forward here, one that is based on the latest findings highlighting the asymmetrical effect that auditory and visual stimuli exert on people's responses to stimuli presented in the other modality.
Visual grouping under isoluminant condition: impact of mental fatigue
NASA Astrophysics Data System (ADS)
Pladere, Tatjana; Bete, Diana; Skilters, Jurgis; Krumina, Gunta
2016-09-01
Instead of selecting arbitrary elements, our visual perception prefers only certain groupings of information. There is ample evidence that visual attention and perception are substantially impaired in the presence of mental fatigue. The question is how visual grouping, which can be considered a bottom-up controlled neuronal gain mechanism, is influenced. The main purpose of our study is to determine the influence of mental fatigue on the visual grouping of specific information (the color and configuration of stimuli) in a psychophysical experiment. Individuals provided subjective data by filling in a questionnaire about their health and general feeling. Objective evidence was obtained in a specially designed visual search task where achromatic and chromatic isoluminant stimuli were used in order to avoid the so-called pop-out effect due to differences in light intensity. Each individual was instructed to find the symbols with apertures in the same direction in four tasks. The color component differed across the visual search tasks according to the goals of the study. The results reveal that visual grouping is completed faster when visual stimuli have the same color and aperture direction. The shortest reaction times occur in the evening. What is more, the reaction-time results suggest that two grouping processes compete for selective attention in the visual system when similarity in color conflicts with similarity in configuration of stimuli. The described effect increases significantly in the presence of mental fatigue, but it does not have a strong influence on the accuracy of task accomplishment.
Dynamics of normalization underlying masking in human visual cortex.
Tsai, Jeffrey J; Wade, Alex R; Norcia, Anthony M
2012-02-22
Stimulus visibility can be reduced by other stimuli that overlap the same region of visual space, a process known as masking. Here we studied the neural mechanisms of masking in humans using source-imaged steady state visual evoked potentials and frequency-domain analysis over a wide range of relative stimulus strengths of test and mask stimuli. Test and mask stimuli were tagged with distinct temporal frequencies and we quantified spectral response components associated with the individual stimuli (self terms) and responses due to interaction between stimuli (intermodulation terms). In early visual cortex, masking alters the self terms in a manner consistent with a reduction of input contrast. We also identify a novel signature of masking: a robust intermodulation term that peaks when the test and mask stimuli have equal contrast and disappears when they are widely different. We fit all of our data simultaneously with a family of divisive gain control models that differed only in their dynamics. Models with either very short or very long temporal integration constants for the gain pool performed worse than a model with an integration time of ∼30 ms. Finally, the absolute magnitudes of the response were controlled by the ratio of the stimulus contrasts, not their absolute values. This contrast-contrast invariance suggests that many neurons in early visual cortex code relative rather than absolute contrast. Together, these results provide a more complete description of masking within the normalization framework of contrast gain control and suggest that contrast normalization accomplishes multiple functional goals.
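The divisive gain-control family can be illustrated with the textbook normalization equation, in which the mask contrast enters the denominator's gain pool. This sketch uses illustrative, unfitted parameter values (not the study's) and also demonstrates the approximate contrast-ratio invariance noted above:

```python
def masked_response(c_test, c_mask, r_max=1.0, n=2.0, sigma=0.1):
    """Divisive gain control: the test-driven (self-term) response is divided by
    a gain pool containing both test and mask contrast. Parameter values are
    illustrative, not fitted values from the study."""
    return r_max * c_test**n / (c_test**n + c_mask**n + sigma**n)

# Raising mask contrast suppresses the test-driven response (masking) ...
for c_mask in (0.0, 0.1, 0.4):
    print(c_mask, round(masked_response(0.2, c_mask), 3))

# ... and when both contrasts are well above sigma, the response depends
# approximately on their ratio rather than their absolute values
print(round(masked_response(0.4, 0.8), 3), round(masked_response(0.2, 0.4), 3))
```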
[Intermodal timing cues for audio-visual speech recognition].
Hashimoto, Masahiro; Kumashiro, Masaharu
2004-06-01
The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visual with 0, 60, 120, 240, or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under the audio-delay conditions of less than 120 ms was significantly better than that under the audio-alone condition. Notably, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results imply that audio delays of up to 120 ms do not disrupt the lip-reading advantage, because visual and auditory information in speech seem to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from all the other competing noises.
The primate amygdala represents the positive and negative value of visual stimuli during learning
Paton, Joseph J.; Belova, Marina A.; Morrison, Sara E.; Salzman, C. Daniel
2008-01-01
Visual stimuli can acquire positive or negative value through their association with rewards and punishments, a process called reinforcement learning. Although we now know a great deal about how the brain analyses visual information, we know little about how visual representations become linked with values. To study this process, we turned to the amygdala, a brain structure implicated in reinforcement learning(1-5). We recorded the activity of individual amygdala neurons in monkeys while abstract images acquired either positive or negative value through conditioning. After monkeys had learned the initial associations, we reversed image value assignments. We examined neural responses in relation to these reversals in order to estimate the relative contribution to neural activity of the sensory properties of images and their conditioned values. Here we show that changes in the values of images modulate neural activity, and that this modulation occurs rapidly enough to account for, and correlates with, monkeys' learning. Furthermore, distinct populations of neurons encode the positive and negative values of visual stimuli. Behavioural and physiological responses to visual stimuli may therefore be based in part on the plastic representation of value provided by the amygdala. PMID:16482160
Influence of auditory and audiovisual stimuli on the right-left prevalence effect.
Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim
2014-01-01
When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than that obtained with visual stimuli even though less attention should be demanded from the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.
Spatial decoupling of targets and flashing stimuli for visual brain-computer interfaces
NASA Astrophysics Data System (ADS)
Waytowich, Nicholas R.; Krusienski, Dean J.
2015-06-01
Objective. Recently, paradigms using code-modulated visual evoked potentials (c-VEPs) have proven to achieve among the highest information transfer rates for noninvasive brain-computer interfaces (BCIs). One issue with current c-VEP paradigms, and visual-evoked paradigms in general, is that they require direct foveal fixation of the flashing stimuli. These interfaces are often visually unpleasant and can be irritating and fatiguing to the user, thus adversely impacting practical performance. In this study, a novel c-VEP BCI paradigm is presented that attempts to perform spatial decoupling of the targets and flashing stimuli using two distinct concepts: spatial separation and boundary positioning. Approach. For the paradigm, the flashing stimuli form a ring that encompasses the intended non-flashing targets, which are spatially separated from the stimuli. The user fixates on the desired target, which is classified using the changes to the EEG induced by the flashing stimuli located in the non-foveal visual field. Additionally, a subset of targets is also positioned at or near the stimulus boundaries, which decouples targets from direct association with a single stimulus. This allows a greater number of target locations for a fixed number of flashing stimuli. Main results. Results from 11 subjects showed practical classification accuracies for the non-foveal condition, with comparable performance to the direct-foveal condition for longer observation lengths. Online results from 5 subjects confirmed the offline results with an average accuracy across subjects of 95.6% for a 4-target condition. The offline analysis also indicated that targets positioned at or near the boundaries of two stimuli could be classified with the same accuracy as traditional superimposed (non-boundary) targets. Significance. The implications of this research are that c-VEPs can be detected and accurately classified to achieve comparable BCI performance without requiring potentially irritating direct foveation of flashing stimuli. Furthermore, this study shows that it is possible to increase the number of targets beyond the number of stimuli without degrading performance. Given the superior information transfer rate of c-VEP paradigms, these results can lead to the development of more practical and ergonomic BCIs.
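As a concrete illustration of how c-VEP targets are commonly decoded, the sketch below correlates an EEG epoch against circular shifts of a training template. This is the standard template-matching scheme from the c-VEP literature, not necessarily the exact classifier used in this study; all function names and parameters are illustrative.

```python
import numpy as np

def classify_cvep(epoch, template, n_targets, lag_per_target):
    """Pick the target whose circularly shifted code template best matches the epoch.

    epoch          : 1-D array, band-pass (or spatially) filtered EEG covering
                     one stimulation cycle
    template       : 1-D array, trial-averaged response to the base code sequence
    n_targets      : number of selectable targets
    lag_per_target : shift in samples between consecutive targets' codes
    """
    scores = [np.corrcoef(epoch, np.roll(template, k * lag_per_target))[0, 1]
              for k in range(n_targets)]
    return int(np.argmax(scores)), scores
```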
NASA Astrophysics Data System (ADS)
Ramirez, Joshua; Mann, Virginia
2005-08-01
Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.
Montijn, Jorrit S; Goltstein, Pieter M; Pennartz, Cyriel MA
2015-01-01
Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination, and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in functional magnetic resonance imaging, electro-encephalogram, or magnetoencephalography studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca2+ responses to visual stimuli recorded with in vivo two-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity rather than overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during nondetections. Contrary to models relying on temporally stable networks or bulk signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations. DOI: http://dx.doi.org/10.7554/eLife.10163.001 PMID:26646184
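One simple way to formalize the "population response heterogeneity" contrasted here with overall response strength is the mean absolute pairwise difference between single-cell responses. The sketch below is an assumed stand-in for illustration, not the metric published in the study.

```python
import numpy as np

def population_heterogeneity(responses):
    """Mean absolute pairwise difference between single-cell response
    amplitudes (e.g., trial Ca2+ transients); higher values indicate a
    more differentiated population response. `responses` is a 1-D array
    with one entry per neuron (at least two neurons assumed)."""
    diffs = np.abs(responses[:, None] - responses[None, :])
    n = len(responses)
    return diffs.sum() / (n * (n - 1))  # average over ordered pairs i != j
```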
Neural responses to salient visual stimuli.
Morris, J S; Friston, K J; Dolan, R J
1997-01-01
The neural mechanisms involved in the selective processing of salient or behaviourally important stimuli are uncertain. We used an aversive conditioning paradigm in human volunteer subjects to manipulate the salience of visual stimuli (emotionally expressive faces) presented during positron emission tomography (PET) neuroimaging. Increases in salience, and conflicts between the innate and acquired value of the stimuli, produced augmented activation of the pulvinar nucleus of the right thalamus. Furthermore, this pulvinar activity correlated positively with responses in structures hypothesized to mediate value in the brain: the right amygdala and basal forebrain (including the cholinergic nucleus basalis of Meynert). The results provide evidence that the pulvinar nucleus of the thalamus plays a crucial modulatory role in selective visual processing, and that changes in perceptual salience are mediated by value-dependent plasticity in pulvinar responses. PMID:9178546
Galvez-Pol, A; Calvo-Merino, B; Capilla, A; Forster, B
2018-07-01
Working memory (WM) supports temporary maintenance of task-relevant information. This process is associated with persistent activity in the sensory cortex processing the information (e.g., visual stimuli activate visual cortex). However, we argue here that more multifaceted stimuli moderate this sensory-locked activity and recruit distinctive cortices. Specifically, perception of bodies recruits somatosensory cortex (SCx) beyond early visual areas (suggesting embodiment processes). Here we explore persistent activation in processing areas beyond the sensory cortex initially relevant to the modality of the stimuli. Using visual and somatosensory evoked-potentials in a visual WM task, we isolated different levels of visual and somatosensory involvement during encoding of body and non-body-related images. Persistent activity increased in SCx only when maintaining body images in WM, whereas visual/posterior regions' activity increased significantly when maintaining non-body images. Our results bridge WM and embodiment frameworks, supporting a dynamic WM process where the nature of the information summons specific processing resources. Copyright © 2018 Elsevier Inc. All rights reserved.
Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam
2011-08-03
Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains, which can have timing as precise as 1 ms, is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses only occur in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.
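The mechanism described, responses confined to brief windows where excitation exceeds a delayed suppression, can be sketched as follows. The filter shapes, delay, and rectification here are illustrative assumptions, not the fitted model from the study.

```python
import numpy as np

def windowed_rate(stimulus, exc_filter, sup_filter, delay_samples):
    """Rate model in which spiking occurs only where filtered excitation
    exceeds a delayed, suppressive copy of the stimulus drive."""
    exc = np.convolve(stimulus, exc_filter)[:len(stimulus)]
    sup = np.convolve(stimulus, sup_filter)[:len(stimulus)]
    sup = np.concatenate([np.zeros(delay_samples), sup])[:len(stimulus)]
    return np.maximum(exc - sup, 0.0)  # brief windows of positive drive
```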
Dong, Guangheng; Yang, Lizhu; Shen, Yue
2009-08-21
The present study investigated the course of visual searching to a target in a fixed location, using an emotional flanker task. Event-related potentials (ERPs) were recorded while participants performed the task. Emotional facial expressions were used as emotion-eliciting triggers. The course of visual searching was analyzed through the emotional effects arising from these emotion-eliciting stimuli. The flanker stimuli showed effects at about 150-250 ms after stimulus onset, while the target stimuli showed effects at about 300-400 ms. The visual search sequence in an emotional flanker task moved from a whole overview to a specific target, even though the target always appeared at a known location. The processing sequence was "parallel" in this task. The results supported the feature integration theory of visual search.
NASA Astrophysics Data System (ADS)
Pardo, P. J.; Pérez, A. L.; Suero, M. I.
2004-01-01
An old fluorescence spectrophotometer was recycled to make a three-channel colorimeter. The various modifications involved in its design and implementation are described. An optical system was added that allows the fusion of two visual stimuli coming from the two monochromators of the spectrofluorimeter. Each of these stimuli has a wavelength and bandwidth control, and a third visual stimulus may be taken from a monochromator, a cathode ray tube, a thin film transistor screen, or any other light source. This freedom in the choice of source of the third chromatic channel, together with the characteristics of the visual stimuli from the spectrofluorimeter, give this design a great versatility in its application to novel visual experiments on color vision.
Verhoef, Bram-Ernst; Bohon, Kaitlin S.
2015-01-01
Binocular disparity is a powerful depth cue for object perception. The computations for object vision culminate in inferior temporal cortex (IT), but the functional organization for disparity in IT is unknown. Here we addressed this question by measuring fMRI responses in alert monkeys to stimuli that appeared in front of (near), behind (far), or at the fixation plane. We discovered three regions that showed preferential responses for near and far stimuli, relative to zero-disparity stimuli at the fixation plane. These “near/far” disparity-biased regions were located within dorsal IT, as predicted by microelectrode studies, and on the posterior inferotemporal gyrus. In a second analysis, we instead compared responses to near stimuli with responses to far stimuli and discovered a separate network of “near” disparity-biased regions that extended along the crest of the superior temporal sulcus. We also measured in the same animals fMRI responses to faces, scenes, color, and checkerboard annuli at different visual field eccentricities. Disparity-biased regions defined in either analysis did not show a color bias, suggesting that disparity and color contribute to different computations within IT. Scene-biased regions responded preferentially to near and far stimuli (compared with stimuli without disparity) and had a peripheral visual field bias, whereas face patches had a marked near bias and a central visual field bias. These results support the idea that IT is organized by a coarse eccentricity map, and show that disparity likely contributes to computations associated with both central (face processing) and peripheral (scene processing) visual field biases, but likely does not contribute much to computations within IT that are implicated in processing color. PMID:25926470
Davis, Chris; Kislyuk, Daniel; Kim, Jeesun; Sams, Mikko
2008-11-25
We used whole-head magnetoencephalography (MEG) to record changes in neuromagnetic N100m responses generated in the left and right auditory cortex as a function of the match between visual and auditory speech signals. Stimuli were auditory-only (AO) and auditory-visual (AV) presentations of /pi/, /ti/ and /vi/. Three types of intensity-matched auditory stimuli were used: intact speech (Normal), frequency band filtered speech (Band) and speech-shaped white noise (Noise). The behavioural task was to detect the /vi/ syllables, which comprised 12% of stimuli. N100m responses were measured to averaged /pi/ and /ti/ stimuli. Behavioural data showed that identification of the stimuli was faster and more accurate for Normal than for Band stimuli, and for Band than for Noise stimuli. Reaction times were faster for AV than AO stimuli. MEG data showed that in the left hemisphere, N100m to both AO and AV stimuli was largest for the Normal, smaller for Band and smallest for Noise stimuli. In the right hemisphere, Normal and Band AO stimuli elicited N100m responses of quite similar amplitudes, but N100m amplitude to Noise was about half of that. There was a reduction in N100m for the AV compared to the AO conditions. The size of this reduction for each stimulus type was the same in the left hemisphere but graded in the right (being largest to the Normal, smaller to the Band and smallest to the Noise stimuli). The N100m decrease for the Normal stimuli was significantly larger in the right than in the left hemisphere. We suggest that the effect of processing visual speech seen in the right hemisphere likely reflects suppression of the auditory response based on AV cues for place of articulation.
The role of early visual cortex in visual short-term memory and visual attention.
Offen, Shani; Schluppeck, Denis; Heeger, David J
2009-06-01
We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.
Binocular interactions in random chromatic changes at isoluminance
NASA Astrophysics Data System (ADS)
Medina, José M.
2006-02-01
To examine the type of chromatic interactions at isoluminance in the phenomenon of binocular vision, I have determined simple visual reaction times (VRT) under three observational conditions (monocular left, monocular right, and binocular) for different chromatic stimuli along random color axes at isoluminance (simultaneous L-, M-, and S-cone variations). Upper and lower boundaries of probability summation as well as the binocular capacity coefficient were estimated with observed distributions of reaction times. The results were not consistent with the notion of independent chromatic channels between eyes, suggesting the existence of excitatory and inhibitory binocular interactions at suprathreshold isoluminance conditions.
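The upper and lower boundaries of probability summation referred to here are conventionally the Miller and Grice bounds on the combined reaction-time distribution. A minimal sketch of how they are computed from empirical CDFs follows; the bound names and this exact formulation are standard in the RT literature and are assumed here rather than quoted from the paper.

```python
import numpy as np

def empirical_cdf(rts, t_grid):
    """Fraction of reaction times at or below each time point."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def summation_bounds(rt_left, rt_right, t_grid):
    """Grice (lower) and Miller (upper) bounds for the binocular RT CDF
    expected from probability summation of the two monocular channels."""
    f_left = empirical_cdf(rt_left, t_grid)
    f_right = empirical_cdf(rt_right, t_grid)
    lower = np.maximum(f_left, f_right)        # Grice bound
    upper = np.minimum(f_left + f_right, 1.0)  # Miller bound
    return lower, upper
```

A binocular CDF rising above the Miller bound indicates facilitation beyond probability summation, while one falling below the Grice bound indicates inhibition, which is how excitatory and inhibitory binocular interactions can be inferred from such data.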
Mullen, Kathy T; Chang, Dorita H F; Hess, Robert F
2015-12-01
There is controversy as to how responses to colour in the human brain are organized within the visual pathways. A key issue is whether there are modular pathways that respond selectively to colour or whether there are common neural substrates for both colour and achromatic (Ach) contrast. We used functional magnetic resonance imaging (fMRI) adaptation to investigate the responses of early and extrastriate visual areas to colour and Ach contrast. High-contrast red-green (RG) and Ach sinewave rings (0.5 cycles/degree, 2 Hz) were used as both adapting stimuli and test stimuli in a block design. We found robust adaptation to RG or Ach contrast in all visual areas. Cross-adaptation between RG and Ach contrast occurred in all areas indicating the presence of integrated, colour and Ach responses. Notably, we revealed contrasting trends for the two test stimuli. For the RG test, unselective processing (robust adaptation to both RG and Ach contrast) was most evident in the early visual areas (V1 and V2), but selective responses, revealed as greater adaptation between the same stimuli than cross-adaptation between different stimuli, emerged in the ventral cortex, in V4 and VO in particular. For the Ach test, unselective responses were again most evident in early visual areas but Ach selectivity emerged in the dorsal cortex (V3a and hMT+). Our findings support a strong presence of integrated mechanisms for colour and Ach contrast across the visual hierarchy, with a progression towards selective processing in extrastriate visual areas. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Primary visual response (M100) delays in adolescents with FASD as measured with MEG.
Coffman, Brian A; Kodituwakku, Piyadasa; Kodituwakku, Elizabeth L; Romero, Lucinda; Sharadamma, Nirupama Muniswamy; Stone, David; Stephen, Julia M
2013-11-01
Fetal alcohol spectrum disorders (FASD) are debilitating, with effects of prenatal alcohol exposure persisting into adolescence and adulthood. Complete characterization of FASD is crucial for the development of diagnostic tools and intervention techniques to decrease the high cost to individual families and society of this disorder. In this experiment, we investigated visual system deficits in adolescents (12-21 years) diagnosed with an FASD by measuring the latency of patients' primary visual M100 responses using MEG. We hypothesized that patients with FASD would demonstrate delayed primary visual responses compared to controls. M100 latencies were assessed both for FASD patients and age-matched healthy controls for stimuli presented at the fovea (central stimulus) and at the periphery (peripheral stimuli; left or right of the central stimulus) in a saccade task requiring participants to direct their attention and gaze to these stimuli. Source modeling was performed on visual responses to the central and peripheral stimuli and the latency of the first prominent peak (M100) in the occipital source timecourse was identified. The peak latency of the M100 responses were delayed in FASD patients for both stimulus types (central and peripheral), but the difference in latency of primary visual responses to central vs. peripheral stimuli was significant only in FASD patients, indicating that, while FASD patients' visual systems are impaired in general, this impairment is more pronounced in the periphery. These results suggest that basic sensory deficits in this population may contribute to sensorimotor integration deficits described previously in this disorder. Copyright © 2012 Wiley Periodicals, Inc.
Visual but not motor processes predict simple visuomotor reaction time of badminton players.
Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas
2018-03-01
The athlete's brain exhibits significant functional adaptations that facilitate visuomotor reaction performance. However, it is currently unclear whether the same neurophysiological processes that differentiate athletes from non-athletes also determine performance within a homogeneous group of athletes. This information can provide valuable help for athletes and coaches aiming to optimize existing training regimes. Therefore, this study aimed to identify the neurophysiological correlates of visuomotor reaction performance in a group of skilled athletes. In 36 skilled badminton athletes, electroencephalography (EEG) was used to investigate pattern reversal and motion onset visual-evoked potentials (VEPs) as well as visuomotor reaction time (VMRT) during a simple reaction task. Stimulus-locked and response-locked event-related potentials (ERPs) in visual and motor regions as well as the onset of muscle activation (EMG onset) were determined. Correlation and multiple regression analyses identified the neurophysiological parameters predicting EMG onset and VMRT. For pattern reversal stimuli, the P100 latency and age best predicted EMG onset (r = 0.43; p = .003) and VMRT (r = 0.62; p = .001). In the motion onset experiment, EMG onset (r = 0.80; p < .001) and VMRT (r = 0.78; p < .001) were predicted by N2 latency and age. In both conditions, cortical potentials in motor regions were not correlated with EMG onset or VMRT. It is concluded that previously identified neurophysiological parameters differentiating athletes from non-athletes do not necessarily determine performance within a homogeneous group of athletes. Specifically, the speed of visual perception/processing predicts EMG onset and VMRT in skilled badminton players, while motor-related processes, although differentiating athletes from non-athletes, are not associated with simple visuomotor reaction performance.
[Ventriloquism and audio-visual integration of voice and face].
Yokosawa, Kazuhiko; Kanaya, Shoko
2012-07-01
Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contributions of a cognitive factor, that is, the audio-visual congruency of speech, although this factor has often been underestimated in previous ventriloquism research. Thus, we investigated the contribution of speech congruency on the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. As a result, when bilateral visual stimuli are presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. This result also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference to auditory localization. This suggests that a greater flexibility in responding to multi-sensory environments exists than has been previously considered.
Effects of Visual Speech on Early Auditory Evoked Fields - From the Viewpoint of Individual Variance
Yahata, Izumi; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2017-01-01
The effects of visual speech (the moving image of the speaker’s face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions, the moving image of the same speaker’s face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from speaker’s face using a strong Gaussian filter: control condition). On average, latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between right and left hemisphere. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information. PMID:28141836
Body Context and Posture Affect Mental Imagery of Hands
Ionta, Silvio; Perruchoud, David; Draganski, Bogdan; Blanke, Olaf
2012-01-01
Different visual stimuli have been shown to recruit different mental imagery strategies. However the role of specific visual stimuli properties related to body context and posture in mental imagery is still under debate. Aiming to dissociate the behavioural correlates of mental processing of visual stimuli characterized by different body context, in the present study we investigated whether the mental rotation of stimuli showing either hands as attached to a body (hands-on-body) or not (hands-only), would be based on different mechanisms. We further examined the effects of postural changes on the mental rotation of both stimuli. Thirty healthy volunteers verbally judged the laterality of rotated hands-only and hands-on-body stimuli presented from the dorsum- or the palm-view, while positioning their hands on their knees (front postural condition) or behind their back (back postural condition). Mental rotation of hands-only, but not of hands-on-body, was modulated by the stimulus view and orientation. Additionally, only the hands-only stimuli were mentally rotated at different speeds according to the postural conditions. This indicates that different stimulus-related mechanisms are recruited in mental rotation by changing the bodily context in which a particular body part is presented. The present data suggest that, with respect to hands-only, mental rotation of hands-on-body is less dependent on biomechanical constraints and proprioceptive input. We interpret our results as evidence for preferential processing of visual- rather than kinesthetic-based mechanisms during mental transformation of hands-on-body and hands-only, respectively. PMID:22479618
Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli
Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.
2010-01-01
Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise – on a time scale of 10–25 ms – both within single cells and across cells within a population. This time scale, established by non stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrasts and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
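The GLM class referred to, combining stimulus-driven covariates with spike-history dependence in the conditional intensity, has the generic form sketched below. The design matrices and exponential nonlinearity follow the textbook formulation; variable names are illustrative, not the study's fitted parameters.

```python
import numpy as np

def glm_intensity(stim_design, hist_design, k_stim, k_hist, bias, dt):
    """Conditional intensity of a Poisson GLM with spike-history terms.

    stim_design : (T, d_s) matrix of lagged stimulus covariates
    hist_design : (T, d_h) matrix of lagged spike counts of the same cell
    Returns the expected spike count per time bin of width dt.
    """
    drive = stim_design @ k_stim + hist_design @ k_hist + bias
    return np.exp(drive) * dt
```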
A Multidimensional Approach to the Study of Emotion Recognition in Autism Spectrum Disorders
Xavier, Jean; Vignaud, Violaine; Ruggiero, Rosa; Bodeau, Nicolas; Cohen, David; Chaby, Laurence
2015-01-01
Although deficits in emotion recognition have been widely reported in autism spectrum disorder (ASD), experiments have been restricted to either facial or vocal expressions. Here, we explored multimodal emotion processing in children with ASD (N = 19) and with typical development (TD, N = 19), considering unimodal (faces and voices) and multimodal (faces/voices simultaneously) stimuli and developmental comorbidities (neuro-visual, language and motor impairments). Compared to TD controls, children with ASD had rather high and heterogeneous emotion recognition scores but also showed several significant differences: lower emotion recognition scores for visual stimuli, for neutral emotion, and a greater number of saccades during the visual task. Multivariate analyses showed that: (1) the difficulties they experienced with visual stimuli were partially alleviated with multimodal stimuli. (2) Developmental age was significantly associated with emotion recognition in TD children, whereas this was the case only for the multimodal task in children with ASD. (3) Language impairments tended to be associated with the emotion recognition scores of children with ASD in the auditory modality. Conversely, in the visual or bimodal (visuo-auditory) tasks, no impact of developmental coordination disorder or neuro-visual impairments was found. We conclude that impaired emotion processing constitutes a dimension to explore in the field of ASD, as research has the potential to define more homogeneous subgroups and tailored interventions. However, it is clear that developmental age, the nature of the stimuli, and other developmental comorbidities must also be taken into account when studying this dimension. PMID:26733928
Repetition priming of face recognition in a serial choice reaction-time task.
Roberts, T; Bruce, V
1989-05-01
Marshall & Walker (1987) found that pictorial stimuli yield visual priming that is disrupted by an unpredictable visual event in the response-stimulus interval. They argue that visual stimuli are represented in memory in the form of distinct visual and object codes. Bruce & Young (1986) propose similar pictorial, structural and semantic codes which mediate the recognition of faces, yet repetition priming results obtained with faces as stimuli (Bruce & Valentine, 1985), and with objects (Warren & Morton, 1982), are quite different from those of Marshall & Walker (1987), in the sense that recognition is facilitated by pictures presented 20 minutes earlier. The experiment reported here used different views of familiar and unfamiliar faces as stimuli in a serial choice reaction-time task and found that, with identical pictures, repetition priming survives an intervening item requiring a response, with both familiar and unfamiliar faces. Furthermore, with familiar faces such priming was present even when the view of the prime was different from the target. The theoretical implications of these results are discussed.
Brain activation by visual erotic stimuli in healthy middle aged males.
Kim, S W; Sohn, D W; Cho, Y-H; Yang, W S; Lee, K-U; Juh, R; Ahn, K-J; Chung, Y-A; Han, S-I; Lee, K H; Lee, C U; Chae, J-H
2006-01-01
The objective of the present study was to identify brain centers whose activity changes are related to erotic visual stimuli in healthy, heterosexual, middle-aged males. Ten heterosexual, right-handed males with normal sexual function were entered into the present study (mean age 52 years, range 46-55). All potential subjects were screened in a 1-h interview and were encouraged to fill out questionnaires including the Brief Male Sexual Function Inventory. All subjects with a history of sexual arousal disorder or erectile dysfunction were excluded. We performed functional brain magnetic resonance imaging (fMRI) in male volunteers while a film alternating between erotic and nonerotic segments was played for 14 min and 9 s. The major areas of activation associated with sexual arousal to visual stimuli were the occipitotemporal area, anterior cingulate gyrus, insula, orbitofrontal cortex, and caudate nucleus. However, the hypothalamus and thalamus were not activated. We suggest that the nonactivation of the hypothalamus and thalamus in middle-aged males may be responsible for the lesser physiological arousal in response to erotic visual stimuli.
Rossion, Bruno; Dricot, Laurence; Goebel, Rainer; Busigny, Thomas
2011-01-01
How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed into local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's face-like paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus (“Fusiform face area”) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no “occipital face area”). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex. PMID:21267432
Effect of eye position during human visual-vestibular integration of heading perception.
Crane, Benjamin T
2017-09-01
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.
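The Bayesian ideal prediction mentioned here is the reliability-weighted average of the single-cue PSEs. A minimal sketch follows; the noise values are purely illustrative, although the PSE shifts echo the numbers reported above.

```python
def predicted_pse(pse_visual, sigma_visual, pse_inertial, sigma_inertial):
    """Maximum-likelihood cue combination: weights proportional to 1/sigma^2."""
    w_visual = (1 / sigma_visual**2) / (1 / sigma_visual**2 + 1 / sigma_inertial**2)
    return w_visual * pse_visual + (1 - w_visual) * pse_inertial

# At low visual coherence the visual noise is large, so the combined PSE
# should sit near the inertial PSE, as observed (sigma values are made up).
print(predicted_pse(pse_visual=-10.2, sigma_visual=12.0,
                    pse_inertial=4.6, sigma_inertial=3.0))
```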
Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching
2010-06-01
Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students aged 20 to 24 years (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size: errors were significantly larger when the standard stimuli were smaller than when they were larger. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.
Using Prosopagnosia to Test and Modify Visual Recognition Theory.
O'Brien, Alexander M
2018-02-01
Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions or categorical object differences in visual discrimination tasks versus the control participants' additional ability to use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.
Neural mechanism for sensing fast motion in dim light.
Li, Ran; Wang, Yi
2013-11-07
Luminance is a fundamental property of visual scenes. A population of neurons in primary visual cortex (V1) is sensitive to uniform luminance. In natural vision, however, the retinal image often changes rapidly. Consequently, the luminance signals visual cells receive are transiently varying. How V1 neurons respond to such luminance changes is unknown. By applying large static uniform stimuli or grating stimuli alternating at 25 Hz, which resemble the rapid luminance changes in the environment, we show that approximately 40% of V1 cells responded to rapid luminance changes of uniform stimuli. Most of them strongly preferred luminance decrements. Importantly, when tested with drifting gratings, the preferred speeds of these cells were significantly higher than those of cells responsive to static grating stimuli but not to uniform stimuli. This responsiveness can be accounted for by the preferences for low spatial frequencies and high temporal frequencies. These luminance-sensitive cells subserve the detection of fast motion under conditions of dim illumination.
Recalibration of the Multisensory Temporal Window of Integration Results from Changing Task Demands
Mégevand, Pierre; Molholm, Sophie; Nayak, Ashabari; Foxe, John J.
2013-01-01
The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands. PMID:23951203
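One conventional way to extract the temporal-order-judgment window defined here (the SOA range where discrimination performance stays below 75%) is to fit a cumulative Gaussian psychometric function and read off its 25% and 75% points. The sketch below assumes that formulation, which may differ in detail from the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def p_visual_first(soa, pss, sigma):
    """Probability of reporting 'visual first' as a function of SOA."""
    return norm.cdf(soa, loc=pss, scale=sigma)

def toj_window(soas, prop_visual_first, criterion=0.75):
    """SOA interval within which order discrimination stays below criterion."""
    (pss, sigma), _ = curve_fit(p_visual_first, soas, prop_visual_first,
                                p0=(0.0, 50.0))
    half_width = sigma * norm.ppf(criterion)
    return pss - half_width, pss + half_width
```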
Skilled deaf readers have an enhanced perceptual span in reading.
Bélanger, Nathalie N; Slattery, Timothy J; Mayberry, Rachel I; Rayner, Keith
2012-07-01
Recent evidence suggests that, compared with hearing people, deaf people have enhanced visual attention to simple stimuli viewed in the parafovea and periphery. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to preprocess upcoming words and decide where to look next. In the study reported here, we investigated whether auditory deprivation affects low-level visual processing during reading by comparing the perceptual span of deaf signers who were skilled and less-skilled readers with the perceptual span of skilled hearing readers. Compared with hearing readers, the two groups of deaf readers had a larger perceptual span than would be expected given their reading ability. These results provide the first evidence that deaf readers' enhanced attentional allocation to the parafovea is used during complex cognitive tasks, such as reading.
Visually-guided attention enhances target identification in a complex auditory scene.
Best, Virginia; Ozmeral, Erol J; Shinn-Cunningham, Barbara G
2007-06-01
In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors.
Tohmi, Manavu; Kitaura, Hiroki; Komagata, Seiji; Kudoh, Masaharu; Shibuki, Katsuei
2006-11-08
Experience-dependent plasticity in the visual cortex was investigated using transcranial flavoprotein fluorescence imaging in mice anesthetized with urethane. On- and off-responses in the primary visual cortex were elicited by visual stimuli. Fluorescence responses and field potentials elicited by grating patterns decreased similarly as contrasts of visual stimuli were reduced. Fluorescence responses also decreased as spatial frequency of grating stimuli increased. Compared with intrinsic signal imaging in the same mice, fluorescence imaging showed faster responses with approximately 10 times larger signal changes. Retinotopic maps in the primary visual cortex and area LM were constructed using fluorescence imaging. After monocular deprivation (MD) of 4 d starting from postnatal day 28 (P28), deprived eye responses were suppressed compared with nondeprived eye responses in the binocular zone but not in the monocular zone. Imaging faithfully recapitulated a critical period for plasticity with maximal effects of MD observed around P28 and not in adulthood even under urethane anesthesia. Visual responses were compared before and after MD in the same mice, in which the skull was covered with clear acrylic dental resin. Deprived eye responses decreased after MD, whereas nondeprived eye responses increased. Effects of MD during a critical period were tested 2 weeks after reopening of the deprived eye. Significant ocular dominance plasticity was observed in responses elicited by moving grating patterns, but no long-lasting effect was found in visual responses elicited by light-emitting diode light stimuli. The present results indicate that transcranial flavoprotein fluorescence imaging is a powerful tool for investigating experience-dependent plasticity in the mouse visual cortex.
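Fluorescence responses in imaging experiments of this kind are conventionally expressed as the relative change from a pre-stimulus baseline (ΔF/F). A minimal sketch of that computation follows; the baseline window is an assumption for illustration.

```python
import numpy as np

def delta_f_over_f(frames, baseline=slice(0, 20)):
    """frames: (time, x, y) fluorescence image stack.
    Returns the relative fluorescence change from the mean of the
    pre-stimulus baseline frames."""
    f0 = frames[baseline].mean(axis=0)
    return (frames - f0) / f0
```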
Organic light emitting board for dynamic interactive display
Kim, Eui Hyuk; Cho, Sung Hwan; Lee, Ju Han; Jeong, Beomjin; Kim, Richard Hahnkee; Yu, Seunggun; Lee, Tae-Woo; Shim, Wooyoung; Park, Cheolmin
2017-01-01
Interactive displays involve the interfacing of a stimuli-responsive sensor with a visual human-readable response. Here, we describe a polymeric electroluminescence-based stimuli-responsive display method that simultaneously detects external stimuli and visualizes the stimulant object. This organic light-emitting board is capable of both sensing and direct visualization of a variety of conductive information. Simultaneous sensing and visualization of the conductive substance is achieved when the conductive object is coupled with the light emissive material layer on application of alternating current. A variety of conductive materials can be detected regardless of their work functions, and thus information written by a conductive pen is clearly visualized, as is a human fingerprint with natural conductivity. Furthermore, we demonstrate that integration of the organic light-emitting board with a fluidic channel readily allows for dynamic monitoring of metallic liquid flow through the channel, which may be suitable for biological detection and imaging applications. PMID:28406151
Gestalt perception modulates early visual processing.
Herrmann, C S; Bosch, V
2001-04-17
We examined whether early visual processing reflects perceptual properties of a stimulus in addition to physical features. We recorded event-related potentials (ERPs) of 13 subjects in a visual classification task. We used four different stimuli which were all composed of four identical elements. One of the stimuli constituted an illusory Kanizsa square; another was composed of the same number of collinear line segments, but the elements did not form a Gestalt. In addition, a target and a control stimulus were used which were arranged differently. These stimuli allow us to differentiate the processing of collinear line elements (stimulus features) and illusory figures (perceptual properties). The visual N170 in response to the illusory figure was significantly larger than that in response to the other collinear stimulus. This is taken to indicate that the visual N170 reflects cognitive processes of Gestalt perception in addition to attentional processes and physical stimulus properties.
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811
Left hemispheric advantage for numerical abilities in the bottlenose dolphin.
Kilian, Annette; von Fersen, Lorenzo; Güntürkün, Onur
2005-02-28
In a two-choice discrimination paradigm, a bottlenose dolphin discriminated relational dimensions between visual numerosity stimuli under monocular viewing conditions. After prior binocular acquisition of the task, two monocular test series with different number stimuli were conducted. In accordance with recent studies on visual lateralization in the bottlenose dolphin, our results revealed an overall advantage of the right visual field. Due to the complete decussation of the optic nerve fibers, this suggests a specialization of the left hemisphere for analysing relational features between stimuli as required in tests for numerical abilities. These processes are typically right hemisphere-based in other mammals (including humans) and birds. The present data provide further evidence for a general right visual field advantage in bottlenose dolphins for visual information processing. It is thus assumed that dolphins possess a unique functional architecture of their cerebral asymmetries. © 2004 Elsevier B.V. All rights reserved.
Neural circuits underlying visually evoked escapes in larval zebrafish
Dunn, Timothy W.; Gebhardt, Christoph; Naumann, Eva A.; Riegler, Clemens; Ahrens, Misha B.; Engert, Florian; Del Bene, Filippo
2015-01-01
Escape behaviors deliver organisms away from imminent catastrophe. Here, we characterize behavioral responses of freely swimming larval zebrafish to looming visual stimuli simulating predators. We report that the visual system alone can recruit lateralized, rapid escape motor programs, similar to those elicited by mechanosensory modalities. Two-photon calcium imaging of retino-recipient midbrain regions isolated the optic tectum as an important center processing looming stimuli, with ensemble activity encoding the critical image size determining escape latency. Furthermore, we describe activity in retinal ganglion cell terminals and superficial inhibitory interneurons in the tectum during looming and propose a model for how temporal dynamics in tectal periventricular neurons might arise from computations between these two fundamental constituents. Finally, laser ablations of hindbrain circuitry confirmed that visual and mechanosensory modalities share the same premotor output network. Together, we establish a circuit for the processing of aversive stimuli in the context of an innate visual behavior. PMID:26804997
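A looming stimulus of the kind used here is typically parameterized by the angular size of an object of radius r approaching the eye at constant speed v. The sketch below generates that expansion profile; the parameterization is the standard one from the looming literature, not taken from the paper's methods.

```python
import numpy as np

def looming_angle(t, radius, speed, t_collision):
    """Full angular size (radians) at times t < t_collision of an object of
    the given radius approaching the eye at constant speed."""
    distance = speed * (t_collision - t)  # remaining distance; positive before impact
    return 2.0 * np.arctan(radius / distance)
```

The ratio radius/speed controls how abruptly the image expands, and it is this expansion profile that is commonly related to the critical image size and escape latency in looming studies.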
Illusory visual motion stimulus elicits postural sway in migraine patients
Imaizumi, Shu; Honma, Motoyasu; Hibino, Haruo; Koyama, Shinichi
2015-01-01
Although the perception of visual motion modulates postural control, it is unknown whether illusory visual motion elicits postural sway. The present study examined the effect of illusory motion on postural sway in patients with migraine, who tend to be sensitive to it. We measured postural sway for both migraine patients and controls while they viewed static visual stimuli with and without illusory motion. The participants’ postural sway was measured when they closed their eyes either immediately after (Experiment 1), or 30 s after (Experiment 2), viewing the stimuli. The patients swayed more than the controls when they closed their eyes immediately after viewing the illusory motion (Experiment 1), and they swayed less than the controls when they closed their eyes 30 s after viewing it (Experiment 2). These results suggest that static visual stimuli with illusory motion can induce postural sway that may last for at least 30 s in patients with migraine. PMID:25972832
Li, Chenglin; Cao, Xiaohua
2017-01-01
For faces and Chinese characters, a left-side processing bias, in which observers rely more heavily on information conveyed by the left side of stimuli than the right side of stimuli, has been frequently reported in previous studies. However, it remains unclear whether this left-side bias effect is modulated by the reference stimuli's location. The present study adopted the chimeric stimuli task to investigate the influence of the presentation location of the reference stimuli on the left-side bias in face and Chinese character processing. The results demonstrated that when a reference face was presented in the left visual field of its chimeric images, which are centrally presented, the participants showed a preference higher than the no-bias threshold for the left chimeric face; this effect, however, was not observed in the right visual field. This finding indicates that the left-side bias effect in face processing is stronger when the reference face is in the left visual field. In contrast, the left-side bias was observed in Chinese character processing when the reference Chinese character was presented in either the left or right visual field. Together, these findings suggest that although faces and Chinese characters both have a left-side processing bias, the underlying neural mechanisms of this left-side bias might be different. PMID:29018391
Multiscale neural connectivity during human sensory processing in the brain
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir A.; Runnova, Anastasia E.; Frolov, Nikita S.; Makarov, Vladimir V.; Nedaivozov, Vladimir; Koronovskii, Alexey A.; Pisarchik, Alexander; Hramov, Alexander E.
2018-05-01
Stimulus-related brain activity is examined using wavelet-based analysis of neural interactions between occipital and parietal brain areas in the alpha (8-12 Hz) and beta (15-30 Hz) frequency bands. We show that sensory processing of visual stimuli induces a brain response that manifests as different patterns of parieto-occipital interaction in these bands. In the alpha band, the parieto-occipital network is characterized by a homogeneous increase in interaction between all interconnected areas, both within the occipital and parietal lobes and between them. In the beta band, the occipital lobe takes a leading role in the dynamics of the occipital-parietal network: the perception of a visual stimulus excites the visual center in the occipital area and, through the increased parieto-occipital interaction, this excitation is transferred to the parietal area, where the attentional center is located. When the stimuli are highly ambiguous, we find a greater increase in interaction between interconnected areas of the parietal lobe, reflecting increased attention. Based on these mechanisms, we describe the complex response of the parieto-occipital neuronal network during the perception and primary processing of visual stimuli. The results can serve as an essential complement to the existing theory of the neural aspects of visual stimulus processing.
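The band-limited interaction analysis described above is easy to approximate. Below is a minimal, illustrative sketch (our own, not the authors' code) that estimates phase-locking between two EEG channels within the alpha and beta bands using a complex Morlet wavelet; the synthetic signals and the phase-locking value stand in for the paper's recordings and its exact interaction measure.

```python
import numpy as np

def morlet(fs, f0, n_cycles=7):
    """Complex Morlet wavelet centred on f0 Hz."""
    dur = n_cycles / f0
    t = np.arange(-dur, dur, 1.0 / fs)
    sigma = n_cycles / (2 * np.pi * f0)
    return np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma**2))

def band_interaction(x, y, fs, band):
    """Mean phase-locking between channels x and y inside a frequency band.

    Returns a value in [0, 1]; higher means stronger coupling. This is a
    generic phase-locking value, one plausible reading of 'degree of
    interaction' -- the authors' exact measure may differ.
    """
    plvs = []
    for f0 in np.arange(band[0], band[1] + 1):
        w = morlet(fs, f0)
        ax = np.convolve(x, w, mode="same")   # analytic signal at f0
        ay = np.convolve(y, w, mode="same")
        dphi = np.angle(ax) - np.angle(ay)
        plvs.append(np.abs(np.mean(np.exp(1j * dphi))))
    return float(np.mean(plvs))

# Example: coupling between a synthetic occipital and parietal trace
fs = 250
t = np.arange(0, 4, 1 / fs)
occ = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
par = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)
print("alpha (8-12 Hz):", band_interaction(occ, par, fs, (8, 12)))
print("beta (15-30 Hz):", band_interaction(occ, par, fs, (15, 30)))
```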
Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.
Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel
2013-01-01
The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiment 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).
Kiesel, Andrea; Kunde, Wilfried; Pohl, Carsten; Berner, Michael P; Hoffmann, Joachim
2009-01-01
Expertise in a certain stimulus domain enhances perceptual capabilities. In the present article, the authors investigate whether expertise improves perceptual processing to an extent that allows complex visual stimuli to bias behavior unconsciously. Expert chess players judged whether a target chess configuration entailed a checking configuration. These displays were preceded by masked prime configurations that either represented a checking or a nonchecking configuration. Chess experts, but not novice chess players, revealed a subliminal response priming effect, that is, faster responding when prime and target displays were congruent (both checking or both nonchecking) rather than incongruent. Priming generalized to displays that were not used as targets, ruling out simple repetition priming effects. Thus, chess experts were able to judge unconsciously presented chess configurations as checking or nonchecking. A 2nd experiment demonstrated that experts' priming does not occur for simpler but uncommon chess configurations. The authors conclude that long-term practice prompts the acquisition of visual memories of chess configurations with integrated form-location conjunctions. These perceptual chunks enable complex visual processing outside of conscious awareness.
Sensory Optimization by Stochastic Tuning
Jurica, Peter; Gepshtein, Sergei; Tyukin, Ivan; van Leeuwen, Cees
2013-01-01
Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system’s preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit, and find that stochastic changes in strengths of synaptic connections entail fluctuations in parameters of neural receptive fields. The fluctuations correlate with uncertainty of sensory measurement in individual neurons: the higher the uncertainty the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics, and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation. PMID:24219849
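The paper's central mechanism, receptive-field parameters fluctuating with an amplitude proportional to local measurement uncertainty, can be reproduced in a toy random walk. The sketch below is our own demonstration under an arbitrary uncertainty profile; the function uncertainty() and all constants are assumptions, not the authors' model.

```python
import numpy as np

# Toy version of stochastic tuning: each neuron's preference parameter
# takes random steps whose size scales with the local measurement
# uncertainty, so low-uncertainty settings are 'sticky'. With no feedback
# about performance, the population still concentrates near the optimum.

def uncertainty(theta):
    return 0.2 + (theta - 1.0) ** 2          # minimal uncertainty at theta = 1

rng = np.random.default_rng(0)
thetas = rng.uniform(-2.0, 4.0, size=1000)   # one parameter per neuron
for _ in range(20000):
    step = rng.normal(0.0, 0.01, thetas.size) * uncertainty(thetas)
    thetas = np.clip(thetas + step, -2.0, 4.0)

# The spread shrinks and the mean settles near the low-uncertainty point.
print("mean preference:", thetas.mean())     # close to 1.0
print("spread:", thetas.std())               # well below the initial ~1.73
```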
Electromagnetic articulography treatment for an adult with Broca's aphasia and apraxia of speech.
Katz, W F; Bharadwaj, S V; Carstens, B
1999-12-01
Electromagnetic articulography (EMA) was explored as a means of remediating [s]/[symbol in text] articulation deficits in the speech of an adult with Broca's aphasia and apraxia of speech. Over a 1-month period, the subject was provided with 2 different treatments in a counterbalanced procedure: (1) visually guided biofeedback concerning tongue-tip position and (2) a foil treatment in which a computer program delivered voicing-contrast stimuli for simple repetition. Kinematic and perceptual data suggest improvement resulting from visually guided biofeedback, both for nonspeech oral and, to a lesser extent, speech motor tasks. In contrast, the phonetic contrast treated in the foil condition showed only marginal improvement during the therapy session, with performance dropping back to baseline 10 weeks post-treatment. Although preliminary, the findings suggest that visual biofeedback concerning tongue-tip position can be used to treat nonspeech oral and (to a lesser extent) speech motor behavior in adults with Broca's aphasia and apraxia of speech.
Do visually salient stimuli reduce children's risky decisions?
Schwebel, David C; Lucas, Elizabeth K; Pearson, Alana
2009-09-01
Children tend to overestimate their physical abilities, and that tendency is related to risk for unintentional injury. This study tested whether or not children estimate their physical ability differently when exposed to stimuli that were highly visually salient due to fluorescent coloring. Sixty-nine 6-year-olds judged physical ability to complete laboratory-based physical tasks. Half judged ability using tasks that were painted black; the other half judged the same tasks, but the stimuli were striped black and fluorescent lime-green. Results suggest the two groups judged similarly, but children took longer to judge perceptually ambiguous tasks when those tasks were visually salient. In other words, visual salience increased decision-making time but not accuracy of judgment. These findings held true after controlling for demographic and temperament characteristics.
Attention to Multiple Objects Facilitates Their Integration in Prefrontal and Parietal Cortex.
Kim, Yee-Joon; Tsai, Jeffrey J; Ojemann, Jeffrey; Verghese, Preeti
2017-05-10
Selective attention is known to interact with perceptual organization. In visual scenes, individual objects that are distinct and discriminable may occur on their own, or in groups such as a stack of books. The main objective of this study is to probe the neural interaction that occurs between individual objects when attention is directed toward one or more objects. Here we record steady-state visual evoked potentials via electrocorticography to directly assess the responses to individual stimuli and to their interaction. When human participants attend to two adjacent stimuli, prefrontal and parietal cortex shows a selective enhancement of only the neural interaction between stimuli, but not the responses to individual stimuli. When only one stimulus is attended, the neural response to that stimulus is selectively enhanced in prefrontal and parietal cortex. In contrast, early visual areas generally manifest responses to individual stimuli and to their interaction regardless of attentional task, although a subset of the responses is modulated similarly to prefrontal and parietal cortex. Thus, the neural representation of the visual scene as one progresses up the cortical hierarchy becomes more highly task-specific and represents either individual stimuli or their interaction, depending on the behavioral goal. Attention to multiple objects facilitates an integration of objects akin to perceptual grouping. Significance Statement: Individual objects in a visual scene are seen as distinct entities or as parts of a whole. Here we examine how attention to multiple objects affects their neural representation. Previous studies measured single-cell or fMRI responses and obtained only aggregate measures that combined the activity to individual stimuli as well as their potential interaction. Here, we directly measure electrocorticographic steady-state responses corresponding to individual objects and to their interaction using a frequency-tagging technique. Attention to two stimuli increases the interaction component that is a hallmark for perceptual integration of stimuli. Furthermore, this stimulus-specific interaction is represented in prefrontal and parietal cortex in a task-dependent manner.
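The frequency-tagging logic used here can be illustrated compactly: each stimulus is driven at its own frequency, and any nonlinear interaction between stimuli shows up at intermodulation frequencies such as f1 + f2. The sketch below is a generic simulation of that principle; the frequencies and the quadratic nonlinearity are our assumptions, not the study's parameters.

```python
import numpy as np

# Frequency tagging in a nutshell: each stimulus flickers at its own rate,
# so its neural response appears at that frequency; any *interaction*
# between stimuli appears at intermodulation frequencies such as f1 + f2.

fs, dur = 1000, 10.0
t = np.arange(0, dur, 1 / fs)
f1, f2 = 7.0, 11.0
s1, s2 = np.sin(2 * np.pi * f1 * t), np.sin(2 * np.pi * f2 * t)

response = s1 + s2 + 0.3 * s1 * s2            # product term = interaction
spec = np.abs(np.fft.rfft(response)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp(f):
    """Spectral amplitude at the bin nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

print("f1:", amp(f1), "f2:", amp(f2))
print("f1+f2:", amp(f1 + f2), "f2-f1:", amp(f2 - f1))  # nonzero only with interaction
```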
Sugimoto, Fumie; Kimura, Motohiro; Takeda, Yuji; Katayama, Jun'ichi
2017-08-16
In a three-stimulus oddball task, the amplitude of P3a elicited by deviant stimuli increases with an increase in the difficulty of discriminating between standard and target stimuli (i.e. task-difficulty effect on P3a), indicating that attentional capture by deviant stimuli is enhanced with an increase in task difficulty. This enhancement of attentional capture may be explained in terms of the modulation of modality-nonspecific temporal attention; that is, the participant's attention directed to the predicted timing of stimulus presentation is stronger when the task difficulty increases, which results in enhanced attentional capture. The present study examined this possibility with a modified three-stimulus oddball task consisting of a visual standard, a visual target, and four types of deviant stimuli defined by a combination of two modalities (visual and auditory) and two presentation timings (predicted and unpredicted). We expected that if the modulation of temporal attention is involved in enhanced attentional capture, then the task-difficulty effect on P3a should be reduced for unpredicted compared with predicted deviant stimuli irrespective of their modality; this is because the influence of temporal attention should be markedly weaker for unpredicted compared with predicted deviant stimuli. The results showed that the task-difficulty effect on P3a was significantly reduced for unpredicted compared with predicted deviant stimuli in both the visual and the auditory modalities. This result suggests that the modulation of modality-nonspecific temporal attention induced by the increase in task difficulty is at least partly involved in the enhancement of attentional capture by deviant stimuli.
The “Visual Shock” of Francis Bacon: an essay in neuroesthetics
Zeki, Semir; Ishizu, Tomohiro
2013-01-01
In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a “visual shock.”We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his “visual shock” because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts. PMID:24339812
The "Visual Shock" of Francis Bacon: an essay in neuroesthetics.
Zeki, Semir; Ishizu, Tomohiro
2013-01-01
In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a "visual shock."We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his "visual shock" because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts.
Taniguchi, Darcy A. A.; Gagnon, Yakir; Wheeler, Benjamin R.; Johnsen, Sönke; Jaffe, Jules S.
2015-01-01
Cuttlefish are cephalopods capable of rapid camouflage responses to visual stimuli. However, it is not always clear to what these animals are responding. Previous studies have found cuttlefish to be more responsive to lateral stimuli rather than substrate. However, in previous works, the cuttlefish were allowed to settle next to the lateral stimuli. In this study, we examine whether juvenile cuttlefish (Sepia officinalis) respond more strongly to visual stimuli seen on the sides versus the bottom of an experimental aquarium, specifically when the animals are not allowed to be adjacent to the tank walls. We used the Sub Sea Holodeck, a novel aquarium that employs plasma display screens to create a variety of artificial visual environments without disturbing the animals. Once the cuttlefish were acclimated, we compared the variability of camouflage patterns that were elicited from displaying various stimuli on the bottom versus the sides of the Holodeck. To characterize the camouflage patterns, we classified them in terms of uniform, disruptive, and mottled patterning. The elicited camouflage patterns from different bottom stimuli were more variable than those elicited by different side stimuli, suggesting that S. officinalis responds more strongly to the patterns displayed on the bottom than the sides of the tank. We argue that the cuttlefish pay more attention to the bottom of the Holodeck because it is closer and thus more relevant for camouflage. PMID:26465786
Audio-visual synchrony and feature-selective attention co-amplify early visual processing.
Keitel, Christian; Müller, Matthias M
2016-05-01
Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both, attending to the colour of a stimulus and its synchrony with the tone, enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.
2011-01-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011
Statistical regularities in art: Relations with visual coding and perception.
Graham, Daniel J; Redies, Christoph
2010-07-21
Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study.
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration at late latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.
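The comparison underlying such ERP integration studies is the additive model: the response to combined audiovisual stimulation is tested against the sum of the unisensory responses, and any reliable AV - (A + V) difference counts as integration. The following sketch illustrates that test on synthetic epochs; the component latencies, amplitudes, and analysis window are placeholders, not this study's values.

```python
import numpy as np

# Additive test for multisensory integration: compare the ERP to combined
# audiovisual stimulation (AV) against the sum of the unisensory ERPs
# (A + V). A reliable AV - (A + V) difference wave is the integration effect.

rng = np.random.default_rng(1)
n_trials, fs = 60, 1000
t = np.arange(0.0, 0.5, 1.0 / fs)            # 500-ms epochs

def trials(components, noise=0.5):
    """Epochs (trials x time): Gaussian-bump ERP components plus noise."""
    wave = sum(a * np.exp(-((t - lat) ** 2) / (2 * 0.02 ** 2))
               for lat, a in components)
    return wave + rng.normal(0.0, noise, (n_trials, t.size))

A  = trials([(0.100, 2.0)])                              # auditory only
V  = trials([(0.150, 3.0)])                              # visual only
AV = trials([(0.100, 2.0), (0.150, 3.0), (0.180, 0.8)])  # extra bump = integration

diff = AV.mean(0) - (A.mean(0) + V.mean(0))              # AV - (A + V)
win = slice(int(0.14 * fs), int(0.22 * fs))
print("mean integration effect, 140-220 ms:", diff[win].mean())
```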
Testing memory for unseen visual stimuli in patients with extinction and spatial neglect.
Vuilleumier, Patrik; Schwartz, Sophie; Clarke, Karen; Husain, Masud; Driver, Jon
2002-08-15
Visual extinction after right parietal damage involves a loss of awareness for stimuli in the contralesional field when presented concurrently with ipsilesional stimuli, although contralesional stimuli are still perceived if presented alone. However, extinguished stimuli can still receive some residual on-line processing, without awareness. Here we examined whether such residual processing of extinguished stimuli can produce implicit and/or explicit memory traces lasting many minutes. We tested four patients with right parietal damage and left extinction on two sessions, each including distinct study and subsequent test phases. At study, pictures of objects were shown briefly in the right, left, or both fields. Patients were asked to name them without memory instructions (Session 1) or to make an indoor/outdoor categorization and memorize them (Session 2). They extinguished most left stimuli on bilateral presentation. During the test (up to 48 min later), fragmented pictures of the previously exposed objects (or novel objects) were presented alone in either field. Patients had to identify each object and then judge whether it had previously been exposed. Identification of fragmented pictures was better for previously exposed objects that had been consciously seen and critically also for objects that had been extinguished (as compared with novel objects), with no influence of the depth of processing during study. By contrast, explicit recollection occurred only for stimuli that were consciously seen at study and increased with depth of processing. These results suggest implicit but not explicit memory for extinguished visual stimuli in parietal patients.
Visual Memories Bypass Normalization
Bloem, Ilona M.; Watanabe, Yurika L.; Kibbe, Melissa M.; Ling, Sam
2018-01-01
How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores—neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation. PMID:29596038
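Divisive normalization, the computation that perception obeys but visual memory apparently bypasses here, divides each response by a pool of all concurrent drives. A minimal sketch of the canonical form follows; the exponent and semisaturation constant are generic textbook values, not fitted to this study.

```python
import numpy as np

# Canonical divisive normalization: each stimulus drive is raised to a
# power and divided by a semisaturation term plus the pooled drive of all
# concurrent stimuli, so stimuli mutually suppress one another.

def normalize(drives, n=2.0, sigma=0.1):
    d = np.asarray(drives, dtype=float) ** n
    return d / (sigma ** n + d.sum())

print(normalize([0.5]))        # lone stimulus: near-saturated response
print(normalize([0.5, 0.5]))   # paired stimuli: each response suppressed
```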
Narayanan, Amal; Chandel, Shubham; Ghosh, Nirmalya; De, Priyadarsi
2015-09-15
Probing the volume phase transition behavior of superdiluted polymer solutions both micro- and macroscopically remains an outstanding challenge. In this regard, we have explored 4 × 4 spectral Mueller matrix measurement and its inverse analysis for uncovering microarchitectural facts about the stimuli responsiveness of "smart" polymers. The phase separation behavior of thermoresponsive poly(N-isopropylacrylamide) (PNIPAM) and pH responsive poly(N,N-(dimethylamino)ethyl methacrylate) (PDMAEMA) and their copolymers was analyzed in terms of Mueller matrix derived polarization parameters, namely, depolarization (Δ), diattenuation (d), and linear retardance (δ). The Δ, d, and δ parameters provided useful information on both macro- and microstructural alterations during the phase separation. Additionally, the two-step action ((i) breakage of polymer-water hydrogen bonding and (ii) polymer-polymer aggregation) at the molecular microenvironment during cloud point generation was successfully probed via these parameters. It is demonstrated that, in comparison to present techniques for assessing the hydrophobic-hydrophilic switchover of simple stimuli-responsive polymers, Mueller matrix polarimetry offers an important advantage: it requires a polymer solution a few hundred times more dilute (0.01 mg/mL, 1.1-1.4 μM) in a low-volume format.
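Two of the scalar descriptors named above can be read fairly directly off a measured 4 x 4 Mueller matrix. The sketch below computes the diattenuation of the first row and a standard depolarization index; the linear retardance the authors also report requires a full Lu-Chipman polar decomposition, which we omit. The formulas are the common textbook ones and may differ in detail from the authors' inverse analysis.

```python
import numpy as np

# Scalar descriptors of a 4x4 Mueller matrix M:
# - diattenuation of the first row,
# - a depolarization index (1 = non-depolarizing, 0 = ideal depolarizer).

def diattenuation(M):
    return np.sqrt(M[0, 1]**2 + M[0, 2]**2 + M[0, 3]**2) / M[0, 0]

def depolarization_index(M):
    return np.sqrt((np.sum(M**2) - M[0, 0]**2) / 3.0) / M[0, 0]

ideal_depolarizer = np.diag([1.0, 0.0, 0.0, 0.0])
polarizer_h = 0.5 * np.array([[1., 1, 0, 0],
                              [1, 1, 0, 0],
                              [0, 0, 0, 0],
                              [0, 0, 0, 0]])
print(diattenuation(polarizer_h), depolarization_index(polarizer_h))            # 1.0, 1.0
print(diattenuation(ideal_depolarizer), depolarization_index(ideal_depolarizer))  # 0.0, 0.0
```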
Natural image statistics mediate brightness 'filling in'.
Dakin, Steven C; Bex, Peter J
2003-11-22
Although the human visual system can accurately estimate the reflectance (or lightness) of surfaces under enormous variations in illumination, two equiluminant grey regions can be induced to appear quite different simply by placing a light-dark luminance transition between them. This illusion, the Craik-Cornsweet-O'Brien (CCOB) effect, has been taken as evidence for a low-level 'filling-in' mechanism subserving lightness perception. Here, we present evidence that the mechanism responsible for the CCOB effect operates not via propagation of a neural signal across space but by amplification of the low spatial frequency (SF) structure of the image. We develop a simple computational model that relies on the statistics of natural scenes actively to reconstruct the image that is most likely to have caused an observed series of responses across SF channels. This principle is tested psychophysically by deriving classification images (CIs) for subjects' discrimination of the contrast polarity of CCOB stimuli masked with noise. CIs resemble 'filled-in' stimuli; i.e. observers rely on portions of the stimuli that contain no information per se but that correspond closely to the reported perceptual completion. As predicted by the model, the filling-in process is contingent on the presence of appropriate low SF structure.
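The reconstruction idea can be caricatured in one dimension: keep the phase spectrum of the CCOB stimulus but push its amplitude spectrum toward the 1/f profile typical of natural scenes, thereby amplifying the weak low spatial frequencies. The sketch below is our simplification of the authors' probabilistic channel model, not their implementation.

```python
import numpy as np

# 1-D toy of the CCOB account: the stimulus has two equal grey plateaus
# separated by a light-dark cusp. Imposing a 1/f amplitude spectrum while
# keeping the measured phases makes the reconstruction step-like, so the
# plateaus now differ -- as they do perceptually.

n, tau = 1024, 40.0
x = np.arange(n) - n // 2
cusp = np.sign(x + 0.5) * np.exp(-np.abs(x) / tau)    # light-dark border cusp

spec = np.fft.rfft(cusp)
k = np.arange(spec.size, dtype=float)
target = np.zeros_like(k)
target[1:] = 1.0 / k[1:]                              # 1/f amplitude prior
recon = np.fft.irfft(target * np.exp(1j * np.angle(spec)), n)
recon /= np.abs(recon).max()

third = n // 3
print("plateau means before:", cusp[:third].mean(), cusp[-third:].mean())
print("plateau means after :", recon[:third].mean(), recon[-third:].mean())
```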
Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems
Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; del Giudice, Paolo
2015-01-01
Neuromorphic chips embody computational principles operating in the nervous system, into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases. PMID:26463272
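The attractor principle the chip implements can be shown in a few lines with a digital Hopfield-style network: Hebbian weights store prototypes, a corrupted stimulus sets the initial state, and relaxation retrieves the stored pattern. This sketch only illustrates the computational principle; the paper's network is a spiking neuromorphic circuit with on-chip plasticity.

```python
import numpy as np

# Point-attractor retrieval: Hebbian weights store binary patterns; a
# corrupted stimulus sets the initial state; relaxation under the
# recurrent dynamics recovers the stored prototype (associative memory).

rng = np.random.default_rng(2)
N, P = 200, 5
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)                      # no self-connections

state = patterns[0].copy()
flip = rng.choice(N, size=40, replace=False)
state[flip] *= -1                           # corrupt 20% of the stimulus

for _ in range(10):                         # synchronous relaxation
    state = np.sign(W @ state)
    state[state == 0] = 1

print("overlap with stored pattern:", (state @ patterns[0]) / N)  # ~1.0
```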
Leibo, Joel Z.; Liao, Qianli; Freiwald, Winrich A.; Anselmi, Fabio; Poggio, Tomaso
2017-01-01
Summary: The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations like depth-rotations [1, 2]. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple-complex cells operations [3, 4, 5, 6]. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here we demonstrate that one specific biologically-plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli like faces at intermediate levels of the architecture and show why it does so. Thus the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. PMID:27916522
Hernández, Oscar E; Zurek, Eduardo E
2013-05-15
We present a software tool called SENB, which allows the geometric and biophysical neuronal properties in a simple computational model of a Hodgkin-Huxley (HH) axon to be changed. The aim of this work is to develop a didactic and easy-to-use computational tool in the NEURON simulation environment, which allows graphical visualization of both the passive and active conduction parameters and the geometric characteristics of a cylindrical axon with HH properties. The SENB software offers several advantages for teaching and learning electrophysiology. First, SENB offers ease and flexibility in determining the number of stimuli. Second, SENB allows immediate and simultaneous visualization, in the same window and time frame, of the evolution of the electrophysiological variables. Third, SENB calculates parameters such as time and space constants, stimuli frequency, cellular area and volume, sodium and potassium equilibrium potentials, and propagation velocity of the action potentials. Furthermore, it allows the user to see all this information immediately in the main window. Finally, with just one click SENB can save an image of the main window as evidence. The SENB software is didactic and versatile, and can be used to improve and facilitate the teaching and learning of the underlying mechanisms in the electrical activity of an axon using the biophysical properties of the squid giant axon.
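For readers without NEURON, the point-membrane dynamics SENB teaches can be reproduced with the classic space-clamped Hodgkin-Huxley equations and forward Euler integration, as in the sketch below; the stimulus amplitude and duration are arbitrary choices, and SENB's cable geometry and teaching interface are of course not reproduced.

```python
import numpy as np

# Space-clamped Hodgkin-Huxley membrane with the classic squid-axon
# constants (V in mV, rest at -65 mV), integrated with forward Euler.

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3       # uF/cm2, mS/cm2
ENa, EK, EL = 50.0, -77.0, -54.4             # mV

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, T = 0.01, 50.0                           # ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes = 0
for i in range(int(T / dt)):
    I = 10.0 if 5.0 <= i * dt < 45.0 else 0.0   # uA/cm2 step stimulus
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)   # gating kinetics
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V_new = V + dt * (I - I_ion) / C
    if V < 0 <= V_new:                       # upward zero crossing = spike
        spikes += 1
    V = V_new

print("spikes during the 40-ms current step:", spikes)
```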
Motivationally Significant Stimuli Show Visual Prior Entry: Evidence for Attentional Capture
ERIC Educational Resources Information Center
West, Greg L.; Anderson, Adam A. K.; Pratt, Jay
2009-01-01
Previous studies that have found attentional capture effects for stimuli of motivational significance do not directly measure initial attentional deployment, leaving it unclear to what extent these items produce attentional capture. Visual prior entry, as measured by temporal order judgments (TOJs), rests on the premise that allocated attention…
Visual Categorization of Natural Movies by Rats
Vinken, Kasper; Vermaercke, Ben
2014-01-01
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. PMID:25100598
Predicting Visual Consciousness Electrophysiologically from Intermittent Binocular Rivalry
O’Shea, Robert P.; Kornmeier, Jürgen; Roeber, Urte
2013-01-01
Purpose: We sought brain activity that predicts visual consciousness. Methods: We used electroencephalography (EEG) to measure brain activity to a 1000-ms display of sine-wave gratings, oriented vertically in one eye and horizontally in the other. This display yields binocular rivalry: irregular alternations in visual consciousness between the images viewed by the eyes. We replaced both gratings with 200 ms of darkness, the gap, before showing a second display of the same rival gratings for another 1000 ms. We followed this by a 1000-ms mask then a 2000-ms inter-trial interval (ITI). Eleven participants pressed keys after the second display in numerous trials to say whether the orientation of the visible grating changed from before to after the gap or not. Each participant also responded to numerous non-rivalry trials in which the gratings had identical orientations for the two eyes and for which the orientation of both either changed physically after the gap or did not. Results: We found that greater activity from lateral occipital-parietal-temporal areas about 180 ms after initial onset of rival stimuli predicted a change in visual consciousness more than 1000 ms later, on re-presentation of the rival stimuli. We also found that less activity from parietal, central, and frontal electrodes about 400 ms after initial onset of rival stimuli predicted a change in visual consciousness about 800 ms later, on re-presentation of the rival stimuli. There was no such predictive activity when the change in visual consciousness occurred because the stimuli changed physically. Conclusion: We found early EEG activity that predicted later visual consciousness. Predictive activity 180 ms after onset of the first display may reflect adaptation of the neurons mediating visual consciousness in our displays. Predictive activity 400 ms after onset of the first display may reflect a less-reliable brain state mediating visual consciousness. PMID:24124536
Specific excitatory connectivity for feature integration in mouse primary visual cortex
Molina-Luna, Patricia; Roth, Morgane M.
2017-01-01
Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a ‘like-to-like’ scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a ‘feature binding’ scheme). By comparing predictions from large scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming feature binding connectivity. Unlike under the like-to-like scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1. PMID:29240769
Visual feedback in stuttering therapy
NASA Astrophysics Data System (ADS)
Smolka, Elzbieta
1997-02-01
The aim of this paper is to present results concerning the influence of visual echo and reverberation on the speech of stutterers. The effects of visual stimuli are compared with those of acoustic and visual-acoustic stimuli. We then present methods of implementing visual feedback with the aid of electroluminescent diodes driven by speech signals, together with the concept of a computerized visual echo based on the acoustic recognition of Polish syllabic vowels. All the research and trials carried out at our center, aside from their cognitive aims, are generally directed at the development of new speech correctors for use in stuttering therapy.
Associative visual learning by tethered bees in a controlled visual environment.
Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin
2017-10-10
Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.
Liang, Maojin; Chen, Yuebo; Zhao, Fei; Zhang, Junpeng; Liu, Jiahao; Zhang, Xueyuan; Cai, Yuexin; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing
2017-09-01
Although visual processing recruitment of the auditory cortices has been reported previously in prelingually deaf children who have a rapidly developing brain and no auditory processing, the visual processing recruitment of auditory cortices might be different in processing different visual stimuli and may affect cochlear implant (CI) outcomes. Ten prelingually deaf children, 4 to 6 years old, were recruited for the study. Twenty prelingually deaf subjects, 4 to 6 years old with CIs for 1 year, were also recruited; 10 with well-performing CIs, 10 with poorly performing CIs. Ten age and sex-matched normal-hearing children were recruited as controls. Visual ("sound" photo [photograph with imaginative sound] and "nonsound" photo [photograph without imaginative sound]) evoked potentials were measured in all subjects. P1 at Oz and N1 at the bilateral temporal-frontal areas (FC3 and FC4) were compared. N1 amplitudes were strongest in the deaf children, followed by those with poorly performing CIs, controls and those with well-performing CIs. There was no significant difference between controls and those with well-performing CIs. "Sound" photo stimuli evoked a stronger N1 than "nonsound" photo stimuli. Further analysis showed that only at FC4 in deaf subjects and those with poorly performing CIs were the N1 responses to "sound" photo stimuli stronger than those to "nonsound" photo stimuli. No significant difference was found for the FC3 and FC4 areas. No significant difference was found in N1 latencies and P1 amplitudes or latencies. The results indicate enhanced visual recruitment of the auditory cortices in prelingually deaf children. Additionally, the decrement in visual recruitment of auditory cortices was related to good CI outcomes.
Harris, Jill; Kamke, Marc R
2014-11-01
Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies.
Timing the impact of literacy on visual processing
Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas
2014-01-01
Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460
Visual and vestibular components of motion sickness.
Eyeson-Annan, M; Peterken, C; Brown, B; Atchison, D
1996-10-01
The relative importance of visual and vestibular information in the etiology of motion sickness (MS) is not well understood, but these factors can be manipulated by inducing Coriolis and pseudo-Coriolis effects in experimental subjects. We hypothesized that visual and vestibular information are equivalent in producing MS. The experiments reported here aim, in part, to examine the relative influence of Coriolis and pseudo-Coriolis effects in inducing MS. We induced MS symptoms by combinations of whole body rotation and tilt, and environment rotation and tilt, in 22 volunteer subjects. Subjects participated in all of the experiments with at least 2 d between each experiment to dissipate after-effects. We recorded MS signs and symptoms when only visual stimulation was applied, when only vestibular stimulation was applied, and when both visual and vestibular stimulation were applied under specific conditions of whole body and environmental tilt. Visual stimuli produced more symptoms of MS than vestibular stimuli when only visual or vestibular stimuli were used (ANOVA F = 7.94, df = 1, 21 p = 0.01), but there was no significant difference in MS production when combined visual and vestibular stimulation were used to produce the Coriolis effect or pseudo-Coriolis effect (ANOVA: F = 0.40, df = 1, 21 p = 0.53). This was further confirmed by examination of the order in which the symptoms occurred and the lack of a correlation between previous experience and visually induced MS. Visual information is more important than vestibular input in causing MS when these stimuli are presented in isolation. In conditions where both visual and vestibular information are present, cross-coupling appears to occur between the pseudo-Coriolis effect and the Coriolis effect, as these two conditions are not significantly different in producing MS symptoms.
Can, Wang; Zhuoran, Zhao; Zheng, Jin
2017-04-01
In the past 10 years, thousands of people have claimed to be affected by trypophobia, which is the fear of objects with small holes. Recent research suggests that people do not fear the holes; rather, images of clustered holes, which share basic visual characteristics with venomous organisms, lead to nonconscious fear. In the present study, both self-reported measures and the Preschool Single Category Implicit Association Test were adapted for use with preschoolers to investigate whether discomfort related to trypophobic stimuli was grounded in their visual features or based on a nonconsciously associated fear of venomous animals. The results indicated that trypophobic stimuli were associated with discomfort in children. This discomfort seemed to be related to the typical visual characteristics and pattern properties of trypophobic stimuli rather than to nonconscious associations with venomous animals. The association between trypophobic stimuli and venomous animals vanished when the typical visual characteristics of trypophobic features were removed from colored photos of venomous animals. Thus, the discomfort felt toward trypophobic images might be an instinctive response to their visual characteristics rather than the result of a learned but nonconscious association with venomous animals. Therefore, it is questionable whether it is justified to legitimize trypophobia.
Stekelenburg, Jeroen J; Vroomen, Jean
2012-01-01
In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
Shape and color conjunction stimuli are represented as bound objects in visual working memory.
Luria, Roy; Vogel, Edward K
2011-05-01
The integrated object view of visual working memory (WM) argues that objects (rather than features) are the building blocks of visual WM, so that adding an extra feature to an object does not result in any extra cost to WM capacity. Alternative views have shown that complex objects consume additional WM storage capacity, suggesting that such objects may not be represented as bound objects. Additionally, it was argued that two features from the same dimension (i.e., color-color) do not form an integrated object in visual WM. This led some to argue for a "weak" object view of visual WM. We used the contralateral delay activity (CDA) as an electrophysiological marker of WM capacity to test these alternative hypotheses to the integrated object account. In two experiments we presented complex stimuli and color-color conjunction stimuli, and compared performance in displays that had one object but varying degrees of feature complexity. The results supported the integrated object account by showing that the CDA amplitude corresponded to the number of objects regardless of the number of features within each object, even for complex objects or color-color conjunction stimuli. Copyright © 2010 Elsevier Ltd. All rights reserved.
Adaptive optics without altering visual perception
Koenig, D. E.; Hart, N. W.; Hofer, H. J.
2014-01-01
Adaptive optics combined with visual psychophysics creates the potential to study the relationship between visual function and the retina at the cellular scale. This potential is hampered, however, by visual interference from the wavefront-sensing beacon used during correction. For example, we have previously shown that even a dim, visible beacon can alter stimulus perception (Hofer, H. J., Blaschke, J., Patolia, J., & Koenig, D. E. (2012). Fixation light hue bias revisited: Implications for using adaptive optics to study color vision. Vision Research, 56, 49-56). Here we describe a simple strategy employing a longer wavelength (980 nm) beacon that, in conjunction with appropriate restriction on timing and placement, allowed us to perform psychophysics when dark adapted without altering visual perception. The method was verified by comparing detection and color appearance of foveally presented small spot stimuli with and without the wavefront beacon present in 5 subjects. As an important caution, we found that significant perceptual interference can occur even with a subliminal beacon when additional measures are not taken to limit exposure. Consequently, the lack of perceptual interference should be verified for a given system, and not assumed based on invisibility of the beacon. PMID:24607992
Auditory enhancement of visual perception at threshold depends on visual abilities.
Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène
2011-06-17
Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events is a long-standing debate. Here we revisit this question, by testing the influence of auditory stimuli on visual detection threshold, in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient to improve perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.
Jolij, Jacob; Scholte, H Steven; van Gaal, Simon; Hodgson, Timothy L; Lamme, Victor A F
2011-12-01
Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene segmentation related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive, but erroneous, prosaccades, but not antisaccades, can be triggered by earlier visual processes. In other words: The brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response.
Magnetic stimulation of visual cortex impairs perceptual learning.
Baldassarre, Antonello; Capotosto, Paolo; Committeri, Giorgia; Corbetta, Maurizio
2016-12-01
The ability to learn and process visual stimuli more efficiently is important for survival. Previous neuroimaging studies have shown that perceptual learning on a shape identification task differently modulates activity in both frontal-parietal cortical regions and visual cortex (Sigman et al., 2005; Lewis et al., 2009). Specifically, fronto-parietal regions (i.e., the intraparietal sulcus, pIPS) became less activated for trained as compared to untrained stimuli, while visual regions (i.e., V2d/V3 and LO) exhibited higher activation for familiar shapes. Here, after the intensive training, we employed transcranial magnetic stimulation over both visual occipital and parietal regions, previously shown to be modulated, to investigate their causal role in learning the shape identification task. We report that interference with V2d/V3 and LO increased reaction times to learned stimuli as compared to the pIPS and sham control conditions. Moreover, the impairments observed after stimulation over the two visual regions were positively correlated. These results strongly support a causal role of the visual network in the control of perceptual learning. Copyright © 2016 Elsevier Inc. All rights reserved.
Task Analysis Schema Based on Cognitive Style and Supplantation
Ausburn, F. B. (Oklahoma Univ., Norman, College of Education)
1980-02-01
[Fragmentary scanned record; only partial text is legible.] The legible fragments describe a task-analysis schema matching task types to learner characteristics, including tasks requiring use of kinesthetic or tactile stimuli, a visual/haptic dimension (preference for kinesthetic stimuli; ability to transform kinesthetic stimuli into visual images; ability to learn directly from tactile or kinesthetic impressions), and field independence/dependence.
The effect of spatial attention on invisible stimuli.
Shin, Kilho; Stolte, Moritz; Chong, Sang Chul
2009-10-01
The influence of selective attention on visual processing is widespread. Recent studies have demonstrated that spatial attention can affect processing of invisible stimuli. However, it has been suggested that this effect is limited to low-level features, such as line orientations. The present experiments investigated whether spatial attention can influence both low-level (contrast threshold) and high-level (gender discrimination) adaptation, using the same method of attentional modulation for both types of stimuli. We found that spatial attention was able to increase the amount of adaptation to low- as well as to high-level invisible stimuli. These results suggest that attention can influence perceptual processes independent of visual awareness.
Sakaki, Michiko; Niki, Kazuhisa; Mather, Mara
2012-03-01
The present study addressed the hypothesis that emotional stimuli relevant to survival or reproduction (biologically emotional stimuli) automatically affect cognitive processing (e.g., attention, memory), while those relevant to social life (socially emotional stimuli) require elaborative processing to modulate attention and memory. Results of our behavioral studies showed that (1) biologically emotional images hold attention more strongly than do socially emotional images, (2) memory for biologically emotional images was enhanced even with limited cognitive resources, but (3) memory for socially emotional images was enhanced only when people had sufficient cognitive resources at encoding. Neither images' subjective arousal nor their valence modulated these patterns. A subsequent functional magnetic resonance imaging study revealed that biologically emotional images induced stronger activity in the visual cortex and greater functional connectivity between the amygdala and visual cortex than did socially emotional images. These results suggest that the interconnection between the amygdala and visual cortex supports enhanced attention allocation to biological stimuli. In contrast, socially emotional images evoked greater activity in the medial prefrontal cortex (MPFC) and yielded stronger functional connectivity between the amygdala and MPFC than did biological images. Thus, it appears that emotional processing of social stimuli involves elaborative processing requiring frontal lobe activity.
Sakaki, Michiko; Niki, Kazuhisa; Mather, Mara
2012-01-01
The present study addressed the hypothesis that emotional stimuli relevant to survival or reproduction (biologically emotional stimuli) automatically affect cognitive processing (e.g., attention; memory), while those relevant to social life (socially emotional stimuli) require elaborative processing to modulate attention and memory. Results of our behavioral studies showed that: a) biologically emotional images hold attention more strongly than socially emotional images, b) memory for biologically emotional images was enhanced even with limited cognitive resources, but c) memory for socially emotional images was enhanced only when people had sufficient cognitive resources at encoding. Neither images’ subjective arousal nor their valence modulated these patterns. A subsequent functional magnetic resonance imaging study revealed that biologically emotional images induced stronger activity in visual cortex and greater functional connectivity between amygdala and visual cortex than did socially emotional images. These results suggest that the interconnection between the amygdala and visual cortex supports enhanced attention allocation to biological stimuli. In contrast, socially emotional images evoked greater activity in medial prefrontal cortex (MPFC) and yielded stronger functional connectivity between amygdala and MPFC than biological images. Thus, it appears that emotional processing of social stimuli involves elaborative processing requiring frontal lobe activity. PMID:21964552
Effects of dividing attention during encoding on perceptual priming of unfamiliar visual objects.
Soldan, Anja; Mangels, Jennifer A; Cooper, Lynn A
2008-11-01
According to the distractor-selection hypothesis (Mulligan, 2003), dividing attention during encoding reduces perceptual priming when responses to non-critical (i.e., distractor) stimuli are selected frequently and simultaneously with critical stimulus encoding. Because direct support for this hypothesis comes exclusively from studies using familiar word stimuli, the present study tested whether the predictions of the distractor-selection hypothesis extend to perceptual priming of unfamiliar visual objects using the possible/impossible object decision test. Consistent with the distractor-selection hypothesis, Experiments 1 and 2 found no reduction in priming when the non-critical stimuli were presented infrequently and non-synchronously with the critical target stimuli, even though explicit recognition memory was reduced. In Experiment 3, non-critical stimuli were presented frequently and simultaneously during encoding of critical stimuli; however, no decrement in priming was detected, even when encoding time was reduced. These results suggest that priming in the possible/impossible object decision test is relatively immune to reductions in central attention and that not all aspects of the distractor-selection hypothesis generalise to priming of unfamiliar visual objects. Implications for theoretical models of object decision priming are discussed.
Effects of dividing attention during encoding on perceptual priming of unfamiliar visual objects
Soldan, Anja; Mangels, Jennifer A.; Cooper, Lynn A.
2008-01-01
According to the distractor-selection hypothesis (Mulligan, 2003), dividing attention during encoding reduces perceptual priming when responses to non-critical (i.e., distractor) stimuli are selected frequently and simultaneously with critical stimulus encoding. Because direct support for this hypothesis comes exclusively from studies using familiar word stimuli, the present study tested whether the predictions of the distractor-selection hypothesis extend to perceptual priming of unfamiliar visual objects using the possible/impossible object-decision test. Consistent with the distractor-selection hypothesis, Experiments 1 and 2 found no reduction in priming when the non-critical stimuli were presented infrequently and non-synchronously with the critical target stimuli, even though explicit recognition memory was reduced. In Experiment 3, non-critical stimuli were presented frequently and simultaneously during encoding of critical stimuli; however, no decrement in priming was detected, even when encoding time was reduced. These results suggest that priming in the possible/impossible object-decision test is relatively immune to reductions in central attention and that not all aspects of the distractor-selection hypothesis generalize to priming of unfamiliar visual objects. Implications for theoretical models of object-decision priming are discussed. PMID:18821167
Gamma band activity and the P3 reflect post-perceptual processes, not visual awareness
Pitts, Michael A.; Padwal, Jennifer; Fennelly, Daniel; Martínez, Antígona; Hillyard, Steven A.
2014-01-01
A primary goal in cognitive neuroscience is to identify neural correlates of conscious perception (NCC). By contrasting conditions in which subjects are aware versus unaware of identical visual stimuli, a number of candidate NCCs have emerged, among them induced gamma band activity in the EEG and the P3 event-related potential. In most previous studies, however, the critical stimuli were always directly relevant to the subjects’ task, such that aware versus unaware contrasts may well have included differences in post-perceptual processing in addition to differences in conscious perception per se. Here, in a series of EEG experiments, visual awareness and task relevance were manipulated independently. Induced gamma activity and the P3 were absent for task-irrelevant stimuli regardless of whether subjects were aware of such stimuli. For task-relevant stimuli, gamma and the P3 were robust and dissociable, indicating that each reflects distinct post-perceptual processes necessary for carrying out the task but not for consciously perceiving the stimuli. Overall, this pattern of results challenges a number of previous proposals linking gamma band activity and the P3 to conscious perception. PMID:25063731
Axonal Conduction Delays, Brain State, and Corticogeniculate Communication.
Stoelzel, Carl R; Bereshpolova, Yulia; Alonso, Jose-Manuel; Swadlow, Harvey A
2017-06-28
Thalamocortical conduction times are short, but layer 6 corticothalamic axons display an enormous range of conduction times, some exceeding 40-50 ms. Here, we investigate (1) how axonal conduction times of corticogeniculate (CG) neurons are related to the visual information conveyed to the thalamus, and (2) how alert versus nonalert awake brain states affect visual processing across the spectrum of CG conduction times. In awake female Dutch-Belted rabbits, we found 58% of CG neurons to be visually responsive, and 42% to be unresponsive. All responsive CG neurons had simple, orientation-selective receptive fields, and generated sustained responses to stationary stimuli. CG axonal conduction times were strongly related to modulated firing rates (F1 values) generated by drifting grating stimuli, and their associated interspike interval distributions, suggesting a continuum of visual responsiveness spanning the spectrum of axonal conduction times. CG conduction times were also significantly related to visual response latency, contrast sensitivity (C-50 values), directional selectivity, and optimal stimulus velocity. Increasing alertness did not cause visually unresponsive CG neurons to become responsive and did not change the response linearity (F1/F0 ratios) of visually responsive CG neurons. However, for visually responsive CG neurons, increased alertness nearly doubled the modulated response amplitude to optimal visual stimulation (F1 values), significantly shortened response latency, and dramatically increased response reliability. These effects of alertness were uniform across the broad spectrum of CG axonal conduction times. SIGNIFICANCE STATEMENT Corticothalamic neurons of layer 6 send a dense feedback projection to thalamic nuclei that provide input to sensory neocortex. While sensory information reaches the cortex after brief thalamocortical axonal delays, corticothalamic axons can exhibit conduction delays of <2 ms to 40-50 ms. Here, in the corticogeniculate visual system of awake rabbits, we investigate the functional significance of this axonal diversity, and the effects of shifting alert/nonalert brain states on corticogeniculate processing. We show that axonal conduction times are strongly related to multiple visual response properties, suggesting a continuum of visual responsiveness spanning the spectrum of corticogeniculate axonal conduction times. We also show that transitions between awake brain states powerfully affect corticogeniculate processing, in some ways more strongly than in layer 4. Copyright © 2017 the authors 0270-6474/17/376342-17$15.00/0.
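The F1 and F0 measures referenced above are standard quantities in visual neurophysiology: F0 is the mean firing rate and F1 is the amplitude of the response modulation at the temporal frequency of the drifting grating, so F1/F0 indexes response linearity. Below is a minimal sketch of how such values are commonly computed from a peristimulus time histogram; the PSTH, duration, and drift frequency are hypothetical, and this is not the authors' analysis code.

```python
import numpy as np

def f1_f0(psth, duration_s, drift_hz):
    """Mean rate (F0) and modulation amplitude at the stimulus drift
    frequency (F1) from an evenly sampled rate histogram (spikes/s)."""
    n = len(psth)
    spectrum = np.fft.rfft(psth) / n            # normalized one-sided DFT
    freqs = np.fft.rfftfreq(n, d=duration_s / n)
    f0 = spectrum[0].real                       # DC term = mean firing rate
    k = np.argmin(np.abs(freqs - drift_hz))     # bin nearest the drift frequency
    f1 = 2 * np.abs(spectrum[k])                # amplitude of modulated component
    return f1, f0

# Hypothetical example: a rate modulated at 2 Hz around a 10 spikes/s mean.
t = np.linspace(0, 4, 400, endpoint=False)
rate = 10 + 6 * np.sin(2 * np.pi * 2 * t)
f1, f0 = f1_f0(rate, duration_s=4, drift_hz=2)
print(f"F1 = {f1:.2f}, F0 = {f0:.2f}, F1/F0 = {f1/f0:.2f}")  # F1/F0 = 0.60
```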
Decoding complex flow-field patterns in visual working memory.
Christophel, Thomas B; Haynes, John-Dylan
2014-05-01
There has been a long history of research on visual working memory. Whereas early studies have focused on the role of lateral prefrontal cortex in the storage of sensory information, this has been challenged by research in humans that has directly assessed the encoding of perceptual contents, pointing towards a role of visual and parietal regions during storage. In a previous study we used pattern classification to investigate the storage of complex visual color patterns across delay periods. This revealed coding of such contents in early visual and parietal brain regions. Here we aim to investigate whether the involvement of visual and parietal cortex is also observable for other types of complex, visuo-spatial pattern stimuli. Specifically, we used a combination of fMRI and multivariate classification to investigate the retention of complex flow-field stimuli defined by the spatial patterning of motion trajectories of random dots. Subjects were trained to memorize the precise spatial layout of these stimuli and to retain this information during an extended delay. We used a multivariate decoding approach to identify brain regions where spatial patterns of activity encoded the memorized stimuli. Content-specific memory signals were observable in motion sensitive visual area MT+ and in posterior parietal cortex that might encode spatial information in a modality independent manner. Interestingly, we also found information about the memorized visual stimulus in somatosensory cortex, suggesting a potential crossmodal contribution to memory. Our findings thus indicate that working memory storage of visual percepts might be distributed across unimodal, multimodal and even crossmodal brain regions. Copyright © 2014 Elsevier Inc. All rights reserved.
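As a rough illustration of the multivariate decoding approach described above (classifying the memorized stimulus from delay-period activity patterns), the sketch below trains a linear classifier with leave-one-run-out cross-validation. The array shapes, the two-class labels, and the linear SVM are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GroupKFold, cross_val_score

# Hypothetical data: one delay-period pattern per trial, restricted to
# the voxels of a single ROI (e.g., area MT+).
rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
X = rng.standard_normal((n_trials, n_voxels))   # voxel patterns
y = rng.integers(0, 2, n_trials)                # memorized stimulus label
runs = np.repeat(np.arange(6), n_trials // 6)   # scanner run of each trial

# Leave-one-run-out cross-validation keeps training and test trials in
# separate runs, avoiding leakage between temporally adjacent volumes.
scores = cross_val_score(SVC(kernel="linear"), X, y,
                         cv=GroupKFold(n_splits=6), groups=runs)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

With random data as here, accuracy hovers at chance; content-specific memory signals of the kind reported correspond to cross-validated accuracy reliably above chance in the ROI.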
Selective attention determines emotional responses to novel visual stimuli.
Raymond, Jane E; Fenske, Mark J; Tavassoli, Nader T
2003-11-01
Distinct complex brain systems support selective attention and emotion, but connections between them suggest that human behavior should reflect reciprocal interactions of these systems. Although there is ample evidence that emotional stimuli modulate attentional processes, it is not known whether attention influences emotional behavior. Here we show that evaluation of the emotional tone (cheery/dreary) of complex but meaningless visual patterns can be modulated by the prior attentional state (attending vs. ignoring) used to process each pattern in a visual selection task. Previously ignored patterns were evaluated more negatively than either previously attended or novel patterns. Furthermore, this emotional devaluation of distracting stimuli was robust across different emotional contexts and response scales. Finding that negative affective responses are specifically generated for ignored stimuli points to a new functional role for attention and elaborates the link between attention and emotion. This finding also casts doubt on the conventional marketing wisdom that any exposure is good exposure.
Stimulus relevance modulates contrast adaptation in visual cortex
Keller, Andreas J; Houlton, Rachael; Kampa, Björn M; Lesica, Nicholas A; Mrsic-Flogel, Thomas D; Keller, Georg B; Helmchen, Fritjof
2017-01-01
A general principle of sensory processing is that neurons adapt to sustained stimuli by reducing their response over time. Most of our knowledge on adaptation in single cells is based on experiments in anesthetized animals. How responses adapt in awake animals, when stimuli may be behaviorally relevant or not, remains unclear. Here we show that contrast adaptation in mouse primary visual cortex depends on the behavioral relevance of the stimulus. Cells that adapted to contrast under anesthesia maintained or even increased their activity in awake naïve mice. When engaged in a visually guided task, contrast adaptation recurred for stimuli that were irrelevant for solving the task. However, contrast adaptation was reversed when stimuli acquired behavioral relevance. Regulation of cortical adaptation by task demand may allow dynamic control of sensory-evoked signal flow in the neocortex. DOI: http://dx.doi.org/10.7554/eLife.21589.001 PMID:28130922
Visual Prediction Error Spreads Across Object Features in Human Visual Cortex
Summerfield, Christopher; Egner, Tobias
2016-01-01
Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might “spread” from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision. SIGNIFICANCE STATEMENT We address a key question in predictive visual cognition: how does the brain combine multiple concurrent expectations for different features of a single object such as its color and motion trajectory? By combining a behavioral protocol that independently varies expectation of (and attention to) multiple object features with computational modeling and fMRI, we demonstrate that behavior and fMRI activity patterns in visual cortex are best accounted for by a model in which prediction error in one object feature spreads to other object features. These results demonstrate how predictive vision forms object-level expectations out of multiple independent features. PMID:27810936
Cognitive Food Processing in Binge-Eating Disorder: An Eye-Tracking Study.
Sperling, Ingmar; Baldofski, Sabrina; Lüthold, Patrick; Hilbert, Anja
2017-08-19
Studies indicate an attentional bias towards food in binge-eating disorder (BED); however, more evidence on attentional engagement and disengagement and processing of multiple attention-competing stimuli is needed. This study aimed to examine visual attention to food and non-food stimuli in BED. In n = 23 participants with full-syndrome and subsyndromal BED and n = 23 individually matched healthy controls, eye-tracking was used to assess attention to food and non-food stimuli during a free exploration paradigm and a visual search task. In the free exploration paradigm, groups did not differ in their initial fixation position. While both groups fixated non-food stimuli significantly longer than food stimuli, the BED group allocated significantly more attention towards food than controls. In the visual search task, groups did not differ in detection times. However, a significant detection bias for food was found in full-syndrome BED, but not in controls. An increased initial attention towards food was related to greater BED symptomatology and lower body mass index (BMI) only in full-syndrome BED, while a greater maintained attention to food was associated with lower BMI in controls. The results suggest food-biased visual attentional processing in adults with BED. Further studies should clarify the implications of attentional processes for the etiology and maintenance of BED.
Cognitive Food Processing in Binge-Eating Disorder: An Eye-Tracking Study
Sperling, Ingmar; Lüthold, Patrick; Hilbert, Anja
2017-01-01
Studies indicate an attentional bias towards food in binge-eating disorder (BED); however, more evidence on attentional engagement and disengagement and processing of multiple attention-competing stimuli is needed. This study aimed to examine visual attention to food and non-food stimuli in BED. In n = 23 participants with full-syndrome and subsyndromal BED and n = 23 individually matched healthy controls, eye-tracking was used to assess attention to food and non-food stimuli during a free exploration paradigm and a visual search task. In the free exploration paradigm, groups did not differ in their initial fixation position. While both groups fixated non-food stimuli significantly longer than food stimuli, the BED group allocated significantly more attention towards food than controls. In the visual search task, groups did not differ in detection times. However, a significant detection bias for food was found in full-syndrome BED, but not in controls. An increased initial attention towards food was related to greater BED symptomatology and lower body mass index (BMI) only in full-syndrome BED, while a greater maintained attention to food was associated with lower BMI in controls. The results suggest food-biased visual attentional processing in adults with BED. Further studies should clarify the implications of attentional processes for the etiology and maintenance of BED. PMID:28825607
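The bias measures used in these eye-tracking studies (initial fixation position and relative dwell time on food versus non-food stimuli) reduce to simple aggregations over fixation records. The sketch below is written against an assumed data structure; the field names and region labels are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    onset_ms: float
    duration_ms: float
    roi: str  # "food", "nonfood", or "elsewhere"

def dwell_bias(fixations):
    """Proportion of stimulus dwell time spent on the food region."""
    food = sum(f.duration_ms for f in fixations if f.roi == "food")
    nonfood = sum(f.duration_ms for f in fixations if f.roi == "nonfood")
    return food / (food + nonfood) if (food + nonfood) else float("nan")

def initial_food_fixation(fixations):
    """True if the first fixation on either stimulus landed on food."""
    on_stim = [f for f in sorted(fixations, key=lambda f: f.onset_ms)
               if f.roi in ("food", "nonfood")]
    return bool(on_stim) and on_stim[0].roi == "food"

# Hypothetical trial: two fixations on food, one on the non-food stimulus.
trial = [Fixation(120, 300, "food"), Fixation(450, 200, "nonfood"),
         Fixation(700, 250, "food")]
print(dwell_bias(trial), initial_food_fixation(trial))  # 0.73 True
```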
Multisensory Motion Perception in 3–4 Month-Old Infants
Nava, Elena; Grassi, Massimo; Brenna, Viola; Croci, Emanuela; Turati, Chiara
2017-01-01
Human infants begin very early in life to take advantage of multisensory information by extracting the invariant amodal information that is conveyed redundantly by multiple senses. Here we addressed the question as to whether infants can bind multisensory moving stimuli, and whether this occurs even if the motion produced by the stimuli is only illusory. Three- to 4-month-old infants were presented with two bimodal pairings: visuo-tactile and audio-visual. Visuo-tactile pairings consisted of apparently vertically moving bars (the Barber Pole illusion) moving in either the same or opposite direction with a concurrent tactile stimulus consisting of strokes given on the infant’s back. Audio-visual pairings consisted of the Barber Pole illusion in its visual and auditory version, the latter giving the impression of a continuously rising or descending pitch. We found that infants were able to discriminate congruently moving (same direction) vs. incongruently moving (opposite direction) pairs irrespective of modality (Experiment 1). Importantly, we also found that congruently moving visuo-tactile and audio-visual stimuli were preferred over incongruently moving bimodal stimuli (Experiment 2). Our findings suggest that very young infants are able to extract motion as an amodal component and use it to match stimuli that only apparently move in the same direction. PMID:29187829
Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas
2013-01-01
Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status. PMID:24187542
Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas
2013-01-01
Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.
Smets, Karolien; Moors, Pieter; Reynvoet, Bert
2016-01-01
Performance in a non-symbolic comparison task, in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently from other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance using a simultaneous presentation of the stimuli in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control on performance disappeared when sequentially presenting each stimulus in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequentially presenting the stimuli did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that it is possible to compare performances across studies with a differential visual cue control is unwarranted and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967
Audio-Visual Speech Perception Is Special
ERIC Educational Resources Information Center
Tuomainen, J.; Andersen, T.S.; Tiippana, K.; Sams, M.
2005-01-01
In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and…
Dynamic Prototypicality Effects in Visual Search
ERIC Educational Resources Information Center
Kayaert, Greet; Op de Beeck, Hans P.; Wagemans, Johan
2011-01-01
In recent studies, researchers have discovered a larger neural activation for stimuli that are more extreme exemplars of their stimulus class, compared with stimuli that are more prototypical. This has been shown for faces as well as for familiar and novel shape classes. We used a visual search task to look for a behavioral correlate of these…
ERIC Educational Resources Information Center
Stewart, Claire R.; Sanchez, Sandra S.; Grenesko, Emily L.; Brown, Christine M.; Chen, Colleen P.; Keehn, Brandon; Velasquez, Francisco; Lincoln, Alan J.; Müller, Ralph-Axel
2016-01-01
Atypical sensory responses are common in autism spectrum disorder (ASD). While evidence suggests impaired auditory-visual integration for verbal information, findings for nonverbal stimuli are inconsistent. We tested for sensory symptoms in children with ASD (using the Adolescent/Adult Sensory Profile) and examined unisensory and bisensory…
Determining the Capacity of Time-Based Selection
ERIC Educational Resources Information Center
Watson, Derrick G.; Kunar, Melina A.
2012-01-01
In visual search, a set of distractor items can be suppressed from future selection if they are presented (previewed) before a second set of search items arrive. This "visual marking" mechanism provides a top-down way of prioritizing the selection of new stimuli, at the expense of old stimuli already in the field (Watson & Humphreys,…
Functional neuronal processing of body odors differs from that of similar common odors.
Lundström, Johan N; Boyle, Julie A; Zatorre, Robert J; Jones-Gotman, Marilyn
2008-06-01
Visual and auditory stimuli of high social and ecological importance are processed in the brain by specialized neuronal networks. To date, this has not been demonstrated for olfactory stimuli. By means of positron emission tomography, we sought to elucidate the neuronal substrates behind body odor perception to answer the question of whether the central processing of body odors differs from perceptually similar nonbody odors. Body odors were processed by a network that was distinctly separate from common odors, indicating a separation in the processing of odors based on their source. Smelling a friend's body odor activated regions previously seen for familiar stimuli, whereas smelling a stranger activated amygdala and insular regions akin to what has previously been demonstrated for fearful stimuli. The results provide evidence that social olfactory stimuli of high ecological relevance are processed by specialized neuronal networks similar to what has previously been demonstrated for auditory and visual stimuli.
Hill, N. J.; Schölkopf, B.
2012-01-01
We report on the development and online testing of an EEG-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects’ modulation of N1 and P3 ERP components measured during single 5-second stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare “oddball” stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly-known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention-modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject’s attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. PMID:22333135
NASA Astrophysics Data System (ADS)
Hill, N. J.; Schölkopf, B.
2012-04-01
We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.
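For readers unfamiliar with the per-stimulus ERP classification described above (contrasting attended-stream against unattended-stream stimuli rather than standards against oddballs), the sketch below shows one common approach: flatten each channel-by-time epoch and fit a shrinkage-regularized linear discriminant. The synthetic data and the classifier choice are assumptions for illustration, not the authors' method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical epochs: (n_stimuli, n_channels, n_samples) EEG segments,
# one per stimulus in the 5 s interval, labeled by stream (attended = 1).
rng = np.random.default_rng(1)
epochs = rng.standard_normal((400, 8, 100))
attended = rng.integers(0, 2, 400)
epochs[attended == 1, :, 40:60] += 0.3   # toy N1/P3-like modulation

X = epochs.reshape(len(epochs), -1)      # flatten channels x time
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
lda.fit(X[:300], attended[:300])
print("held-out accuracy:", lda.score(X[300:], attended[300:]))
```

A single left/right trial decision would then aggregate the per-stimulus classifier outputs over the stimulation interval, for example by comparing the summed decision values assigned to the two streams.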
Affective Overload: The Effect of Emotive Visual Stimuli on Target Vocabulary Retrieval.
Çetin, Yakup; Griffiths, Carol; Özel, Zeynep Ebrar Yetkiner; Kinay, Hüseyin
2016-04-01
There has been considerable interest in cognitive load in recent years, but the effect of affective load and its relationship to mental functioning has not received as much attention. In order to investigate the effects of affective stimuli on cognitive function as manifest in the ability to remember foreign language vocabulary, two groups of student volunteers (N = 64) aged from 17 to 25 years were shown a PowerPoint presentation of 21 target language words with a picture, audio, and written form for every word. The vocabulary was presented in comfortable rooms with padded chairs, and the participants were provided with snacks so that they would be comfortable and relaxed. After the PowerPoint presentation they were exposed to two forms of visual stimuli for 27 min. The different formats contained either visually affective content (sexually suggestive, violent, or frightening material) or neutral content (a nature documentary). The group that was exposed to the emotive visual stimuli remembered significantly fewer words than the group that watched the emotively neutral nature documentary. Implications of this finding are discussed and suggestions made for ongoing research.
Fradcourt, B; Peyrin, C; Baciu, M; Campagne, A
2013-10-01
Previous studies on the visual processing of emotional stimuli have revealed a preference for specific visual spatial frequencies (high spatial frequency, HSF; low spatial frequency, LSF) according to task demands. The majority of studies used face stimuli and focused on the appraisal of the emotional state of others. The present behavioral study investigates the relative role of spatial frequencies in the processing of emotional natural scenes during two explicit cognitive appraisal tasks: one emotional, based on the self-emotional experience, and one motivational, based on the tendency to action. Our results suggest that HSF information was the most relevant for rapidly identifying the self-emotional experience (unpleasant, pleasant, and neutral), while LSF information was required to rapidly identify the tendency to action (avoidance, approach, and no action). The tendency to action based on LSF analysis showed a priority for unpleasant stimuli, whereas the identification of emotional experience based on HSF analysis showed a priority for pleasant stimuli. The present study confirms the interest of considering both the emotional and motivational characteristics of visual stimuli. Copyright © 2013 Elsevier Inc. All rights reserved.
Bernstein, Lynne E.; Jiang, Jintao; Pantazis, Dimitrios; Lu, Zhong-Lin; Joshi, Anand
2011-01-01
The talking face affords multiple types of information. To isolate cortical sites with responsibility for integrating linguistically relevant visual speech cues, speech and non-speech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to non-speech and control stimuli. Distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus to speech, independent of media, was obtained in group analyses. Individual analyses showed that speech and non-speech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study. PMID:20853377
Woi, Pui Juan; Kaur, Sharanjeet; Waugh, Sarah J.; Hairol, Mohd Izzuddin
2016-01-01
The human visual system is sensitive in detecting objects that have a different luminance level from their background, known as first-order or luminance-modulated (LM) stimuli. We are also able to detect objects that have the same mean luminance as their background, differing only in contrast (or other attributes). Such objects are known as second-order or contrast-modulated (CM) stimuli. CM stimuli are thought to be processed in higher visual areas than LM stimuli, and may be more susceptible to ageing. We compared visual acuities (VA) of five healthy older adults (54.0 ± 1.83 years old) and five healthy younger adults (25.4 ± 1.29 years old) with LM and CM letters under monocular and binocular viewing. For monocular viewing, age had no effect on VA [F(1, 8) = 2.50, p > 0.05]. However, there was a significant main effect of age on VA under binocular viewing [F(1, 8) = 5.67, p < 0.05]. Binocular VA with CM letters in younger adults was approximately two lines better than that in older adults. For LM, binocular summation ratios were similar for older (1.16 ± 0.21) and younger (1.15 ± 0.06) adults. For CM, younger adults had a higher binocular summation ratio (1.39 ± 0.08) than older adults (1.12 ± 0.09). Binocular viewing improved VA with LM letters similarly for both groups. However, in older adults, binocular viewing did not improve VA with CM letters as much as in younger adults. This could reflect a decline of higher visual areas due to the ageing process, most likely higher than V1, which may be missed if measured with luminance-based stimuli alone. PMID:28184281
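Assuming the conventional definition of the binocular summation ratio, binocular performance divided by the better eye's monocular performance (on a scale where larger values mean better performance), the ratios reported above are a simple quotient; a minimal sketch:

```python
def binocular_summation_ratio(binocular, mono_right, mono_left):
    """Binocular performance divided by the better monocular performance.
    Assumes inputs are on a scale where larger = better (e.g., decimal
    acuity or contrast sensitivity); this convention is an assumption."""
    return binocular / max(mono_right, mono_left)

# Illustrative values only, not the study's raw data:
print(binocular_summation_ratio(1.15, 1.00, 0.95))  # -> 1.15
```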
Manipulation of the extrastriate frontal loop can resolve visual disability in blindsight patients.
Badgaiyan, Rajendra D
2012-12-01
Patients with blindsight are not consciously aware of visual stimuli in the affected field of vision but retain nonconscious perception. This disability can be resolved if nonconsciously perceived information can be brought to their conscious awareness. It can be accomplished by manipulating the neural network of visual awareness. To understand this network, we studied the pattern of cortical activity elicited during processing of visual stimuli with or without conscious awareness. The analysis indicated that a re-entrant signaling loop between area V3A (located in the extrastriate cortex) and the frontal cortex is critical for processing conscious awareness. The loop is activated by visual signals relayed in the primary visual cortex, which is damaged in blindsight patients. Because of the damage, the V3A-frontal loop is not activated and the signals are not processed for conscious awareness. These patients, however, continue to receive visual signals through the lateral geniculate nucleus. Since these signals do not activate the V3A-frontal loop, the stimuli are not consciously perceived. If visual input from the lateral geniculate nucleus is appropriately manipulated and made to activate the V3A-frontal loop, blindsight patients can regain conscious vision. Published by Elsevier Ltd.
Color vision in attention-deficit/hyperactivity disorder: a pilot visual evoked potential study.
Kim, Soyeon; Banaschewski, Tobias; Tannock, Rosemary
2015-01-01
Individuals with attention-deficit/hyperactivity disorder (ADHD) are reported to manifest visual problems (including ophthalmological and color perception problems, particularly for blue-yellow stimuli), but findings are inconsistent. Accordingly, this study investigated visual function and color perception in adolescents with ADHD using color visual evoked potentials (cVEP), which provide an objective measure of color perception. Thirty-one adolescents (aged 13-18), 16 with a confirmed diagnosis of ADHD and 15 healthy peers matched for age, gender, and IQ, participated in the study. All underwent an ophthalmological exam, as well as cVEP testing, which measured the latency and amplitude of the neural P1 response to chromatic (blue-yellow, red-green) and achromatic stimuli. No intergroup differences were found in the ophthalmological exam. However, significantly larger P1 amplitude was found for blue and yellow stimuli, but not red-green or achromatic stimuli, in the ADHD group (particularly in the medicated subgroup) compared to controls. Larger amplitude in the P1 component for blue-yellow in the ADHD group compared to controls may account for the lack of difference in color perception tasks. We speculate that the larger amplitude for blue-yellow stimuli in early sensory processing (P1) might reflect a compensatory strategy for underlying problems, including compromised retinal input from S-cones due to hypo-dopaminergic tone. Copyright © 2014 Spanish General Council of Optometry. Published by Elsevier España. All rights reserved.
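P1 latency and amplitude of the kind reported here are typically read off as the most positive point of the averaged evoked waveform within an early search window. A minimal sketch follows; the window bounds, sampling, and toy waveform are assumptions, not the study's parameters.

```python
import numpy as np

def p1_peak(erp_uv, times_ms, window=(80, 140)):
    """Latency (ms) and amplitude (uV) of the most positive point in a
    search window typical for P1; the bounds here are an assumption."""
    erp_uv, times_ms = np.asarray(erp_uv), np.asarray(times_ms)
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    idx = np.argmax(erp_uv[mask])
    return times_ms[mask][idx], erp_uv[mask][idx]

# Toy waveform sampled at 1 kHz: a positive deflection peaking near 110 ms.
t = np.arange(-100, 400)
wave = 5 * np.exp(-((t - 110) ** 2) / (2 * 15 ** 2))
print(p1_peak(wave, t))  # (110, 5.0)
```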
Neurons Forming Optic Glomeruli Compute Figure–Ground Discriminations in Drosophila
Aptekar, Jacob W.; Keleş, Mehmet F.; Lu, Patrick M.; Zolotova, Nadezhda M.
2015-01-01
Many animals rely on visual figure–ground discrimination to aid in navigation, and to draw attention to salient features like conspecifics or predators. Even figures that are similar in pattern and luminance to the visual surroundings can be distinguished by the optical disparity generated by their relative motion against the ground, and yet the neural mechanisms underlying these visual discriminations are not well understood. We show in flies that a diverse array of figure–ground stimuli containing a motion-defined edge elicit statistically similar behavioral responses to one another, and statistically distinct behavioral responses from ground motion alone. From studies in larger flies and other insect species, we hypothesized that the circuitry of the lobula—one of the four, primary neuropiles of the fly optic lobe—performs this visual discrimination. Using calcium imaging of input dendrites, we then show that information encoded in cells projecting from the lobula to discrete optic glomeruli in the central brain group these sets of figure–ground stimuli in a homologous manner to the behavior; “figure-like” stimuli are coded similar to one another and “ground-like” stimuli are encoded differently. One cell class responds to the leading edge of a figure and is suppressed by ground motion. Two other classes cluster any figure-like stimuli, including a figure moving opposite the ground, distinctly from ground alone. This evidence demonstrates that lobula outputs provide a diverse basis set encoding visual features necessary for figure detection. PMID:25972183
Neurons forming optic glomeruli compute figure-ground discriminations in Drosophila.
Aptekar, Jacob W; Keleş, Mehmet F; Lu, Patrick M; Zolotova, Nadezhda M; Frye, Mark A
2015-05-13
Many animals rely on visual figure-ground discrimination to aid in navigation, and to draw attention to salient features like conspecifics or predators. Even figures that are similar in pattern and luminance to the visual surroundings can be distinguished by the optical disparity generated by their relative motion against the ground, and yet the neural mechanisms underlying these visual discriminations are not well understood. We show in flies that a diverse array of figure-ground stimuli containing a motion-defined edge elicit statistically similar behavioral responses to one another, and statistically distinct behavioral responses from ground motion alone. From studies in larger flies and other insect species, we hypothesized that the circuitry of the lobula--one of the four, primary neuropiles of the fly optic lobe--performs this visual discrimination. Using calcium imaging of input dendrites, we then show that information encoded in cells projecting from the lobula to discrete optic glomeruli in the central brain group these sets of figure-ground stimuli in a homologous manner to the behavior; "figure-like" stimuli are coded similar to one another and "ground-like" stimuli are encoded differently. One cell class responds to the leading edge of a figure and is suppressed by ground motion. Two other classes cluster any figure-like stimuli, including a figure moving opposite the ground, distinctly from ground alone. This evidence demonstrates that lobula outputs provide a diverse basis set encoding visual features necessary for figure detection. Copyright © 2015 the authors 0270-6474/15/357587-13$15.00/0.
Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F
2018-01-01
Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
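The MLE model tested here is the standard inverse-variance cue-combination rule, under which each cue is weighted by its reliability and the predicted bimodal variance is never larger than the better unimodal variance. In the usual notation, with the unimodal location estimates and their noise SDs taken from the psychometric fits:

```latex
\hat{s}_{AV} = w_A\,\hat{s}_A + w_V\,\hat{s}_V,
\qquad
w_A = \frac{1/\sigma_A^{2}}{1/\sigma_A^{2} + 1/\sigma_V^{2}},
\quad
w_V = 1 - w_A,
\qquad
\sigma_{AV}^{2} = \frac{\sigma_A^{2}\,\sigma_V^{2}}{\sigma_A^{2} + \sigma_V^{2}}
```

Comparing the observed bimodal localization precision and bias against these predictions is how integration was judged optimal in both groups.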
Realigning Thunder and Lightning: Temporal Adaptation to Spatiotemporally Distant Events
Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel
2013-01-01
The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment (SJ) task was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants’ SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events). PMID:24391928
Martins, Kelly Vasconcelos Chaves; Gil, Daniela
2017-01-01
Introduction: The recording of the P1 component of the cortical auditory evoked potential has been widely used to analyze the behavior of auditory pathways in response to cochlear implant stimulation. Objective: To determine the influence of aural rehabilitation on the latency and amplitude of the P1 cortical auditory evoked potential component elicited by simple auditory stimuli (tone burst) and complex stimuli (speech) in children with cochlear implants. Method: The study included six individuals of both genders, aged 5 to 10 years, who had been cochlear implant users for at least 12 months and who attended auditory rehabilitation with an aural rehabilitation therapy approach. Participants underwent cortical auditory evoked potential testing at the beginning of the study and after 3 months of aural rehabilitation. To elicit the responses, simple stimuli (tone burst) and complex stimuli (speech) were used and presented in free field at 70 dB HL. The results were statistically analyzed, and both evaluations were compared. Results: There was no significant difference between the types of eliciting stimulus of the cortical auditory evoked potential in the latency or amplitude of P1. There was a statistically significant difference in P1 latency between the evaluations for both stimuli, with reduced latency in the second evaluation after 3 months of auditory rehabilitation. There was no statistically significant difference in the amplitude of P1 under the two types of stimuli or between the two evaluations. Conclusion: A decrease in the latency of the P1 component elicited by both simple and complex stimuli was observed within a three-month interval in children with cochlear implants undergoing aural rehabilitation. PMID:29018498
Xiao, Jianbo
2015-01-01
Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70–100 ms, revealing a dynamic process of image segmentation. PMID:26658869
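The response-averaging scheme described above can be made concrete with a toy calculation: averaging two unidirectional tuning curves yields a unimodal profile when the component directions are separated by less than the tuning width, and a bimodal one otherwise. The sketch below assumes Gaussian tuning with an arbitrary width; it illustrates the pooling scheme, not the paper's actual fits.

```python
import numpy as np

directions = np.arange(-90, 91)                 # motion direction (deg)
width = 40.0                                    # tuning width (assumed)

def tuning(pref):
    """Gaussian direction tuning curve of a model MT neuron."""
    return np.exp(-0.5 * ((directions - pref) / width) ** 2)

for sep in (20, 120):                           # component separation (deg)
    r1, r2 = tuning(-sep / 2), tuning(sep / 2)
    avg = 0.5 * (r1 + r2)                       # response-averaging prediction
    # Count interior local maxima of the averaged profile.
    n_peaks = np.sum((avg[1:-1] > avg[:-2]) & (avg[1:-1] > avg[2:]))
    print(f"separation {sep:3d} deg -> {n_peaks} peak(s) in averaged response")
```

With a 20-degree separation the averaged profile has a single peak, which is why pure response pooling cannot segment slightly different directions and why the nonlinear subgroups reported here matter.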
Age-related differences in audiovisual interactions of semantically different stimuli.
Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo
2017-01-01
Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of visually presented stimuli belonging to different categories, using a cross-modal approach. Four groups of children ranging in age from 6 to 13 years and adults were administered an object identification task of visually presented pictures belonging to living and nonliving entities. Stimuli were presented in visual, congruent audiovisual, incongruent audiovisual, and noise conditions. Results showed that children under 12 years of age did not benefit from multisensory presentation in speeding up identification. In children, the incongruent audiovisual condition had an interfering effect, especially for the identification of living things. These data suggest that the facilitating effect of audiovisual interaction involving semantic factors undergoes developmental changes and that the consolidation of adult-like processing of multisensory stimuli begins in late childhood. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Computer programming for generating visual stimuli.
Bukhari, Farhan; Kurylo, Daniel D
2008-02-01
Critical to vision research is the generation of visual displays with precise control over stimulus metrics. Generating stimuli often requires adapting commercial software or developing specialized software for specific research applications. In order to facilitate this process, we give here an overview that allows nonexpert users to generate and customize stimuli for vision research. We first give a review of relevant hardware and software considerations, to allow the selection of display hardware, operating system, programming language, and graphics packages most appropriate for specific research applications. We then describe the framework of a generic computer program that can be adapted for use with a broad range of experimental applications. Stimuli are generated in the context of trial events, allowing the display of text messages, the monitoring of subject responses and reaction times, and the inclusion of contingency algorithms. This approach allows direct control and management of computer-generated visual stimuli while utilizing the full capabilities of modern hardware and software systems. The flowchart and source code for the stimulus-generating program may be downloaded from www.psychonomic.org/archive.
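As a rough illustration of the generic framework the authors describe (trial events, text messages, response and reaction-time monitoring, contingency algorithms), a skeleton trial loop might look like the following. This is a minimal sketch only; the display and input routines are stubbed out, and all function names are hypothetical rather than taken from the downloadable program.

```python
import time
import random

def draw_text(msg):
    """Stub for a display call; a real program would render to the screen."""
    print(msg)

def draw_stimulus(params):
    """Stub for stimulus presentation (e.g., a grating at a given contrast)."""
    print(f"presenting stimulus: {params}")

def get_response(timeout_s):
    """Stub for response collection; returns (key, reaction_time)."""
    t0 = time.perf_counter()
    time.sleep(min(timeout_s, 0.05))   # a real program would poll the keyboard
    return "left", time.perf_counter() - t0

trials = [{"contrast": random.choice([0.1, 0.5, 1.0])} for _ in range(5)]
results = []
for i, params in enumerate(trials):
    draw_text(f"Trial {i + 1}: press a key when you see the stimulus")
    draw_stimulus(params)
    key, rt = get_response(timeout_s=2.0)
    # A contingency algorithm could adapt the next trial's parameters here.
    results.append({"trial": i + 1, **params, "key": key, "rt": rt})
print(results)
```

The point of the structure is that stimulus drawing, response logging, and contingencies live in separate routines, so the same loop can serve many experimental designs.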
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies. PMID:26384256
Where's Wally: the influence of visual salience on referring expression generation.
Clarke, Alasdair D F; Elsner, Micha; Rohde, Hannah
2013-01-01
Referring expression generation (REG) presents the converse problem to visual search: given a scene and a specified target, how does one generate a description which would allow somebody else to quickly and accurately locate the target? Previous work in psycholinguistics and natural language processing has failed to find an important and integrated role for vision in this task. That previous work, which relies largely on simple scenes, tends to treat vision as a pre-process for extracting feature categories that are relevant to disambiguation. However, the visual search literature suggests that some descriptions are better than others at enabling listeners to search efficiently within complex stimuli. This paper presents a study testing whether participants are sensitive to visual features that allow them to compose such "good" descriptions. Our results show that visual properties (salience, clutter, area, and distance) influence REG for targets embedded in images from the Where's Wally? books. Referring expressions for large targets are shorter than those for smaller targets, and expressions about targets in highly cluttered scenes use more words. We also find that participants are more likely to mention non-target landmarks that are large, salient, and in close proximity to the target. These findings identify a key role for visual salience in language production decisions and highlight the importance of scene complexity for REG.
Rapid innate defensive responses of mice to looming visual stimuli.
Yilmaz, Melis; Meister, Markus
2013-10-21
Much of brain science is concerned with understanding the neural circuits that underlie specific behaviors. While the mouse has become a favorite experimental subject, the behaviors of this species are still poorly explored. For example, the mouse retina, like that of other mammals, contains ∼20 different circuits that compute distinct features of the visual scene [1, 2]. By comparison, only a handful of innate visual behaviors are known in this species--the pupil reflex [3], phototaxis [4], the optomotor response [5], and the cliff response [6]--two of which are simple reflexes that require little visual processing. We explored the behavior of mice under a visual display that simulates an approaching object, which causes defensive reactions in some other species [7, 8]. We show that mice respond to this stimulus either by initiating escape within a second or by freezing for an extended period. The probability of these defensive behaviors is strongly dependent on the parameters of the visual stimulus. Directed experiments identify candidate retinal circuits underlying the behavior and lead the way into detailed study of these neural pathways. This response is a new addition to the repertoire of innate defensive behaviors in the mouse that allows the detection and avoidance of aerial predators. Copyright © 2013 Elsevier Ltd. All rights reserved.
Neural Mechanisms of Selective Visual Attention.
Moore, Tirin; Zirnsak, Marc
2017-01-03
Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.
Enhanced attention amplifies face adaptation.
Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby
2011-08-15
Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources. Copyright © 2011 Elsevier Ltd. All rights reserved.
Multi-sensory integration in a small brain
NASA Astrophysics Data System (ADS)
Gepner, Ruben; Wolk, Jason; Gershow, Marc
Understanding how fluctuating multi-sensory stimuli are integrated and transformed in neural circuits has proved a difficult task. To address this question, we study the sensori-motor transformations happening in the brain of the Drosophila larva, a tractable model system with about 10,000 neurons. Using genetic tools that allow us to manipulate the activity of individual brain cells through their transparent body, we observe the stochastic decisions made by freely-behaving animals as their visual and olfactory environments fluctuate independently. We then use simple linear-nonlinear models to correlate outputs with relevant features in the inputs, and adaptive filtering processes to track changes in these relevant parameters used by the larva's brain to make decisions. We show how these techniques allow us to probe how statistics of stimuli from different sensory modalities combine to affect behavior, and can potentially guide our understanding of how neural circuits are anatomically and functionally integrated. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
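A minimal version of the linear-nonlinear modeling mentioned above might look like the sketch below: the fluctuating stimulus is convolved with a temporal filter and passed through a static nonlinearity to yield a behavioral event rate. The filter shape, the sigmoid nonlinearity, and all parameter values are invented for illustration; they are not the fitted models from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fluctuating sensory input (e.g., light intensity over time): 10 s at 100 Hz
dt = 0.01
stimulus = rng.standard_normal(1000)

# Linear stage: a biphasic temporal filter (shape assumed for illustration)
t = np.arange(0, 0.5, dt)
linear_filter = np.exp(-t / 0.05) - 0.5 * np.exp(-t / 0.15)
drive = np.convolve(stimulus, linear_filter, mode="full")[: len(stimulus)] * dt

# Nonlinear stage: static sigmoid mapping filtered drive to event rate (Hz)
rate = 5.0 / (1.0 + np.exp(-4.0 * drive))

# Stochastic output: Bernoulli "decision" events per time bin (Poisson-like)
events = rng.random(len(rate)) < rate * dt
print(f"{events.sum()} events in {len(rate) * dt:.0f} s")
```

Fitting the filter to observed decisions (e.g., by reverse correlation) is what lets the authors read off which stimulus features the larva's brain weights when combining modalities.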
Introducing Computational Fluid Dynamics Simulation into Olfactory Display
NASA Astrophysics Data System (ADS)
Ishida, Hiroshi; Yoshida, Hitoshi; Nakamoto, Takamichi
An olfactory display is a device that delivers various odors to the user's nose. It can be used to add special effects to movies and games by releasing odors relevant to the scenes shown on the screen. In order to provide high-presence olfactory stimuli to the users, the display must be able to generate realistic odors with appropriate concentrations in a timely manner together with visual and audio playbacks. In this paper, we propose to use computational fluid dynamics (CFD) simulations in conjunction with the olfactory display. Odor molecules released from their source are transported mainly by turbulent flow, and their behavior can be extremely complicated even in a simple indoor environment. In the proposed system, a CFD solver is employed to calculate the airflow field and the odor dispersal in the given environment. An odor blender is used to generate the odor with the concentration determined based on the calculated odor distribution. Experimental results on presenting odor stimuli synchronously with movie clips show the effectiveness of the proposed system.
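The abstract does not detail the CFD solver, but the kind of computation involved can be illustrated with a one-equation sketch: a 1-D advection-diffusion update for odor concentration, whose output would set the odor blender's target concentration at the user's position. The grid, airflow speed, and diffusivity below are arbitrary, and a real solver would work on a 3-D turbulent flow field.

```python
import numpy as np

# 1-D advection-diffusion of odor concentration c(x, t):
#   dc/dt = -u * dc/dx + D * d2c/dx2   (upwind advection, explicit diffusion)
nx, dx, dt = 200, 0.01, 0.001        # grid points, spacing (m), time step (s)
u, D = 0.2, 1e-3                     # airflow speed (m/s), diffusivity (m^2/s)
c = np.zeros(nx)
c[10] = 1.0                          # odor source pulse near one end

for _ in range(2000):                # integrate 2 s of transport
    adv = -u * (c - np.roll(c, 1)) / dx
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    c = c + dt * (adv + dif)

sensor_index = 150                   # "user's nose" location on the grid
print(f"concentration at nose: {c[sensor_index]:.4f}")
```

The simulated concentration at the nose position is the quantity the blender must reproduce in synchrony with the movie playback.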
Accessory stimulus modulates executive function during stepping task
Watanabe, Tatsunori; Koyama, Soichiro; Tanabe, Shigeo
2015-01-01
When multiple sensory modalities are presented simultaneously, reaction time can be reduced while interference increases. The purpose of this research was to examine the effects of task-irrelevant acoustic accessory stimuli simultaneously presented with visual imperative stimuli on executive function during stepping. Executive functions were assessed by analyzing temporal events and errors in the initial weight transfer of the postural responses prior to a step (anticipatory postural adjustment errors). Eleven healthy young adults stepped forward in response to a visual stimulus. We applied a choice reaction time task and the Simon task, which consisted of congruent and incongruent conditions. Accessory stimuli were randomly presented with the visual stimuli. Compared with trials without accessory stimuli, the anticipatory postural adjustment error rates were higher in trials with accessory stimuli in the incongruent condition, and the reaction times were shorter in trials with accessory stimuli in all the task conditions. Analyses after division of trials according to whether an anticipatory postural adjustment error occurred or not revealed that the reaction times of trials with anticipatory postural adjustment errors were reduced more than those of trials without such errors in the incongruent condition. These results suggest that accessory stimuli modulate the initial motor programming of stepping by lowering the decision threshold and, exclusively under spatial incompatibility, facilitating automatic response activation. The present findings advance the knowledge of intersensory judgment processes during stepping and may aid in the development of intervention and evaluation tools for individuals at risk of falls. PMID:25925321
Brocher, Andreas; Harbecke, Raphael; Graf, Tim; Memmert, Daniel; Hüttermann, Stefanie
2018-03-07
We tested the link between pupil size and the task effort involved in covert shifts of visual attention. The goal of this study was to establish pupil size as a marker of attentional shifting in the absence of luminance manipulations. In three experiments, participants evaluated two stimuli that were presented peripherally, appearing equidistant from and on opposite sides of eye fixation. The angle between eye fixation and the peripherally presented target stimuli varied from 12.5° to 42.5°. The evaluation of more distant stimuli led to poorer performance than did the evaluation of more proximal stimuli throughout our study, confirming that the former required more effort than the latter. In addition, in Experiment 1 we found that pupil size increased with increasing angle and that this effect could not be reduced to the operation of low-level visual processes in the task. In Experiment 2 the pupil dilated more strongly overall when participants evaluated the target stimuli, which required shifts of attention, than when they merely reported on the target's presence versus absence. Both conditions yielded larger pupils for more distant than for more proximal stimuli, however. In Experiment 3, we manipulated task difficulty more directly, by changing the contrast at which the target stimuli were presented. We replicated the results from Experiment 1 only with the high-contrast stimuli. With stimuli of low contrast, ceiling effects in pupil size were observed. Our data show that the link between task effort and pupil size can be used to track the degree to which an observer covertly shifts attention to or detects stimuli in peripheral vision.
Stone, David B; Urrea, Laura J; Aine, Cheryl J; Bustillo, Juan R; Clark, Vincent P; Stephen, Julia M
2011-10-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. Copyright © 2011 Elsevier Ltd. All rights reserved.
Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation
Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina
2017-01-01
Hemianopic patients exhibit visual detection improvement in the blind field when audiovisual stimuli are given in spatiotemporal coincidence. Beyond this “online” multisensory improvement, there is evidence of long-lasting, “offline” effects induced by audiovisual training: patients show improved visual detection and orientation after they were trained to detect and saccade toward visual targets given in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated unilateral V1 lesion with possible spared tissue and reproduced “online” effects. Here, we extend the previous network to shed light on circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched by the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes condition) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to the SC multisensory enhancement, the audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and reinforcement of the SC-extrastriate route (this occurs in the presence of surviving V1 tissue, regardless of eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual stimuli into short-latency saccades, possibly moving the stimuli into visual detection regions. The retina-SC-extrastriate circuit is related to restitutive effects: visual stimuli can directly elicit visual detection with no need for eye movements. Model predictions and assumptions are critically discussed in view of existing behavioral and neurophysiological data, forecasting that other oculomotor compensatory mechanisms, beyond short-latency saccades, are likely involved, and stimulating future experimental and theoretical investigations. PMID:29326578
Locomotion Enhances Neural Encoding of Visual Stimuli in Mouse V1
2017-01-01
Neurons in mouse primary visual cortex (V1) are selective for particular properties of visual stimuli. Locomotion causes a change in cortical state that leaves their selectivity unchanged but strengthens their responses. Both locomotion and the change in cortical state are thought to be initiated by projections from the mesencephalic locomotor region, the latter through a disinhibitory circuit in V1. By recording simultaneously from a large number of single neurons in alert mice viewing moving gratings, we investigated the relationship between locomotion and the information contained within the neural population. We found that locomotion improved encoding of visual stimuli in V1 by two mechanisms. First, locomotion-induced increases in firing rates enhanced the mutual information between visual stimuli and single neuron responses over a fixed window of time. Second, stimulus discriminability was improved, even for fixed population firing rates, because of a decrease in noise correlations across the population. These two mechanisms contributed differently to improvements in discriminability across cortical layers, with changes in firing rates most important in the upper layers and changes in noise correlations most important in layer V. Together, these changes resulted in a threefold to fivefold reduction in the time needed to precisely encode grating direction and orientation. These results support the hypothesis that cortical state shifts during locomotion to accommodate an increased load on the visual system when mice are moving. SIGNIFICANCE STATEMENT This paper contains three novel findings about the representation of information in neurons within the primary visual cortex of the mouse. First, we show that locomotion reduces by at least a factor of 3 the time needed for information to accumulate in the visual cortex that allows the distinction of different visual stimuli. Second, we show that the effect of locomotion is to increase information in cells of all layers of the visual cortex. Third, we show that the means by which information is enhanced by locomotion differs between the upper layers, where the major effect is the increasing of firing rates, and in layer V, where the major effect is the reduction in noise correlations. PMID:28264980
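The noise-correlation measure central to the second mechanism above is simply the trial-to-trial correlation of two neurons' responses to a repeated, identical stimulus. A minimal estimate from simulated spike counts (all numbers invented) is sketched below.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# Simulated spike counts of two neurons to the same repeated grating.
# A shared gain factor induces correlated trial-to-trial variability,
# mimicking the "noise correlations" that locomotion is reported to reduce.
shared_gain = np.clip(1.0 + 0.3 * rng.standard_normal(n_trials), 0.1, None)
counts_a = rng.poisson(10 * shared_gain)
counts_b = rng.poisson(8 * shared_gain)

# Noise correlation: Pearson correlation across trials at a fixed stimulus.
r_noise = np.corrcoef(counts_a, counts_b)[0, 1]
print(f"noise correlation: {r_noise:.2f}")
```

Because shared variability cannot be averaged away by pooling neurons, lowering this correlation (as locomotion does in layer V here) directly improves how well stimulus identity can be decoded from the population.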
Front-Presented Looming Sound Selectively Alters the Perceived Size of a Visual Looming Object.
Yamasaki, Daiki; Miyoshi, Kiyofumi; Altmann, Christian F; Ashida, Hiroshi
2018-07-01
In spite of accumulating evidence for the spatial rule governing cross-modal interaction according to the spatial consistency of stimuli, it is still unclear whether 3D spatial consistency (i.e., front/rear of the body) of stimuli also regulates audiovisual interaction. We investigated how sounds with increasing/decreasing intensity (looming/receding sound) presented from the front and rear space of the body impact the size perception of a dynamic visual object. Participants performed a size-matching task (Experiments 1 and 2) and a size adjustment task (Experiment 3) of visual stimuli with increasing/decreasing diameter, while being exposed to a front- or rear-presented sound with increasing/decreasing intensity. Throughout these experiments, we demonstrated that only the front-presented looming sound caused overestimation of the spatially consistent looming visual stimulus in size, but not of the spatially inconsistent and the receding visual stimulus. The receding sound had no significant effect on vision. Our results revealed that looming sound alters dynamic visual size perception depending on the consistency in the approaching quality and the front-rear spatial location of audiovisual stimuli, suggesting that the human brain differently processes audiovisual inputs based on their 3D spatial consistency. This selective interaction between looming signals should contribute to faster detection of approaching threats. Our findings extend the spatial rule governing audiovisual interaction into 3D space.
Brady, Timothy F.; Störmer, Viola S.; Alvarez, George A.
2016-01-01
Visual working memory is the cognitive system that holds visual information active to make it resistant to interference from new perceptual input. Information about simple stimuli—colors and orientations—is encoded into working memory rapidly: In under 100 ms, working memory "fills up," revealing a stark capacity limit. However, for real-world objects, the same behavioral limits do not hold: With increasing encoding time, people store more real-world objects and do so with more detail. This boost in performance for real-world objects is generally assumed to reflect the use of a separate episodic long-term memory system, rather than working memory. Here we show that this behavioral increase in capacity with real-world objects is not solely due to the use of separate episodic long-term memory systems. In particular, we show that this increase is a result of active storage in working memory, as shown by directly measuring neural activity during the delay period of a working memory task using EEG. These data challenge fixed-capacity working memory models and demonstrate that working memory and its capacity limitations are dependent upon our existing knowledge. PMID:27325767
The level of audiovisual print-speech integration deficits in dyslexia.
Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel
2014-09-01
The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No significant differences in basic measures of audiovisual integration emerged. Convergence of hemodynamic and electrophysiological signals appeared to be limited and mainly occurred for highly significant and large effects in visual cortices. The findings suggest efficient superior temporal tuning to audiovisual congruency in controls. In impaired readers, however, grapho-phonological conversion is effortful and inefficient, although basic audiovisual mechanisms seem intact. This unprecedented demonstration of audiovisual deficits in adolescent dyslexics provides critical evidence that the phonological deficit might be explained by impaired audiovisual integration at a phonetic level, especially for naturalistic and word-like stimulation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Children's Criteria for Representational Adequacy in the Perception of Simple Sonic Stimuli
ERIC Educational Resources Information Center
Verschaffel, Lieven; Reybrouck, Mark; Jans, Christine; Van Dooren, Wim
2010-01-01
This study investigates children's metarepresentational competence with regard to listening to and making sense of simple sonic stimuli. Using diSessa's (2003) work on metarepresentational competence in mathematics and sciences as theoretical and empirical background, it aims to assess children's criteria for representational adequacy of graphical…
Walter, Sabrina; Keitel, Christian; Müller, Matthias M
2016-01-01
Visual attention can be focused concurrently on two stimuli at noncontiguous locations while intermediate stimuli remain ignored. Nevertheless, behavioral performance in multifocal attention tasks falters when attended stimuli fall within one visual hemifield as opposed to when they are distributed across left and right hemifields. This "different-hemifield advantage" has been ascribed to largely independent processing capacities of each cerebral hemisphere in early visual cortices. Here, we investigated how this advantage influences the sustained division of spatial attention. We presented six isoeccentric light-emitting diodes (LEDs) in the lower visual field, each flickering at a different frequency. Participants attended to two LEDs that were spatially separated by an intermediate LED and responded to synchronous events at to-be-attended LEDs. Task-relevant pairs of LEDs were either located in the same hemifield ("within-hemifield" conditions) or separated by the vertical meridian ("across-hemifield" conditions). Flicker-driven brain oscillations, steady-state visual evoked potentials (SSVEPs), indexed the allocation of attention to individual LEDs. Both behavioral performance and SSVEPs indicated enhanced processing of attended LED pairs during "across-hemifield" relative to "within-hemifield" conditions. Moreover, SSVEPs demonstrated effective filtering of intermediate stimuli in "across-hemifield" condition only. Thus, despite identical physical distances between LEDs of attended pairs, the spatial profiles of gain effects differed profoundly between "across-hemifield" and "within-hemifield" conditions. These findings corroborate that early cortical visual processing stages rely on hemisphere-specific processing capacities and highlight their limiting role in the concurrent allocation of visual attention to multiple locations.
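The frequency-tagging logic of this paradigm is typically quantified by reading out the EEG amplitude spectrum at each LED's flicker frequency. The sketch below does this on synthetic data; the tag frequencies and amplitudes are arbitrary, not those of the study.

```python
import numpy as np

fs, dur = 500, 4.0                      # sampling rate (Hz), epoch length (s)
t = np.arange(0, dur, 1 / fs)
tags = [7.5, 10.0, 12.5]                # flicker frequencies of tagged LEDs

# Synthetic EEG: SSVEPs at the tagged frequencies plus broadband noise
eeg = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip([1.0, 0.6, 0.8], tags))
eeg = eeg + np.random.default_rng(2).standard_normal(len(t))

# Amplitude spectrum; SSVEP amplitude is read off at each tag frequency
spectrum = np.abs(np.fft.rfft(eeg)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in tags:
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"SSVEP amplitude at {f:5.2f} Hz: {amp:.2f}")
```

Because each LED is tagged with its own frequency, attentional gain at each location can be tracked independently and simultaneously, which is what lets the authors compare within- and across-hemifield conditions.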
Lazar, Aurel A; Slutskiy, Yevgeniy B; Zhou, Yiyin
2015-03-01
Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.
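Although the paper's machinery is considerably more general, the basic idea of time encoding can be illustrated with a single integrate-and-fire encoder: the signal plus a bias is integrated until a threshold is reached, a spike time is emitted, and the integrator resets, so the spike times themselves carry the stimulus. The sketch and all parameter values below are illustrative assumptions, not the paper's circuits.

```python
import numpy as np

def iaf_encode(x, dt, bias=1.0, kappa=1.0, delta=0.02):
    """Integrate-and-fire time encoding: integrate (x + bias) / kappa and
    emit a spike (recording its time) whenever the integral reaches delta."""
    spikes, integral = [], 0.0
    for i, xi in enumerate(x):
        integral += (xi + bias) / kappa * dt
        if integral >= delta:
            spikes.append(i * dt)
            integral -= delta
    return np.array(spikes)

dt = 1e-4
t = np.arange(0, 0.2, dt)
x = 0.4 * np.sin(2 * np.pi * 30 * t)      # toy band-limited stimulus
spike_times = iaf_encode(x, dt)
print(f"{len(spike_times)} spikes; inter-spike intervals encode x(t)")
```

Under Nyquist-type rate conditions, the stimulus can be recovered from these spike times alone; the Color Video TEM extends this idea to populations of encoders that mix color channels.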
An exploratory study of temporal integration in the peripheral retina of myopes
NASA Astrophysics Data System (ADS)
Macedo, Antonio F.; Encarnação, Tito J.; Vilarinho, Daniel; Baptista, António M. G.
2017-08-01
The visual system takes time to respond to visual stimuli: neurons need to accumulate information over a time span in order to fire. Visual information perceived by the peripheral retina might be impaired by imperfect peripheral optics, leading to myopia development. This study explored the effect of eccentricity, moderate myopia, and peripheral refraction on temporal visual integration. Myopes and emmetropes showed similar performance at detecting briefly flashed stimuli in different retinal locations. Our results show evidence that moderate myopes have normal visual integration when refractive errors are corrected with contact lenses; however, the tendency toward increased temporal integration thresholds observed in myopes deserves further investigation.
The visual attention span deficit in dyslexia is visual and not verbal.
Lobier, Muriel; Zoubrinetzky, Rachel; Valdois, Sylviane
2012-06-01
The visual attention (VA) span deficit hypothesis of dyslexia posits that letter string deficits are a consequence of impaired visual processing. Alternatively, some have interpreted this deficit as resulting from a visual-to-phonology code mapping impairment. This study aims to disambiguate between the two interpretations by investigating performance in a non-verbal character string visual categorization task with verbal and non-verbal stimuli. Results show that VA span ability predicts performance for the non-verbal visual processing task in normal reading children. Furthermore, VA span impaired dyslexic children are also impaired for the categorization task independently of stimuli type. This supports the hypothesis that the underlying impairment responsible for the VA span deficit is visual, not verbal. Copyright © 2011 Elsevier Srl. All rights reserved.
Object perception is selectively slowed by a visually similar working memory load.
Robinson, Alan; Manzi, Alberto; Triesch, Jochen
2008-12-22
The capacity of visual working memory has been extensively characterized, but little work has investigated how occupying visual memory influences other aspects of cognition and perception. Here we show a novel effect: maintaining an item in visual working memory slows processing of similar visual stimuli during the maintenance period. Subjects judged the gender of computer rendered faces or the naturalness of body postures while maintaining different visual memory loads. We found that when stimuli of the same class (faces or bodies) were maintained in memory, perceptual judgments were slowed. Interestingly, this is the opposite of what would be predicted from traditional priming. Our results suggest there is interference between visual working memory and perception, caused by visual similarity between new perceptual input and items already encoded in memory.
NASA Astrophysics Data System (ADS)
Tan, Bingyao; Mason, Erik; MacLellan, Ben; Bizheva, Kostadinka
2017-02-01
Visually evoked changes of retinal blood flow can serve as an important research tool to investigate eye diseases such as glaucoma and diabetic retinopathy. In this study we used a combined, research-grade, high-resolution Doppler OCT+ERG system to study changes in retinal blood flow (RBF) and retinal neuronal activity in response to visual stimuli of different intensities, durations, and types (flicker vs. single flash). Specifically, we used white-light single-flash stimuli of 10 ms and 200 ms, and flicker stimuli of 1 s and 2 s at 20% duty cycle. The study was conducted in vivo in pigmented rats. Both single flash (SF) and flicker stimuli caused an increase in the RBF. The 10 ms SF stimulus did not generate any consistent measurable response, while the 200 ms SF of the same intensity generated a 4% change in the RBF, peaking at 1.5 s after the stimulus onset. Single flash stimuli produced a 2x smaller change in RBF and a 30% earlier RBF peak response compared to flicker stimuli of the same intensity and duration. Doubling the intensity of SF or flicker stimuli increased the RBF peak magnitude by 1.5x. Shortening the flicker stimulus duration by 2x increased the RBF recovery rate by 2x but had no effect on the rate of RBF change from baseline to peak.
Visual categorization of natural movies by rats.
Vinken, Kasper; Vermaercke, Ben; Op de Beeck, Hans P
2014-08-06
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. Copyright © 2014 the authors 0270-6474/14/3410645-14$15.00/0.
Lower pitch is larger, yet falling pitches shrink.
Eitan, Zohar; Schupak, Asi; Gotler, Alex; Marks, Lawrence E
2014-01-01
Experiments using diverse paradigms, including speeded discrimination, indicate that pitch and visually-perceived size interact perceptually, and that higher pitch is congruent with smaller size. While nearly all of these studies used static stimuli, here we examine the interaction of dynamic pitch and dynamic size, using Garner's speeded discrimination paradigm. Experiment 1 examined the interaction of continuous rise/fall in pitch and increase/decrease in object size. Experiment 2 examined the interaction of static pitch and size (steady high/low pitches and large/small visual objects), using an identical procedure. Results indicate that static and dynamic auditory and visual stimuli interact in opposite ways. While for static stimuli (Experiment 2), higher pitch is congruent with smaller size (as suggested by earlier work), for dynamic stimuli (Experiment 1), ascending pitch is congruent with growing size, and descending pitch with shrinking size. In addition, while static stimuli (Experiment 2) exhibit both congruence and Garner effects, dynamic stimuli (Experiment 1) present congruence effects without Garner interference, a pattern that is not consistent with prevalent interpretations of Garner's paradigm. Our interpretation of these results focuses on effects of within-trial changes on processing in dynamic tasks and on the association of changes in apparent size with implied changes in distance. Results suggest that static and dynamic stimuli can differ substantially in their cross-modal mappings, and may rely on different processing mechanisms.
Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D
2017-03-01
Existing evidence suggests a strong relationship between tinnitus and emotion. The objective of this study was to examine the effects of short-term emotional changes along valence and arousal dimensions on tinnitus outcomes. Emotional stimuli were presented in two different modalities: auditory and visual. The authors hypothesized that (1) negative valence (unpleasant) stimuli and/or high arousal stimuli will lead to greater tinnitus loudness and annoyance than positive valence and/or low arousal stimuli, and (2) auditory emotional stimuli, which are in the same modality as the tinnitus, will exhibit a greater effect on tinnitus outcome measures than visual stimuli. Auditory and visual emotive stimuli were administered to 22 participants (12 females and 10 males) with chronic tinnitus, recruited via email invitations sent out to the University of Auckland Tinnitus Research Volunteer Database. Emotional stimuli used were taken from the International Affective Digital Sounds - Version 2 (IADS-2) and the International Affective Picture System (IAPS) (Bradley and Lang, 2007a, 2007b). The Emotion Regulation Questionnaire (Gross and John, 2003) was administered alongside subjective ratings of tinnitus loudness and annoyance, and psychoacoustic sensation level matches to external sounds. Males had significantly different emotional regulation scores from females. Negative valence emotional auditory stimuli led to higher tinnitus loudness ratings in males and females, and higher annoyance ratings in males only; loudness matches of tinnitus remained unchanged. The visual stimuli did not have an effect on tinnitus ratings. The results are discussed relative to the Adaptation Level Theory Model of Tinnitus. The results indicate that the negative valence dimension of emotion is associated with increased tinnitus magnitude judgements and that gender effects may also be present, but only when the emotional stimulus is in the auditory modality. Sounds with emotional associations may be used in sound therapy for tinnitus relief; it is of interest to determine whether the emotional component of sound treatments can play a role in reversing the negative responses discussed in this paper. Copyright © 2016 Elsevier B.V. All rights reserved.
Color categories affect pre-attentive color perception.
Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna
2010-10-01
Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.
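Operationally, the vMMN discussed above is a difference wave: the average ERP to infrequent (deviant) stimuli minus the average ERP to frequent (standard) stimuli. A minimal sketch on synthetic epochs follows; the epoch counts, timing, and the injected negativity are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_std, n_dev, n_samples = 400, 80, 300   # epochs x samples (600 ms at 500 Hz)

# Synthetic single-trial epochs: deviants carry an extra negativity ~200 ms
t = np.linspace(0, 0.6, n_samples)
negativity = -2.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03**2))
standards = rng.standard_normal((n_std, n_samples))
deviants = rng.standard_normal((n_dev, n_samples)) + negativity

# vMMN = deviant ERP minus standard ERP (difference of trial averages)
vmmn = deviants.mean(axis=0) - standards.mean(axis=0)
peak = t[np.argmin(vmmn)]
print(f"vMMN peak latency: {peak * 1000:.0f} ms")
```

In the study, the comparison of interest is this difference wave for different-category versus same-category deviants, with the physical color difference held constant across the two conditions.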
Independence between implicit and explicit processing as revealed by the Simon effect.
Lo, Shih-Yu; Yeh, Su-Ling
2011-09-01
Studies showing human behavior influenced by subliminal stimuli mainly focus on implicit processing per se, and little is known about its interaction with explicit processing. We examined this by using the Simon effect, wherein a task-irrelevant spatial distracter interferes with lateralized response. Lo and Yeh (2008) found that the visual Simon effect, although it occurred when participants were aware of the visual distracters, did not occur with subliminal visual distracters. We used the same paradigm and examined whether subliminal and supra-threshold stimuli are processed independently by adding a supra-threshold auditory distracter to ascertain whether it would interact with the subliminal visual distracter. Results showed an auditory Simon effect, but there was still no visual Simon effect, indicating that supra-threshold and subliminal stimuli are processed separately in independent streams. In contrast to the traditional view that implicit processing precedes explicit processing, our results suggest that they operate independently in a parallel fashion. Copyright © 2010 Elsevier Inc. All rights reserved.
Visual attention modulates brain activation to angry voices.
Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas
2011-06-29
In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.
Learning Enhances Sensory and Multiple Non-sensory Representations in Primary Visual Cortex
Poort, Jasper; Khan, Adil G.; Pachitariu, Marius; Nemri, Abdellatif; Orsolic, Ivana; Krupic, Julija; Bauza, Marius; Sahani, Maneesh; Keller, Georg B.; Mrsic-Flogel, Thomas D.; Hofer, Sonja B.
2015-01-01
We determined how learning modifies neural representations in primary visual cortex (V1) during acquisition of a visually guided behavioral task. We imaged the activity of the same layer 2/3 neuronal populations as mice learned to discriminate two visual patterns while running through a virtual corridor, where one pattern was rewarded. Improvements in behavioral performance were closely associated with increasingly distinguishable population-level representations of task-relevant stimuli, as a result of stabilization of existing and recruitment of new neurons selective for these stimuli. These effects correlated with the appearance of multiple task-dependent signals during learning: those that increased neuronal selectivity across the population when expert animals engaged in the task, and those reflecting anticipation or behavioral choices specifically in neuronal subsets preferring the rewarded stimulus. Therefore, learning engages diverse mechanisms that modify sensory and non-sensory representations in V1 to adjust its processing to task requirements and the behavioral relevance of visual stimuli. PMID:26051421
Executive working memory load induces inattentional blindness.
Fougnie, Daryl; Marois, René
2007-02-01
When attention is engaged in a task, unexpected events in the visual scene may go undetected, a phenomenon known as inattentional blindness (IB). At what stage of information processing must attention be engaged for IB to occur? Although manipulations that tax visuospatial attention can induce IB, the evidence is more equivocal for tasks that engage attention at late, central stages of information processing. Here, we tested whether IB can be specifically induced by central executive processes. An unexpected visual stimulus was presented during the retention interval of a working memory task that involved either simply maintaining verbal material or rearranging the material into alphabetical order. The unexpected stimulus was more likely to be missed during manipulation than during simple maintenance of the verbal information. Thus, the engagement of executive processes impairs the ability to detect unexpected, task-irrelevant stimuli, suggesting that IB can result from central, amodal stages of processing.
Monkey Visual Short-Term Memory Directly Compared to Humans
Elmore, L. Caitlin; Wright, Anthony A.
2015-01-01
Two adult rhesus monkeys were trained to detect which item in an array of memory items had changed using the same stimuli, viewing times, and delays as used with humans. Although the monkeys were extensively trained, they were less accurate than humans with the same array sizes (2, 4, & 6 items), with both stimulus types (colored squares, clip art), and showed calculated memory capacities of about one item (or less). Nevertheless, the memory results from both monkeys and humans for both stimulus types were well characterized by the inverse power-law of display size. This characterization provides a simple and straightforward summary of a fundamental process of visual short-term memory (how VSTM declines with memory load) that emphasizes species similarities based upon similar functional relationships. By more closely matching monkey testing parameters to those of humans, the similar functional relationships strengthen the evidence suggesting similar processes underlying monkey and human VSTM. PMID:25706544
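The inverse power-law summary this abstract refers to is easy to make concrete. Below is a minimal Python sketch (assuming NumPy/SciPy are available) that fits such a function to made-up accuracy values; the data, starting guesses, and the inverse_power helper are hypothetical, not taken from the study.

    import numpy as np
    from scipy.optimize import curve_fit

    def inverse_power(n, a, b):
        # Performance falls off as a power of display size N: p = a * N^(-b).
        return a * n ** (-b)

    display_size = np.array([2.0, 4.0, 6.0])   # array sizes used in the study
    accuracy = np.array([0.85, 0.65, 0.55])    # hypothetical proportion correct

    (a, b), _ = curve_fit(inverse_power, display_size, accuracy, p0=[1.0, 0.5])
    print(f"fit: accuracy ~ {a:.2f} * N^(-{b:.2f})")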
Adaptation disrupts motion integration in the primate dorsal stream
Patterson, Carlyn A.; Wissig, Stephanie C.; Kohn, Adam
2014-01-01
Sensory systems adjust continuously to the environment. The effects of recent sensory experience—or adaptation—are typically assayed by recording in a relevant subcortical or cortical network. However, adaptation effects cannot be localized to a single, local network. Adjustments in one circuit or area will alter the input provided to others, with unclear consequences for computations implemented in the downstream circuit. Here we show that prolonged adaptation with drifting gratings, which alters responses in the early visual system, impedes the ability of area MT neurons to integrate motion signals in plaid stimuli. Perceptual experiments reveal a corresponding loss of plaid coherence. A simple computational model shows how the altered representation of motion signals in early cortex can derail integration in MT. Our results suggest that the effects of adaptation cascade through the visual system, derailing the downstream representation of distinct stimulus attributes. PMID:24507198
Cortical oscillations related to processing congruent and incongruent grapheme-phoneme pairs.
Herdman, Anthony T; Fujioka, Takako; Chau, Wilkin; Ross, Bernhard; Pantev, Christo; Picton, Terence W
2006-05-15
In this study, we investigated changes in cortical oscillations following congruent and incongruent grapheme-phoneme stimuli. Hiragana graphemes and phonemes were simultaneously presented as congruent or incongruent audiovisual stimuli to native Japanese-speaking participants. The discriminative reaction time was 57 ms shorter for congruent than incongruent stimuli. Analysis of MEG responses using synthetic aperture magnetometry (SAM) revealed that congruent stimuli evoked larger 2-10 Hz activity in the left auditory cortex within the first 250 ms after stimulus onset, and smaller 2-16 Hz activity in bilateral visual cortices between 250 and 500 ms. These results indicate that congruent visual input can modify cortical activity in the left auditory cortex.
Reinke, Karen S.; LaMontagne, Pamela J.; Habib, Reza
2011-01-01
Spatial attention has been argued to be adaptive by enhancing the processing of visual stimuli within the ‘spotlight of attention’. We previously reported that crude threat cues (backward masked fearful faces) facilitate spatial attention through a network of brain regions consisting of the amygdala, anterior cingulate and contralateral visual cortex. However, results from previous functional magnetic resonance imaging (fMRI) dot-probe studies have been inconclusive regarding a fearful face-elicited contralateral modulation of visual targets. Here, we tested the hypothesis that the capture of spatial attention by crude threat cues would facilitate processing of subsequently presented visual stimuli within the masked fearful face-elicited ‘spotlight of attention’ in the contralateral visual cortex. Participants performed a backward masked fearful face dot-probe task while brain activity was measured with fMRI. Masked fearful face left visual field trials enhanced activity for spatially congruent targets in the right superior occipital gyrus, fusiform gyrus and lateral occipital complex, while masked fearful face right visual field trials enhanced activity in the left middle occipital gyrus. These data indicate that crude threat elicited spatial attention enhances the processing of subsequent visual stimuli in contralateral occipital cortex, which may occur by lowering neural activation thresholds in this retinotopic location. PMID:20702500
Goodale, M A; Murison, R C
1975-05-02
The effects of bilateral removal of the superior colliculus or visual cortex on visually guided locomotor movements in rats performing a brightness discrimination task were investigated directly with the use of cine film. Rats with collicular lesions showed patterns of locomotion comparable to or more efficient than those of normal animals when approaching one of 5 small doors located at one end of a large open area. In contrast, animals with large but incomplete lesions of visual cortex were distinctly impaired in their visual control of approach responses to the same stimuli. On the other hand, rats with collicular damage showed no orienting reflex or evidence of distraction in the same task when novel visual or auditory stimuli were presented. However, both normal and visual-decorticate rats showed various components of the orienting reflex and disturbance in task performance when the same novel stimuli were presented. These results suggest that although the superior colliculus does not appear to be essential to the visual control of locomotor orientation, this midbrain structure might participate in the mediation of shifts in visual fixation and attention. Visual cortex, while contributing to visuospatial guidance of locomotor movements, might not play a significant role in the control and integration of the orienting reflex.
Mo, Lei; Xu, Guiping; Kay, Paul; Tan, Li-Hai
2011-01-01
Previous studies have shown that the effect of language on categorical perception of color is stronger when stimuli are presented in the right visual field than in the left. To examine whether this lateralized effect occurs preattentively at an early stage of processing, we monitored the visual mismatch negativity, which is a component of the event-related potential of the brain to an unfamiliar stimulus among a temporally presented series of stimuli. In the oddball paradigm we used, the deviant stimuli were unrelated to the explicit task. A significant interaction between color-pair type (within-category vs. between-category) and visual field (left vs. right) was found. The amplitude of the visual mismatch negativity component evoked by the within-category deviant was significantly smaller than that evoked by the between-category deviant when displayed in the right visual field, but no such difference was observed for the left visual field. This result constitutes electroencephalographic evidence that the lateralized Whorf effect per se occurs out of awareness and at an early stage of processing. PMID:21844340
Threat as a feature in visual semantic object memory.
Calley, Clifford S; Motes, Michael A; Chiang, H-Sheng; Buhl, Virginia; Spence, Jeffrey S; Abdi, Hervé; Anand, Raksha; Maguire, Mandy; Estevez, Leonardo; Briggs, Richard; Freeman, Thomas; Kraut, Michael A; Hart, John
2013-08-01
Threatening stimuli have been found to modulate visual processes related to perception and attention. The present functional magnetic resonance imaging (fMRI) study investigated whether threat modulates visual object recognition of man-made and naturally occurring categories of stimuli. Compared with nonthreatening pictures, threatening pictures of real items elicited larger fMRI BOLD signal changes in medial visual cortices extending inferiorly into the temporo-occipital (TO) "what" pathways. This region elicited greater signal changes for threatening items compared to nonthreatening ones from both the naturally occurring and man-made stimulus supraordinate categories, demonstrating a featural component to these visual processing areas. Two additional loci of signal changes within more lateral inferior TO areas (bilateral BA18 and 19 as well as the right ventral temporal lobe) were detected for a category-feature interaction, with stronger responses to man-made (category) threatening (feature) stimuli than to natural threats. The findings are discussed in terms of visual recognition processes that efficiently or rapidly handle groups of items conferring a survival advantage. Copyright © 2012 Wiley Periodicals, Inc.
In-group modulation of perceptual matching.
Moradi, Zargol; Sui, Jie; Hewstone, Miles; Humphreys, Glyn W
2015-10-01
We report a novel effect of in-group bias on a task requiring simple perceptual matching of stimuli. Football fans were instructed to associate the badges of their favorite football team (in-group), a rival team (out-group), and neutral teams with simple geometric shapes. Responses to matching in-group stimuli were more efficient, and discriminability was enhanced, as compared to out-group stimuli (rival and neutral)-a result that occurred even when participants responded only to the (equally familiar) geometric shapes. Across individuals, the in-group bias on shape matching was correlated with measures of group satisfaction, and similar results were found when football fans performed the task, in the context of both the football ground and a laboratory setting. We also observed effects of in-group bias on the response criteria in some but not all of the experiments. In control studies, the advantage for in-group stimuli was not found in an independent sample of participants who were not football fans. This indicates that there was not an intrinsic advantage for the stimuli that were "in-group" for football fans. Also, performance did not differ for familiar versus unfamiliar stimuli without in-group associations. These findings indicate that group identification can affect simple shape matching.
Nelson, D E; Takahashi, J S
1991-01-01
1. Light-induced phase shifts of the circadian rhythm of wheel-running activity were used to measure the photic sensitivity of a circadian pacemaker and the visual pathway that conveys light information to it in the golden hamster (Mesocricetus auratus). The sensitivity to stimulus irradiance and duration was assessed by measuring the magnitude of phase-shift responses to photic stimuli of different irradiance and duration. The visual sensitivity was also measured at three different phases of the circadian rhythm. 2. The stimulus-response curves measured at different circadian phases suggest that the maximum phase-shift is the only aspect of visual responsivity to change as a function of the circadian day. The half-saturation constants (sigma) for the stimulus-response curves are not significantly different over the three circadian phases tested. The photic sensitivity to irradiance (1/sigma) appears to remain constant over the circadian day. 3. The hamster circadian pacemaker and the photoreceptive system that subserves it are more sensitive to the irradiance of longer-duration stimuli than to irradiance of briefer stimuli. The system is maximally sensitive to the irradiance of stimuli of 300 s and longer in duration. A quantitative model is presented to explain the changes that occur in the stimulus-response curves as a function of photic stimulus duration. 4. The threshold for photic stimulation of the hamster circadian pacemaker is also quite high. The threshold irradiance (the minimum irradiance necessary to induce statistically significant responses) is approximately 10^11 photons cm^-2 s^-1 for optimal stimulus durations. This threshold is equivalent to a luminance at the cornea of 0.1 cd m^-2. 5. We also measured the sensitivity of this visual pathway to the total number of photons in a stimulus. This system is maximally sensitive to photons in stimuli between 30 and 3600 s in duration. The maximum quantum efficiency of photic integration occurs in 300 s stimuli. 6. These results suggest that the visual pathways that convey light information to the mammalian circadian pacemaker possess several unique characteristics. These pathways are relatively insensitive to light irradiance and also integrate light inputs over relatively long durations. This visual system, therefore, possesses an optimal sensitivity of 'tuning' to total photons delivered in stimuli of several minutes in duration. Together these characteristics may make this visual system unresponsive to environmental 'noise' that would interfere with the entrainment of circadian rhythms to light-dark cycles. PMID:1895235
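A minimal sketch of the kind of saturating stimulus-response curve this abstract describes, assuming a Hill-type function in which sigma is the half-saturation irradiance (so sensitivity scales as 1/sigma); the parameter values below are illustrative placeholders, not the fitted values from the study.

    import numpy as np

    def phase_shift(irradiance, max_shift, sigma, n=1.0):
        # Hill-type saturating curve: response rises with irradiance and
        # saturates at max_shift; sigma is the half-saturation constant.
        return max_shift * irradiance**n / (irradiance**n + sigma**n)

    # Sweep irradiances around the reported threshold of ~10^11 photons cm^-2 s^-1.
    for i in np.logspace(10, 15, 6):
        print(f"{i:.1e} photons cm^-2 s^-1 -> {phase_shift(i, 120.0, 1e13):.1f} min shift")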
Visual and auditory perception in preschool children at risk for dyslexia.
Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina
2014-11-01
Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusion is that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.
Edmiston, E. Kale; McHugo, Maureen; Dukic, Mildred S.; Smith, Stephen D.; Abou-Khalil, Bassel; Eggers, Erica
2013-01-01
Emotionally arousing pictures induce increased activation of visual pathways relative to emotionally neutral images. A predominant model for the preferential processing and attention to emotional stimuli posits that the amygdala modulates sensory pathways through its projections to visual cortices. However, recent behavioral studies have found intact perceptual facilitation of emotional stimuli in individuals with amygdala damage. To determine the importance of the amygdala to modulations in visual processing, we used functional magnetic resonance imaging to examine visual cortical blood oxygenation level-dependent (BOLD) signal in response to emotionally salient and neutral images in a sample of human patients with unilateral medial temporal lobe resection that included the amygdala. Adults with right (n = 13) or left (n = 5) medial temporal lobe resections were compared with demographically matched healthy control participants (n = 16). In the control participants, both aversive and erotic images produced robust BOLD signal increases in bilateral primary and secondary visual cortices relative to neutral images. Similarly, all patients with amygdala resections showed enhanced visual cortical activations to erotic images both ipsilateral and contralateral to the lesion site. All but one of the amygdala resection patients showed similar enhancements to aversive stimuli and there were no significant group differences in visual cortex BOLD responses in patients compared with controls for either aversive or erotic images. Our results indicate that neither the right nor left amygdala is necessary for the heightened visual cortex BOLD responses observed during emotional stimulus presentation. These data challenge an amygdalo-centric model of emotional modulation and suggest that non-amygdalar processes contribute to the emotional modulation of sensory pathways. PMID:23825407
Zhang, Di; Sessa, Salvatore; Kong, Weisheng; Cosentino, Sarah; Magistro, Daniele; Ishii, Hiroyuki; Zecca, Massimiliano; Takanishi, Atsuo
2015-11-01
Current training for laparoscopy focuses only on the enhancement of manual skill and does not give advice on improving trainees' posture. However, a poor posture can result in increased static muscle loading, faster fatigue, and impaired psychomotor task performance. In this paper, the authors propose a method, named subliminal persuasion, which gives the trainee real-time advice for correcting upper limb posture during laparoscopic training toward that of an expert, while adding less workload. A 9-axis inertial measurement unit was used to compute the upper limb posture, and a Detection Reaction Time device was developed and used to measure the workload. A monitor displayed not only images from the laparoscope, but also a visual stimulus, a transparent red cross superimposed on the laparoscopic images, whenever the trainee had incorrect upper limb posture. One group was exposed, when their posture was not correct during training, to a short (about 33 ms) subliminal visual stimulus. The control group instead was exposed to longer (about 660 ms) supraliminal visual stimuli. We found that subliminal visual stimulation is a valid method to improve trainees' upper limb posture during laparoscopic training. Moreover, the additional workload required for subconscious processing of subliminal visual stimuli is less than that required for supraliminal visual stimuli, which are processed instead at the conscious level. We propose subliminal persuasion as a method to give subconscious real-time stimuli to improve upper limb posture during laparoscopic training. Its effectiveness and efficiency were confirmed against supraliminal stimuli transmitted at the conscious level: Subliminal persuasion improved upper limb posture of trainees, with a smaller increase in the overall workload.
Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg
2016-01-01
Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463
Vinken, Kasper; Van den Bergh, Gert; Vermaercke, Ben; Op de Beeck, Hans P.
2016-01-01
In recent years, the rodent has come forward as a candidate model for investigating higher level visual abilities such as object vision. This view has been backed up substantially by evidence from behavioral studies that show rats can be trained to express visual object recognition and categorization capabilities. However, almost no studies have investigated the functional properties of rodent extrastriate visual cortex using stimuli that target object vision, leaving a gap compared with the primate literature. Therefore, we recorded single-neuron responses along a proposed ventral pathway in rat visual cortex to investigate hallmarks of primate neural object representations such as preference for intact versus scrambled stimuli and category-selectivity. We presented natural movies containing a rat or no rat as well as their phase-scrambled versions. Population analyses showed increased dissociation in representations of natural versus scrambled stimuli along the targeted stream, but without a clear preference for natural stimuli. Along the measured cortical hierarchy the neural response seemed to be driven increasingly by features that are not V1-like and destroyed by phase-scrambling. However, there was no evidence for category selectivity for the rat versus nonrat distinction. Together, these findings provide insights about differences and commonalities between rodent and primate visual cortex. PMID:27146315
Lykins, Amy D; Meana, Marta; Kambe, Gretchen
2006-10-01
As a first step in the investigation of the role of visual attention in the processing of erotic stimuli, eye-tracking methodology was employed to measure eye movements during erotic scene presentation. Because eye-tracking is a novel methodology in sexuality research, we attempted to determine whether the eye-tracker could detect differences (should they exist) in visual attention to erotic and non-erotic scenes. A total of 20 men and 20 women were presented with a series of erotic and non-erotic images, and their eye movements were tracked during image presentation. Comparisons between erotic and non-erotic image groups showed significant differences on two of three dependent measures of visual attention (number of fixations and total time) in both men and women. As hypothesized, there was a significant Stimulus x Scene Region interaction, indicating that participants visually attended to the body more in the erotic stimuli than in the non-erotic stimuli, as evidenced by a greater number of fixations and longer total time devoted to that region. These findings provide support for the application of eye-tracking methodology as a measure of visual attentional capture in sexuality research. Future applications of this methodology to expand our knowledge of the role of cognition in sexuality are suggested.
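The two dependent measures reported here, number of fixations and total time within a scene region, reduce to simple bookkeeping over fixation records. A minimal Python sketch with invented coordinates; the roi_stats helper and the rectangle are illustrative only, not the study's analysis code.

    def roi_stats(fixations, roi):
        # Count fixations and total dwell time (ms) inside a rectangular
        # region of interest given as (x_min, y_min, x_max, y_max).
        x0, y0, x1, y1 = roi
        durations = [d for (x, y, d) in fixations if x0 <= x <= x1 and y0 <= y <= y1]
        return len(durations), sum(durations)

    # Hypothetical fixation records: (x, y, duration_ms).
    fixations = [(120, 340, 210), (400, 260, 180), (415, 275, 320), (90, 50, 150)]
    count, total_time = roi_stats(fixations, roi=(350, 200, 500, 400))
    print(count, total_time)  # 2 fixations, 500 ms total time in the region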
Split brain: divided perception but undivided consciousness.
Pinto, Yair; Neville, David A; Otten, Marte; Corballis, Paul M; Lamme, Victor A F; de Haan, Edward H F; Foschi, Nicoletta; Fabri, Mara
2017-05-01
In extensive studies with two split-brain patients we replicate the standard finding that stimuli cannot be compared across visual half-fields, indicating that each hemisphere processes information independently of the other. Yet, crucially, we show that the canonical textbook findings that a split-brain patient can only respond to stimuli in the left visual half-field with the left hand, and to stimuli in the right visual half-field with the right hand and verbally, are not universally true. Across a wide variety of tasks, split-brain patients with a complete and radiologically confirmed transection of the corpus callosum showed full awareness of presence, and well above chance-level recognition of location, orientation and identity of stimuli throughout the entire visual field, irrespective of response type (left hand, right hand, or verbally). Crucially, we used confidence ratings to assess conscious awareness. This revealed that also on high confidence trials, indicative of conscious perception, response type did not affect performance. These findings suggest that severing the cortical connections between hemispheres splits visual perception, but does not create two independent conscious perceivers within one brain. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
ERIC Educational Resources Information Center
Zupan, Barbra; Sussman, Joan E.
2009-01-01
Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…
Response-dependent dynamics of cell-specific inhibition in cortical networks in vivo
El-Boustani, Sami; Sur, Mriganka
2014-01-01
In the visual cortex, inhibitory neurons alter the computations performed by target cells via combination of two fundamental operations, division and subtraction. The origins of these operations have been variously ascribed to differences in neuron classes, synapse location or receptor conductances. Here, by utilizing specific visual stimuli and single optogenetic probe pulses, we show that the function of parvalbumin-expressing and somatostatin-expressing neurons in mice in vivo is governed by the overlap of response timing between these neurons and their targets. In particular, somatostatin-expressing neurons respond at longer latencies to small visual stimuli compared with their target neurons and provide subtractive inhibition. With large visual stimuli, however, they respond at short latencies coincident with their target cells and switch to provide divisive inhibition. These results indicate that inhibition mediated by these neurons is a dynamic property of cortical circuits rather than an immutable property of neuronal classes. PMID:25504329
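The abstract contrasts subtractive and divisive inhibition; the difference is easiest to see on a toy tuning curve. A minimal Python sketch with invented firing rates (the Gaussian tuning function and all constants are illustrative, not the recorded data):

    import numpy as np

    def tuning(theta, pref=0.0, width=20.0, peak=30.0):
        # Illustrative Gaussian orientation tuning curve (spikes/s).
        return peak * np.exp(-0.5 * ((theta - pref) / width) ** 2)

    theta = np.linspace(-90, 90, 181)
    r = tuning(theta)

    r_sub = np.maximum(r - 8.0, 0.0)  # subtractive: shifts the curve down, narrowing it
    r_div = r / 2.0                   # divisive: rescales the curve, shape preserved
    print(r.max(), r_sub.max(), r_div.max())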
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
The amygdala and basal forebrain as a pathway for motivationally guided attention.
Peck, Christopher J; Salzman, C Daniel
2014-10-08
Visual stimuli associated with rewards attract spatial attention. Neurophysiological mechanisms that mediate this process must register both the motivational significance and location of visual stimuli. Recent neurophysiological evidence indicates that the amygdala encodes information about both of these parameters. Furthermore, the firing rate of amygdala neurons predicts the allocation of spatial attention. One neural pathway through which the amygdala might influence attention involves the intimate and bidirectional connections between the amygdala and basal forebrain (BF), a brain area long implicated in attention. Neurons in the rhesus monkey amygdala and BF were therefore recorded simultaneously while subjects performed a detection task in which the stimulus-reward associations of visual stimuli modulated spatial attention. Neurons in BF were spatially selective for reward-predictive stimuli, much like the amygdala. The onset of reward-predictive signals in each brain area suggested different routes of processing for reward-predictive stimuli appearing in the ipsilateral and contralateral fields. Moreover, neurons in the amygdala, but not BF, tracked trial-to-trial fluctuations in spatial attention. These results suggest that the amygdala and BF could play distinct yet inter-related roles in influencing attention elicited by reward-predictive stimuli. Copyright © 2014 the authors.
The dynamic-stimulus advantage of visual symmetry perception.
Niimi, Ryosuke; Watanabe, Katsumi; Yokosawa, Kazuhiko
2008-09-01
It has been speculated that visual symmetry perception from dynamic stimuli involves mechanisms different from those for static stimuli. However, previous studies found no evidence that dynamic stimuli lead to active temporal processing and improve symmetry detection. In this study, four psychophysical experiments investigated temporal processing in symmetry perception using both dynamic and static stimulus presentations of dot patterns. In Experiment 1, rapid successive presentations of symmetric patterns (e.g., 16 patterns per 853 ms) produced more accurate discrimination of orientations of symmetry axes than static stimuli (single pattern presented through 853 ms). In Experiments 2-4, we confirmed that the dynamic-stimulus advantage depended upon presentation of a large number of unique patterns within a brief period (853 ms) in the dynamic conditions. Evidently, human vision takes advantage of temporal processing for symmetry perception from dynamic stimuli.
Combining MRI and VEP imaging to isolate the temporal response of visual cortical areas
NASA Astrophysics Data System (ADS)
Carney, Thom; Ales, Justin; Klein, Stanley A.
2008-02-01
The human brain has well over 30 cortical areas devoted to visual processing. Classical neuro-anatomical as well as fMRI studies have demonstrated that early visual areas have a retinotopic organization whereby adjacent locations in visual space are represented in adjacent areas of cortex within a visual area. At the 2006 Electronic Imaging meeting we presented a method using sprite graphics to obtain high-resolution retinotopic visual evoked potential responses using multi-focal m-sequence technology (mfVEP). We have used this method to record mfVEPs from up to 192 non-overlapping checkerboard stimulus patches scaled such that each patch activates about 12 mm^2 of cortex in area V1 and even less in V2. This dense coverage enables us to incorporate cortical folding constraints, given by anatomical MRI and fMRI results from the same subject, to isolate the V1 and V2 temporal responses. Moreover, the method offers a simple means of validating the accuracy of the extracted V1 and V2 time functions by comparing the results between left and right hemispheres that have unique folding patterns and are processed independently. Previous VEP studies have been contradictory as to which area responds first to visual stimuli. This new method accurately separates the signals from the two areas and demonstrates that both respond with essentially the same latency. A new method is introduced which describes better ways to isolate cortical areas using an empirically determined forward model. The method includes a novel steady state mfVEP and complex SVD techniques. In addition, this evolving technology is put to use examining how stimulus attributes differentially impact the response in different cortical areas, in particular how fast nonlinear contrast processing occurs. This question is examined using both state triggered kernel estimation (STKE) and m-sequence "conditioned kernels". The analysis indicates different contrast gain control processes in areas V1 and V2. Finally we show that our m-sequence multi-focal stimuli have advantages for integrating EEG and MEG for improved dipole localization.
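The multifocal m-sequence technique rests on maximal-length binary sequences, conventionally generated with a linear feedback shift register; each stimulus patch is driven by a time-shifted copy of the same sequence so the per-patch responses can be separated by cross-correlation. A minimal Python sketch under those standard assumptions (the tap choice and seed are illustrative; x^5 + x^3 + 1 is a known primitive polynomial, giving period 31):

    def m_sequence(n=5, taps=(5, 3), seed=0b10101):
        # Fibonacci LFSR; a primitive feedback polynomial (here x^5 + x^3 + 1)
        # yields a maximal-length binary sequence with period 2^n - 1.
        lfsr, seq = seed, []
        for _ in range(2 ** n - 1):
            seq.append(lfsr & 1)
            bit = 0
            for t in taps:
                bit ^= (lfsr >> (n - t)) & 1
            lfsr = (lfsr >> 1) | (bit << (n - 1))
        return seq

    seq = m_sequence()
    print(len(seq), seq[:10])  # 31 bits per period; each patch would get a shifted copy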
Enhanced Associative Memory for Colour (but Not Shape or Location) in Synaesthesia
ERIC Educational Resources Information Center
Pritchard, Jamie; Rothen, Nicolas; Coolbear, Daniel; Ward, Jamie
2013-01-01
People with grapheme-colour synaesthesia have been shown to have enhanced memory on a range of tasks using both stimuli that induce synaesthesia (e.g. words) and, more surprisingly, stimuli that do not (e.g. certain abstract visual stimuli). This study examines the latter by using multi-featured stimuli consisting of shape, colour and location…
Neural responses in the macaque v1 to bar stimuli with various lengths presented on the blind spot.
Matsumoto, Masayuki; Komatsu, Hidehiko
2005-05-01
Although there is no retinal input within the blind spot, it is filled with the same visual attributes as its surround. Earlier studies showed that neural responses are evoked at the retinotopic representation of the blind spot in the primary visual cortex (V1) when perceptual filling-in of a surface or completion of a bar occurs. To determine whether these neural responses correlate with perception, we recorded from V1 neurons whose receptive fields overlapped the blind spot. Bar stimuli of various lengths were presented at the blind spots of monkeys while they performed a fixation task. One end of the bar was fixed at a position outside the blind spot, and the position of the other end was varied. Perceived bar length was measured using a similar set of bar stimuli in human subjects. As long as one end of the bar was inside the blind spot, the perceived bar length remained constant, and when the bar exceeded the blind spot, perceptual completion occurred, and the perceived bar length increased substantially. Some V1 neurons of the monkey exhibited a significant increase in their activity when the bar exceeded the blind spot, even though the amount of the retinal stimulation increased only slightly. These response increases coincided with perceptual completion observed in human subjects and were much larger than would be expected from simple spatial summation and could not be explained by contextual modulation. We conclude that the completed bar appearing on the part of the receptive field embedded within the blind spot gave rise to the observed increase in neuronal activity.
Challenging Cognitive Control by Mirrored Stimuli in Working Memory Matching
Wirth, Maria; Gaschler, Robert
2017-01-01
Cognitive conflict has often been investigated by placing automatic processing originating from learned associations in competition with instructed task demands. Here we explore whether mirror generalization as a congenital mechanism can be employed to create cognitive conflict. Past research suggests that the visual system automatically generates an invariant representation of visual objects and their mirrored counterparts (i.e., mirror generalization), and especially so for lateral reversals (e.g., a cup seen from the left side vs. right side). Prior work suggests that mirror generalization can be reduced or even overcome by learning (i.e., for those visual objects for which it is not appropriate, such as letters d and b). We, therefore, minimized prior practice on resolving conflicts involving mirror generalization by using kanji stimuli as non-verbal and unfamiliar material. In a 1-back task, participants had to check a stream of kanji stimuli for identical repetitions and avoid mis-categorizing mirror-reversed stimuli as exact repetitions. Consistent with previous work, lateral reversals led to profound slowing of reaction times and lower accuracy in Experiment 1. Yet, different from previous reports suggesting that lateral reversals lead to stronger conflict, similar slowing for vertical and horizontal mirror transformations was observed in Experiment 2. Taken together, the results suggest that transformations of visual stimuli can be employed to challenge cognitive control in the 1-back task. PMID:28503160
Zhang, Yi; Chen, Lihan
2016-01-01
Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. We here examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading and one lagging visual Ternus frame (VAAV) or dominantly inserted between two visual Ternus frames (AVVA). Participants were required to respond which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair but with similar temporal configurations as in Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support the idea that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910
Visual processing in the central bee brain.
Paulk, Angelique C; Dacks, Andrew M; Phillips-Portillo, James; Fellous, Jean-Marc; Gronenberg, Wulfila
2009-08-12
Visual scenes comprise enormous amounts of information from which nervous systems extract behaviorally relevant cues. In most model systems, little is known about the transformation of visual information as it occurs along visual pathways. We examined how visual information is transformed physiologically as it is communicated from the eye to higher-order brain centers using bumblebees, which are known for their visual capabilities. We recorded intracellularly in vivo from 30 neurons in the central bumblebee brain (the lateral protocerebrum) and compared these neurons to 132 neurons from more distal areas along the visual pathway, namely the medulla and the lobula. In these three brain regions (medulla, lobula, and central brain), we examined correlations between the neurons' branching patterns and their responses primarily to color, but also to motion stimuli. Visual neurons projecting to the anterior central brain were generally color sensitive, while neurons projecting to the posterior central brain were predominantly motion sensitive. The temporal response properties differed significantly between these areas, with an increase in spike time precision across trials and a decrease in average reliable spiking as visual information processing progressed from the periphery to the central brain. These data suggest that neurons along the visual pathway to the central brain not only are segregated with regard to the physical features of the stimuli (e.g., color and motion), but also differ in the way they encode stimuli, possibly to allow for efficient parallel processing to occur.
Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition
Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua
2015-01-01
Humans can easily understand other people’s actions through their visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper aiming for automatic action recognition. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cells tuned to different speeds and orientations in time, for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field caused by lateral connections of spiking neuron networks in V1, we propose a surround suppressive operator to further process spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider a characteristic of the neural code: a mean motion map based on analysis of spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with the state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
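The spatial-temporal correlative Gabor filters described above can be sketched as a Gaussian envelope in space and time multiplied by a drifting sinusoidal carrier. A minimal Python sketch assuming NumPy; the parameter names and values are illustrative, not the ones used in the paper.

    import numpy as np

    def gabor3d(x, y, t, theta=0.0, sf=0.1, tf=0.05, sigma=5.0, tau=3.0):
        # Gaussian envelope in space (sigma) and time (tau) times a carrier
        # drifting at speed tf/sf along orientation theta.
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2) - t**2 / (2 * tau**2))
        return envelope * np.cos(2 * np.pi * (sf * xr - tf * t))

    ax, at = np.arange(-10, 11), np.arange(-4, 5)
    X, Y, T = np.meshgrid(ax, ax, at, indexing="ij")
    kernel = gabor3d(X, Y, T, theta=np.pi / 4)
    print(kernel.shape)  # (21, 21, 9) spatiotemporal filter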
Tan, Bingyao; Mason, Erik; MacLellan, Benjamin; Bizheva, Kostadinka K
2017-03-01
To correlate visually evoked functional and blood flow changes in the rat retina measured simultaneously with a combined optical coherence tomography and electroretinography system (OCT+ERG). Male Brown Norway rats (n = 6) were dark-adapted and anesthetized with ketamine/xylazine. Visually evoked changes in the retinal blood flow (RBF) and functional response were measured simultaneously with an OCT+ERG system with 3-μm axial resolution in retinal tissue and 47-kHz image acquisition rate. Both single flash (10 and 200 ms) and flicker (10 Hz, 20% duty cycle, 1- and 2-second duration) stimuli were projected onto the retina with a custom visual stimulator, integrated into the OCT imaging probe. Total axial RBF was calculated from circular Doppler OCT scans by integrating over the arterial and venous flow. A temporary increase in the RBF was observed with the 10- and 200-ms continuous stimuli (∼1% and ∼4% maximum RBF change, respectively) and the 10-Hz flicker stimuli (∼8% for 1-second duration and ∼10% for 2-second duration). Doubling the flicker stimulus duration resulted in ∼25% increase in the RBF peak magnitude with no significant change in the peak latency. Single flash (200 ms) and flicker (10 Hz, 1 second) stimuli of the same illumination intensity and photon flux resulted in ∼2× larger peak RBF magnitude and ∼25% larger RBF peak latency for the flicker stimulus. Short, single flash and flicker stimuli evoked measurable RBF changes, with larger RBF magnitude and peak latency observed for the flicker stimuli.
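The reported total axial blood flow comes from integrating Doppler-derived axial velocities over the vessel cross-sections. A hedged Python sketch of that step only; the velocity map, mask, and pixel size are invented stand-ins for real segmented Doppler OCT data, not the authors' pipeline.

    import numpy as np

    def total_axial_flow(velocity_map, vessel_mask, pixel_area_mm2):
        # Sum axial velocity (mm/s) over vessel pixels times pixel area (mm^2)
        # to get volumetric flow in mm^3/s (equivalently microliters/s).
        return np.sum(velocity_map[vessel_mask]) * pixel_area_mm2

    rng = np.random.default_rng(1)
    v = rng.normal(10.0, 2.0, size=(256, 256))  # made-up axial velocity map
    mask = np.zeros((256, 256), dtype=bool)
    mask[100:120, 100:120] = True               # made-up vessel segmentation
    print(f"{total_axial_flow(v, mask, 1e-4):.3f} mm^3/s")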
Brain response to visual sexual stimuli in homosexual pedophiles
Schiffer, Boris; Krueger, Tillmann; Paul, Thomas; de Greiff, Armin; Forsting, Michael; Leygraf, Norbert; Schedlowski, Manfred; Gizewski, Elke
2008-01-01
Objective The neurobiological mechanisms of deviant sexual preferences such as pedophilia are largely unknown. The objective of this study was to analyze whether brain activation patterns of homosexual pedophiles differed from those of a nonpedophile homosexual control group during visual sexual stimulation. Method A consecutive sample of 11 pedophile forensic inpatients exclusively attracted to boys and 12 age-matched homosexual control participants from a comparable socioeconomic stratum underwent functional magnetic resonance imaging during a visual sexual stimulation procedure that used sexually stimulating and emotionally neutral photographs. Sexual arousal was assessed according to a subjective rating scale. Results In contrast to sexually neutral pictures, in both groups sexually arousing pictures having both homosexual and pedophile content activated brain areas known to be involved in processing visual stimuli containing emotional content, including the occipitotemporal and prefrontal cortices. However, during presentation of the respective sexual stimuli, the thalamus, globus pallidus and striatum, which correspond to the key areas of the brain involved in sexual arousal and behaviour, showed significant activation in pedophiles, but not in control subjects. Conclusions Central processing of visual sexual stimuli in homosexual pedophiles seems to be comparable to that in nonpedophile control subjects. However, compared with homosexual control subjects, activation patterns in pedophiles refer more strongly to subcortical regions, which have previously been discussed in the context of processing reward signals and also play an important role in addictive and stimulus-controlled behaviour. Thus future studies should further elucidate the specificity of these brain regions for the processing of sexual stimuli in pedophilia and should address the generally weaker activation pattern in homosexual men. PMID:18197269
Sata, Yoshimi; Inagaki, Masumi; Shirane, Seiko; Kaga, Makiko
2002-07-01
In order to evaluate the developmental change of visual perception, P300 event-related potentials (ERPs) in a visual oddball task were recorded in 34 healthy volunteers ranging from 7 to 37 years of age. The latency and amplitude of the visual P300 in response to Japanese ideogram stimuli (a pair of familiar Kanji characters or unfamiliar Kanji characters) and a pair of meaningless complicated figures were measured. Visual P300 was dominant at the parietal area in almost all subjects. There was a significant difference in P300 latency among the three tasks. Reaction times to both kinds of Kanji tasks were significantly shorter than those to the complicated figure task. P300 latencies to the familiar Kanji, unfamiliar Kanji and figure stimuli decreased until 25.8, 26.9 and 29.4 years of age, respectively, and regression analysis revealed that a positive quadratic function could be fitted to the data. Around 9 years of age, the P300 latency/age slope was largest in the unfamiliar Kanji task. These findings suggest that visual P300 development depends on both the complexity of the tasks and the specificity of the stimuli, which might reflect the variety in visual information processing.
The sensory components of high-capacity iconic memory and visual working memory.
Bradley, Claire; Pearson, Joel
2012-01-01
Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more "high-level" alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of three different visual features (color, orientation and motion) across a range of durations from 0 to 6 s. We found that the amount of information stored in iconic memory is smaller for motion than for color or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ∼2 s. Further experiments showed that performance for the 10 items at 1 s was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory, and an effortful "lower-capacity" visual working memory.
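The exponential decline toward chance described above can be summarized with a two-parameter decay fit. A minimal Python sketch over invented accuracy values, assuming NumPy/SciPy; the chance level, data, and starting guesses are assumptions, not the study's numbers.

    import numpy as np
    from scipy.optimize import curve_fit

    CHANCE = 0.5  # assumed chance level; the actual task's chance rate may differ

    def decay(t, p0, tau):
        # Exponential approach to chance: p(t) = CHANCE + (p0 - CHANCE) * exp(-t/tau).
        return CHANCE + (p0 - CHANCE) * np.exp(-t / tau)

    delays = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])         # cue delay (s)
    correct = np.array([0.95, 0.80, 0.68, 0.55, 0.52, 0.50])  # hypothetical data

    (p0_fit, tau_fit), _ = curve_fit(decay, delays, correct, p0=[0.9, 1.0])
    print(f"initial accuracy {p0_fit:.2f}, time constant {tau_fit:.2f} s")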
Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy
Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun
2014-01-01
The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651
Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine
2014-01-01
Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
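Of the three reconstruction approaches compared in this study, the L2 minimum-norm estimate has the most compact closed form: s = L^T (L L^T + λI)^{-1} y for leadfield L and sensor data y. A toy Python sketch with random matrices standing in for a real forward model; the dimensions and regularization value are arbitrary illustrations.

    import numpy as np

    def minimum_norm(leadfield, meg_data, lam=0.1):
        # L2 minimum-norm estimate: sources = L^T (L L^T + lam*I)^-1 y.
        # leadfield: (n_sensors, n_sources); meg_data: (n_sensors, n_times).
        gram = leadfield @ leadfield.T + lam * np.eye(leadfield.shape[0])
        return leadfield.T @ np.linalg.solve(gram, meg_data)

    rng = np.random.default_rng(0)
    L = rng.standard_normal((64, 500))  # 64 sensors, 500 candidate sources
    y = rng.standard_normal((64, 10))   # 10 time samples
    print(minimum_norm(L, y).shape)     # (500, 10) source time courses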
Visual search for emotional expressions: Effect of stimulus set on anger and happiness superiority.
Savage, Ruth A; Becker, Stefanie I; Lipp, Ottmar V
2016-01-01
Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that image colour or poser gender did not account for prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection for the observation of happiness or anger superiority effects in visual search even for face stimuli that avoid obvious expression related perceptual confounds and are drawn from a single database.
Sequential sensory and decision processing in posterior parietal cortex
Ibos, Guilhem; Freedman, David J
2017-01-01
Decisions about the behavioral significance of sensory stimuli often require comparing sensory inference of what we are looking at to internal models of what we are looking for. Here, we test how neuronal selectivity for visual features is transformed into decision-related signals in posterior parietal cortex (area LIP). Monkeys performed a visual matching task that required them to detect target stimuli composed of conjunctions of color and motion direction. Neuronal recordings from area LIP revealed two main findings. First, the sequential processing of visual features and the selection of target stimuli suggest that LIP is involved in transforming sensory information into decision-related signals. Second, the patterns of color and motion selectivity and their impact on decision-related encoding suggest that LIP plays a role in detecting target stimuli by comparing bottom-up sensory inputs (what the monkeys were looking at) with top-down cognitive inputs (what the monkeys were looking for). DOI: http://dx.doi.org/10.7554/eLife.23743.001 PMID:28418332
Death anxiety and visual oculomotor processing of arousing stimuli in a free view setting.
Wendelberg, Linda; Volden, Frode; Yildirim-Yayilgan, Sule
2017-04-01
The main goal of this study was to determine how death anxiety (DA) affects visual processing when subjects are confronted with arousing stimuli. A total of 26 males and females were primed with either DA or a neutral prime and were given a free-view/free-choice task in which eye movements were measured with an eye tracker. The aim was to identify measurable indicators of whether the subjects were under the influence of DA during free viewing; eye tracking was chosen because it is a domain in which such observable indicators can plausibly be found. Ultimately, we observed several changes in visual behavior under the influence of DA when subjects were confronted with arousing stimuli: prolonged average latency, altered sensitivity to the repetition of stimuli, longer fixations, less time in saccadic activity, and fewer classifications related to focal and ambient processing. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Muralidharan, Vignesh; Balasubramani, Pragathi P.; Chakravarthy, V. Srinivasa; Gilat, Moran; Lewis, Simon J. G.; Moustafa, Ahmed A.
2017-01-01
Experimental data show that perceptual cues can either exacerbate or ameliorate freezing of gait (FOG) in Parkinson's Disease (PD). For example, simple visual stimuli like stripes on the floor can alleviate freezing whereas complex stimuli like narrow doorways can trigger it. We present a computational model of the cognitive and motor cortico-basal ganglia loops that explains the effects of sensory and cognitive processes on FOG. The model simulates strong causative factors of FOG including decision conflict (a disagreement of various sensory stimuli in their association with a response) and cognitive load (complexity of coupling a stimulus with downstream mechanisms that control gait execution). Specifically, the model simulates gait of PD patients (freezers and non-freezers) as they navigate a series of doorways while simultaneously responding to several Stroop word cues in a virtual reality setup. The model is based on an actor-critic architecture of Reinforcement Learning involving Utility-based decision making, where Utility is a weighted sum of Value and Risk functions. The model accounts for the following experimental data: (a) the increased foot-step latency seen in relation to high conflict cues, (b) the high number of motor arrests seen in PD freezers when faced with a complex cue compared to the simple cue, and (c) the effect of dopamine medication on these motor arrests. The freezing behavior arises as a result of addition of task parameters (doorways and cues) and not due to inherent differences in the subject group. The model predicts a differential role of risk sensitivity in PD freezers and non-freezers in the cognitive and motor loops. Additionally this first-of-its-kind model provides a plausible framework for understanding the influence of cognition on automatic motor actions in controls and Parkinson's Disease. PMID:28119584
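The Utility construct at the heart of this model combines learned Value with an estimate of outcome Risk. The Python sketch below is a simplified, hypothetical rendering of that idea, not the published model; the update rules, the risk weight kappa, and the two-armed toy task are all assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    def utility(Q, risk, kappa=0.5):
        # Utility = Value minus a weighted penalty on outcome variance (Risk);
        # kappa > 0 makes the agent devalue high-variance (risky) actions
        return Q - kappa * np.sqrt(risk)

    Q, risk, alpha = np.zeros(2), np.zeros(2), 0.1
    for _ in range(2000):
        # epsilon-greedy choice on utility rather than raw value
        if rng.random() > 0.1:
            a = int(np.argmax(utility(Q, risk)))
        else:
            a = int(rng.integers(2))
        # action 1 pays more on average but is far more variable
        r = rng.normal(1.0, 2.0) if a == 1 else rng.normal(0.8, 0.1)
        delta = r - Q[a]                           # reward prediction error (critic)
        Q[a] += alpha * delta                      # Value update
        risk[a] += alpha * (delta ** 2 - risk[a])  # running variance estimate (Risk)
    print("Value:", Q.round(2), "Risk:", risk.round(2),
          "Utility:", utility(Q, risk).round(2))

With kappa > 0 the agent settles on the lower-variance action even though the risky one pays more on average, the kind of risk-sensitive trade-off the model ascribes differentially to freezers and non-freezers.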
Image processing analysis of traditional Gestalt vision experiments
NASA Astrophysics Data System (ADS)
McCann, John J.
2002-06-01
In the late 19th century, Gestalt psychology rebelled against the popular new science of psychophysics. The Gestalt revolution used many fascinating visual examples to illustrate that the whole is greater than the sum of its parts. Color constancy was an important example. The physical interpretation of sensations and their quantification by JNDs and Weber fractions were met with innumerable examples in which two 'identical' physical stimuli did not look the same. The fact that large changes in the color of the illumination failed to change color appearance in real scenes demanded something more than quantifying the psychophysical response of a single pixel. The debate continues today between proponents of physical, pixel-based colorimetry and perceptual, image-based cognitive interpretations. Modern instrumentation has made colorimetric pixel measurement universal. Likewise, new examples of unconscious inference continue to be reported in the literature. Image processing provides a new way of analyzing familiar Gestalt displays. Since the pioneering experiments of Fergus Campbell and Land, we have known that human vision has independent spatial channels and independent color channels. Color-matching data from color constancy experiments agree with spatial-comparison analysis. In this analysis, simple spatial processes can explain the different appearances of 'identical' stimuli by analyzing the multiresolution spatial properties of their surrounds. Benary's Cross, White's Effect, the Checkerboard Illusion, and the Dungeon Illusion can all be understood through analysis of their low-spatial-frequency components. Just as with color constancy, these Gestalt images are most simply described by the analysis of spatial components. Simple spatial mechanisms account for the appearance of 'identical' stimuli in complex scenes; complex cognitive processes are not required to calculate appearances in familiar Gestalt experiments.
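The low-spatial-frequency account is straightforward to probe numerically: blur the display and compare the coarse-scale values around two physically identical targets. A hedged Python sketch follows, using a single Gaussian low-pass in place of a full multiresolution decomposition and a simultaneous-contrast toy display rather than the named illusions:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def low_freq_component(image, sigma=8.0):
        # Gaussian low-pass filter: keeps only the coarse spatial structure
        return gaussian_filter(image.astype(float), sigma)

    # toy simultaneous-contrast display: two physically identical gray patches
    img = np.zeros((100, 100))
    img[:, 50:] = 1.0            # right half light, left half dark
    img[40:60, 20:30] = 0.5      # gray target on the dark surround
    img[40:60, 70:80] = 0.5      # identical gray target on the light surround
    lf = low_freq_component(img)
    print("low-frequency value at left target:  %.3f" % lf[50, 25])
    print("low-frequency value at right target: %.3f" % lf[50, 75])

The two targets have identical pixel values, yet their low-frequency components differ because of their surrounds, which is the spatial-comparison point the abstract makes for the Gestalt displays.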
Miniature Brain Decision Making in Complex Visual Environments
2008-07-18
The grantee investigated, using the honeybee (Apis mellifera) as a model... successful for understanding face processing in both human adults and infants. Individual honeybees (Apis mellifera) were trained with... for 30 bees (group 3) of the target stimuli. Bernard J, Stach S, Giurfa M (2007) Categorization of visual stimuli in the honeybee Apis mellifera
Deletion of the GluA1 AMPA Receptor Subunit Alters the Expression of Short-Term Memory
ERIC Educational Resources Information Center
Sanderson, David J.; Sprengel, Rolf; Seeburg, Peter H.; Bannerman, David M.
2011-01-01
Deletion of the GluA1 AMPA receptor subunit selectively impairs short-term memory for spatial locations. We further investigated this deficit by examining memory for discrete nonspatial visual stimuli in an operant chamber. Unconditioned suppression of magazine responding to visual stimuli was measured in wild-type and GluA1 knockout mice.…
Steady-State Visual Evoked Potentials and Phase Synchronization in Migraine Patients
NASA Astrophysics Data System (ADS)
Angelini, L.; Tommaso, M. De; Guido, M.; Hu, K.; Ivanov, P. Ch.; Marinazzo, D.; Nardulli, G.; Nitti, L.; Pellicoro, M.; Pierro, C.; Stramaglia, S.
2004-07-01
We investigate phase synchronization in EEG recordings from migraine patients. We use the analytic signal technique, based on the Hilbert transform, and find that migraine brains are characterized by enhanced alpha band phase synchronization in the presence of visual stimuli. Our findings show that migraine patients have an overactive regulatory mechanism that renders them more sensitive to external stimuli.
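The analytic-signal technique referenced here is standard: band-pass the EEG to the alpha range, take the instantaneous phase from the Hilbert transform, and quantify how consistently the phase difference between two channels holds (the phase-locking value). A brief Python sketch under those assumptions, with synthetic signals in place of patient EEG:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def alpha_plv(x, y, fs, band=(8.0, 12.0)):
        # band-pass to the alpha range, extract instantaneous phase via the
        # analytic signal (Hilbert transform), then measure phase locking
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        phase_x = np.angle(hilbert(filtfilt(b, a, x)))
        phase_y = np.angle(hilbert(filtfilt(b, a, y)))
        return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))  # 1 = locked

    # synthetic test: two noisy 10 Hz signals with a fixed phase lag
    fs = 250
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(0)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
    y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * rng.standard_normal(t.size)
    print("alpha-band PLV: %.2f" % alpha_plv(x, y, fs))

A constant phase lag, as in this synthetic pair, still yields a PLV near 1; it is the consistency of the phase relation, not a zero lag, that the measure captures.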
ERIC Educational Resources Information Center
Patching, Geoffrey R.; Englund, Mats P.; Hellstrom, Ake
2012-01-01
Despite the importance of both response probability and response time for testing models of choice, there is a dearth of chronometric studies examining systematic asymmetries that occur over time- and space-orders in the method of paired comparisons. In this study, systematic asymmetries in discriminating the magnitude of paired visual stimuli are…
Fear Processing in Dental Phobia during Crossmodal Symptom Provocation: An fMRI Study
Maslowski, Nina Isabel; Wittchen, Hans-Ulrich; Lueken, Ulrike
2014-01-01
While previous studies successfully identified the core neural substrates of the animal subtype of specific phobia, only few and inconsistent research is available for dental phobia. These findings might partly relate to the fact that, typically, visual stimuli were employed. The current study aimed to investigate the influence of stimulus modality on neural fear processing in dental phobia. Thirteen dental phobics (DP) and thirteen healthy controls (HC) attended a block-design functional magnetic resonance imaging (fMRI) symptom provocation paradigm encompassing both visual and auditory stimuli. Drill sounds and matched neutral sinus tones served as auditory stimuli and dentist scenes and matched neutral videos as visual stimuli. Group comparisons showed increased activation in the insula, anterior cingulate cortex, orbitofrontal cortex, and thalamus in DP compared to HC during auditory but not visual stimulation. On the contrary, no differential autonomic reactions were observed in DP. Present results are largely comparable to brain areas identified in animal phobia, but also point towards a potential downregulation of autonomic outflow by neural fear circuits in this disorder. Findings enlarge our knowledge about neural correlates of dental phobia and may help to understand the neural underpinnings of the clinical and physiological characteristics of the disorder. PMID:24738049
Functional Connectivity in Frequency-Tagged Cortical Networks During Active Harm Avoidance
Miskovic, Vladimir; Príncipe, José C.; Keil, Andreas
2015-01-01
Many behavioral and cognitive processes are grounded in widespread and dynamic communication between brain regions. Thus, the quantification of functional connectivity with high temporal resolution is highly desirable for capturing in vivo brain function. However, many of the commonly used measures of functional connectivity capture only linear signal dependence and are based entirely on relatively simple quantitative measures such as mean and variance. In this study, the authors used a recently developed algorithm, the generalized measure of association (GMA), to quantify dynamic changes in cortical connectivity using steady-state visual evoked potentials (ssVEPs) measured in the context of a conditioned behavioral avoidance task. GMA uses a nonparametric estimator of statistical dependence based on ranks that are efficient and capable of providing temporal precision roughly corresponding to the timing of cognitive acts (∼100–200 msec). Participants viewed simple gratings predicting the presence/absence of an aversive loud noise, co-occurring with peripheral cues indicating whether the loud noise could be avoided by means of a key press (active) or not (passive). For active compared with passive trials, heightened connectivity between visual and central areas was observed in time segments preceding and surrounding the avoidance cue. Viewing of the threat stimuli also led to greater initial connectivity between occipital and central regions, followed by heightened local coupling among visual regions surrounding the motor response. Local neural coupling within extended visual regions was sustained throughout major parts of the viewing epoch. These findings are discussed in a framework of flexible synchronization between cortical networks as a function of experience and active sensorimotor coupling. PMID:25557925
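The GMA estimator itself is described only loosely above; the following Python sketch conveys the rank-and-nearest-neighbor idea in a simplified form. The published estimator differs in its details, so treat the function below as a hypothetical illustration rather than a faithful implementation:

    import numpy as np

    def gma(x, y):
        # for each sample i, find the nearest neighbor j of x_i within x;
        # if x and y are associated, y_j should also rank among the samples
        # closest to y_i. Average that rank and rescale to roughly [0, 1].
        n = len(x)
        ranks = np.empty(n)
        for i in range(n):
            dx = np.abs(x - x[i]); dx[i] = np.inf
            j = int(np.argmin(dx))            # nearest neighbor of x_i
            dy = np.abs(y - y[i]); dy[i] = np.inf
            ranks[i] = np.sum(dy <= dy[j])    # rank of y_j among distances to y_i
        return 1.0 - (ranks.mean() - 1.0) / (n - 2)

    rng = np.random.default_rng(2)
    x = rng.standard_normal(300)
    y = x + 0.3 * rng.standard_normal(300)    # dependent pair -> high GMA
    z = rng.standard_normal(300)              # independent pair -> near 0.5
    print("GMA(x, y) = %.2f, GMA(x, z) = %.2f" % (gma(x, y), gma(x, z)))

Because it relies only on ranks, such a measure is insensitive to monotone rescaling of either signal, which is what lets it capture nonlinear dependence that purely linear, variance-based connectivity measures miss.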