Cognitive processing in the primary visual cortex: from perception to memory.
Supèr, Hans
2002-01-01
The primary visual cortex is the first cortical area of the visual system that receives information from the external visual world. Based on the receptive field characteristics of the neurons in this area, it has been assumed that the primary visual cortex is a pure sensory area extracting basic elements of the visual scene. This information is then further processed in the higher-order visual areas, providing us with perception and storage of the visual environment. However, recent findings show that neural correlates of such higher processes, including perception and memory, are also observed in the primary visual cortex. These correlates are expressed in the modulated activity of a neuron's late response to a stimulus, and most likely depend on recurrent interactions between several areas of the visual system. This favors the concept of a distributed nature of visual processing in perceptual organization.
Aging and feature search: the effect of search area.
Burton-Danner, K; Owsley, C; Jackson, G R
2001-01-01
The preattentive system involves the rapid parallel processing of visual information in the visual scene so that attention can be directed to meaningful objects and locations in the environment. This study used the feature search methodology to examine whether there are aging-related deficits in parallel-processing capabilities when older adults are required to visually search a large area of the visual field. Like young subjects, older subjects displayed flat, near-zero slopes for the Reaction Time x Set Size function when searching over a broad area (30 degrees radius) of the visual field, implying parallel processing of the visual display. These same older subjects exhibited impairment in another task, also dependent on parallel processing, performed over the same broad field area; this task, called the useful field of view test, has more complex task demands. Results imply that aging-related breakdowns of parallel processing over a large visual field area are not likely to emerge when required responses are simple, there is only one task to perform, and there is no limitation on visual inspection time.
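A flat Reaction Time x Set Size slope is the operational signature of parallel search used here. A minimal Python sketch of how such slopes might be computed per observer (the RT values and the ~10 ms/item criterion are illustrative assumptions, not figures from the study):

    import numpy as np
    from scipy.stats import linregress

    def search_slope(set_sizes, mean_rts_ms):
        """Slope (ms/item) and intercept of the RT x set-size function."""
        fit = linregress(set_sizes, mean_rts_ms)
        return fit.slope, fit.intercept

    set_sizes = np.array([4, 8, 16])                  # display set sizes
    groups = {
        "young": np.array([452.0, 455.0, 458.0]),     # hypothetical mean RTs (ms)
        "older": np.array([521.0, 524.0, 529.0]),
    }

    for label, rts in groups.items():
        slope, intercept = search_slope(set_sizes, rts)
        mode = "parallel (flat slope)" if abs(slope) < 10 else "serial-like"
        print(f"{label}: {slope:.1f} ms/item, intercept {intercept:.0f} ms -> {mode}")

Near-zero slopes in both groups, as reported in the abstract, indicate that adding distractors does not add per-item search time.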
Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.
Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi
2017-07-01
Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): A visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM.
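The "dominant functional connectivity" result is, at its core, a condition-wise coupling measure between ROI time courses. A schematic Python sketch with synthetic data (the ROI names, the Pearson/Fisher-z approach, and all numbers are assumptions, not the authors' pipeline):

    import numpy as np

    rng = np.random.default_rng(0)
    n_vols = 200

    def roi_correlation(ts_a, ts_b):
        """Pearson correlation between two ROI time courses, plus Fisher z for group statistics."""
        r = np.corrcoef(ts_a, ts_b)[0, 1]
        return r, np.arctanh(r)

    # Synthetic V5/hMT+ and auditory-cortex time courses, with stronger shared signal in the SIVM condition
    shared_sivm = rng.standard_normal(n_vols)
    v5_sivm = shared_sivm + 0.5 * rng.standard_normal(n_vols)
    aud_sivm = shared_sivm + 0.5 * rng.standard_normal(n_vols)
    v5_vivm = rng.standard_normal(n_vols)
    aud_vivm = rng.standard_normal(n_vols)

    for cond, (a, b) in {"SIVM": (v5_sivm, aud_sivm), "VIVM": (v5_vivm, aud_vivm)}.items():
        r, z = roi_correlation(a, b)
        print(f"{cond}: r = {r:.2f}, Fisher z = {z:.2f}")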
Visual perception and imagery: a new molecular hypothesis.
Bókkon, I
2009-05-01
Here, we put forward a redox molecular hypothesis about the natural biophysical substrate of visual perception and visual imagery. This hypothesis is based on the redox and bioluminescent processes of neuronal cells in retinotopically organized cytochrome oxidase-rich visual areas. Our hypothesis is in line with the functional roles of reactive oxygen and nitrogen species in living cells, which are not part of a haphazard process but rather part of a very strict mechanism used in signaling pathways. We point out that there is a direct relationship between neuronal activity and the biophoton emission process in the brain. Electrical and biochemical processes in the brain represent sensory information from the external world. During encoding or retrieval of information, electrical signals of neurons can be converted into synchronized biophoton signals by bioluminescent radical and non-radical processes. Therefore, information in the brain appears not only as an electrical (chemical) signal but also as a regulated biophoton (weak optical) signal inside neurons. During visual perception, the topological distribution of photon stimuli on the retina is represented by electrical neuronal activity in retinotopically organized visual areas. These retinotopic electrical signals in visual neurons can be converted into synchronized biophoton signals by radical and non-radical processes in retinotopically organized mitochondria-rich areas. As a result, regulated bioluminescent biophotons can create intrinsic pictures (depictive representation) in retinotopically organized cytochrome oxidase-rich visual areas during visual imagery and visual perception. Long-term visual memory is interpreted as epigenetic information regulated by free radicals and redox processes. This hypothesis does not claim to solve the secret of consciousness, but proposes that the evolution of higher levels of complexity made the intrinsic picture representation of the external visual world possible by regulated redox and bioluminescent reactions in the visual system during visual perception and visual imagery.
Hertz, Uri; Amedi, Amir
2015-01-01
The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Second, associative areas changed their sensory response profile from responding most strongly to visual stimuli to responding most strongly to auditory stimuli. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required.
Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K
2013-03-20
Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were localized to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.
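Frequency tagging of targets and nontargets lets SSVEP amplitudes be read directly off the Fourier spectrum at each flicker frequency. A minimal Python sketch with a synthetic EEG trace (the 12 Hz/15 Hz tagging frequencies and amplitudes are illustrative assumptions, not the study's parameters):

    import numpy as np

    fs = 500.0                       # sampling rate (Hz)
    t = np.arange(0, 4.0, 1 / fs)    # one 4-s tracking epoch
    rng = np.random.default_rng(1)

    # Synthetic EEG: attended targets tagged at 12 Hz (larger amplitude), nontargets at 15 Hz
    eeg = 1.0 * np.sin(2 * np.pi * 12 * t) + 0.6 * np.sin(2 * np.pi * 15 * t) \
          + 0.8 * rng.standard_normal(t.size)

    spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    for label, f_tag in [("target (12 Hz)", 12.0), ("nontarget (15 Hz)", 15.0)]:
        amp = spectrum[np.argmin(np.abs(freqs - f_tag))]
        print(f"{label}: SSVEP amplitude = {amp:.2f} (a.u.)")

A larger amplitude at the target frequency than at the nontarget frequency is the kind of attentional facilitation the study reports.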
Konishi, Tsuyoshi; Tanida, Jun; Ichioka, Yoshiki
1995-06-01
A novel technique, the visual-area coding technique (VACT), for the optical implementation of fuzzy logic with the capability of visualization of the results is presented. This technique is based on the microfont method and is considered to be an instance of digitized analog optical computing. Huge amounts of data can be processed in fuzzy logic with the VACT. In addition, real-time visualization of the processed result can be accomplished.
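For readers unfamiliar with the operations the VACT realizes optically, standard fuzzy logic reduces numerically to min/max/complement over membership values. A tiny Python illustration (this sketches the logic only, not the optical visual-area coding itself):

    import numpy as np

    def fuzzy_and(a, b):
        return np.minimum(a, b)   # fuzzy conjunction

    def fuzzy_or(a, b):
        return np.maximum(a, b)   # fuzzy disjunction

    def fuzzy_not(a):
        return 1.0 - a            # fuzzy complement

    # Membership values of two fuzzy propositions over a small universe
    a = np.array([0.1, 0.6, 0.9])
    b = np.array([0.4, 0.5, 0.2])
    print("A AND B:", fuzzy_and(a, b))
    print("A OR B :", fuzzy_or(a, b))
    print("NOT A  :", fuzzy_not(a))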
The effect of early visual deprivation on the neural bases of multisensory processing.
Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte
2015-06-01
Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing.
Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.
Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein
2012-10-15
Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex.
Visual cortex activation in kinesthetic guidance of reaching.
Darling, W G; Seitz, R J; Peltier, S; Tellmann, L; Butler, A J
2007-06-01
The purpose of this research was to determine the cortical circuit involved in encoding and controlling kinesthetically guided reaching movements. We used 15O-butanol positron emission tomography in ten blindfolded able-bodied volunteers in a factorial experiment in which the arm (left/right) used to encode the target location and to reach back to the remembered location, and the hemispace of the target location (left/right side of the midsagittal plane), varied systematically. During encoding of a target the experimenter guided the hand to touch the index fingertip to an external target and then returned the hand to the start location. After a short delay the subject voluntarily moved the same hand back to the remembered target location. SPM99 analysis of the PET data contrasting left versus right hand reaching showed increased (P < 0.05, corrected) neural activity in the sensorimotor cortex, premotor cortex and posterior parietal lobule (PPL) contralateral to the moving hand. Additional neural activation was observed in prefrontal cortex and visual association areas of occipital and parietal lobes contralateral and ipsilateral to the reaching hand. There was no statistically significant effect of target location in left versus right hemispace, nor was there an interaction of hand and hemispace effects. Structural equation modeling showed that parietal lobe visual association areas contributed to kinesthetic processing by both hands but occipital lobe visual areas contributed only during dominant hand kinesthetic processing. This visual processing may also involve visualization of the kinesthetically guided target location, using the same network employed to guide reaches to visual targets. The present work clearly demonstrates a network for kinesthetic processing that includes higher visual processing areas in the PPL for both upper limbs and processing in occipital lobe visual areas for the dominant limb.
Differential processing of binocular and monocular gloss cues in human visual cortex.
Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E
2016-06-01
The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues.
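The monocular-to-binocular transfer test amounts to training a pattern classifier on one cue and testing it on the other. A schematic cross-cue decoding sketch in Python with synthetic voxel patterns (scikit-learn, the trial counts, and the informative-voxel structure are assumptions, not the authors' analysis pipeline):

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(2)
    n_trials, n_voxels = 40, 100

    def make_patterns(signal):
        """Synthetic glossy (label 1) vs matte (label 0) voxel patterns sharing a weak signal axis."""
        X = rng.standard_normal((n_trials, n_voxels))
        y = np.repeat([0, 1], n_trials // 2)
        X[y == 1, :10] += signal      # gloss information carried by the first 10 voxels
        return X, y

    X_mono, y_mono = make_patterns(signal=0.8)   # monocular-cue runs (training)
    X_bino, y_bino = make_patterns(signal=0.8)   # binocular-cue runs (test)

    clf = LinearSVC(C=1.0, max_iter=5000).fit(X_mono, y_mono)
    transfer_acc = clf.score(X_bino, y_bino)
    print(f"cross-cue transfer accuracy: {transfer_acc:.2f}")   # above chance (0.5) suggests a shared representation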
Retinotopically specific reorganization of visual cortex for tactile pattern recognition
Cheung, Sing-Hang; Fang, Fang; He, Sheng; Legge, Gordon E.
2009-01-01
Although previous studies have shown that Braille reading and other tactile-discrimination tasks activate the visual cortex of blind and sighted people [1–5], it is not known whether this kind of cross-modal reorganization is influenced by retinotopic organization. We have addressed this question by studying S, a visually impaired adult with the rare ability to read print visually and Braille by touch. S had normal visual development until age six years, and thereafter severe acuity reduction due to corneal opacification, but no evidence of visual-field loss. Functional magnetic resonance imaging (fMRI) revealed that, in S’s early visual areas, tactile information processing activated what would be the foveal representation for normally-sighted individuals, and visual information processing activated what would be the peripheral representation. Control experiments showed that this activation pattern was not due to visual imagery. S’s high-level visual areas which correspond to shape- and object-selective areas in normally-sighted individuals were activated by both visual and tactile stimuli. The retinotopically specific reorganization in early visual areas suggests an efficient redistribution of neural resources in the visual cortex.
Reinke, Karen; Fernandes, Myra; Schwindt, Graeme; O'Craven, Kathleen; Grady, Cheryl L.
2008-01-01
The functional specificity of the brain region known as the Visual Word Form Area (VWFA) was examined using fMRI. We explored whether this area serves a general role in processing symbolic stimuli, rather than being selective for the processing of words. Brain activity was measured during a visual 1-back task to English words, meaningful symbols…
Perceptual deficits of object identification: apperceptive agnosia.
Milner, A David; Cavina-Pratesi, Cristiana
2018-01-01
It is argued here that apperceptive object agnosia (generally now known as visual form agnosia) is in reality not a kind of agnosia, but rather a form of "imperception" (to use the term coined by Hughlings Jackson). We further argue that its proximate cause is a bilateral loss (or functional loss) of the visual form processing systems embodied in the human lateral occipital cortex (area LO). According to the dual-system model of cortical visual processing elaborated by Milner and Goodale (2006), area LO constitutes a crucial component of the ventral stream, and indeed is essential for providing the figural qualities inherent in our normal visual perception of the world. According to this account, the functional loss of area LO would leave only spared visual areas within the occipito-parietal dorsal stream - dedicated to the control of visually-guided actions - potentially able to provide some aspects of visual shape processing in patients with apperceptive agnosia. We review the relevant evidence from such individuals, concentrating particularly on the well-researched patient D.F. We conclude that studies of this kind can provide useful pointers to an understanding of the processing characteristics of parietal-lobe visual mechanisms and their interactions with occipitotemporal perceptual systems in the guidance of action.
Curvature-processing network in macaque visual cortex
Yue, Xiaomin; Pourladian, Irene S.; Tootell, Roger B. H.; Ungerleider, Leslie G.
2014-01-01
Our visual environment abounds with curved features. Thus, the goal of understanding visual processing should include the processing of curved features. Using functional magnetic resonance imaging in behaving monkeys, we demonstrated a network of cortical areas selective for the processing of curved features. This network includes three distinct hierarchically organized regions within the ventral visual pathway: a posterior curvature-biased patch (PCP) located in the near-foveal representation of dorsal V4, a middle curvature-biased patch (MCP) located on the ventral lip of the posterior superior temporal sulcus (STS) in area TEO, and an anterior curvature-biased patch (ACP) located just below the STS in anterior area TE. Our results further indicate that the processing of curvature becomes increasingly complex from PCP to ACP. The proximity of the curvature-processing network to the well-known face-processing network suggests a possible functional link between them.
Cocchi, Luca; Sale, Martin V; L Gollo, Leonardo; Bell, Peter T; Nguyen, Vinh T; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B
2016-09-06
Within the primate visual system, areas at lower levels of the cortical hierarchy process basic visual features, whereas those at higher levels, such as the frontal eye fields (FEF), are thought to modulate sensory processes via feedback connections. Despite these functional exchanges during perception, there is little shared activity between early and late visual regions at rest. How interactions emerge between regions encompassing distinct levels of the visual hierarchy remains unknown. Here we combined neuroimaging, non-invasive cortical stimulation and computational modelling to characterize changes in functional interactions across widespread neural networks before and after local inhibition of primary visual cortex or FEF. We found that stimulation of early visual cortex selectively increased feedforward interactions with FEF and extrastriate visual areas, whereas identical stimulation of the FEF decreased feedback interactions with early visual areas. Computational modelling suggests that these opposing effects reflect a fast-slow timescale hierarchy from sensory to association areas.
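The proposed fast-slow timescale hierarchy can be illustrated with two reciprocally coupled rate units that differ only in their time constants: a sensory-like fast unit and an association-like slow unit. A toy Python simulation (all parameters are illustrative assumptions, not the authors' fitted model, and the noise term is not a rigorous stochastic discretization):

    import numpy as np

    dt, n_steps = 0.001, 20000          # 1 ms steps, 20 s of simulated activity
    tau_v1, tau_fef = 0.005, 0.200      # fast sensory vs slow association time constants (s)
    w_ff, w_fb = 0.3, 0.2               # feedforward (V1->FEF) and feedback (FEF->V1) coupling
    rng = np.random.default_rng(3)

    v1 = np.zeros(n_steps)
    fef = np.zeros(n_steps)
    for t in range(1, n_steps):
        noise_v1, noise_fef = rng.standard_normal(2)
        v1[t] = v1[t-1] + dt / tau_v1 * (-v1[t-1] + w_fb * fef[t-1] + noise_v1)
        fef[t] = fef[t-1] + dt / tau_fef * (-fef[t-1] + w_ff * v1[t-1] + noise_fef)

    def ac_time(x, max_lag=1000):
        """Lag (in steps = ms) at which the autocorrelation first drops below 1/e."""
        x = x - x.mean()
        ac = np.array([np.corrcoef(x[:-l], x[l:])[0, 1] for l in range(1, max_lag)])
        return int(np.argmax(ac < 1 / np.e)) + 1

    print(f"V1 intrinsic timescale ~{ac_time(v1)} ms, FEF ~{ac_time(fef)} ms")

The fast unit decorrelates within a few milliseconds while the slow unit retains activity over hundreds of milliseconds, which is the qualitative signature of the sensory-to-association timescale hierarchy described in the abstract.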
Peigneux, P; Salmon, E; van der Linden, M; Garraux, G; Aerts, J; Delfiore, G; Degueldre, C; Luxen, A; Orban, G; Franck, G
2000-06-01
Humans, like numerous other species, strongly rely on the observation of gestures of other individuals in their everyday life. It is hypothesized that the visual processing of human gestures is sustained by a specific functional architecture, even at an early prelexical cognitive stage, different from that required for the processing of other visual entities. In the present PET study, the neural basis of visual gesture analysis was investigated with functional neuroimaging of brain activity during naming and orientation tasks performed on pictures of either static gestures (upper-limb postures) or tridimensional objects. To prevent automatic object-related cerebral activation during the visual processing of postures, only intransitive postures were selected, i.e., symbolic or meaningless postures which do not imply the handling of objects. Conversely, only intransitive objects which cannot be handled were selected to prevent gesture-related activation during their visual processing. Results clearly demonstrate a significant functional segregation between the processing of static intransitive postures and the processing of intransitive tridimensional objects. Visual processing of objects elicited mainly occipital and fusiform gyrus activity, while visual processing of postures strongly activated the lateral occipitotemporal junction, encroaching upon area MT/V5, involved in motion analysis. These findings suggest that the lateral occipitotemporal junction, working in association with area MT/V5, plays a prominent role in the high-level perceptual analysis of gesture, namely the construction of its visual representation, available for subsequent recognition or imitation.
Matsui, Teppei; Ohki, Kenichi
2013-01-01
Higher order visual areas that receive input from the primary visual cortex (V1) are specialized for the processing of distinct features of visual information. However, it is still incompletely understood how this functional specialization is acquired. Here we used in vivo two-photon calcium imaging in the mouse visual cortex to investigate whether this functional distinction exists as early as the level of projections from V1 to two higher order visual areas, AL and LM. Specifically, we examined whether the sharpness of orientation and direction selectivity and the optimal spatial and temporal frequency of projection neurons from V1 to higher order visual areas match those of the target areas. We found that the V1 inputs to higher order visual areas were indeed functionally distinct: AL preferentially received inputs from V1 that were more orientation and direction selective and tuned for lower spatial frequency compared to projections from V1 to LM, consistent with functional differences between AL and LM. The present findings suggest that selective projections from V1 to higher order visual areas initiate parallel processing of sensory information in the visual cortical network.
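Sharpness of orientation and direction selectivity in such imaging data is commonly summarized with vector-based OSI/DSI indices computed from responses to drifting gratings. A small Python sketch with a synthetic tuning curve (the exact metric used by the authors is an assumption; these are the standard circular-vector definitions):

    import numpy as np

    directions = np.deg2rad(np.arange(0, 360, 45))                    # 8 drifting-grating directions
    responses = np.array([2.0, 9.0, 3.0, 1.0, 1.5, 4.0, 2.0, 1.0])    # synthetic dF/F per direction

    def dsi(theta, r):
        """Direction selectivity: length of the resultant vector in direction space."""
        return np.abs(np.sum(r * np.exp(1j * theta))) / np.sum(r)

    def osi(theta, r):
        """Orientation selectivity: resultant length after doubling angles (collapses opposite directions)."""
        return np.abs(np.sum(r * np.exp(2j * theta))) / np.sum(r)

    print(f"OSI = {osi(directions, responses):.2f}, DSI = {dsi(directions, responses):.2f}")

Comparing distributions of these indices between V1 boutons projecting to AL and to LM is the kind of analysis that supports the "more orientation and direction selective" claim above.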
Caspers, Julian; Zilles, Karl; Amunts, Katrin; Laird, Angela R.; Fox, Peter T.; Eickhoff, Simon B.
2016-01-01
The ventral stream of the human extrastriate visual cortex shows a considerable functional heterogeneity from early visual processing (posterior) to higher, domain-specific processing (anterior). The fusiform gyrus hosts several of those “high-level” functional areas. We recently found a subdivision of the posterior fusiform gyrus on the microstructural level, that is, two distinct cytoarchitectonic areas, FG1 and FG2 (Caspers et al., Brain Structure & Function, 2013). To gain a first insight into the function of these two areas, here we studied their behavioral involvement and coactivation patterns by means of meta-analytic connectivity modeling based on the BrainMap database (www.brainmap.org), using probabilistic maps of these areas as seed regions. The coactivation patterns of the areas support the concept of a common involvement in a core network subserving different cognitive tasks, that is, object recognition, visual language perception, or visual attention. In addition, the analysis supports the previous cytoarchitectonic parcellation, indicating that FG1 appears as a transitional area between early and higher visual cortex and FG2 as a higher-order one. The latter area is furthermore lateralized, as it shows strong relations to the visual language processing system in the left hemisphere, while its right side is more strongly associated with face-selective regions. These findings indicate that functional lateralization of area FG2 relies on a different pattern of connectivity rather than on side-specific cytoarchitectonic features.
Galvez-Pol, A; Calvo-Merino, B; Capilla, A; Forster, B
2018-07-01
Working memory (WM) supports temporary maintenance of task-relevant information. This process is associated with persistent activity in the sensory cortex processing the information (e.g., visual stimuli activate visual cortex). However, we argue here that more multifaceted stimuli moderate this sensory-locked activity and recruit distinctive cortices. Specifically, perception of bodies recruits somatosensory cortex (SCx) beyond early visual areas (suggesting embodiment processes). Here we explore persistent activation in processing areas beyond the sensory cortex initially relevant to the modality of the stimuli. Using visual and somatosensory evoked-potentials in a visual WM task, we isolated different levels of visual and somatosensory involvement during encoding of body and non-body-related images. Persistent activity increased in SCx only when maintaining body images in WM, whereas visual/posterior regions' activity increased significantly when maintaining non-body images. Our results bridge WM and embodiment frameworks, supporting a dynamic WM process where the nature of the information summons specific processing resources.
Visual Cortical Representation of Whole Words and Hemifield-split Word Parts.
Strother, Lars; Coros, Alexandra M; Vilis, Tutis
2016-02-01
Reading requires the neural integration of visual word form information that is split between our retinal hemifields. We examined multiple visual cortical areas involved in this process by measuring fMRI responses while observers viewed words that changed or repeated in one or both hemifields. We were specifically interested in identifying brain areas that exhibit decreased fMRI responses as a result of repeated versus changing visual word form information in each visual hemifield. Our method yielded highly significant effects of word repetition in a previously reported visual word form area (VWFA) in occipitotemporal cortex, which represents hemifield-split words as whole units. We also identified a more posterior occipital word form area (OWFA), which represents word form information in the right and left hemifields independently and is thus both functionally and anatomically distinct from the VWFA. Both the VWFA and the OWFA were left-lateralized in our study and strikingly symmetric in anatomical location relative to known face-selective visual cortical areas in the right hemisphere. Our findings are consistent with the observation that category-selective visual areas come in pairs and support the view that neural mechanisms in left visual cortex--especially those that evolved to support the visual processing of faces--are developmentally malleable and become incorporated into a left-lateralized visual word form network that supports rapid word recognition and reading.
Toward a Unified Theory of Visual Area V4
Roe, Anna W.; Chelazzi, Leonardo; Connor, Charles E.; Conway, Bevil R.; Fujita, Ichiro; Gallant, Jack L.; Lu, Haidong; Vanduffel, Wim
2016-01-01
Visual area V4 is a midtier cortical area in the ventral visual pathway. It is crucial for visual object recognition and has been a focus of many studies on visual attention. However, there is no unifying view of V4’s role in visual processing. Neither is there an understanding of how its role in feature processing interfaces with its role in visual attention. This review captures our current knowledge of V4, largely derived from electrophysiological and imaging studies in the macaque monkey. Based on recent discovery of functionally specific domains in V4, we propose that the unifying function of V4 circuitry is to enable selective extraction of specific functional domain-based networks, whether it be by bottom-up specification of object features or by top-down attentionally driven selection.
Ludwig, Karin; Sterzer, Philipp; Kathmann, Norbert; Hesselmann, Guido
2016-10-01
As a functional organization principle in cortical visual information processing, the influential 'two visual systems' hypothesis proposes a division of labor between a dorsal "vision-for-action" and a ventral "vision-for-perception" stream. A core assumption of this model is that the two visual streams are differentially involved in visual awareness: ventral stream processing is closely linked to awareness while dorsal stream processing is not. In this functional magnetic resonance imaging (fMRI) study with human observers, we directly probed the stimulus-related information encoded in fMRI response patterns in both visual streams as a function of stimulus visibility. We parametrically modulated the visibility of face and tool stimuli by varying the contrasts of the masks in a continuous flash suppression (CFS) paradigm. We found that visibility - operationalized by objective and subjective measures - decreased proportionally with increasing log CFS mask contrast. Neuronally, this relationship was closely matched by ventral visual areas, showing a linear decrease of stimulus-related information with increasing mask contrast. Stimulus-related information in dorsal areas also showed a dependency on mask contrast, but the decrease rather followed a step function instead of a linear function. Together, our results suggest that both the ventral and the dorsal visual stream are linked to visual awareness, but neural activity in ventral areas more closely reflects graded differences in awareness compared to dorsal areas.
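The ventral (graded, linear) versus dorsal (step-like) dependence on mask contrast can be formalized by fitting both a linear and a steep sigmoidal model to decoding information as a function of log mask contrast and comparing fit error. A Python sketch with synthetic data (the model forms, parameter values, and the SSE comparison are assumptions, not the authors' exact analysis):

    import numpy as np
    from scipy.optimize import curve_fit

    log_contrast = np.linspace(-2, 0, 8)                  # log10 CFS mask contrast
    rng = np.random.default_rng(4)
    info_ventral = 0.75 - 0.20 * (log_contrast + 2) + 0.01 * rng.standard_normal(8)
    info_dorsal = np.where(log_contrast < -1, 0.75, 0.55) + 0.01 * rng.standard_normal(8)

    def linear(x, a, b):
        return a + b * x

    def sigmoid(x, lo, hi, x0, k):
        return lo + (hi - lo) / (1 + np.exp(k * (x - x0)))   # steep k approximates a step

    for label, y in [("ventral", info_ventral), ("dorsal", info_dorsal)]:
        p_lin, _ = curve_fit(linear, log_contrast, y)
        p_sig, _ = curve_fit(sigmoid, log_contrast, y, p0=[0.5, 0.8, -1.0, 10.0], maxfev=20000)
        sse_lin = np.sum((y - linear(log_contrast, *p_lin)) ** 2)
        sse_sig = np.sum((y - sigmoid(log_contrast, *p_sig)) ** 2)
        print(f"{label}: SSE linear = {sse_lin:.4f}, SSE step-like = {sse_sig:.4f}")

The dorsal-like data should be fit much better by the step-like model, while the ventral-like data is already well captured by the linear model, mirroring the dissociation described above.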
Information processing in the primate visual system - An integrated systems perspective
Van Essen, David C.; Anderson, Charles H.; Felleman, Daniel J.
1992-01-01
The primate visual system contains dozens of distinct areas in the cerebral cortex and several major subcortical structures. These subdivisions are extensively interconnected in a distributed hierarchical network that contains several intertwined processing streams. A number of strategies are used for efficient information processing within this hierarchy. These include linear and nonlinear filtering, passage through information bottlenecks, and coordinated use of multiple types of information. In addition, dynamic regulation of information flow within and between visual areas may provide the computational flexibility needed for the visual system to perform a broad spectrum of tasks accurately and at high resolution.
Wang, Quanxin; Burkhalter, Andreas
2013-01-23
Previous studies of intracortical connections in mouse visual cortex have revealed two subnetworks that resemble the dorsal and ventral streams in primates. Although calcium imaging studies have shown that many areas of the ventral stream have high spatial acuity whereas areas of the dorsal stream are highly sensitive for transient visual stimuli, there are some functional inconsistencies that challenge a simple grouping into "what/perception" and "where/action" streams known in primates. The superior colliculus (SC) is a major center for processing of multimodal sensory information and the motor control of orienting the eyes, head, and body. Visual processing is performed in superficial layers, whereas premotor activity is generated in deep layers of the SC. Because the SC is known to receive input from visual cortex, we asked whether the projections from 10 visual areas of the dorsal and ventral streams terminate in differential depth profiles within the SC. We found that inputs from primary visual cortex are by far the strongest. Projections from the ventral stream were substantially weaker, whereas the sparsest input originated from areas of the dorsal stream. Importantly, we found that ventral stream inputs terminated in superficial layers, whereas dorsal stream inputs tended to be patchy and either projected equally to superficial and deep layers or strongly preferred deep layers. The results suggest that the anatomically defined ventral and dorsal streams contain areas that belong to distinct functional systems, specialized for the processing of visual information and visually guided action, respectively.
Fujimaki, N; Miyauchi, S; Pütz, B; Sasaki, Y; Takino, R; Sakai, K; Tamada, T
1999-01-01
Functional magnetic resonance imaging was used to investigate neural activity during the judgment of visual stimuli in two groups of experiments using seven and five normal subjects. The subjects were given tasks designed differentially to involve orthographic (more generally, visual form), phonological, and lexico-semantic processes. These tasks included the judgments of whether a line was horizontal, whether a pseudocharacter or pseudocharacter string included a horizontal line, whether a Japanese katakana (phonogram) character or character string included a certain vowel, or whether a character string was meaningful (noun or verb) or meaningless. Neural activity related to the visual form process was commonly observed during judgments of both single real-characters and single pseudocharacters in lateral extrastriate visual cortex, the posterior ventral or medial occipito-temporal area, and the posterior inferior temporal area of both hemispheres. In contrast, left-lateralized activation was observed in the latter two areas during judgments of real- and pseudo-character strings. These results show that there is no katakana "word form center" whose activity is specific to real words. Activation related to the phonological process was observed, in Broca's area, the insula, the supramarginal gyrus, and the posterior superior temporal area, with greater activation in the left hemisphere. These activation foci for visual form and phonological processes of katakana also were reported for the English alphabet in previous studies. The present activation showed no additional areas for contrasts of noun judgment with other conditions and was similar between noun and verb judgment tasks, suggesting two possibilities: no strong semantic activation was produced, or the semantic process shared activation foci with the phonological process.
Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.
2014-11-01
The objective of this multiple-subjects functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard pattern stimuli of various shapes, patterns and sizes and to investigate specific brain areas that are involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hour glass and bowtie and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli had a black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used to generate brain activation maps. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, patterns and sizes activated multiple brain areas mostly in the left hemisphere. The activation in the right middle temporal gyrus (MTG) was found to be significantly higher in processing moving visual stimuli as compared to the static stimulus. In contrast, the activations in the left calcarine sulcus and left lingual gyrus were significantly higher for the static stimulus as compared to moving stimuli. Visual stimulation with the various shapes, patterns and sizes used in this study indicated left-lateralized activation. The involvement of the right MTG in processing moving visual information was evident from the differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas involved in the processing of the static visual stimulus.
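The differential (moving versus static) analysis rests on a voxelwise GLM with one regressor per stimulus type and a contrast between the two. A stripped-down numpy version of that logic for a single synthetic voxel (boxcar regressors without HRF convolution; this is not the SPM implementation, just the underlying arithmetic):

    import numpy as np

    n_vols = 120
    rng = np.random.default_rng(5)

    # Block design: alternating moving / static / rest blocks of 10 volumes each
    moving = np.zeros(n_vols)
    static = np.zeros(n_vols)
    for start in range(0, n_vols, 30):
        moving[start:start + 10] = 1
        static[start + 10:start + 20] = 1

    X = np.column_stack([moving, static, np.ones(n_vols)])   # design matrix with constant term
    true_betas = np.array([1.2, 0.4, 10.0])                  # a hypothetical motion-preferring voxel
    y = X @ true_betas + rng.standard_normal(n_vols)

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    contrast = np.array([1, -1, 0])                          # moving > static
    effect = contrast @ beta_hat
    resid = y - X @ beta_hat
    dof = n_vols - np.linalg.matrix_rank(X)
    var = resid @ resid / dof
    t_stat = effect / np.sqrt(var * contrast @ np.linalg.pinv(X.T @ X) @ contrast)
    print(f"moving > static: beta difference = {effect:.2f}, t = {t_stat:.2f}")

Running the same contrast with the sign flipped (static > moving) is the second differential analysis, which in the study highlighted the calcarine sulcus and lingual gyrus.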
Neural representation of form-contingent color filling-in in the early visual cortex.
Hong, Sang Wook; Tong, Frank
2017-11-01
Perceptual filling-in exemplifies the constructive nature of visual processing. Color, a prominent surface property of visual objects, can appear to spread to neighboring areas that lack any color. We investigated cortical responses to a color filling-in illusion that effectively dissociates perceived color from the retinal input (van Lier, Vergeer, & Anstis, 2009). Observers adapted to a star-shaped stimulus with alternating red- and cyan-colored points to elicit a complementary afterimage. By presenting an achromatic outline that enclosed one of the two afterimage colors, perceptual filling-in of that color was induced in the unadapted central region. Visual cortical activity was monitored with fMRI, and analyzed using multivariate pattern analysis. Activity patterns in early visual areas (V1-V4) reliably distinguished between the two color-induced filled-in conditions, but only higher extrastriate visual areas showed the predicted correspondence with color perception. Activity patterns allowed for reliable generalization between filled-in colors and physical presentations of perceptually matched colors in areas V3 and V4, but not in earlier visual areas. These findings suggest that the perception of filled-in surface color likely requires more extensive processing by extrastriate visual areas, in order for the neural representation of surface color to become aligned with perceptually matched real colors.
Wang, Quanxin; Sporns, Olaf; Burkhalter, Andreas
2012-01-01
Much of the information used for visual perception and visually guided actions is processed in complex networks of connections within the cortex. To understand how this works in the normal brain and to determine the impact of disease, mice are promising models. In primate visual cortex, information is processed in a dorsal stream specialized for visuospatial processing and guided action and a ventral stream for object recognition. Here, we traced the outputs of 10 visual areas and used quantitative graph analytic tools of modern network science to determine, from the projection strengths in 39 cortical targets, the community structure of the network. We found a high density of the cortical graph that exceeded that previously shown in monkey. Each source area showed a unique distribution of projection weights across its targets (i.e. connectivity profile) that was well-fit by a lognormal function. Importantly, the community structure was strongly dependent on the location of the source area: outputs from medial/anterior extrastriate areas were more strongly linked to parietal, motor and limbic cortex, whereas lateral extrastriate areas were preferentially connected to temporal and parahippocampal cortex. These two subnetworks resemble dorsal and ventral cortical streams in primates, demonstrating that the basic layout of cortical networks is conserved across species.
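Two quantitative steps described here, fitting a lognormal to an area's projection weights and recovering community structure from the weighted graph, have compact open-source counterparts. A Python sketch with synthetic weights (networkx modularity communities stand in for the authors' graph-analytic tools; all numbers and area labels are assumptions):

    import numpy as np
    import networkx as nx
    from scipy import stats
    from networkx.algorithms.community import greedy_modularity_communities

    rng = np.random.default_rng(6)

    # Synthetic connectivity profile: projection strengths from one source area to 39 targets
    weights = rng.lognormal(mean=-2.0, sigma=1.5, size=39)
    shape, loc, scale = stats.lognorm.fit(weights, floc=0)
    print(f"lognormal fit: sigma = {shape:.2f}, median = {scale:.3f}")

    # Toy weighted cortical graph with two built-in modules ("dorsal-like" and "ventral-like")
    G = nx.Graph()
    areas = [f"a{i}" for i in range(10)]
    for i, u in enumerate(areas):
        for j, v in enumerate(areas):
            if i < j:
                same_module = (i < 5) == (j < 5)
                w = rng.lognormal(-1.0 if same_module else -3.0, 0.5)
                G.add_edge(u, v, weight=w)

    communities = greedy_modularity_communities(G, weight="weight")
    print("detected modules:", [sorted(c) for c in communities])

With stronger within-module than between-module weights, modularity maximization recovers the two subnetworks, which is the logic behind identifying dorsal- and ventral-stream communities from projection strengths.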
Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi
2015-11-01
Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except right hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, followed by increased activity around 100 msec that decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was only increased to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in the left hemisphere further increases between 170 and 260 msec.
Stereoscopic processing of crossed and uncrossed disparities in the human visual cortex.
Li, Yuan; Zhang, Chuncheng; Hou, Chunping; Yao, Li; Zhang, Jiacai; Long, Zhiying
2017-12-21
Binocular disparity provides a powerful cue for depth perception in a stereoscopic environment. Despite increasing knowledge of the cortical areas that process disparity from neuroimaging studies, the neural mechanism underlying disparity sign processing [crossed disparity (CD)/uncrossed disparity (UD)] is still poorly understood. In the present study, functional magnetic resonance imaging (fMRI) was used to explore different neural features that are relevant to disparity-sign processing. We performed an fMRI experiment on 27 right-handed healthy human volunteers by using both general linear model (GLM) and multi-voxel pattern analysis (MVPA) methods. First, GLM was used to determine the cortical areas that displayed different responses to different disparity signs. Second, MVPA was used to determine how the cortical areas discriminate different disparity signs. The GLM analysis results indicated that shapes with UD induced significantly stronger activity in the sub-region (LO) of the lateral occipital cortex (LOC) than those with CD. The results of MVPA based on regions of interest indicated that areas V3d and V3A displayed higher accuracy in the discrimination of crossed and uncrossed disparities than LOC. The results of searchlight-based MVPA indicated that the dorsal visual cortex showed significantly higher prediction accuracy than the ventral visual cortex, and that the sub-region LO of LOC showed high accuracy in the discrimination of crossed and uncrossed disparities. The results suggest that the dorsal visual areas are more discriminative of disparity sign than the ventral visual areas, even though their overall responses were not sensitive to disparity sign in the GLM analysis. Moreover, LO in the ventral visual cortex is relevant to the recognition of shapes with different disparity signs and is also discriminative of disparity sign.
Compression and reflection of visually evoked cortical waves
Xu, Weifeng; Huang, Xiaoying; Takagaki, Kentaroh; Wu, Jian-young
2007-01-01
Neuronal interactions between primary and secondary visual cortical areas are important for visual processing, but the spatiotemporal patterns of the interaction are not well understood. We used voltage-sensitive dye imaging to visualize neuronal activity in rat visual cortex and found novel visually evoked waves propagating from V1 to other visual areas. A primary wave originated in the monocular area of V1 and was “compressed” when propagating to V2. A reflected wave initiated after compression and propagated backward into V1. The compression occurred at the V1/V2 border, and local GABAA inhibition is important for the compression. The compression/reflection pattern provides a two-phase modulation: V1 is first depolarized by the primary wave and then V1 and V2 are simultaneously depolarized by the reflected and primary waves, respectively. The compression/reflection pattern only occurred for evoked but not for spontaneous waves, suggesting that it is organized by an internal mechanism associated with visual processing.
"Visual" Cortex of Congenitally Blind Adults Responds to Syntactic Movement.
Lane, Connor; Kanjlia, Shipra; Omaki, Akira; Bedny, Marina
2015-09-16
Human cortex is comprised of specialized networks that support functions such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity offer unique insights into this question. In congenitally blind individuals, "visual" cortex responds to auditory and tactile stimuli. Remarkably, recent evidence suggests that occipital areas participate in language processing. We asked whether in blindness, occipital cortices: (1) develop domain-specific responses to language and (2) respond to a highly specialized aspect of language: syntactic movement. Nineteen congenitally blind and 18 sighted participants took part in two fMRI experiments. We report that in congenitally blind individuals, but not in sighted controls, "visual" cortex is more active during sentence comprehension than during a sequence memory task with nonwords, or a symbolic math task. This suggests that areas of occipital cortex become selective for language, relative to other similar higher-cognitive tasks. Crucially, we find that these occipital areas respond more to sentences with syntactic movement but do not respond to the difficulty of math equations. We conclude that regions within the visual cortex of blind adults are involved in syntactic processing. Our findings suggest that the cognitive function of human cortical areas is largely determined by input during development. Human cortex is made up of specialized regions that perform different functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity show that cortical areas can change function from one sensory modality to another. Here we demonstrate that input during development can alter cortical function even more dramatically. In blindness a subset of "visual" areas becomes specialized for language processing. Crucially, we find that the same "visual" areas respond to a highly specialized and uniquely human aspect of language: syntactic movement. These data suggest that human cortex has broad functional capacity during development, and input plays a major role in determining functional specialization.
Lee, Junghee; Cohen, Mark S; Engel, Stephen A; Glahn, David; Nuechterlein, Keith H; Wynn, Jonathan K; Green, Michael F
2010-07-01
Visual masking paradigms assess the early part of visual information processing, which may reflect vulnerability measures for schizophrenia. We examined the neural substrates of visual backward masking performance in unaffected siblings of schizophrenia patients using functional magnetic resonance imaging (fMRI). Twenty-one unaffected siblings of schizophrenia patients and 19 healthy controls performed a backward masking task and three functional localizer tasks to identify three visual processing regions of interest (ROI): lateral occipital complex (LO), the motion-sensitive area, and retinotopic areas. In the masking task, we systematically manipulated stimulus onset asynchronies (SOAs). We analyzed fMRI data in two complementary ways: 1) an ROI approach for the three visual areas, and 2) a whole-brain analysis. The groups did not differ in behavioral performance. For the ROI analysis, both groups increased activation as SOAs increased in LO. Groups did not differ in activation levels of the three ROIs. For the whole-brain analysis, controls increased activation as a function of SOAs, compared with siblings, in several regions (i.e., anterior cingulate cortex, posterior cingulate cortex, inferior prefrontal cortex, inferior parietal lobule). The study found: 1) area LO showed sensitivity to the masking effect in both groups; 2) siblings did not differ from controls in activation of LO; and 3) groups differed significantly in several brain regions outside visual processing areas that have been related to attentional or re-entrant processes. These findings suggest that LO dysfunction may be a disease indicator rather than a risk indicator for schizophrenia.
Top-Down Beta Enhances Bottom-Up Gamma
Thompson, William H.
2017-01-01
Several recent studies have demonstrated that the bottom-up signaling of a visual stimulus is subserved by interareal gamma-band synchronization, whereas top-down influences are mediated by alpha-beta band synchronization. These processes may implement top-down control of stimulus processing if top-down and bottom-up mediating rhythms are coupled via cross-frequency interaction. To test this possibility, we investigated Granger-causal influences among awake macaque primary visual area V1, higher visual area V4, and parietal control area 7a during attentional task performance. Top-down 7a-to-V1 beta-band influences enhanced visually driven V1-to-V4 gamma-band influences. This enhancement was spatially specific and largest when beta-band activity preceded gamma-band activity by ∼0.1 s, suggesting a causal effect of top-down processes on bottom-up processes. We propose that this cross-frequency interaction mechanistically subserves the attentional control of stimulus selection. SIGNIFICANCE STATEMENT Contemporary research indicates that the alpha-beta frequency band underlies top-down control, whereas the gamma band mediates bottom-up stimulus processing. This arrangement inspires an attractive hypothesis, which posits that top-down beta-band influences directly modulate bottom-up gamma-band influences via cross-frequency interaction. We evaluate this hypothesis by determining that beta-band top-down influences from parietal area 7a to visual area V1 are correlated with bottom-up gamma-frequency influences from V1 to area V4, in a spatially specific manner, and that this correlation is maximal when top-down activity precedes bottom-up activity. These results show that for top-down processes such as spatial attention, elevated top-down beta-band influences directly enhance feedforward stimulus-induced gamma-band processing, leading to enhancement of the selected stimulus. PMID:28592697
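The core quantitative claim here is a lagged relationship between beta-band and gamma-band influences. As an illustration only, the Python sketch below (invented variable names and toy data; not the authors' pipeline) correlates a top-down beta-band influence time series with a bottom-up gamma-band influence time series over a range of lags and reports the lag of maximal correlation, the kind of analysis that would show beta preceding gamma by roughly 0.1 s.

```python
import numpy as np

def lagged_correlation(beta_td, gamma_bu, fs, max_lag_s=0.3):
    """Correlate a top-down beta-band influence time series with a bottom-up
    gamma-band influence time series over a range of lags.
    Positive lags mean beta precedes gamma. Returns lags (s) and r values."""
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(lags.size)
    for i, lag in enumerate(lags):
        if lag >= 0:
            x, y = beta_td[:len(beta_td) - lag], gamma_bu[lag:]
        else:
            x, y = beta_td[-lag:], gamma_bu[:len(gamma_bu) + lag]
        r[i] = np.corrcoef(x, y)[0, 1]
    return lags / fs, r

# toy example: gamma-band influence lagging the beta-band influence by ~0.1 s
fs = 200
t = np.arange(0, 60, 1 / fs)
beta_td = np.random.randn(t.size)
gamma_bu = np.roll(beta_td, int(0.1 * fs)) + 0.5 * np.random.randn(t.size)
lag_s, r = lagged_correlation(beta_td, gamma_bu, fs)
print("peak correlation at lag %.3f s" % lag_s[np.argmax(r)])
```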
Combining MRI and VEP imaging to isolate the temporal response of visual cortical areas
NASA Astrophysics Data System (ADS)
Carney, Thom; Ales, Justin; Klein, Stanley A.
2008-02-01
The human brain has well over 30 cortical areas devoted to visual processing. Classical neuro-anatomical as well as fMRI studies have demonstrated that early visual areas have a retinotopic organization whereby adjacent locations in visual space are represented in adjacent areas of cortex within a visual area. At the 2006 Electronic Imaging meeting we presented a method using sprite graphics to obtain high resolution retinotopic visual evoked potential responses using multi-focal m-sequence technology (mfVEP). We have used this method to record mfVEPs from up to 192 non-overlapping checkerboard stimulus patches scaled such that each patch activates about 12 mm2 of cortex in area V1 and even less in V2. This dense coverage enables us to incorporate cortical folding constraints, given by anatomical MRI and fMRI results from the same subject, to isolate the V1 and V2 temporal responses. Moreover, the method offers a simple means of validating the accuracy of the extracted V1 and V2 time functions by comparing the results between left and right hemispheres that have unique folding patterns and are processed independently. Previous VEP studies have been contradictory as to which area responds first to visual stimuli. This new method accurately separates the signals from the two areas and demonstrates that both respond with essentially the same latency. A new method is also introduced that offers better isolation of cortical areas using an empirically determined forward model. The method includes a novel steady state mfVEP and complex SVD techniques. In addition, this evolving technology is put to use examining how stimulus attributes differentially impact the response in different cortical areas, in particular how fast nonlinear contrast processing occurs. This question is examined using both state triggered kernel estimation (STKE) and m-sequence "conditioned kernels". The analysis indicates different contrast gain control processes in areas V1 and V2. Finally we show that our m-sequence multi-focal stimuli have advantages for integrating EEG and MEG for improved dipole localization.
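As a reading aid for the source-separation idea (not the authors' implementation; the array shapes and noise levels below are invented), the following sketch shows how per-area time functions can be recovered from multichannel VEP data by least squares once a forward matrix relating each cortical source to the electrodes is available from MRI-based modelling.

```python
import numpy as np

# Hypothetical dimensions: 64 electrodes, 2 cortical sources (V1, V2), 500 time samples.
n_elec, n_src, n_t = 64, 2, 500
rng = np.random.default_rng(0)

# Forward model A: scalp topography of unit activity in each source,
# assumed known from anatomical MRI / fMRI-constrained modelling.
A = rng.standard_normal((n_elec, n_src))

# True (unknown) source time functions and a simulated noisy scalp VEP.
S_true = np.vstack([np.sin(np.linspace(0, 6 * np.pi, n_t)),
                    np.cos(np.linspace(0, 4 * np.pi, n_t))])
vep = A @ S_true + 0.5 * rng.standard_normal((n_elec, n_t))

# Least-squares estimate of the per-area time functions.
S_hat, *_ = np.linalg.lstsq(A, vep, rcond=None)
print("correlation of recovered with true V1 time course:",
      np.corrcoef(S_hat[0], S_true[0])[0, 1].round(3))
```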
Neural basis of hierarchical visual form processing of Japanese Kanji characters.
Higuchi, Hiroki; Moriguchi, Yoshiya; Murakami, Hiroki; Katsunuma, Ruri; Mishima, Kazuo; Uno, Akira
2015-12-01
We investigated the neural processing of reading Japanese Kanji characters, which involves unique hierarchical visual processing, including the recognition of visual components specific to Kanji, such as "radicals." We performed functional MRI to measure brain activity in response to hierarchical visual stimuli containing (1) real Kanji characters (complete structure with semantic information), (2) pseudo Kanji characters (subcomponents without complete character structure), (3) artificial characters (character fragments), and (4) checkerboard (simple photic stimuli). As we expected, the peaks of the activation in response to different stimulus types were aligned within the left occipitotemporal visual region along the posterior-anterior axis in order of the structural complexity of the stimuli, from fragments (3) to complete characters (1). Moreover, only the real Kanji characters produced functional connectivity between the left inferotemporal area and the language area (left inferior frontal triangularis), while pseudo Kanji characters induced connectivity between the left inferotemporal area and the bilateral cerebellum and left putamen. Visual processing of Japanese Kanji takes place in the left occipitotemporal cortex, with a clear hierarchy within the region such that neural activation differentiates among the elements of Kanji characters (fragments, subcomponents, and semantics), with different patterns of connectivity to remote regions for each of these elements.
Pavlidou, Anastasia; Schnitzler, Alfons; Lange, Joachim
2014-05-01
The neural correlates of action recognition have been widely studied in visual and sensorimotor areas of the human brain. However, the role of neuronal oscillations involved during the process of action recognition remains unclear. Here, we were interested in how the plausibility of an action modulates neuronal oscillations in visual and sensorimotor areas. Subjects viewed point-light displays (PLDs) of biomechanically plausible and implausible versions of the same actions. Using magnetoencephalography (MEG), we examined dynamic changes of oscillatory activity during these action recognition processes. While both actions elicited oscillatory activity in visual and sensorimotor areas in several frequency bands, a significant difference was confined to the beta-band (∼20 Hz). An increase of power for plausible actions was observed in left temporal, parieto-occipital and sensorimotor areas of the brain, in the beta-band in successive order between 1650 and 2650 msec. These distinct spatio-temporal beta-band profiles suggest that the action recognition process is modulated by the degree of biomechanical plausibility of the action, and that spectral power in the beta-band may provide a functional interaction between visual and sensorimotor areas in humans. Copyright © 2014 Elsevier Ltd. All rights reserved.
Hu, Tjing-Tjing; Van den Bergh, Gert; Thorrez, Lieven; Heylen, Kevin; Eysel, Ulf T; Arckens, Lutgarde
2011-12-01
In cats with central retinal lesions, deprivation of the lesion projection zone (LPZ) in primary visual cortex (area 17) induces remapping of the cortical topography. Recovery of visually driven cortical activity in the LPZ involves distinct changes in protein expression. Recent observations about molecular activity changes throughout area 17 challenge the view that its remote nondeprived parts would not be involved in this recovery process. Here we investigated the dynamics of the protein expression pattern of remote nondeprived area 17 triggered by central retinal lesions to explore to what extent far peripheral area 17 would contribute to the topographic map reorganization inside the visual cortex. Using functional proteomics, we identified 40 proteins specifically differentially expressed between far peripheral area 17 of control and experimental animals 14 days to 8 months postlesion. Our results demonstrate that far peripheral area 17 is implicated in the functional adaptation to visual deprivation, involving a meshwork of interacting proteins operating in diverse pathways. In particular, endocytosis/exocytosis processes appeared to be essential via their intimate correlation with long-term potentiation and neurite outgrowth mechanisms.
Rosemann, Stephanie; Thiel, Christiane M
2018-07-15
Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss, leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss. Copyright © 2018 Elsevier Inc. All rights reserved.
Dynamic Integration of Task-Relevant Visual Features in Posterior Parietal Cortex
Freedman, David J.
2014-01-01
The primate visual system consists of multiple hierarchically organized cortical areas, each specialized for processing distinct aspects of the visual scene. For example, color and form are encoded in ventral pathway areas such as V4 and inferior temporal cortex, while motion is preferentially processed in dorsal pathway areas such as the middle temporal area. Such representations often need to be integrated perceptually to solve tasks which depend on multiple features. We tested the hypothesis that the lateral intraparietal area (LIP) integrates disparate task-relevant visual features by recording from LIP neurons in monkeys trained to identify target stimuli composed of conjunctions of color and motion features. We show that LIP neurons exhibit integrative representations of both color and motion features when they are task relevant, and task-dependent shifts of both direction and color tuning. This suggests that LIP plays a role in flexibly integrating task-relevant sensory signals. PMID:25199703
The Influence of verbalization on the pattern of cortical activation during mental arithmetic
2012-01-01
Background: The aim of the present functional magnetic resonance imaging (fMRI) study at 3 T was to investigate the influence of the verbal-visual cognitive style on cerebral activation patterns during mental arithmetic. In the domain of arithmetic, a visual style might, for example, mean visualizing numbers and (intermediate) results, and a verbal style might mean that numbers and (intermediate) results are verbally repeated. In this study, we investigated, first, whether verbalizers show activations in areas for language processing, and whether visualizers show activations in areas for visual processing during mental arithmetic. Some researchers have proposed that the left and right intraparietal sulcus (IPS), and the left angular gyrus (AG), two areas involved in number processing, show some domain or modality specificity. That is, verbal for the left AG, and visual for the left and right IPS. We investigated, second, whether the activation in these areas implicated in number processing depended on an individual's cognitive style. Methods: Forty-two young healthy adults participated in the fMRI study. The study comprised two functional sessions. In the first session, subtraction and multiplication problems were presented in an event-related design, and in the second functional session, multiplications were presented in two formats, as Arabic numerals and as written number words, in an event-related design. The individual's habitual use of visualization and verbalization during mental arithmetic was assessed by a short self-report assessment. Results: We observed in both functional sessions that the use of verbalization predicts activation in brain areas associated with language (supramarginal gyrus) and auditory processing (Heschl's gyrus, Rolandic operculum). However, we found no modulation of activation in the left AG as a function of verbalization. Conclusions: Our results confirm that strong verbalizers use mental speech as a form of mental imagination more strongly than weak verbalizers. Moreover, our results suggest that the left AG has no specific affinity to the verbal domain and subserves number processing in a modality-general way. PMID:22404872
Feature-Selective Attentional Modulations in Human Frontoparietal Cortex.
Ester, Edward F; Sutterer, David W; Serences, John T; Awh, Edward
2016-08-03
Control over visual selection has long been framed in terms of a dichotomy between "source" and "site," where top-down feedback signals originating in frontoparietal cortical areas modulate or bias sensory processing in posterior visual areas. This distinction is motivated in part by observations that frontoparietal cortical areas encode task-level variables (e.g., what stimulus is currently relevant or what motor outputs are appropriate), while posterior sensory areas encode continuous or analog feature representations. Here, we present evidence that challenges this distinction. We used fMRI, a roving searchlight analysis, and an inverted encoding model to examine representations of an elementary feature property (orientation) across the entire human cortical sheet while participants attended either the orientation or luminance of a peripheral grating. Orientation-selective representations were present in a multitude of visual, parietal, and prefrontal cortical areas, including portions of the medial occipital cortex, the lateral parietal cortex, and the superior precentral sulcus (thought to contain the human homolog of the macaque frontal eye fields). Additionally, representations in many, but not all, of these regions were stronger when participants were instructed to attend orientation relative to luminance. Collectively, these findings challenge models that posit a strict segregation between sources and sites of attentional control on the basis of representational properties by demonstrating that simple feature values are encoded by cortical regions throughout the visual processing hierarchy, and that representations in many of these areas are modulated by attention. Influential models of visual attention posit a distinction between top-down control and bottom-up sensory processing networks. These models are motivated in part by demonstrations showing that frontoparietal cortical areas associated with top-down control represent abstract or categorical stimulus information, while visual areas encode parametric feature information. Here, we show that multivariate activity in human visual, parietal, and frontal cortical areas encodes representations of a simple feature property (orientation). Moreover, representations in several (though not all) of these areas were modulated by feature-based attention in a similar fashion. These results provide an important challenge to models that posit dissociable top-down control and sensory processing networks on the basis of representational properties. Copyright © 2016 the authors 0270-6474/16/368188-12$15.00/0.
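For readers unfamiliar with inverted encoding models, the following is a minimal sketch of the general approach (idealized raised-cosine channels and simulated voxel data; the dimensions and tuning parameters are assumptions, not those of the study): voxel responses are modelled as weighted sums of orientation channels, channel weights are estimated from training data, and the model is inverted to reconstruct channel responses for held-out data.

```python
import numpy as np

def channel_basis(orientations_deg, n_channels=8):
    """Raised-cosine orientation channels tiling 0-180 deg (trials x channels)."""
    centers = np.arange(0, 180, 180 / n_channels)
    d = np.deg2rad(orientations_deg[:, None] - centers[None, :])
    return np.maximum(np.cos(2 * d), 0) ** 6

rng = np.random.default_rng(1)
n_train, n_test, n_vox = 180, 45, 100
ori_train = rng.uniform(0, 180, n_train)
ori_test = rng.uniform(0, 180, n_test)

C_train = channel_basis(ori_train)
C_test_true = channel_basis(ori_test)

# Simulated voxel data: random mixing of channel responses plus noise.
W_true = rng.standard_normal((C_train.shape[1], n_vox))
B_train = C_train @ W_true + 0.5 * rng.standard_normal((n_train, n_vox))
B_test = C_test_true @ W_true + 0.5 * rng.standard_normal((n_test, n_vox))

# Step 1: estimate channel-to-voxel weights from training data (least squares).
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)
# Step 2: invert the model to reconstruct channel responses for test data.
C_test_hat = B_test @ np.linalg.pinv(W_hat)
print("reconstruction fidelity r =",
      np.corrcoef(C_test_hat.ravel(), C_test_true.ravel())[0, 1].round(3))
```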
A number-form area in the blind
Abboud, Sami; Maidenbaum, Shachar; Dehaene, Stanislas; Amedi, Amir
2015-01-01
Distinct preference for visual number symbols was recently discovered in the human right inferior temporal gyrus (rITG). It remains unclear how this preference emerges, what is the contribution of shape biases to its formation and whether visual processing underlies it. Here we use congenital blindness as a model for brain development without visual experience. During fMRI, we present blind subjects with shapes encoded using a novel visual-to-music sensory-substitution device (The EyeMusic). Greater activation is observed in the rITG when subjects process symbols as numbers compared with control tasks on the same symbols. Using resting-state fMRI in the blind and sighted, we further show that the areas with preference for numerals and letters exhibit distinct patterns of functional connectivity with quantity and language-processing areas, respectively. Our findings suggest that specificity in the ventral ‘visual’ stream can emerge independently of sensory modality and visual experience, under the influence of distinct connectivity patterns. PMID:25613599
Automated objective characterization of visual field defects in 3D
NASA Technical Reports Server (NTRS)
Fink, Wolfgang (Inventor)
2006-01-01
A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.
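A minimal illustration of the kinds of quantities described (the grid size, thresholds, and function names below are hypothetical, not the patented implementation): given a 2D visual field representation of detection probability, one can estimate the defect area, a defect "volume", and the steepness of the transition at the defect boundary.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def defect_metrics(field, spacing_deg=1.0, seen_thresh=0.5):
    """field: 2D array of detection probability (1 = intact, 0 = blind).
    Returns defect area (deg^2), defect 'volume' (area weighted by the missing
    probability), and mean gradient magnitude along the defect boundary."""
    defect = field < seen_thresh
    area = defect.sum() * spacing_deg ** 2
    volume = ((1.0 - field) * defect).sum() * spacing_deg ** 2
    gy, gx = np.gradient(field, spacing_deg)
    grad_mag = np.hypot(gx, gy)
    boundary = binary_dilation(defect) & ~defect   # intact cells bordering the defect
    mean_slope = grad_mag[boundary].mean() if boundary.any() else 0.0
    return area, volume, mean_slope

# toy example: a 60 x 60 deg field with a smooth blind-to-intact transition on the left
yy, xx = np.mgrid[-30:30, -30:30].astype(float)
field = np.clip(0.5 + 0.05 * xx, 0.0, 1.0)
print(defect_metrics(field))
```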
Mismatch Negativity with Visual-only and Audiovisual Speech
Ponton, Curtis W.; Bernstein, Lynne E.; Auer, Edward T.
2009-01-01
The functional organization of cortical speech processing is thought to be hierarchical, increasing in complexity and proceeding from primary sensory areas centrifugally. The current study used the mismatch negativity (MMN) obtained with electrophysiology (EEG) to investigate the early latency period of visual speech processing under both visual-only (VO) and audiovisual (AV) conditions. Current density reconstruction (CDR) methods were used to model the cortical MMN generator locations. MMNs were obtained with VO and AV speech stimuli at early latencies (approximately 82-87 ms peak in time waveforms relative to the acoustic onset) and in regions of the right lateral temporal and parietal cortices. Latencies were consistent with bottom-up processing of the visible stimuli. We suggest that a visual pathway extracts phonetic cues from visible speech, and that previously reported effects of AV speech in classical early auditory areas, given later reported latencies, could be attributable to modulatory feedback from visual phonetic processing. PMID:19404730
Urakawa, Tomokazu; Ogata, Katsuya; Kimura, Takahiro; Kume, Yuko; Tobimatsu, Shozo
2015-01-01
Disambiguation of a noisy visual scene with prior knowledge is an indispensable task of the visual system. To adequately adapt to a dynamically changing visual environment full of noisy visual scenes, the implementation of knowledge-mediated disambiguation in the brain is imperative and essential for proceeding as fast as possible under the limited capacity of visual image processing. However, the temporal profile of the disambiguation process has not yet been fully elucidated in the brain. The present study attempted to determine how quickly knowledge-mediated disambiguation began to proceed along visual areas after the onset of a two-tone ambiguous image using magnetoencephalography with high temporal resolution. Using the predictive coding framework, we focused on activity reduction for the two-tone ambiguous image as an index of the implementation of disambiguation. Source analysis revealed that a significant activity reduction was observed in the lateral occipital area at approximately 120 ms after the onset of the ambiguous image, but not in preceding activity (about 115 ms) in the cuneus when participants perceptually disambiguated the ambiguous image with prior knowledge. These results suggested that knowledge-mediated disambiguation may be implemented as early as approximately 120 ms following an ambiguous visual scene, at least in the lateral occipital area, and provided an insight into the temporal profile of the disambiguation process of a noisy visual scene with prior knowledge. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Multiscale neural connectivity during human sensory processing in the brain
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir A.; Runnova, Anastasia E.; Frolov, Nikita S.; Makarov, Vladimir V.; Nedaivozov, Vladimir; Koronovskii, Alexey A.; Pisarchik, Alexander; Hramov, Alexander E.
2018-05-01
Stimulus-related brain activity is considered using wavelet-based analysis of neural interactions between occipital and parietal brain areas in the alpha (8-12 Hz) and beta (15-30 Hz) frequency bands. We show that human sensory processing related to the perception of visual stimuli induces a brain response that results in different patterns of parieto-occipital interaction in these bands. In the alpha frequency band, the parieto-occipital neuronal network is characterized by a homogeneous increase of the interaction between all interconnected areas, both within the occipital and parietal lobes and between them. In the beta frequency band, the occipital lobe starts to play a leading role in the dynamics of the occipital-parietal network: The perception of visual stimuli excites the visual center in the occipital area and then, due to the increase of parieto-occipital interactions, such excitation is transferred to the parietal area, where the attentional center is located. When stimuli are characterized by a high degree of ambiguity, we find a greater increase in the interaction between interconnected areas in the parietal lobe, owing to the increase of human attention. Based on the revealed mechanisms, we describe the complex response of the parieto-occipital brain neuronal network during the perception and primary processing of visual stimuli. The results can serve as an essential complement to the existing theory of neural aspects of visual stimuli processing.
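One simple way to realize a wavelet-based, band-specific interaction measure of this general kind is sketched below (the channel names, parameters, and the choice of a phase-locking value are illustrative assumptions): each channel is convolved with complex Morlet wavelets, and phase locking between an occipital and a parietal channel is averaged over the alpha or beta band.

```python
import numpy as np

def morlet_transform(x, fs, freqs, n_cycles=6):
    """Complex Morlet wavelet transform of a 1D signal at the given frequencies."""
    out = np.empty((len(freqs), x.size), dtype=complex)
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))
        wavelet /= np.abs(wavelet).sum()
        out[i] = np.convolve(x, wavelet, mode="same")
    return out

def band_plv(x, y, fs, band):
    """Phase-locking value between two channels, averaged over a frequency band."""
    freqs = np.arange(band[0], band[1] + 1)
    phx = np.angle(morlet_transform(x, fs, freqs))
    phy = np.angle(morlet_transform(y, fs, freqs))
    return np.abs(np.exp(1j * (phx - phy)).mean(axis=1)).mean()

# toy occipital and parietal channels sharing a 10 Hz rhythm with a fixed phase lag
fs = 250
t = np.arange(0, 10, 1 / fs)
occipital = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
parietal = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)
print("alpha PLV:", round(band_plv(occipital, parietal, fs, (8, 12)), 3))
print("beta  PLV:", round(band_plv(occipital, parietal, fs, (15, 30)), 3))
```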
Evidence for a basal temporal visual language center: cortical stimulation producing pure alexia.
Mani, J; Diehl, B; Piao, Z; Schuele, S S; Lapresto, E; Liu, P; Nair, D R; Dinner, D S; Lüders, H O
2008-11-11
Dejerine and Benson and Geschwind postulated disconnection of the dominant angular gyrus from both visual association cortices as the basis for pure alexia, emphasizing disruption of white matter tracts in the dominant temporooccipital region. Recently, functional imaging studies have provided evidence for direct participation of basal temporal and occipital cortices in the cognitive process of reading. The exact location and function of these areas remain a matter of debate. Our aim was to confirm the participation of the basal temporal region in reading. Extraoperative electrical stimulation of the dominant hemisphere was performed in three subjects using subdural electrodes, as part of presurgical evaluation for refractory epilepsy. Pure alexia was reproduced during cortical stimulation of the dominant posterior fusiform and inferior temporal gyri in all three patients. Stimulation resulted in selective reading difficulty with intact auditory comprehension and writing. Reading difficulty involved sentences and words with intact letter-by-letter reading. Picture naming difficulties were also noted at some electrodes. This region is located posterior to and contiguous with the basal temporal language area (BTLA), where stimulation resulted in global language dysfunction in visual and auditory realms. The location corresponded with the visual word form area described on functional MRI. These observations support the existence of a visual language area in the dominant fusiform and occipitotemporal gyri, contiguous with the basal temporal language area. A portion of the visual language area was exclusively involved in lexical processing while the other part of this region processed both lexical and nonlexical symbols.
Visual areas become less engaged in associative recall following memory stabilization.
Nieuwenhuis, Ingrid L C; Takashima, Atsuko; Oostenveld, Robert; Fernández, Guillén; Jensen, Ole
2008-04-15
Numerous studies have focused on changes in the activity in the hippocampus and higher association areas with consolidation and memory stabilization. Even though perceptual areas are engaged in memory recall, little is known about how memory stabilization is reflected in those areas. Using magnetoencephalography (MEG) we investigated changes in visual areas with memory stabilization. Subjects were trained on associating a face to one of eight locations. The first set of associations ('stabilized') was learned in three sessions distributed over a week. The second set ('labile') was learned in one session just prior to the MEG measurement. In the recall session only the face was presented and subjects had to indicate the correct location using a joystick. The MEG data revealed robust gamma activity during recall, which started in early visual cortex and propagated to higher visual and parietal brain areas. The occipital gamma power was higher for the labile than the stabilized condition (time=0.65-0.9 s). Also the event-related field strength was higher during recall of labile than stabilized associations (time=0.59-1.5 s). We propose that recall of the spatial associations prior to memory stabilization involves a top-down process relying on reconstructing learned representations in visual areas. This process is reflected in gamma band activity consistent with the notion that neuronal synchronization in the gamma band is required for visual representations. More direct synaptic connections are formed with memory stabilization, thus decreasing the dependence on visual areas.
Stevens, W Dale; Kahn, Itamar; Wig, Gagan S; Schacter, Daniel L
2012-08-01
Asymmetrical specialization of cognitive processes across the cerebral hemispheres is a hallmark of healthy brain development and an important evolutionary trait underlying higher cognition in humans. While previous research, including studies of priming, divided visual field presentation, and split-brain patients, demonstrates a general pattern of right/left asymmetry of form-specific versus form-abstract visual processing, little is known about brain organization underlying this dissociation. Here, using repetition priming of complex visual scenes and high-resolution functional magnetic resonance imaging (MRI), we demonstrate asymmetrical form specificity of visual processing between the right and left hemispheres within a region known to be critical for processing of visual spatial scenes (parahippocampal place area [PPA]). Next, we use resting-state functional connectivity MRI analyses to demonstrate that this functional asymmetry is associated with differential intrinsic activity correlations of the right versus left PPA with regions critically involved in perceptual versus conceptual processing, respectively. Our results demonstrate that the PPA comprises lateralized subregions across the cerebral hemispheres that are engaged in functionally dissociable yet complementary components of visual scene analysis. Furthermore, this functional asymmetry is associated with differential intrinsic functional connectivity of the PPA with distinct brain areas known to mediate dissociable cognitive processes.
[Sensory loss and brain reorganization].
Fortin, Madeleine; Voss, Patrice; Lassonde, Maryse; Lepore, Franco
2007-11-01
It is without a doubt that humans are first and foremost visual beings. Even though the other sensory modalities provide us with valuable information, it is vision that generally offers the most reliable and detailed information concerning our immediate surroundings. It is therefore not surprising that nearly a third of the human brain processes, in one way or another, visual information. But what happens when the visual information no longer reaches these brain regions responsible for processing it? Indeed, numerous medical conditions such as congenital glaucoma, retinitis pigmentosa and retinal detachment, to name a few, can disrupt the visual system and lead to blindness. So, do the brain areas responsible for processing visual stimuli simply shut down and become non-functional? Do they become dead weight and simply stop contributing to cognitive and sensory processes? Current data suggest that this is not the case. Quite the contrary, it would seem that congenitally blind individuals benefit from the recruitment of these areas by other sensory modalities to carry out non-visual tasks. In fact, our laboratory has been studying blindness and its consequences on both the brain and behaviour for many years now. We have shown that blind individuals demonstrate exceptional hearing abilities. This finding holds true for stimuli originating from both near and far space. It also holds true, under certain circumstances, for those who lost their sight later in life, beyond a period generally believed to limit the brain changes following the loss of sight. In the case of the early blind, we have shown their ability to localize sounds is strongly correlated with activity in the occipital cortex (the site of visual processing), demonstrating that these areas are functionally engaged by the task. Therefore, it would seem that the plastic nature of the human brain allows them to make new use of the cerebral areas normally dedicated to visual processing.
Schindler, Andreas; Bartels, Andreas
2017-05-01
Superimposed on the visual feed-forward pathway, feedback connections convey higher level information to cortical areas lower in the hierarchy. A prominent framework for these connections is the theory of predictive coding where high-level areas send stimulus interpretations to lower level areas that compare them with sensory input. Along these lines, a growing body of neuroimaging studies shows that predictable stimuli lead to reduced blood oxygen level-dependent (BOLD) responses compared with matched nonpredictable counterparts, especially in early visual cortex (EVC) including areas V1-V3. The sources of these modulatory feedback signals are largely unknown. Here, we re-examined the robust finding of relative BOLD suppression in EVC evident during processing of coherent compared with random motion. Using functional connectivity analysis, we show an optic flow-dependent increase of functional connectivity between BOLD suppressed EVC and a network of visual motion areas including MST, V3A, V6, the cingulate sulcus visual area (CSv), and precuneus (Pc). Connectivity decreased between EVC and 2 areas known to encode heading direction: entorhinal cortex (EC) and retrosplenial cortex (RSC). Our results provide first evidence that BOLD suppression in EVC for predictable stimuli is indeed mediated by specific high-level areas, in accord with the theory of predictive coding. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
The role of temporo-parietal junction (TPJ) in global Gestalt perception.
Huberle, Elisabeth; Karnath, Hans-Otto
2012-07-01
Grouping processes enable the coherent perception of our environment. A number of brain areas have been suggested to be involved in the integration of elements into objects, including early and higher visual areas along the ventral visual pathway as well as motion-processing areas of the dorsal visual pathway. However, integration is not only required for the cortical representation of individual objects but is also essential for the perception of more complex visual scenes consisting of several different objects and/or shapes. The present fMRI experiments aimed to address such integration processes. We investigated the neural correlates underlying the global Gestalt perception of hierarchically organized stimuli that allowed parametric degradation of the object at the global level. The comparison of intact versus disturbed perception of the global Gestalt revealed a network of cortical areas including the temporo-parietal junction (TPJ), anterior cingulate cortex and the precuneus. The TPJ location corresponds well with the areas known to be typically lesioned in stroke patients with simultanagnosia following bilateral brain damage. These patients typically show a deficit in identifying the global Gestalt of a visual scene. Further, we found the closest relation between behavioral performance and fMRI activation for the TPJ. Our data thus argue for a significant role of the TPJ in human global Gestalt perception.
Functional neuroanatomy of visual masking deficits in schizophrenia.
Green, Michael F; Lee, Junghee; Cohen, Mark S; Engel, Steven A; Korb, Alexander S; Nuechterlein, Keith H; Wynn, Jonathan K; Glahn, David C
2009-12-01
Context: Visual masking procedures assess the earliest stages of visual processing. Patients with schizophrenia reliably show deficits on visual masking, and these procedures have been used to explore vulnerability to schizophrenia, probe underlying neural circuits, and help explain functional outcome. Objective: To identify and compare regional brain activity associated with one form of visual masking (ie, backward masking) in patients with schizophrenia and healthy controls. Design: Subjects received functional magnetic resonance imaging scans. While in the scanner, subjects performed a backward masking task and were given 3 functional localizer activation scans to identify early visual processing regions of interest (ROIs). Setting: University of California, Los Angeles, and the Department of Veterans Affairs Greater Los Angeles Healthcare System. Participants: Nineteen patients with schizophrenia and 19 healthy control subjects. Main outcome measure: The magnitude of the functional magnetic resonance imaging signal during backward masking. Results: Two ROIs (lateral occipital complex [LO] and the human motion selective cortex [hMT+]) showed sensitivity to the effects of masking, meaning that signal in these areas increased as the target became more visible. Patients had lower activation than controls in LO across all levels of visibility but did not differ in other visual processing ROIs. Using whole-brain analyses, we also identified areas outside the ROIs that were sensitive to masking effects (including bilateral inferior parietal lobe and thalamus), but groups did not differ in signal magnitude in these areas. Conclusions: The study results support a key role for LO in visual masking, consistent with previous studies in healthy controls. The current results indicate that patients fail to activate LO to the same extent as controls during visual processing regardless of stimulus visibility, suggesting a neural basis for the visual masking deficit, and possibly other visual integration deficits, in schizophrenia.
NASA Astrophysics Data System (ADS)
Li, W.; Shigeta, K.; Hasegawa, K.; Li, L.; Yano, K.; Tanaka, S.
2017-09-01
Recently, laser-scanning technology, especially mobile mapping systems (MMSs), has been applied to measure 3D urban scenes. Thus, it has become possible to simulate a traditional cultural event in a virtual space constructed using measured point clouds. In this paper, we take as an example the festival float procession of the Gion Festival, which has a long history in Kyoto City, Japan. The city government plans to revive the original procession route, which is narrow and not used at present. For the revival, it is important to know whether a festival float collides with houses, billboards, electric wires or other objects along the original route. Therefore, in this paper, we propose a method for visualizing the collisions of point cloud objects. The advantageous features of our method are (1) a see-through visualization with a correct depth feel that is helpful to robustly determine the collision areas, (2) the ability to visualize areas of high collision risk as well as real collision areas, and (3) the ability to highlight target visualized areas by increasing the point densities there.
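A bare-bones sketch of the collision-risk classification idea under stated assumptions (the thresholds and toy point clouds are invented; the paper's contribution additionally includes see-through rendering and density-based highlighting): scene points within a clearance margin of the float's point cloud are flagged as high-risk, and points closer than a collision threshold are flagged as collisions, using a KD-tree nearest-neighbour query.

```python
import numpy as np
from scipy.spatial import cKDTree

def classify_collision_risk(scene_pts, float_pts, collide_dist=0.05, risk_dist=0.30):
    """Label each scene point as 2 (collision), 1 (high risk), or 0 (clear),
    based on the distance to the nearest point of the festival-float model."""
    tree = cKDTree(float_pts)
    d, _ = tree.query(scene_pts, k=1)
    labels = np.zeros(len(scene_pts), dtype=int)
    labels[d < risk_dist] = 1
    labels[d < collide_dist] = 2
    return labels

# toy data: random street-scene points and a box-shaped float placed on the route
rng = np.random.default_rng(0)
scene = rng.uniform([-5, -5, 0], [5, 5, 8], size=(20000, 3))
float_model = rng.uniform([-1, -1, 0], [1, 1, 6], size=(5000, 3))
labels = classify_collision_risk(scene, float_model)
print("collision points:", (labels == 2).sum(), "high-risk points:", (labels == 1).sum())
```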
Vlamings, Petra Hendrika Johanna Maria; Jonkman, Lisa Marthe; van Daalen, Emma; van der Gaag, Rutger Jan; Kemner, Chantal
2010-12-15
A detailed visual processing style has been noted in autism spectrum disorder (ASD); this contributes to problems in face processing and has been directly related to abnormal processing of spatial frequencies (SFs). Little is known about the early development of face processing in ASD and the relation with abnormal SF processing. We investigated whether young ASD children show abnormalities in low spatial frequency (LSF, global) and high spatial frequency (HSF, detailed) processing and explored whether these are crucially involved in the early development of face processing. Three- to 4-year-old children with ASD (n = 22) were compared with developmentally delayed children without ASD (n = 17). Spatial frequency processing was studied by recording visual evoked potentials from visual brain areas while children passively viewed gratings (HSF/LSF). In addition, children watched face stimuli with different expressions, filtered to include only HSF or LSF. Enhanced activity in visual brain areas was found in response to HSF versus LSF information in children with ASD, in contrast to control subjects. Furthermore, facial-expression processing was also primarily driven by detail in ASD. Enhanced visual processing of detailed (HSF) information is present early in ASD and occurs for neutral (gratings), as well as for socially relevant stimuli (facial expressions). These data indicate that there is a general abnormality in visual SF processing in early ASD and are in agreement with suggestions that a fast LSF subcortical face processing route might be affected in ASD. This could suggest that abnormal visual processing is causative in the development of social problems in ASD. Copyright © 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Monaco, Simona; Gallivan, Jason P; Figley, Teresa D; Singhal, Anthony; Culham, Jody C
2017-11-29
The role of the early visual cortex and higher-order occipitotemporal cortex has been studied extensively for visual recognition and to a lesser degree for haptic recognition and visually guided actions. Using a slow event-related fMRI experiment, we investigated whether tactile and visual exploration of objects recruit the same "visual" areas (and in the case of visual cortex, the same retinotopic zones) and if these areas show reactivation during delayed actions in the dark toward haptically explored objects (and if so, whether this reactivation might be due to imagery). We examined activation during visual or haptic exploration of objects and action execution (grasping or reaching) separated by an 18 s delay. Twenty-nine human volunteers (13 females) participated in this study. Participants had their eyes open and fixated on a point in the dark. The objects were placed below the fixation point and accordingly visual exploration activated the cuneus, which processes retinotopic locations in the lower visual field. Strikingly, the occipital pole (OP), representing foveal locations, showed higher activation for tactile than visual exploration, although the stimulus was unseen and location in the visual field was peripheral. Moreover, the lateral occipital tactile-visual area (LOtv) showed comparable activation for tactile and visual exploration. Psychophysiological interaction analysis indicated that the OP showed stronger functional connectivity with anterior intraparietal sulcus and LOtv during the haptic than visual exploration of shapes in the dark. After the delay, the cuneus, OP, and LOtv showed reactivation that was independent of the sensory modality used to explore the object. These results show that haptic actions not only activate "visual" areas during object touch, but also that this information appears to be used in guiding grasping actions toward targets after a delay. SIGNIFICANCE STATEMENT Visual presentation of an object activates shape-processing areas and retinotopic locations in early visual areas. Moreover, if the object is grasped in the dark after a delay, these areas show "reactivation." Here, we show that these areas are also activated and reactivated for haptic object exploration and haptically guided grasping. Touch-related activity occurs not only in the retinotopic location of the visual stimulus, but also at the occipital pole (OP), corresponding to the foveal representation, even though the stimulus was unseen and located peripherally. That is, the same "visual" regions are implicated in both visual and haptic exploration; however, touch also recruits high-acuity central representation within early visual areas during both haptic exploration of objects and subsequent actions toward them. Functional connectivity analysis shows that the OP is more strongly connected with ventral and dorsal stream areas when participants explore an object in the dark than when they view it. Copyright © 2017 the authors 0270-6474/17/3711572-20$15.00/0.
Is Broca's Area Involved in the Processing of Passive Sentences? An Event-Related fMRI Study
ERIC Educational Resources Information Center
Yokoyama, Satoru; Watanabe, Jobu; Iwata, Kazuki; Ikuta, Naho; Haji, Tomoki; Usui, Nobuo; Taira, Masato; Miyamoto, Tadao; Nakamura, Wataru; Sato, Shigeru; Horie, Kaoru; Kawashima, Ryuta
2007-01-01
We used functional magnetic resonance imaging (fMRI) to investigate whether activation in Broca's area is greater during the processing of passive versus active sentences in the brains of healthy subjects. Twenty Japanese native speakers performed a visual sentence comprehension task in which they were asked to read a visually presented sentence…
Speed of feedforward and recurrent processing in multilayer networks of integrate-and-fire neurons.
Panzeri, S; Rolls, E T; Battaglia, F; Lavis, R
2001-11-01
The speed of processing in the visual cortical areas can be fast, with, for example, the latency of neuronal responses increasing by only approximately 10 ms per area in the ventral visual system sequence V1 to V2 to V4 to inferior temporal visual cortex. This has led to the suggestion that rapid visual processing can only be based on the feedforward connections between cortical areas. To test this idea, we investigated the dynamics of information retrieval in multilayer networks using a four-stage feedforward network of integrate-and-fire neurons modelled with continuous dynamics, with associative synaptic connections between stages with a synaptic time constant of 10 ms. Through the implementation of continuous dynamics, we found latency differences in information retrieval of only 5 ms per layer when local excitation was absent and processing was purely feedforward. However, information latency differences increased significantly when non-associative local excitation was included. We also found that local recurrent excitation through associatively modified synapses can contribute significantly to processing in as little as 15 ms per layer, including the feedforward and local feedback processing. Moreover, and in contrast to purely feedforward processing, the contribution of local recurrent feedback was useful and approximately this rapid even when retrieval was made difficult by noise. These findings suggest that cortical information processing can benefit from recurrent circuits when the allowed processing time per cortical area is at least 15 ms long.
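To make the latency argument concrete, here is a toy simulation in the spirit of the model described (the parameter values, network sizes, and latency criterion are assumptions, not the published model): a chain of leaky integrate-and-fire layers coupled by exponential synapses with a 10 ms time constant, driven by a step input to the first layer, with per-layer response latencies read off from the population firing rate. With purely feedforward coupling, successive layers respond only a few milliseconds apart.

```python
import numpy as np

def simulate_feedforward_lif(n_layers=4, n_per_layer=100, t_max=0.2, dt=1e-4,
                             tau_m=0.02, tau_syn=0.01, v_th=1.0, w=6.0, seed=0):
    """Chain of leaky integrate-and-fire layers with exponential synapses
    (tau_syn = 10 ms). Layer 0 receives a step input at t = 0.05 s. Returns,
    per layer, the time (relative to input onset) at which the population
    firing rate first exceeds a criterion."""
    rng = np.random.default_rng(seed)
    steps = int(t_max / dt)
    v = np.zeros((n_layers, n_per_layer))          # membrane potentials
    i_syn = np.zeros((n_layers, n_per_layer))      # synaptic currents
    W = [w * rng.random((n_per_layer, n_per_layer)) / n_per_layer
         for _ in range(n_layers - 1)]             # feedforward weights
    latency = [None] * n_layers
    for step in range(steps):
        t = step * dt
        drive = np.zeros((n_layers, n_per_layer))
        drive[0] = 80.0 * (t >= 0.05)              # external step onto layer 0
        i_syn += dt * (-i_syn / tau_syn)           # synaptic current decay
        v += dt * (-v / tau_m + i_syn + drive)     # membrane integration
        spikes = v >= v_th
        v[spikes] = 0.0                            # reset after spiking
        for l in range(n_layers - 1):              # propagate spikes forward
            i_syn[l + 1] += W[l] @ spikes[l].astype(float) / tau_syn
        rates = spikes.mean(axis=1) / dt           # instantaneous population rate (Hz)
        for l in range(n_layers):
            if latency[l] is None and rates[l] > 20.0:
                latency[l] = t - 0.05
    return latency

for l, lat in enumerate(simulate_feedforward_lif()):
    print(f"layer {l}: " + ("no response" if lat is None else f"{lat * 1e3:.1f} ms"))
```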
Differential contribution of early visual areas to the perceptual process of contour processing.
Schira, Mark M; Fahle, Manfred; Donner, Tobias H; Kraft, Antje; Brandt, Stephan A
2004-04-01
We investigated contour processing and figure-ground detection within human retinotopic areas using event-related functional magnetic resonance imaging (fMRI) in 6 healthy and naïve subjects. A figure (6 degrees side length) was created by a 2nd-order texture contour. An independent and demanding foveal letter-discrimination task prevented subjects from noticing this more peripheral contour stimulus. The contour subdivided our stimulus into a figure and a ground. Using localizers and retinotopic mapping stimuli we were able to subdivide each early visual area into 3 eccentricity regions corresponding to 1) the central figure, 2) the area along the contour, and 3) the background. In these subregions we investigated the hemodynamic responses to our stimuli and compared responses with or without the contour defining the figure. No contour-related blood oxygenation level-dependent modulation in early visual areas V1, V3, VP, and MT+ was found. Significant signal modulation in the contour subregions of V2v, V2d, V3a, and LO occurred. This activation pattern was different from comparable studies, which might be attributable to the letter-discrimination task reducing confounding attentional modulation. In V3a, but not in any other retinotopic area, signal modulation corresponding to the central figure could be detected. Such contextual modulation will be discussed in light of the recurrent processing hypothesis and the role of visual awareness.
Masking disrupts reentrant processing in human visual cortex.
Fahrenfort, J J; Scholte, H S; Lamme, V A F
2007-09-01
In masking, a stimulus is rendered invisible through the presentation of a second stimulus shortly after the first. Over the years, authors have typically explained masking by postulating some early disruption process. In these feedforward-type explanations, the mask somehow "catches up" with the target stimulus, disrupting its processing either through lateral or interchannel inhibition. However, studies from recent years indicate that visual perception--and most notably visual awareness itself--may depend strongly on cortico-cortical feedback connections from higher to lower visual areas. This has led some researchers to propose that masking derives its effectiveness from selectively interrupting these reentrant processes. In this experiment, we used electroencephalogram measurements to determine what happens in the human visual cortex during detection of a texture-defined square under nonmasked (seen) and masked (unseen) conditions. Electro-encephalogram derivatives that are typically associated with reentrant processing turn out to be absent in the masked condition. Moreover, extrastriate visual areas are still activated early on by both seen and unseen stimuli, as shown by scalp surface Laplacian current source-density maps. This conclusively shows that feedforward processing is preserved, even when subject performance is at chance as determined by objective measures. From these results, we conclude that masking derives its effectiveness, at least partly, from disrupting reentrant processing, thereby interfering with the neural mechanisms of figure-ground segmentation and visual awareness itself.
Erlikhman, Gennady; Gurariy, Gennadiy; Mruczek, Ryan E.B.; Caplovitz, Gideon P.
2016-01-01
Oftentimes, objects are only partially and transiently visible as parts of them become occluded during observer or object motion. The visual system can integrate such object fragments across space and time into perceptual wholes or spatiotemporal objects. This integrative and dynamic process may involve both ventral and dorsal visual processing pathways, along which shape and spatial representations are thought to arise. We measured the fMRI BOLD response to spatiotemporal objects and used multi-voxel pattern analysis (MVPA) to decode shape information across 20 topographic regions of visual cortex. Object identity could be decoded throughout visual cortex, including intermediate (V3A, V3B, hV4, LO1-2) and dorsal (TO1-2 and IPS0-1) visual areas. Shape-specific information, therefore, may not be limited to early and ventral visual areas, particularly when it is dynamic and must be integrated. Contrary to the classic view that the representation of objects is the purview of the ventral stream, intermediate and dorsal areas may play a distinct and critical role in the construction of object representations across space and time. PMID:27033688
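A compact sketch of ROI-wise MVPA of the general kind reported (simulated voxel patterns stand in for single-trial estimates; the ROI names and signal levels are placeholders): a cross-validated linear SVM decodes object identity from multivoxel patterns within each region.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_objects, n_trials_per_obj, n_vox = 4, 30, 200
labels = np.repeat(np.arange(n_objects), n_trials_per_obj)

# Simulated voxel patterns per ROI; signal-to-noise differs by area.
rois = {"V3A": 0.8, "hV4": 0.6, "IPS0": 0.5, "TO1": 0.4}
for roi, snr in rois.items():
    prototypes = rng.standard_normal((n_objects, n_vox))      # one pattern per object
    X = prototypes[labels] * snr + rng.standard_normal((labels.size, n_vox))
    clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{roi}: decoding accuracy = {acc:.2f} (chance = {1 / n_objects:.2f})")
```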
Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.
2013-01-01
The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939
Visual Imagery without Visual Perception?
ERIC Educational Resources Information Center
Bertolo, Helder
2005-01-01
The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to understand if the two processes share the same mechanisms or if they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review…
Altered figure-ground perception in monkeys with an extra-striate lesion.
Supèr, Hans; Lamme, Victor A F
2007-11-05
The visual system binds and segments the elements of an image into coherent objects and their surroundings. Recent findings demonstrate that primary visual cortex is involved in this process of figure-ground organization. In the primary visual cortex, the late part of a neural response to a stimulus correlates with figure-ground segregation and perception. Such a late onset indicates an involvement of feedback projections from higher visual areas. To investigate the possible role of feedback in figure-ground perception, we removed dorsal extra-striate areas of the monkey visual cortex. The findings show that figure-ground perception is reduced when the figure is presented in the lesioned hemifield and normal when the figure appears in the intact hemifield. In conclusion, our observations show the importance of recurrent processing in visual perception.
A magnetoencephalography study of visual processing of pain anticipation.
Machado, Andre G; Gopalakrishnan, Raghavan; Plow, Ela B; Burgess, Richard C; Mosher, John C
2014-07-15
Anticipating pain is important for avoiding injury; however, in chronic pain patients, anticipatory behavior can become maladaptive, leading to sensitization and limiting function. Knowledge of networks involved in pain anticipation and conditioning over time could help devise novel, better-targeted therapies. Using magnetoencephalography, we evaluated the neural processing of pain anticipation in 10 healthy subjects. Anticipatory cortical activity elicited by consecutive visual cues signifying an imminent painful stimulus was compared with that elicited by cues signifying a nonpainful stimulus or no stimulus. We found that the neural processing of visually evoked pain anticipation involves the primary visual cortex along with cingulate and frontal regions. Visual cortex could quickly and independently encode and discriminate between visual cues associated with pain anticipation and no pain during preconscious phases following object presentation. When evaluating the effect of task repetition on participating cortical areas, we found that activity of prefrontal and cingulate regions was mostly prominent early on when subjects were still naive to a cue's contextual meaning. Visual cortical activity was significant throughout later phases. Although visual cortex may precisely and time-efficiently decode cues anticipating pain or no pain, prefrontal areas establish the context associated with each cue. These findings have important implications for processes involved in pain anticipation and maladaptive pain conditioning. Copyright © 2014 the American Physiological Society.
Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).
Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen
2018-06-06
Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in the MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
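The control model can be illustrated with a toy version (invented stimuli and pooling grid; not the authors' simple-cell model): oriented edge energy is pooled over a coarse spatial grid, and a linear classifier is trained on the pooled responses to separate two stimulus classes.

```python
import numpy as np
from scipy.ndimage import sobel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pooled_edge_features(img, grid=4):
    """Edge-energy map (Sobel magnitude) pooled over a grid x grid set of regions."""
    e = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    h, w = e.shape
    blocks = e[:h - h % grid, :w - w % grid].reshape(grid, h // grid, grid, w // grid)
    return blocks.mean(axis=(1, 3)).ravel()

rng = np.random.default_rng(0)
n_per_class, size = 50, 64
# toy stimuli: noise images vs noise images carrying extra straight contours
class_a = [rng.standard_normal((size, size)) for _ in range(n_per_class)]
class_b = []
for _ in range(n_per_class):
    img = rng.standard_normal((size, size))
    img[::8, :] += 3.0          # add horizontal contours
    class_b.append(img)

X = np.array([pooled_edge_features(im) for im in class_a + class_b])
y = np.array([0] * n_per_class + [1] * n_per_class)
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print("edge-pooling model classification accuracy:", round(acc, 2))
```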
Poggel, Dorothe A.; Treutwein, Bernhard; Sabel, Bernhard A.; Strasburger, Hans
2015-01-01
The issue of how basic sensory and temporal processing are related is still unresolved. We studied temporal processing, as assessed by simple visual reaction times (RT) and double-pulse resolution (DPR), in patients with partial vision loss after visual pathway lesions and investigated whether vision restoration training (VRT), a training program designed to improve light detection performance, would also affect temporal processing. Perimetric and campimetric visual field tests as well as maps of DPR thresholds and RT were acquired before and after a 3-month training period with VRT. Patient performance was compared to that of age-matched healthy subjects. Intact visual field size increased during training. Averaged across the entire visual field, DPR remained constant while RT improved slightly. However, in transition zones between the blind and intact areas (areas of residual vision), where patients had shown between 20% and 80% stimulus detection probability in pre-training visual field tests, both DPR and RT improved markedly. The magnitude of improvement depended on the defect depth (or degree of intactness) of the respective region at baseline. Inter-individual training outcome variability was very high, with some patients showing little change and others showing performance approaching that of healthy controls. Training-induced improvement of light detection in patients with visual field loss thus generalized to dynamic visual functions. The findings suggest that similar neural mechanisms may underlie the impairment and subsequent training-induced functional recovery of both light detection and temporal processing. PMID:25717307
Giraud, Anne Lise; Truy, Eric
2002-01-01
Early visual cortex can be recruited by meaningful sounds in the absence of visual information. This occurs in particular in cochlear implant (CI) patients whose dependency on visual cues in speech comprehension is increased. Such cross-modal interaction mirrors the response of early auditory cortex to mouth movements (speech reading) and may reflect the natural expectancy of the visual counterpart of sounds, lip movements. Here we pursue the hypothesis that visual activations occur specifically in response to meaningful sounds. We performed PET in both CI patients and controls, while subjects listened either to their native language or to a completely unknown language. A recruitment of early visual cortex, the left posterior inferior temporal gyrus (ITG) and the left superior parietal cortex was observed in both groups. While no further activation occurred in the group of normal-hearing subjects, CI patients additionally recruited the right perirhinal/fusiform and mid-fusiform, the right temporo-occipito-parietal (TOP) junction and the left inferior prefrontal cortex (LIPF, Broca's area). This study confirms the participation of visual cortical areas in the semantic processing of speech sounds. The observation of early visual activation in normal-hearing subjects shows that auditory-to-visual cross-modal effects can also occur under natural hearing conditions. In cochlear implant patients, speech activates the mid-fusiform gyrus in the vicinity of the so-called face area. This suggests that specific cross-modal interaction involving advanced stages in the visual processing hierarchy develops after cochlear implantation and may be the correlate of increased usage of lip-reading.
Tafazoli, Sina; Safaai, Houman; De Franceschi, Gioia; Rosselli, Federica Bianca; Vanzella, Walter; Riggi, Margherita; Buffolo, Federica; Panzeri, Stefano; Zoccolan, Davide
2017-01-01
Rodents are emerging as increasingly popular models of visual functions. Yet, evidence that rodent visual cortex is capable of advanced visual processing, such as object recognition, is limited. Here we investigate how neurons located along the progression of extrastriate areas that, in the rat brain, run laterally to primary visual cortex, encode object information. We found a progressive functional specialization of neural responses along these areas, with: (1) a sharp reduction of the amount of low-level, energy-related visual information encoded by neuronal firing; and (2) a substantial increase in the ability of both single neurons and neuronal populations to support discrimination of visual objects under identity-preserving transformations (e.g., position and size changes). These findings strongly argue for the existence of a rat object-processing pathway, and point to rodents as promising models to dissect the neuronal circuitry underlying transformation-tolerant recognition of visual objects. DOI: http://dx.doi.org/10.7554/eLife.22794.001 PMID:28395730
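The transformation-tolerance idea in the abstract above can be illustrated by training a linear decoder on simulated population responses to two objects at one position and testing it at another position; generalization across positions indicates tolerance to that identity-preserving transformation. Everything below (the response model, dimensions, and noise) is an assumption for illustration only.

```python
# Hedged sketch: cross-position generalization test for a linear object decoder,
# using simulated population responses (not the study's recordings).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_neurons, n_trials = 80, 50

# Fixed "tuning" patterns for two objects plus an additive effect of position
obj_pattern = {0: rng.standard_normal(n_neurons), 1: rng.standard_normal(n_neurons)}
pos_pattern = {"left": rng.standard_normal(n_neurons), "right": rng.standard_normal(n_neurons)}

def population_responses(obj, pos, n):
    """Simulate n trials of population activity for one object at one position."""
    base = obj_pattern[obj] + 0.5 * pos_pattern[pos]
    return base + rng.standard_normal((n, n_neurons))      # additive trial noise

# Train at one position, test at the other (an identity-preserving transformation)
X_train = np.vstack([population_responses(0, "left", n_trials),
                     population_responses(1, "left", n_trials)])
X_test = np.vstack([population_responses(0, "right", n_trials),
                    population_responses(1, "right", n_trials)])
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

clf = LogisticRegression(max_iter=1000).fit(X_train, y)
print(f"cross-position discrimination accuracy: {clf.score(X_test, y):.2f}")
```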
Simulation of talking faces in the human brain improves auditory speech recognition
von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.
2008-01-01
Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648
Area 18 of the cat: the first step in processing visual movement information.
Orban, G A
1977-01-01
In cats, responses of area 18 neurons to different moving patterns were measured. The influence of three movement parameters--direction, angular velocity, and amplitude of movement--were tested. The results indicate that in area 18 no ideal movement detector exists, but that simple and complex cells each perform complementary operations of primary visual areas, i.e. analysis and detection of movement.
Neural Representation of Motion-In-Depth in Area MT
Sanada, Takahisa M.
2014-01-01
Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion. PMID:25411481
Position Information Encoded by Population Activity in Hierarchical Visual Areas
Majima, Kei; Horikawa, Tomoyasu
2017-01-01
Neurons in high-level visual areas respond to more complex visual features with broader receptive fields (RFs) compared to those in low-level visual areas. Thus, high-level visual areas are generally considered to carry less information regarding the position of seen objects in the visual field. However, larger RFs may not imply loss of position information at the population level. Here, we evaluated how accurately the position of a seen object could be predicted (decoded) from activity patterns in each of six representative visual areas with different RF sizes [V1–V4, lateral occipital complex (LOC), and fusiform face area (FFA)]. We collected functional magnetic resonance imaging (fMRI) responses while human subjects viewed a ball randomly moving in a two-dimensional field. To estimate population RF sizes of individual fMRI voxels, RF models were fitted for individual voxels in each brain area. The voxels in higher visual areas showed larger estimated RFs than those in lower visual areas. Then, the ball’s position in a separate session was predicted by maximum likelihood estimation using the RF models of individual voxels. We also tested a model-free multivoxel regression (support vector regression, SVR) to predict the position. We found that regardless of the difference in RF size, all visual areas showed similar prediction accuracies, especially on the horizontal dimension. Higher areas showed slightly lower accuracies on the vertical dimension, which appears to be attributed to the narrower spatial distributions of the RF centers. The results suggest that much position information is preserved in population activity through the hierarchical visual pathway regardless of RF sizes and is potentially available in later processing for recognition and behavior. PMID:28451634
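A minimal sketch of the model-free decoding approach mentioned above (support vector regression from multivoxel patterns to a two-dimensional stimulus position), using simulated voxel responses rather than real fMRI data; the linear voxel model and all dimensions are assumptions.

```python
# Hedged sketch: SVR-based decoding of (x, y) stimulus position from simulated
# multivoxel activity patterns (placeholder data, not the study's fMRI signals).
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_samples, n_voxels = 300, 200

positions = rng.uniform(-10, 10, size=(n_samples, 2))       # (x, y) in degrees
weights = rng.standard_normal((2, n_voxels))                 # assumed linear voxel "RF" weights
voxels = positions @ weights + rng.standard_normal((n_samples, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(voxels, positions, random_state=0)

# One SVR per spatial dimension, wrapped as a multi-output regressor
decoder = MultiOutputRegressor(SVR(kernel="linear", C=1.0))
decoder.fit(X_tr, y_tr)
pred = decoder.predict(X_te)
err = np.sqrt(((pred - y_te) ** 2).sum(axis=1)).mean()
print(f"mean decoding error: {err:.2f} deg")
```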
Behavioral and Brain Measures of Phasic Alerting Effects on Visual Attention.
Wiegand, Iris; Petersen, Anders; Finke, Kathrin; Bundesen, Claus; Lansner, Jon; Habekost, Thomas
2017-01-01
In the present study, we investigated effects of phasic alerting on visual attention in a partial report task, in which half of the displays were preceded by an auditory warning cue. Based on the computational Theory of Visual Attention (TVA), we estimated parameters of spatial and non-spatial aspects of visual attention and measured event-related lateralizations (ERLs) over visual processing areas. We found that the TVA parameter sensory effectiveness a, which is thought to reflect visual processing capacity, significantly increased with phasic alerting. By contrast, the distribution of visual processing resources according to task relevance and spatial position, as quantified in the parameters top-down control α and spatial bias w_index, was not modulated by phasic alerting. On the electrophysiological level, the latencies of ERLs in response to the task displays were reduced following the warning cue. These results suggest that phasic alerting facilitates visual processing in a general, unselective manner and that this effect originates in early stages of visual information processing.
Neural pathways for visual speech perception
Bernstein, Lynne E.; Liebenthal, Einat
2014-01-01
This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611
Zeki, Semir
2016-10-01
Results from a variety of sources, some many years old, lead ineluctably to a re-appraisal of the twin strategies of hierarchical and parallel processing used by the brain to construct an image of the visual world. Contrary to common supposition, there are at least three 'feed-forward' anatomical hierarchies that reach the primary visual cortex (V1) and the specialized visual areas outside it, in parallel. These anatomical hierarchies do not conform to the temporal order with which visual signals reach the specialized visual areas through V1. Furthermore, neither the anatomical hierarchies nor the temporal order of activation through V1 predict the perceptual hierarchies. The latter show that we see (and become aware of) different visual attributes at different times, with colour leading form (orientation) and directional visual motion, even though signals from fast-moving, high-contrast stimuli are among the earliest to reach the visual cortex (area V5). Parallel processing, on the other hand, is much more ubiquitous than commonly supposed but is subject to a barely noticed but fundamental aspect of brain operations, namely that different parallel systems operate asynchronously with respect to each other and reach perceptual endpoints at different times. This re-assessment leads to the conclusion that the visual brain is constituted of multiple, parallel and asynchronously operating task- and stimulus-dependent hierarchies (STDH); which of these parallel anatomical hierarchies have temporal and perceptual precedence at any given moment is stimulus and task related, and dependent on the visual brain's ability to undertake multiple operations asynchronously. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.
Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas
2017-06-01
Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.
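The multiple-regression analysis described above, predicting reaction-time measures from stimulus- and response-locked peak latencies, can be sketched as follows; the predictor names, sample size, and all numbers are simulated placeholders, not the study's data.

```python
# Hedged sketch: multiple regression predicting visuomotor reaction time from
# cortical peak latencies, on simulated per-subject values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_subjects = 53                                        # hypothetical sample size

mt_stim = rng.normal(190, 15, n_subjects)              # stimulus-locked MT peak latency (ms)
ba6_stim = rng.normal(150, 15, n_subjects)             # stimulus-locked BA6 peak latency (ms)
mt_resp = rng.normal(10, 15, n_subjects)               # response-locked MT peak latency (ms)
vmrt = (60 + 0.6 * mt_stim + 0.4 * ba6_stim - 0.3 * mt_resp
        + rng.normal(0, 10, n_subjects))               # simulated reaction times (ms)

X = sm.add_constant(np.column_stack([mt_stim, ba6_stim, mt_resp]))
fit = sm.OLS(vmrt, X).fit()
print(fit.params)                                       # intercept and slopes
print(f"multiple correlation R = {np.sqrt(fit.rsquared):.2f}")
```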
Visual processing deficits in 22q11.2 Deletion Syndrome.
Biria, Marjan; Tomescu, Miralena I; Custo, Anna; Cantonas, Lucia M; Song, Kun-Wei; Schneider, Maude; Murray, Micah M; Eliez, Stephan; Michel, Christoph M; Rihs, Tonia A
2018-01-01
Carriers of the rare 22q11.2 microdeletion present with a high percentage of positive and negative symptoms and a high genetic risk for schizophrenia. Visual processing impairments have been characterized in schizophrenia, but less so in 22q11.2 Deletion Syndrome (DS). Here, we focus on visual processing using high-density EEG and source imaging in 22q11.2DS participants (N = 25) and healthy controls (N = 26) with an illusory contour discrimination task. Significant differences between groups emerged at early and late stages of visual processing. In 22q11.2DS, we first observed reduced amplitudes over occipital channels and reduced source activations within dorsal and ventral visual stream areas during the P1 (100-125 ms) and within ventral visual cortex during the N1 (150-170 ms) visual evoked components. During a later window implicated in visual completion (240-285 ms), we observed an increase in global amplitudes in 22q11.2DS. The increased surface amplitudes for illusory contours at this window were inversely correlated with positive subscales of prodromal symptoms in 22q11.2DS. The reduced activity of ventral and dorsal visual areas during early stages points to an impairment in visual processing seen both in schizophrenia and 22q11.2DS. During intervals related to perceptual closure, the inverse correlation of high amplitudes with positive symptoms suggests that participants with 22q11.2DS who show an increased brain response to illusory contours during the relevant window for contour processing have less psychotic symptoms and might thus be at a reduced prodromal risk for schizophrenia.
Two subdivisions of macaque LIP process visual-oculomotor information differently.
Chen, Mo; Li, Bing; Guang, Jing; Wei, Linyu; Wu, Si; Liu, Yu; Zhang, Mingsha
2016-10-11
Although the cerebral cortex is thought to be composed of functionally distinct areas, the actual parcellation of areas and assignment of functions are still highly controversial. An example is the much-studied lateral intraparietal cortex (LIP). Despite the general agreement that LIP plays an important role in visual-oculomotor transformation, it remains unclear whether the area is primarily sensory- or motor-related (the attention-intention debate). Although LIP has been considered a functionally unitary area, its dorsal (LIPd) and ventral (LIPv) parts differ in local morphology and long-distance connectivity. In particular, LIPv has much stronger connections with two oculomotor centers, the frontal eye field and the deep layers of the superior colliculus, than does LIPd. Such anatomical distinctions imply that compared with LIPd, LIPv might be more involved in oculomotor processing. We tested this hypothesis physiologically with a memory saccade task and a gap saccade task. We found that LIP neurons with persistent memory activities in the memory saccade task are primarily provoked either by visual stimulation (vision-related) or by both visual and saccadic events (vision-saccade-related) in the gap saccade task. The distribution changes from predominantly vision-related to predominantly vision-saccade-related as the recording depth increases along the dorsal-ventral dimension. Consistently, the simultaneously recorded local field potential also changes from visual evoked to saccade evoked. Finally, local injection of muscimol (GABA agonist) in LIPv, but not in LIPd, dramatically decreases the proportion of express saccades. With these results, we conclude that LIPd and LIPv are more involved in visual and visual-saccadic processing, respectively.
Mullen, Kathy T; Chang, Dorita H F; Hess, Robert F
2015-12-01
There is controversy as to how responses to colour in the human brain are organized within the visual pathways. A key issue is whether there are modular pathways that respond selectively to colour or whether there are common neural substrates for both colour and achromatic (Ach) contrast. We used functional magnetic resonance imaging (fMRI) adaptation to investigate the responses of early and extrastriate visual areas to colour and Ach contrast. High-contrast red-green (RG) and Ach sinewave rings (0.5 cycles/degree, 2 Hz) were used as both adapting stimuli and test stimuli in a block design. We found robust adaptation to RG or Ach contrast in all visual areas. Cross-adaptation between RG and Ach contrast occurred in all areas indicating the presence of integrated, colour and Ach responses. Notably, we revealed contrasting trends for the two test stimuli. For the RG test, unselective processing (robust adaptation to both RG and Ach contrast) was most evident in the early visual areas (V1 and V2), but selective responses, revealed as greater adaptation between the same stimuli than cross-adaptation between different stimuli, emerged in the ventral cortex, in V4 and VO in particular. For the Ach test, unselective responses were again most evident in early visual areas but Ach selectivity emerged in the dorsal cortex (V3a and hMT+). Our findings support a strong presence of integrated mechanisms for colour and Ach contrast across the visual hierarchy, with a progression towards selective processing in extrastriate visual areas. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision.
Van Dromme, Ilse C; Premereur, Elsie; Verhoef, Bram-Ernst; Vanduffel, Wim; Janssen, Peter
2016-04-01
The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams.
The effect of integration masking on visual processing in perceptual categorization.
Hélie, Sébastien
2017-08-01
Learning to recognize and categorize objects is an essential cognitive skill allowing animals to function in the world. However, animals rarely have access to a canonical view of an object in an uncluttered environment. Hence, it is essential to study categorization under noisy, degraded conditions. In this article, we explore how the brain processes categorization stimuli in low signal-to-noise conditions using multivariate pattern analysis. We used an integration masking paradigm with mask opacity of 50%, 60%, and 70% inside a magnetic resonance imaging scanner. The results show that mask opacity affects blood-oxygen-level dependent (BOLD) signal in visual processing areas (V1, V2, V3, and V4) but does not affect the BOLD signal in brain areas traditionally associated with categorization (prefrontal cortex, striatum, hippocampus). This suggests that when a stimulus is difficult to extract from its background (e.g., low signal-to-noise ratio), the extraction is handled by the visual system, and activity in areas typically associated with categorization is not affected by the difficulty level of the visual conditions. We conclude with implications of this result for research on visual attention, categorization, and the integration of these fields. Copyright © 2017 Elsevier Inc. All rights reserved.
Gravity influences top-down signals in visual processing.
Cheron, Guy; Leroy, Axelle; Palmero-Soler, Ernesto; De Saedeleer, Caty; Bengoetxea, Ana; Cebolla, Ana-Maria; Vidal, Manuel; Dan, Bernard; Berthoz, Alain; McIntyre, Joseph
2014-01-01
Visual perception is not only based on incoming visual signals but also on information about a multimodal reference frame that incorporates vestibulo-proprioceptive input and motor signals. In addition, top-down modulation of visual processing has previously been demonstrated during cognitive operations including selective attention and working memory tasks. In the absence of a stable gravitational reference, the updating of salient stimuli becomes crucial for successful visuo-spatial behavior by humans in weightlessness. Here we found that visually-evoked potentials triggered by the image of a tunnel just prior to an impending 3D movement in a virtual navigation task were altered in weightlessness aboard the International Space Station, while those evoked by a classical 2D-checkerboard were not. Specifically, the analysis of event-related spectral perturbations and inter-trial phase coherency of these EEG signals recorded in the frontal and occipital areas showed that phase-locking of theta-alpha oscillations was suppressed in weightlessness, but only for the 3D tunnel image. Moreover, analysis of the phase of the coherency demonstrated the existence on Earth of a directional flux in the EEG signals from the frontal to the occipital areas mediating a top-down modulation during the presentation of the image of the 3D tunnel. In weightlessness, this fronto-occipital, top-down control was transformed into a diverging flux from the central areas toward the frontal and occipital areas. These results demonstrate that gravity-related sensory inputs modulate primary visual areas depending on the affordances of the visual scene.
Individual differences in solving arithmetic word problems
2013-01-01
Background: With the present functional magnetic resonance imaging (fMRI) study at 3 T, we investigated the neural correlates of visualization and verbalization during arithmetic word problem solving. In the domain of arithmetic, visualization might mean to visualize numbers and (intermediate) results while calculating, and verbalization might mean that numbers and (intermediate) results are verbally repeated during calculation. If the brain areas involved in number processing are domain-specific as assumed, that is, that the left angular gyrus (AG) shows an affinity to the verbal domain, and that the left and right intraparietal sulcus (IPS) shows an affinity to the visual domain, the activation of these areas should show a dependency on an individual’s cognitive style. Methods: 36 healthy young adults participated in the fMRI study. The participants' habitual use of visualization and verbalization during solving arithmetic word problems was assessed with a short self-report assessment. During the fMRI measurement, arithmetic word problems that had to be solved by the participants were presented in an event-related design. Results: We found that visualizers showed greater brain activation in brain areas involved in visual processing, and that verbalizers showed greater brain activation within the left angular gyrus. Conclusions: Our results indicate that cognitive styles or preferences play an important role in understanding brain activation. Our results confirm that strong visualizers use mental imagery more strongly than weak visualizers during calculation. Moreover, our results suggest that the left AG shows a specific affinity to the verbal domain and subserves number processing in a modality-specific way. PMID:23883107
Robotic Attention Processing And Its Application To Visual Guidance
NASA Astrophysics Data System (ADS)
Barth, Matthew; Inoue, Hirochika
1988-03-01
This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using the attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.
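A minimal sketch of the local-window idea described above: processing is restricted to a small window around the last known target location, the target's centroid is found within that window, and the resulting position error drives a paddle command. This is an illustrative reconstruction in plain Python, not the multi-window vision system's actual interface; all function names and parameters are hypothetical.

```python
# Hedged sketch: local-window tracking of a bright "ball" and a simple paddle
# controller, illustrating window-restricted attention processing.
import numpy as np

def track_in_window(frame, center, half_size=8, threshold=200):
    """Find the ball's centroid inside a local window; return None if absent."""
    r, c = center
    r0, c0 = max(r - half_size, 0), max(c - half_size, 0)
    window = frame[r0:r + half_size, c0:c + half_size]
    ys, xs = np.nonzero(window > threshold)
    if ys.size == 0:
        return None
    # Map window coordinates back to full-frame coordinates
    return (r0 + int(ys.mean()), c0 + int(xs.mean()))

def paddle_command(ball_pos, paddle_row, gain=1.0):
    """Move the paddle toward the ball's row (proportional control)."""
    if ball_pos is None:
        return 0
    return int(gain * (ball_pos[0] - paddle_row))

# Usage with a synthetic frame: a single bright pixel stands in for the ball
frame = np.zeros((120, 160), dtype=np.uint8)
frame[40, 90] = 255
pos = track_in_window(frame, center=(38, 88))
print(pos, paddle_command(pos, paddle_row=60))
```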
Visual activity predicts auditory recovery from deafness after adult cochlear implantation.
Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal
2013-12-01
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, most of the progress in speech recovery occurs during the first year after cochlear implantation, but there is large variability in cochlear implant outcomes and in the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area, which positively correlated with auditory speech recovery, was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Language networks in anophthalmia: maintained hierarchy of processing in 'visual' cortex.
Watkins, Kate E; Cowey, Alan; Alexander, Iona; Filippini, Nicola; Kennedy, James M; Smith, Stephen M; Ragge, Nicola; Bridge, Holly
2012-05-01
Imaging studies in blind subjects have consistently shown that sensory and cognitive tasks evoke activity in the occipital cortex, which is normally visual. The precise areas involved and degree of activation are dependent upon the cause and age of onset of blindness. Here, we investigated the cortical language network at rest and during an auditory covert naming task in five bilaterally anophthalmic subjects, who have never received visual input. When listening to auditory definitions and covertly retrieving words, these subjects activated lateral occipital cortex bilaterally in addition to the language areas activated in sighted controls. This activity was significantly greater than that present in a control condition of listening to reversed speech. The lateral occipital cortex was also recruited into a left-lateralized resting-state network that usually comprises anterior and posterior language areas. Levels of activation to the auditory naming and reversed speech conditions did not differ in the calcarine (striate) cortex. This primary 'visual' cortex was not recruited to the left-lateralized resting-state network and showed high interhemispheric correlation of activity at rest, as is typically seen in unimodal cortical areas. In contrast, the interhemispheric correlation of resting activity in extrastriate areas was reduced in anophthalmia to the level of cortical areas that are heteromodal, such as the inferior frontal gyrus. Previous imaging studies in the congenitally blind show that primary visual cortex is activated in higher-order tasks, such as language and memory to a greater extent than during more basic sensory processing, resulting in a reversal of the normal hierarchy of functional organization across 'visual' areas. Our data do not support such a pattern of organization in anophthalmia. Instead, the patterns of activity during task and the functional connectivity at rest are consistent with the known hierarchy of processing in these areas normally seen for vision. The differences in cortical organization between bilateral anophthalmia and other forms of congenital blindness are considered to be due to the total absence of stimulation in 'visual' cortex by light or retinal activity in the former condition, and suggests development of subcortical auditory input to the geniculo-striate pathway.
Skill dependent audiovisual integration in the fusiform induces repetition suppression.
McNorgan, Chris; Booth, James R
2015-02-01
Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.
Crossmodal association of auditory and visual material properties in infants.
Ujiie, Yuta; Yamashita, Wakayo; Fujisaki, Waka; Kanazawa, So; Yamaguchi, Masami K
2018-06-18
The human perceptual system enables us to extract visual properties of an object's material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information could be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, we demonstrated for the first time a mapping between auditory and visual material properties ("Metal" and "Wood") in the right temporal region of preverbal 4- to 8-month-old infants, using near-infrared spectroscopy (NIRS). Furthermore, we found that infants acquired the audio-visual mapping for the "Metal" material later than for the "Wood" material, since infants acquire the visual property of the "Metal" material only after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. Our findings also indicate that the material's familiarity might facilitate the development of multisensory processing during the first year of life.
Threat as a feature in visual semantic object memory.
Calley, Clifford S; Motes, Michael A; Chiang, H-Sheng; Buhl, Virginia; Spence, Jeffrey S; Abdi, Hervé; Anand, Raksha; Maguire, Mandy; Estevez, Leonardo; Briggs, Richard; Freeman, Thomas; Kraut, Michael A; Hart, John
2013-08-01
Threatening stimuli have been found to modulate visual processes related to perception and attention. The present functional magnetic resonance imaging (fMRI) study investigated whether threat modulates visual object recognition of man-made and naturally occurring categories of stimuli. Compared with nonthreatening pictures, threatening pictures of real items elicited larger fMRI BOLD signal changes in medial visual cortices extending inferiorly into the temporo-occipital (TO) "what" pathways. This region elicited greater signal changes for threatening items than for nonthreatening items from both the naturally occurring and man-made superordinate stimulus categories, demonstrating a featural component to these visual processing areas. Two additional loci of signal changes within more lateral inferior TO areas (bilateral BA18 and 19 as well as the right ventral temporal lobe) were detected for a category-feature interaction, with stronger responses to man-made (category) threatening (feature) stimuli than to natural threats. The findings are discussed in terms of efficient or rapid visual recognition of groups of items that confer an advantage for survival. Copyright © 2012 Wiley Periodicals, Inc.
Sigalov, Nadine; Maidenbaum, Shachar; Amedi, Amir
2016-03-01
Cognitive neuroscience has long attempted to determine the ways in which cortical selectivity develops, and the impact of nature vs. nurture on it. Congenital blindness (CB) offers a unique opportunity to test this question as the brains of blind individuals develop without visual experience. Here we approach this question through the reading network. Several areas in the visual cortex have been implicated as part of the reading network, and one of the main ones among them is the VWFA, which is selective to the form of letters and words. But what happens in the CB brain? On the one hand, it has been shown that cross-modal plasticity leads to the recruitment of occipital areas, including the VWFA, for linguistic tasks. On the other hand, we have recently demonstrated VWFA activity for letters in contrast to other visual categories when the information is provided via other senses such as touch or audition. Which of these tasks is more dominant? By which mechanism does the CB brain process reading? Using fMRI and visual-to-auditory sensory substitution, which transfers the topographical features of the letters, we compared reading with semantic and scrambled conditions in a group of CB participants. We found activation in early auditory and visual cortices during the early processing phase (letter), while the later phase (word) showed VWFA and bilateral dorsal-intraparietal activations for words. This further supports the notion that many visual regions in general, even early visual areas, also maintain a predilection for task processing even when the modality is variable and in spite of putative lifelong linguistic cross-modal plasticity. Furthermore, we find that the VWFA is recruited preferentially for letter and word form, while it was not recruited, and even exhibited deactivation, for an immediately subsequent semantic task, suggesting that, despite only short sensory substitution experience, orthographic task processing can dominate semantic processing in the VWFA. On a wider scope, this implies that, at least in some cases, cross-modal plasticity, which enables the recruitment of areas for new tasks, may be dominated by sensory-independent, task-specific activation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Vorobyev, Victor A; Alho, Kimmo; Medvedev, Svyatoslav V; Pakhomov, Sergey V; Roudas, Marina S; Rutkovskaya, Julia M; Tervaniemi, Mari; Van Zuijen, Titia L; Näätänen, Risto
2004-07-01
Positron emission tomography (PET) was used to investigate the neural basis of selective processing of linguistic material during concurrent presentation of multiple stimulus streams ("cocktail-party effect"). Fifteen healthy right-handed adult males were instructed to attend to one of three simultaneously presented messages: one presented visually, one to the left ear, and one to the right ear. During the control condition, subjects attended to visually presented consonant letter strings and ignored auditory messages. This paper reports on modality-nonspecific language processing and visual word-form processing; the auditory attention effects have been reported elsewhere [Cogn. Brain Res. 17 (2003) 201]. The left-hemisphere areas activated by both the selective processing of text and speech were as follows: the inferior prefrontal (Brodmann's area, BA 45, 47), anterior temporal (BA 38), posterior insular (BA 13), inferior (BA 20) and middle temporal (BA 21), occipital (BA 18/30) cortices, the caudate nucleus, and the amygdala. In addition, bilateral activations were observed in the medial occipito-temporal cortex and the cerebellum. Decreases of activation during both text and speech processing were found in the parietal (BA 7, 40), frontal (BA 6, 8, 44) and occipito-temporal (BA 37) regions of the right hemisphere. Furthermore, the present data suggest that the left occipito-temporal cortex (BA 18, 20, 37, 21) can be subdivided into three functionally distinct regions in the posterior-anterior direction on the basis of their activation during attentive processing of sublexical orthography, visual word form, and supramodal higher-level aspects of language.
Cerebral Asymmetry of fMRI-BOLD Responses to Visual Stimulation
Hougaard, Anders; Jensen, Bettina Hagström; Amin, Faisal Mohammad; Rostrup, Egill; Hoffmann, Michael B.; Ashina, Messoud
2015-01-01
Hemispheric asymmetry of a wide range of functions is a hallmark of the human brain. The visual system has traditionally been thought of as symmetrically distributed in the brain, but a growing body of evidence has challenged this view. Some highly specific visual tasks have been shown to depend on hemispheric specialization. However, the possible lateralization of cerebral responses to a simple checkerboard visual stimulation has not been a focus of previous studies. To investigate this, we performed two sessions of blood-oxygenation level dependent (BOLD) functional magnetic resonance imaging (fMRI) in 54 healthy subjects during stimulation with a black and white checkerboard visual stimulus. While carefully excluding possible non-physiological causes of left-to-right bias, we compared the activation of the left and the right cerebral hemispheres and related this to grey matter volume, handedness, age, gender, ocular dominance, interocular difference in visual acuity, as well as line-bisection performance. We found a general lateralization of cerebral activation towards the right hemisphere of early visual cortical areas and areas of higher-level visual processing, involved in visuospatial attention, especially in top-down (i.e., goal-oriented) attentional processing. This right hemisphere lateralization was partly, but not completely, explained by an increased grey matter volume in the right hemisphere of the early visual areas. Difference in activation of the superior parietal lobule was correlated with subject age, suggesting a shift towards the left hemisphere with increasing age. Our findings suggest a right-hemispheric dominance of these areas, which could lend support to the generally observed leftward visual attentional bias and to the left hemifield advantage for some visual perception tasks. PMID:25985078
Characterizing the effects of feature salience and top-down attention in the early visual system.
Poltoratski, Sonia; Ling, Sam; McCormack, Devin; Tong, Frank
2017-07-01
The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. We used high-resolution functional MRI (fMRI) at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or nonsalient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, whereas the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas. NEW & NOTEWORTHY While spatial attention allows for specific, goal-driven enhancement of stimuli, salient items outside of the current focus of attention must also be prioritized. We used 7T fMRI to compare salience and spatial attentional enhancement along the early visual hierarchy. We report additive effects of attention and bottom-up salience in early visual areas, suggesting that salience enhancement is not contingent on the observer's attentional state. Copyright © 2017 the American Physiological Society.
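One common way to test the additivity reported above is to compare a main-effects model of the response against a model that also includes an attention-by-salience interaction; a negligible interaction term is consistent with additive combination. The sketch below runs this comparison on simulated BOLD amplitudes, so the variable names and effect sizes are assumptions, not the study's analysis.

```python
# Hedged sketch: checking additivity of attention and salience effects with a
# simple linear model on simulated per-trial BOLD amplitudes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 200                                                # hypothetical trial count
df = pd.DataFrame({
    "attended": rng.integers(0, 2, n),                 # 1 = item is attended
    "salient": rng.integers(0, 2, n),                  # 1 = item is a feature singleton
})
# Simulate an additive response: two main effects, no interaction
df["bold"] = 0.5 * df["attended"] + 0.3 * df["salient"] + rng.normal(0, 0.2, n)

with_interaction = smf.ols("bold ~ attended * salient", data=df).fit()
print(with_interaction.params)                         # includes 'attended:salient'
print(f"interaction p-value: {with_interaction.pvalues['attended:salient']:.3f}")
```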
Ventral and dorsal streams processing visual motion perception (FDG-PET study)
2012-01-01
Background: Earlier functional imaging studies on visually induced self-motion perception (vection) disclosed a bilateral network of activations within primary and secondary visual cortex areas which was combined with signal decreases, i.e., deactivations, in multisensory vestibular cortex areas. This finding led to the concept of a reciprocal inhibitory interaction between the visual and vestibular systems. In order to define areas involved in special aspects of self-motion perception such as intensity and duration of the perceived circular vection (CV) or the amount of head tilt, correlation analyses of the regional cerebral glucose metabolism, rCGM (measured by fluorodeoxyglucose positron-emission tomography, FDG-PET) and these perceptual covariates were performed in 14 healthy volunteers. For analyses of the visual-vestibular interaction, the CV data were compared to a random dot motion stimulation condition (not inducing vection) and a control group at rest (no stimulation at all). Results: Group subtraction analyses showed that the visual-vestibular interaction was modified during CV, i.e., the activations within the cerebellar vermis and parieto-occipital areas were enhanced. The correlation analysis between the rCGM and the intensity of visually induced vection, experienced as body tilt, showed a relationship for areas of the multisensory vestibular cortical network (inferior parietal lobule bilaterally, anterior cingulate gyrus), the medial parieto-occipital cortex, the frontal eye fields and the cerebellar vermis. The “earlier” multisensory vestibular areas like the parieto-insular vestibular cortex and the superior temporal gyrus did not appear in the latter analysis. The duration of perceived vection after stimulus stop was positively correlated with rCGM in medial temporal lobe areas bilaterally, which included the (para-)hippocampus, known to be involved in various aspects of memory processing. The amount of head tilt was found to be positively correlated with the rCGM of bilateral basal ganglia regions responsible for the control of motor function of the head. Conclusions: Our data gave further insights into subfunctions within the complex cortical network involved in the processing of visual-vestibular interaction during CV. Specific areas of this cortical network could be attributed to the ventral stream (“what” pathway) responsible for the duration after stimulus stop and to the dorsal stream (“where/how” pathway) responsible for intensity aspects. PMID:22800430
Hoshi, Eiji
2013-01-01
Action is often executed according to information provided by a visual signal. As this type of behavior integrates two distinct neural representations, perception and action, it has been thought that identification of the neural mechanisms underlying this process will yield deeper insights into the principles underpinning goal-directed behavior. Based on a framework derived from conditional visuomotor association, prior studies have identified neural mechanisms in the dorsal premotor cortex (PMd), dorsolateral prefrontal cortex (dlPFC), ventrolateral prefrontal cortex (vlPFC), and basal ganglia (BG). However, applications resting solely on this conceptualization encounter problems related to generalization and flexibility, essential processes in executive function, because the association mode involves a direct one-to-one mapping of each visual signal onto a particular action. To overcome this problem, we extend this conceptualization and postulate a more general framework, conditional visuo-goal association. According to this new framework, the visual signal identifies an abstract behavioral goal, and an action is subsequently selected and executed to meet this goal. Neuronal activity recorded from the four key areas of the brains of monkeys performing a task involving conditional visuo-goal association revealed three major mechanisms underlying this process. First, visual-object signals are represented primarily in the vlPFC and BG. Second, all four areas are involved in initially determining the goals based on the visual signals, with the PMd and dlPFC playing major roles in maintaining the salience of the goals. Third, the cortical areas play major roles in specifying action, whereas the role of the BG in this process is restrictive. These new lines of evidence reveal that the four areas involved in conditional visuomotor association contribute to goal-directed behavior mediated by conditional visuo-goal association in an area-dependent manner. PMID:24155692
Visual processing in the central bee brain.
Paulk, Angelique C; Dacks, Andrew M; Phillips-Portillo, James; Fellous, Jean-Marc; Gronenberg, Wulfila
2009-08-12
Visual scenes comprise enormous amounts of information from which nervous systems extract behaviorally relevant cues. In most model systems, little is known about the transformation of visual information as it occurs along visual pathways. We examined how visual information is transformed physiologically as it is communicated from the eye to higher-order brain centers using bumblebees, which are known for their visual capabilities. We recorded intracellularly in vivo from 30 neurons in the central bumblebee brain (the lateral protocerebrum) and compared these neurons to 132 neurons from more distal areas along the visual pathway, namely the medulla and the lobula. In these three brain regions (medulla, lobula, and central brain), we examined correlations between the neurons' branching patterns and their responses primarily to color, but also to motion stimuli. Visual neurons projecting to the anterior central brain were generally color sensitive, while neurons projecting to the posterior central brain were predominantly motion sensitive. The temporal response properties differed significantly between these areas, with an increase in spike time precision across trials and a decrease in average reliable spiking as visual information processing progressed from the periphery to the central brain. These data suggest that neurons along the visual pathway to the central brain not only are segregated with regard to the physical features of the stimuli (e.g., color and motion), but also differ in the way they encode stimuli, possibly to allow for efficient parallel processing to occur.
Yap, Florence G. H.; Yen, Hong-Hsu
2014-01-01
Wireless Visual Sensor Networks (WVSNs), in which camera-equipped sensor nodes can capture, process and transmit image/video information, have become an important new research area. As compared to traditional wireless sensor networks (WSNs) that can only transmit scalar information (e.g., temperature), the visual data in WVSNs enable much wider applications, such as visual security surveillance and visual wildlife monitoring. However, as compared to the scalar data in WSNs, visual data are much larger and more complicated, so intelligent schemes are required to capture, process and transmit visual data in resource-limited (hardware capability and bandwidth) WVSNs. WVSNs introduce new multi-disciplinary research opportunities in topics that include visual sensor hardware, image and multimedia capture and processing, and wireless communication and networking. In this paper, we survey existing research efforts on visual sensor hardware, visual sensor coverage/deployment, and visual data capture/processing/transmission issues in WVSNs. We conclude that WVSN research is still at an early stage and there are still many open issues that have not been fully addressed. More novel multi-disciplinary, cross-layered, distributed and collaborative solutions should be devised to tackle these challenging issues in WVSNs. PMID:24561401
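As a loose illustration of the resource-aware capture/process/transmit trade-off the survey discusses, the toy sketch below downsamples a frame until it fits a per-frame byte budget before handing it to the radio layer; the budget, frame size, and node functions are invented for illustration and do not represent any scheme from the surveyed literature.

```python
import numpy as np

def fit_to_budget(frame, budget_bytes):
    """Halve resolution until the raw frame fits the byte budget (toy scheme;
    a real WVSN node would use proper image/video coding)."""
    out = frame
    while out.nbytes > budget_bytes and min(out.shape[:2]) > 1:
        out = out[::2, ::2]          # naive 2x spatial downsampling
    return out

def transmit(frame):
    """Stand-in for the radio layer: just report the payload size."""
    print(f"sending {frame.nbytes} bytes, shape {frame.shape}")

# Toy 8-bit grayscale frame from a hypothetical camera node.
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
budget = 20_000                      # assumed per-frame bandwidth budget (bytes)
transmit(fit_to_budget(frame, budget))
```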
Proline and COMT Status Affect Visual Connectivity in Children with 22q11.2 Deletion Syndrome
Magnée, Maurice J. C. M.; Lamme, Victor A. F.; de Sain-van der Velden, Monique G. M.; Vorstman, Jacob A. S.; Kemner, Chantal
2011-01-01
Background: Individuals with the 22q11.2 deletion syndrome (22q11DS) are at increased risk for schizophrenia and Autism Spectrum Disorders (ASDs). Given the prevalence of visual processing deficits in these three disorders, a causal relationship between genes in the deleted region of chromosome 22 and visual processing is likely. Therefore, 22q11DS may represent a unique model to understand the neurobiology of visual processing deficits related to ASD and psychosis. Methodology: We measured Event-Related Potentials (ERPs) during a texture segregation task in 58 children with 22q11DS and 100 age-matched controls. The C1 component was used to index afferent activity of visual cortex area V1; the texture negativity wave provided a measure of the integrity of recurrent connections in the visual cortical system. COMT genotype and plasma proline levels were assessed in 22q11DS individuals. Principal Findings: Children with 22q11DS showed enhanced feedforward activity starting from 70 ms after visual presentation. ERP activity related to visual feedback was reduced in the 22q11DS group, which was seen as less texture negativity around 150 ms post presentation. Within the 22q11DS group we further demonstrated an association between high plasma proline levels and aberrant feedback/feedforward ratios, which was moderated by the COMT 158 genotype. Conclusions: These findings confirm the presence of early visual processing deficits in 22q11DS. We discuss these in terms of dysfunctional synaptic plasticity in early visual processing areas, possibly associated with deviant dopaminergic and glutamatergic transmission. As such, our findings may serve as a promising biomarker related to the development of schizophrenia among 22q11DS individuals. PMID:21998713
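Window-averaged ERP measures like those described (C1 as a feedforward index around 70-100 ms, texture negativity as a feedback index around 150 ms) and their ratio might be extracted as in the hedged sketch below; the time windows, the single-channel simplification, the simulated waveforms, and the ratio definition are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def window_mean(erp, times, t0, t1):
    """Mean amplitude of an ERP waveform (µV) in the window [t0, t1) seconds."""
    mask = (times >= t0) & (times < t1)
    return erp[mask].mean()

# Toy single-channel ERPs (µV) sampled at 500 Hz from -0.1 to 0.4 s.
fs = 500
times = np.arange(-0.1, 0.4, 1.0 / fs)
rng = np.random.default_rng(1)
erp_figure = 0.2 * rng.standard_normal(times.size)
erp_homogeneous = 0.2 * rng.standard_normal(times.size)
erp_figure += -2.0 * np.exp(-((times - 0.085) ** 2) / (2 * 0.01 ** 2))   # simulated C1
erp_figure += -1.0 * np.exp(-((times - 0.150) ** 2) / (2 * 0.015 ** 2))  # extra figure negativity

# C1 (feedforward) from the figure condition; texture negativity (feedback)
# as the figure-minus-homogeneous difference wave around 150 ms.
c1 = window_mean(erp_figure, times, 0.070, 0.100)
tex_neg = window_mean(erp_figure - erp_homogeneous, times, 0.130, 0.180)

# One possible feedback/feedforward ratio (sign conventions are assumptions).
ratio = tex_neg / c1 if c1 != 0 else np.nan
print(f"C1: {c1:.3f} µV, texture negativity: {tex_neg:.3f} µV, ratio: {ratio:.2f}")
```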
The Mechanism for Processing Random-Dot Motion at Various Speeds in Early Visual Cortices
An, Xu; Gong, Hongliang; McLoughlin, Niall; Yang, Yupeng; Wang, Wei
2014-01-01
All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory was perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences of population responses recorded by optical imaging can be revealed using the vector maximum but not the vector summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing. PMID:24682033
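The transition from direction coding to motion-axis coding at high speeds can be intuited with a simple motion-streak argument: within a neuron's temporal integration window a fast dot paints an oriented streak along its path. The sketch below uses this heuristic, not the authors' spatio-temporal energy model; the integration time and receptive field size are assumed values chosen only for illustration.

```python
import numpy as np

def streak_length(speed_deg_s, integration_s):
    """Path length (deg) a dot covers within one temporal integration window."""
    return speed_deg_s * integration_s

def coding_regime(speed_deg_s, integration_s=0.03, rf_size_deg=0.5):
    """Toy rule: once the streak clearly exceeds the receptive field,
    the stimulus looks like an oriented contour along the motion axis."""
    return "axis" if streak_length(speed_deg_s, integration_s) > rf_size_deg else "direction"

for speed in [5, 10, 15, 20, 30, 40]:
    print(f"{speed:>3} deg/s -> {coding_regime(speed)}")
# With these assumed parameters the switch happens near 0.5/0.03 ≈ 17 deg/s,
# in the same range as the transition speeds reported for areas 17 and 18.
```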
The Puzzle of Visual Development: Behavior and Neural Limits.
Kiorpes, Lynne
2016-11-09
The development of visual function takes place over many months or years in primate infants. Visual sensitivity is very poor near birth and improves over different time courses for different visual functions. The neural mechanisms that underlie these processes are not well understood despite many decades of research. The puzzle arises because research into the factors that limit visual function in infants has found surprisingly mature neural organization and adult-like receptive field properties in very young infants. The high degree of visual plasticity that has been documented during the sensitive period in young children and animals leaves the brain vulnerable to abnormal visual experience. Abnormal visual experience during the sensitive period can lead to amblyopia, a developmental disorder of vision affecting ∼3% of children. This review provides a historical perspective on research into visual development and the disorder amblyopia. The mismatch between the status of the primary visual cortex and visual behavior, both during visual development and in amblyopia, is discussed, and several potential resolutions are considered. It seems likely that extrastriate visual areas further along the visual pathways may set important limits on visual function and show greater vulnerability to abnormal visual experience. Analyses based on multiunit, population activity may provide useful representations of the information being fed forward from primary visual cortex to extrastriate processing areas and to the motor output. Copyright © 2016 the authors 0270-6474/16/3611384-10$15.00/0.
Prolonged fasting impairs neural reactivity to visual stimulation.
Kohn, N; Wassenberg, A; Toygar, T; Kellermann, T; Weidenfeld, C; Berthold-Losleben, M; Chechko, N; Orfanos, S; Vocke, S; Laoutidis, Z G; Schneider, F; Karges, W; Habel, U
2016-01-01
Previous literature has shown that hypoglycemia influences the intensity of the BOLD signal. A similar but smaller effect may also be elicited by low normal blood glucose levels in healthy individuals. This may not only confound the BOLD signal measured in fMRI, but also more generally interact with cognitive processing, and thus indirectly influence fMRI results. Here we show in a placebo-controlled, crossover, double-blind study on 40 healthy subjects, that overnight fasting and low normal levels of glucose contrasted to an activated, elevated glucose condition have an impact on brain activation during basal visual stimulation. Additionally, functional connectivity of the visual cortex shows a strengthened association with higher-order attention-related brain areas in an elevated blood glucose condition compared to the fasting condition. In a fasting state visual brain areas show stronger coupling to the inferior temporal gyrus. Results demonstrate that prolonged overnight fasting leads to a diminished BOLD signal in higher-order occipital processing areas when compared to an elevated blood glucose condition. Additionally, functional connectivity patterns underscore the modulatory influence of fasting on visual brain networks. Patterns of brain activation and functional connectivity associated with a broad range of attentional processes are affected by maturation and aging and associated with psychiatric disease and intoxication. Thus, we conclude that prolonged fasting may decrease fMRI design sensitivity in any task involving attentional processes when fasting status or blood glucose is not controlled.
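Functional connectivity contrasts like the one described are often computed as seed-based correlations between ROI time series. Below is a minimal, hedged sketch using a visual-cortex seed and a set of target ROIs, with Fisher z-transformed correlations compared between conditions; the ROI names, data, and two-condition layout are invented for illustration and are not the study's pipeline.

```python
import numpy as np

def seed_connectivity(seed_ts, roi_ts):
    """Fisher z-transformed Pearson correlation between a seed time series
    and each target ROI time series (rows = ROIs, columns = volumes)."""
    r = np.array([np.corrcoef(seed_ts, ts)[0, 1] for ts in roi_ts])
    return np.arctanh(r)

rng = np.random.default_rng(2)
n_vols = 200
rois = ["inferior temporal", "superior parietal", "frontal eye field"]

# Toy data for two sessions of the same subject (fasting vs elevated glucose).
seed_fast, seed_glc = rng.standard_normal((2, n_vols))
targets_fast = rng.standard_normal((len(rois), n_vols))
targets_glc = rng.standard_normal((len(rois), n_vols))

z_fast = seed_connectivity(seed_fast, targets_fast)
z_glc = seed_connectivity(seed_glc, targets_glc)
for name, zf, zg in zip(rois, z_fast, z_glc):
    print(f"{name:>18}: fasting z = {zf:+.2f}, glucose z = {zg:+.2f}")
```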
The impact of inverted text on visual word processing: An fMRI study.
Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D
2018-06-01
Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. By examining how word recognition processes may be disrupted by different text orientations it is hoped that new insights can be gained concerning the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition focusing primarily on a region of the occipto-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found to not behave similarly to the fusiform face area in that unusual text orientations resulted in increased activation and not decreased activation. It is hypothesized here that the VWFA activation is modulated by feedback from linguistic processes. Copyright © 2018 Elsevier Inc. All rights reserved.
Sheremata, Summer L; Somers, David C; Shomstein, Sarah
2018-02-07
Visual short-term memory (VSTM) and attention are distinct yet interrelated processes. While both require selection of information across the visual field, memory additionally requires the maintenance of information across time and distraction. VSTM recruits areas within human (male and female) dorsal and ventral parietal cortex that are also implicated in spatial selection; therefore, it is important to determine whether overlapping activation might reflect shared attentional demands. Here, identical stimuli and controlled sustained attention across both tasks were used to ask whether fMRI signal amplitude, functional connectivity, and contralateral visual field bias reflect memory-specific task demands. While attention and VSTM activated similar cortical areas, BOLD amplitude and functional connectivity in parietal cortex differentiated the two tasks. Relative to attention, VSTM increased BOLD amplitude in dorsal parietal cortex and decreased BOLD amplitude in the angular gyrus. Additionally, the tasks differentially modulated parietal functional connectivity. Contrasting VSTM and attention, intraparietal sulcus (IPS) 1-2 were more strongly connected with anterior frontoparietal areas and more weakly connected with posterior regions. This divergence between tasks demonstrates that parietal activation reflects memory-specific functions and consequently modulates functional connectivity across the cortex. In contrast, both tasks demonstrated hemispheric asymmetries for spatial processing, exhibiting a stronger contralateral visual field bias in the left versus the right hemisphere across tasks, suggesting that asymmetries are characteristic of a shared selection process in IPS. These results demonstrate that parietal activity and patterns of functional connectivity distinguish VSTM from more general attention processes, establishing a central role of the parietal cortex in maintaining visual information. SIGNIFICANCE STATEMENT Visual short-term memory (VSTM) and attention are distinct yet interrelated processes. Cognitive mechanisms and neural activity underlying these tasks show a large degree of overlap. To examine whether activity within the posterior parietal cortex (PPC) reflects object maintenance across distraction or sustained attention per se, it is necessary to control for attentional demands inherent in VSTM tasks. We demonstrate that activity in PPC reflects VSTM demands even after controlling for attention; remembering items across distraction modulates relationships between parietal and other areas differently than during periods of sustained attention. Our study fills a gap in the literature by directly comparing and controlling for overlap between visual attention and VSTM tasks. Copyright © 2018 the authors 0270-6474/18/381511-09$15.00/0.
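The contralateral visual field bias reported for IPS can be summarized with a simple normalized index; the formula below, (contra − ipsi)/(contra + ipsi), is one common choice and is shown on made-up response amplitudes rather than the authors' data.

```python
def contralateral_bias(contra, ipsi):
    """Normalized contralateral bias: +1 = purely contralateral, 0 = no bias."""
    return (contra - ipsi) / (contra + ipsi)

# Hypothetical mean BOLD amplitudes (% signal change) in IPS1-2.
left_hemi = contralateral_bias(contra=0.62, ipsi=0.41)   # right- vs left-field stimuli
right_hemi = contralateral_bias(contra=0.55, ipsi=0.50)
print(f"left IPS bias: {left_hemi:.2f}, right IPS bias: {right_hemi:.2f}")
# A larger index in the left hemisphere mirrors the asymmetry described above.
```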
Dysfunctional visual word form processing in progressive alexia.
Wilson, Stephen M; Rising, Kindle; Stib, Matthew T; Rapcsak, Steven Z; Beeson, Pélagie M
2013-04-01
Progressive alexia is an acquired reading deficit caused by degeneration of brain regions that are essential for written word processing. Functional imaging studies have shown that early processing of the visual word form depends on a hierarchical posterior-to-anterior processing stream in occipito-temporal cortex, whereby successive areas code increasingly larger and more complex perceptual attributes of the letter string. A region located in the left lateral occipito-temporal sulcus and adjacent fusiform gyrus shows maximal selectivity for words and has been dubbed the 'visual word form area'. We studied two patients with progressive alexia in order to determine whether their reading deficits were associated with structural and/or functional abnormalities in this visual word form system. Voxel-based morphometry showed left-lateralized occipito-temporal atrophy in both patients, very mild in one, but moderate to severe in the other. The two patients, along with 10 control subjects, were scanned with functional magnetic resonance imaging as they viewed rapidly presented words, false font strings, or a fixation crosshair. This paradigm was optimized to reliably map brain regions involved in orthographic processing in individual subjects. All 10 control subjects showed a posterior-to-anterior gradient of selectivity for words, and all 10 showed a functionally defined visual word form area in the left hemisphere that was activated for words relative to false font strings. In contrast, neither of the two patients with progressive alexia showed any evidence for a selectivity gradient or for word-specific activation of the visual word form area. The patient with mild atrophy showed normal responses to both words and false font strings in the posterior part of the visual word form system, but a failure to develop selectivity for words in the more anterior part of the system. In contrast, the patient with moderate to severe atrophy showed minimal activation of any part of the visual word form system for either words or false font strings. Our results suggest that progressive alexia is associated with a dysfunctional visual word form system, with or without substantial cortical atrophy. Furthermore, these findings demonstrate that functional MRI has the potential to reveal the neural bases of cognitive deficits in neurodegenerative patients at very early stages, in some cases before the development of extensive atrophy.
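The posterior-to-anterior selectivity gradient can be summarized by computing a word-selectivity index per ROI and regressing it on position along the occipito-temporal axis. In the sketch below, the index definition, the GLM betas, and the ROI coordinates are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.stats import linregress

def selectivity_index(beta_words, beta_falsefont):
    """Normalized preference for words over false fonts (-1..1)."""
    return (beta_words - beta_falsefont) / (abs(beta_words) + abs(beta_falsefont))

# Hypothetical GLM betas for four ROIs ordered posterior -> anterior (y in mm).
y_mm = np.array([-92.0, -78.0, -64.0, -50.0])
betas_words = np.array([1.10, 1.05, 0.95, 0.90])
betas_false = np.array([1.05, 0.85, 0.60, 0.40])

sel = np.array([selectivity_index(w, f) for w, f in zip(betas_words, betas_false)])
fit = linregress(y_mm, sel)
print("selectivity:", np.round(sel, 2))
print(f"gradient: {fit.slope:.4f} per mm (r = {fit.rvalue:.2f})")
# A positive slope indicates growing word selectivity toward anterior ROIs,
# the pattern seen in controls but not in the patients described above.
```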
Dynamic spatial organization of the occipito-temporal word form area for second language processing.
Gao, Yue; Sun, Yafeng; Lu, Chunming; Ding, Guosheng; Guo, Taomei; Malins, Jeffrey G; Booth, James R; Peng, Danling; Liu, Li
2017-08-01
Despite the left occipito-temporal region having shown consistent activation in visual word form processing across numerous studies in different languages, the mechanisms by which word forms of second languages are processed in this region remain unclear. To examine this more closely, 16 Chinese-English and 14 English-Chinese late bilinguals were recruited to perform lexical decision tasks to visually presented words in both their native and second languages (L1 and L2) during functional magnetic resonance imaging scanning. Here we demonstrate that visual word form processing for L1 versus L2 engaged different spatial areas of the left occipito-temporal region. Namely, the spatial organization of the visual word form processing in the left occipito-temporal region is more medial and posterior for L2 than L1 processing in Chinese-English bilinguals, whereas activation is more lateral and anterior for L2 in English-Chinese bilinguals. In addition, for Chinese-English bilinguals, more lateral recruitment of the occipito-temporal region was correlated with higher L2 proficiency, suggesting higher L2 proficiency is associated with greater involvement of L1-preferred mechanisms. For English-Chinese bilinguals, higher L2 proficiency was correlated with more lateral and anterior activation of the occipito-temporal region, suggesting higher L2 proficiency is associated with greater involvement of L2-preferred mechanisms. Taken together, our results indicate that L1 and L2 recruit spatially different areas of the occipito-temporal region in visual word processing when the two scripts belong to different writing systems, and that the spatial organization of this region for L2 visual word processing is dynamically modulated by L2 proficiency. Specifically, proficiency in L2 in Chinese-English is associated with assimilation to the native language mechanisms, whereas L2 in English-Chinese is associated with accommodation to second language mechanisms. Copyright © 2017. Published by Elsevier Ltd.
Distinct roles of the cortical layers of area V1 in figure-ground segregation.
Self, Matthew W; van Kerkoerle, Timo; Supèr, Hans; Roelfsema, Pieter R
2013-11-04
What roles do the different cortical layers play in visual processing? We recorded simultaneously from all layers of the primary visual cortex while monkeys performed a figure-ground segregation task. This task can be divided into different subprocesses that are thought to engage feedforward, horizontal, and feedback processes at different time points. These different connection types have different patterns of laminar terminations in V1 and can therefore be distinguished with laminar recordings. We found that the visual response started 40 ms after stimulus presentation in layers 4 and 6, which are targets of feedforward connections from the lateral geniculate nucleus and distribute activity to the other layers. Boundary detection started shortly after the visual response. In this phase, boundaries of the figure induced synaptic currents and stronger neuronal responses in upper layer 4 and the superficial layers ~70 ms after stimulus onset, consistent with the hypothesis that they are detected by horizontal connections. In the next phase, ~30 ms later, synaptic inputs arrived in layers 1, 2, and 5 that receive feedback from higher visual areas, which caused the filling in of the representation of the entire figure with enhanced neuronal activity. The present results reveal unique contributions of the different cortical layers to the formation of a visual percept. This new blueprint of laminar processing may generalize to other tasks and to other areas of the cerebral cortex, where the layers are likely to have roles similar to those in area V1. Copyright © 2013 Elsevier Ltd. All rights reserved.
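Laminar synaptic currents of the kind described are commonly estimated with current source density (CSD) analysis, i.e. the second spatial derivative of the LFP across contacts. The sketch below implements the standard second-difference estimator on toy data; the contact count, spacing, conductivity value, and simulated sink are assumptions and are not tied to the authors' recording parameters.

```python
import numpy as np

def csd_second_difference(lfp, spacing_mm, conductivity=0.4):
    """CSD estimate from laminar LFPs.

    lfp        : array (n_channels, n_samples), channels ordered by depth
    spacing_mm : inter-contact spacing in mm
    Standard estimator: CSD_i = -sigma * (phi_{i-1} - 2*phi_i + phi_{i+1}) / h^2
    """
    h = spacing_mm * 1e-3                      # meters
    d2 = lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]    # second spatial difference
    return -conductivity * d2 / h ** 2         # loses the outermost channels

# Toy LFPs: 16 contacts, 100 um spacing, a current sink at mid-depth around 70 ms.
fs = 1000
t = np.arange(0, 0.3, 1.0 / fs)
depth_profile = np.exp(-((np.arange(16) - 8) ** 2) / 4.0)
lfp = -depth_profile[:, None] * np.exp(-((t - 0.07) ** 2) / (2 * 0.01 ** 2))
csd = csd_second_difference(lfp, spacing_mm=0.1)
print("CSD shape:", csd.shape)  # (14, n_samples): outer channels dropped
```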
Gilaie-Dotan, Sharon
2016-03-01
A key question in visual neuroscience is the causal link between specific brain areas and perceptual functions; which regions are necessary for which visual functions? While the contribution of primary visual cortex and high-level visual regions to visual perception has been extensively investigated, the contribution of intermediate visual areas (e.g. V2/V3) to visual processes remains unclear. Here I review more than 20 visual functions (early, mid, and high-level) of LG, a developmental visual agnosic and prosopagnosic young adult, whose intermediate visual regions function in a significantly abnormal fashion as revealed through extensive fMRI and ERP investigations. While, as expected, some of LG's visual functions are significantly impaired, others are surprisingly normal (e.g. stereopsis, color, reading, biological motion). During the eight-year testing period described here, LG trained on a perceptual learning paradigm that was successful in improving some but not all of his visual functions. Following LG's visual performance and taking into account additional findings in the field, I propose a framework for how different visual areas contribute to different visual functions, with an emphasis on intermediate visual regions. Thus, although rewiring and plasticity in the brain can occur during development to overcome and compensate for hindering developmental factors, LG's case seems to indicate that some visual functions are much less dependent on strict hierarchical flow than others and can develop normally in spite of abnormal mid-level visual areas, and are thus probably less dependent on intermediate visual regions. Copyright © 2015 Elsevier Ltd. All rights reserved.
How the blind "see" Braille: lessons from functional magnetic resonance imaging.
Sadato, Norihiro
2005-12-01
What does the visual cortex of the blind do during Braille reading? This process involves converting simple tactile information into meaningful patterns that have lexical and semantic properties. The perceptual processing of Braille might be mediated by the somatosensory system, whereas visual letter identity is accomplished within the visual system in sighted people. Recent advances in functional neuroimaging techniques, such as functional magnetic resonance imaging, have enabled exploration of the neural substrates of Braille reading. The primary visual cortex of early-onset blind subjects is functionally relevant to Braille reading, suggesting that the brain shows remarkable plasticity that potentially permits the additional processing of tactile information in the visual cortical areas.
Saliency affects feedforward more than feedback processing in early visual cortex.
Emmanouil, Tatiana Aloi; Avigan, Philip; Persuh, Marjan; Ro, Tony
2013-07-01
Early visual cortex activity is influenced by both bottom-up and top-down factors. To investigate the influences of bottom-up (saliency) and top-down (task) factors on different stages of visual processing, we used transcranial magnetic stimulation (TMS) of areas V1/V2 to induce visual suppression at varying temporal intervals. Subjects were asked to detect and discriminate the color or the orientation of briefly presented small lines that varied in color saliency, based on color contrast with the surround. Regardless of task, color saliency modulated the magnitude of TMS-induced visual suppression, especially at earlier temporal processing intervals that reflect the feedforward stage of visual processing in V1/V2. In a second experiment we found that our color saliency effects were also influenced by an inherent advantage of the color red relative to other hues and that color discrimination difficulty did not affect visual suppression. These results support the notion that early visual processing is stimulus driven and that feedforward and feedback processing encode different types of information about visual scenes. They further suggest that certain hues can be prioritized over others within our visual systems by being more robustly represented during early temporal processing intervals. Copyright © 2013 Elsevier Ltd. All rights reserved.
Selecting and perceiving multiple visual objects
Xu, Yaoda; Chun, Marvin M.
2010-01-01
To explain how multiple visual objects are attended and perceived, we propose that our visual system first selects a fixed number of about four objects from a crowded scene based on their spatial information (object individuation) and then encodes their details (object identification). We describe the involvement of the inferior intra-parietal sulcus (IPS) in object individuation and the superior IPS and higher visual areas in object identification. Our neural object-file theory synthesizes and extends existing ideas in visual cognition and is supported by behavioral and neuroimaging results. It provides a better understanding of the role of the different parietal areas in encoding visual objects and can explain various forms of capacity-limited processing in visual cognition such as working memory. PMID:19269882
Kurosaki, Mitsuhaya; Shirao, Naoko; Yamashita, Hidehisa; Okamoto, Yasumasa; Yamawaki, Shigeto
2006-02-15
Our aim was to study the gender differences in brain activation upon viewing visual stimuli of distorted images of one's own body. We performed functional magnetic resonance imaging on 11 healthy young men and 11 healthy young women using the "body image tasks" which consisted of fat, real, and thin shapes of the subject's own body. Comparison of the brain activation upon performing the fat-image task versus real-image task showed significant activation of the bilateral prefrontal cortex and left parahippocampal area including the amygdala in the women, and significant activation of the right occipital lobe including the primary and secondary visual cortices in the men. Comparison of brain activation upon performing the thin-image task versus real-image task showed significant activation of the left prefrontal cortex, left limbic area including the cingulate gyrus and paralimbic area including the insula in women, and significant activation of the occipital lobe including the left primary and secondary visual cortices in men. These results suggest that women tend to perceive distorted images of their own bodies by complex cognitive processing of emotion, whereas men tend to perceive distorted images of their own bodies by object visual processing and spatial visual processing.
The Effects of Spatial Endogenous Pre-cueing across Eccentricities
Feng, Jing; Spence, Ian
2017-01-01
Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, pre-cueing effects on attention, particularly endogenous pre-cueing effects, have been relatively little explored outside an eccentricity of 20°. Given that the visual field has functional subdivisions, and that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, how endogenous pre-cues that carry spatial information about targets influence our allocation of attention across a large visual field (especially in the more peripheral areas) remains unclear. We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants' ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing effects when trials of the same size cue were blocked compared to when trials were mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred in the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on the endogenous pre-cueing effects across a large visual area were summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. We discuss our findings in light of cognitive penetration of perception, and highlight the importance of examining attentional processes across a large area of the visual field. PMID:28638353
Functional mapping of the primate auditory system.
Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer
2003-01-24
Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.
Scholte, H Steven; Jolij, Jacob; Fahrenfort, Johannes J; Lamme, Victor A F
2008-11-01
In texture segregation, an example of scene segmentation, we can discern two different processes: texture boundary detection and subsequent surface segregation [Lamme, V. A. F., Rodriguez-Rodriguez, V., & Spekreijse, H. Separate processing dynamics for texture elements, boundaries and surfaces in primary visual cortex of the macaque monkey. Cerebral Cortex, 9, 406-413, 1999]. Neural correlates of texture boundary detection have been found in monkey V1 [Sillito, A. M., Grieve, K. L., Jones, H. E., Cudeiro, J., & Davis, J. Visual cortical mechanisms detecting focal orientation discontinuities. Nature, 378, 492-496, 1995; Grosof, D. H., Shapley, R. M., & Hawken, M. J. Macaque-V1 neurons can signal illusory contours. Nature, 365, 550-552, 1993], but whether surface segregation occurs in monkey V1 [Rossi, A. F., Desimone, R., & Ungerleider, L. G. Contextual modulation in primary visual cortex of macaques. Journal of Neuroscience, 21, 1698-1709, 2001; Lamme, V. A. F. The neurophysiology of figure ground segregation in primary visual-cortex. Journal of Neuroscience, 15, 1605-1615, 1995], and whether boundary detection or surface segregation signals can also be measured in human V1, is more controversial [Kastner, S., De Weerd, P., & Ungerleider, L. G. Texture segregation in the human visual cortex: A functional MRI study. Journal of Neurophysiology, 83, 2453-2457, 2000]. Here we present electroencephalography (EEG) and functional magnetic resonance imaging data that have been recorded with a paradigm that makes it possible to differentiate between boundary detection and scene segmentation in humans. In this way, we were able to show with EEG that neural correlates of texture boundary detection are first present in the early visual cortex around 92 msec and then spread toward the parietal and temporal lobes. Correlates of surface segregation first appear in temporal areas (around 112 msec) and from there appear to spread to parietal, and back to occipital areas. After 208 msec, correlates of surface segregation and boundary detection also appear in more frontal areas. Blood oxygenation level-dependent magnetic resonance imaging results show correlates of boundary detection and surface segregation in all early visual areas including V1. We conclude that texture boundaries are detected in a feedforward fashion and are represented at increasing latencies in higher visual areas. Surface segregation, on the other hand, is represented in "reverse hierarchical" fashion and seems to arise from feedback signals toward early visual areas such as V1.
A task-dependent causal role for low-level visual processes in spoken word comprehension.
Ostarek, Markus; Huettig, Falk
2017-08-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Meyer, Georg F; Harrison, Neil R; Wuerger, Sophie M
2013-08-01
An extensive network of cortical areas is involved in multisensory object and action recognition. This network draws on inferior frontal, posterior temporal, and parietal areas; activity is modulated by familiarity and the semantic congruency of auditory and visual component signals even if semantic incongruences are created by combining visual and auditory signals representing very different signal categories, such as speech and whole body actions. Here we present results from a high-density ERP study designed to examine the time-course and source location of responses to semantically congruent and incongruent audiovisual speech and body actions to explore whether the network involved in action recognition consists of a hierarchy of sequentially activated processing modules or a network of simultaneously active processing sites. We report two main results: 1) There are no significant early differences in the processing of congruent and incongruent audiovisual action sequences. The earliest difference between congruent and incongruent audiovisual stimuli occurs between 240 and 280 ms after stimulus onset in the left temporal region. Between 340 and 420 ms, semantic congruence modulates responses in central and right frontal areas. Late differences (after 460 ms) occur bilaterally in frontal areas. 2) Source localisation (dipole modelling and LORETA) reveals that an extended network encompassing inferior frontal, temporal, parasagittal, and superior parietal sites is simultaneously active between 180 and 420 ms to process auditory–visual action sequences. Early activation (before 120 ms) can be explained by activity in mainly sensory cortices. The simultaneous activation of an extended network between 180 and 420 ms is consistent with models that posit parallel processing of complex action sequences in frontal, temporal and parietal areas rather than models that postulate hierarchical processing in a sequence of brain regions. Copyright © 2013 Elsevier Ltd. All rights reserved.
The neural bases of spatial frequency processing during scene perception
Kauffmann, Louise; Ramanoël, Stephen; Peyrin, Carole
2014-01-01
Theories on visual perception agree that scenes are processed in terms of spatial frequencies. Low spatial frequencies (LSF) carry coarse information whereas high spatial frequencies (HSF) carry fine details of the scene. However, how and where spatial frequencies are processed within the brain remain unresolved questions. The present review addresses these issues and aims to identify the cerebral regions differentially involved in low and high spatial frequency processing, and to clarify their attributes during scene perception. Results from a number of behavioral and neuroimaging studies suggest that spatial frequency processing is lateralized in both hemispheres, with the right and left hemispheres predominantly involved in the categorization of LSF and HSF scenes, respectively. There is also evidence that spatial frequency processing is retinotopically mapped in the visual cortex. HSF scenes (as opposed to LSF) activate occipital areas in relation to foveal representations, while categorization of LSF scenes (as opposed to HSF) activates occipital areas in relation to more peripheral representations. Concomitantly, a number of studies have demonstrated that LSF information may reach high-order areas rapidly, allowing an initial coarse parsing of the visual scene, which could then be sent back through feedback into the occipito-temporal cortex to guide finer HSF-based analysis. Finally, the review addresses spatial frequency processing within scene-selective areas of the occipito-temporal cortex. PMID:24847226
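The LSF/HSF distinction central to this review can be illustrated by splitting an image into low- and high-pass components with a Gaussian filter; in the sketch below the cutoff (sigma) and the synthetic "scene" are arbitrary illustrative choices, not physiologically calibrated values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(image, sigma_px=8.0):
    """Return (LSF, HSF) components: Gaussian low-pass and its residual."""
    lsf = gaussian_filter(image.astype(float), sigma=sigma_px)
    hsf = image.astype(float) - lsf
    return lsf, hsf

# Toy 'scene': a noisy luminance gradient standing in for a natural image.
rng = np.random.default_rng(3)
scene = (np.linspace(0, 1, 256)[None, :] * np.ones((256, 1))
         + 0.2 * rng.standard_normal((256, 256)))

lsf, hsf = split_spatial_frequencies(scene)
print("coarse (LSF) variance:", round(float(lsf.var()), 4))
print("fine   (HSF) variance:", round(float(hsf.var()), 4))
```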
ERIC Educational Resources Information Center
Lam, Fook Chang; Lovett, Fiona; Dutton, Gordon N.
2010-01-01
Damage to the areas of the brain that are responsible for higher visual processing can lead to severe cerebral visual impairment (CVI). The prognosis for higher cognitive visual functions in children with CVI is not well described. We therefore present our six-year follow-up of a boy with CVI and highlight intervention approaches that have proved…
A Survey of Parents of Children with Cortical or Cerebral Visual Impairment
ERIC Educational Resources Information Center
Jackel, Bernadette; Wilson, Michelle; Hartmann, Elizabeth
2010-01-01
Cortical or cerebral visual impairment (CVI) can result when the visual pathways and visual processing areas of the brain have been damaged. Children with CVI may have difficulty finding an object among other objects, viewing in the distance, orienting themselves in space, going from grass to pavement or other changes in surface, and copying…
The relationship of global form and motion detection to reading fluency.
Englund, Julia A; Palomares, Melanie
2012-08-15
Visual motion processing in typical and atypical readers has suggested that aspects of reading and motion processing share a common cortical network rooted in dorsal visual areas. Few studies have examined the relationship between reading performance and visual form processing, which is mediated by ventral cortical areas. We investigated whether reading fluency correlates with coherent motion detection thresholds in typically developing children using random dot kinematograms. As a comparison, we also evaluated the correlation between reading fluency and static form detection thresholds. Results show that both dorsal and ventral visual functions correlated with components of reading fluency, but that they have different developmental characteristics. Motion coherence thresholds correlated with reading rate and accuracy, which both improved with chronological age. Interestingly, when controlling for non-verbal abilities and age, reading accuracy significantly correlated with thresholds for coherent form detection but not coherent motion detection in typically developing children. Dorsal visual functions that mediate motion coherence seem to be related to the maturation of broad cognitive functions including non-verbal abilities and reading fluency. However, ventral visual functions that mediate form coherence seem to be specifically related to accurate reading in typically developing children. Copyright © 2012 Elsevier Ltd. All rights reserved.
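Coherent-motion thresholds of the kind measured here are usually obtained with random dot kinematograms in which only a fraction of dots move in a common direction. The generator below is a minimal sketch; the dot count, step size, frame count, and replotting rule for noise dots are arbitrary choices, not the stimulus parameters of this study.

```python
import numpy as np

def rdk_frames(n_dots=100, n_frames=30, coherence=0.3, step=0.02, seed=0):
    """Yield (n_dots, 2) positions in [0, 1)^2; `coherence` of the dots drift
    rightward each frame, the rest are replotted at random locations."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, (n_dots, 2))
    n_signal = int(round(coherence * n_dots))
    for _ in range(n_frames):
        yield pos.copy()
        pos[:n_signal, 0] = (pos[:n_signal, 0] + step) % 1.0              # signal dots
        pos[n_signal:] = rng.uniform(0.0, 1.0, (n_dots - n_signal, 2))    # noise dots

frames = list(rdk_frames(coherence=0.3))
print(len(frames), "frames,", frames[0].shape, "dots per frame")
```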
Zhuang, Chengxu; Wang, Yulong; Yamins, Daniel; Hu, Xiaolin
2017-01-01
Visual information in the visual cortex is processed in a hierarchical manner. Recent studies show that higher visual areas, such as V2, V3, and V4, respond more vigorously to images with naturalistic higher-order statistics than to images lacking them. This property is a functional signature of higher areas, as it is much weaker or even absent in the primary visual cortex (V1). However, the mechanism underlying this signature remains elusive. We studied this problem using computational models. In several typical hierarchical visual models including the AlexNet, VggNet, and SHMAX, this signature was found to be prominent in higher layers but much weaker in lower layers. By changing both the model structure and experimental settings, we found that the signature strongly correlated with sparse firing of units in higher layers but not with any other factors, including model structure, training algorithm (supervised or unsupervised), receptive field size, and property of training stimuli. The results suggest an important role of sparse neuronal activity underlying this special feature of higher visual areas. PMID:29163117
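The link to sparse firing can be checked with a standard sparseness measure such as the Treves-Rolls index applied to each layer's unit activations. The implementation below is a generic sketch on random activations, not the authors' code, models, or stimuli.

```python
import numpy as np

def treves_rolls_sparseness(responses):
    """Treves-Rolls sparseness of a nonnegative response vector.
    Values near 1 mean dense responses; values near 0 mean highly sparse."""
    r = np.asarray(responses, dtype=float)
    r = np.clip(r, 0.0, None)                     # e.g. post-ReLU activations
    return (r.mean() ** 2) / (np.mean(r ** 2) + 1e-12)

rng = np.random.default_rng(4)
# Toy 'layers': a dense lower layer vs a sparse higher layer (most units silent).
lower = np.clip(rng.standard_normal(10_000) + 1.0, 0, None)
higher = np.clip(rng.standard_normal(10_000) - 1.5, 0, None)
print("lower-layer sparseness :", round(treves_rolls_sparseness(lower), 3))
print("higher-layer sparseness:", round(treves_rolls_sparseness(higher), 3))
```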
Neural processing of visual information under interocular suppression: a critical review
Sterzer, Philipp; Stein, Timo; Ludwig, Karin; Rothkirch, Marcus; Hesselmann, Guido
2014-01-01
When dissimilar stimuli are presented to the two eyes, only one stimulus dominates at a time while the other stimulus is invisible due to interocular suppression. When both stimuli are equally potent in competing for awareness, perception alternates spontaneously between the two stimuli, a phenomenon called binocular rivalry. However, when one stimulus is much stronger, e.g., due to higher contrast, the weaker stimulus can be suppressed for prolonged periods of time. A technique that has recently become very popular for the investigation of unconscious visual processing is continuous flash suppression (CFS): High-contrast dynamic patterns shown to one eye can render a low-contrast stimulus shown to the other eye invisible for up to minutes. Studies using CFS have produced new insights but also controversies regarding the types of visual information that can be processed unconsciously as well as the neural sites and the relevance of such unconscious processing. Here, we review the current state of knowledge in regard to neural processing of interocularly suppressed information. Focusing on recent neuroimaging findings, we discuss whether and to what degree such suppressed visual information is processed at early and more advanced levels of the visual processing hierarchy. We review controversial findings related to the influence of attention on early visual processing under interocular suppression, the putative differential roles of dorsal and ventral areas in unconscious object processing, and evidence suggesting privileged unconscious processing of emotional and other socially relevant information. On a more general note, we discuss methodological and conceptual issues, from practical issues of how unawareness of a stimulus is assessed to the overarching question of what constitutes an adequate operational definition of unawareness. Finally, we propose approaches for future research to resolve current controversies in this exciting research area. PMID:24904469
Wang, Quanxin; Tanigawa, Hisashi; Fujita, Ichiro
2017-04-01
Two distinct areas along the ventral visual stream of monkeys, the primary visual (V1) and inferior temporal (TE) cortices, exhibit different projection patterns of intrinsic horizontal axons with patchy terminal fields in adult animals. The differences between the patches in these 2 areas may reflect differences in cortical representation and processing of visual information. We studied the postnatal development of patches by injecting an anterograde tracer into TE and V1 in monkeys of various ages. At 1 week of age, labeled patches with distribution patterns reminiscent of those in adults were already present in both areas. The labeling intensity of patches decayed exponentially with projection distance in monkeys of all ages in both areas, but this trend was far less evident in TE. The number and extent of patches gradually decreased with age in V1, but not in TE. In V1, axonal and bouton densities increased postnatally only in patches with short projection distances, whereas in TE this density change occurred in patches with various projection distances. Thus, patches with area-specific distribution patterns are formed early in life, and area-specific postnatal developmental processes shape the connectivity of patches into adulthood. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
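The exponential decay of labeling intensity with projection distance can be captured by fitting I(d) = I0·exp(−d/λ), with the length constant λ as the quantity of interest. The sketch below uses scipy's curve_fit on made-up intensities; the distances, values, and noise level are illustrative assumptions, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(d, i0, lam):
    """Labeling intensity as a function of projection distance d (mm)."""
    return i0 * np.exp(-d / lam)

# Hypothetical patch intensities (arbitrary units) vs distance from injection.
rng = np.random.default_rng(5)
dist_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
intensity = exp_decay(dist_mm, 100.0, 1.2) * (1 + 0.05 * rng.standard_normal(dist_mm.size))

popt, _ = curve_fit(exp_decay, dist_mm, intensity, p0=(intensity[0], 1.0))
print(f"I0 = {popt[0]:.1f}, length constant lambda = {popt[1]:.2f} mm")
# A larger lambda in TE than in V1 would correspond to the shallower
# distance dependence described in the abstract.
```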
Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico
2012-07-24
The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in visual perception, sensory integration, recognition of movement, re-mapping on the somatosensory and motor cortex, storage in memory, and response control. Results from the congruent vs. incongruent trials revealed greater activity for the former condition than the latter in a network including cingulate cortex, right inferior and middle frontal gyrus that are involved in the go-signal and in decision control. Results on healthy subjects would suggest the appropriateness of an abstract visual feedback provided during motor training. The task contributes to highlight the potential of fMRI in improving the understanding of visual motor processes and may also be useful in detecting brain reorganisation during training.
Takesaki, Natsumi; Kikuchi, Mitsuru; Yoshimura, Yuko; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Kaneda, Reizo; Nakatani, Hideo; Takahashi, Tetsuya; Mottron, Laurent; Minabe, Yoshio
2016-01-01
Some individuals with autism spectrum (AS) perform better on visual reasoning tasks than would be predicted by their general cognitive performance. In individuals with AS, mechanisms in the brain's visual area that underlie visual processing play a more prominent role in visual reasoning tasks than they do in normal individuals. In addition, increased connectivity with the visual area is thought to be one of the neural bases of autistic visual cognitive abilities. However, the contribution of such brain connectivity to visual cognitive abilities is not well understood, particularly in children. In this study, we investigated how functional connectivity between the visual areas and higher-order regions, which is reflected by alpha, beta and gamma band oscillations, contributes to the performance of visual reasoning tasks in typically developing (TD) children (n = 18) and AS children (n = 18). Brain activity was measured using a custom child-sized magneto-encephalograph. Imaginary coherence analysis was used as a proxy to estimate the functional connectivity between the occipital and other areas of the brain. Stronger connectivity from the occipital area, as evidenced by higher imaginary coherence in the gamma band, was associated with higher performance in the AS children only. We observed no significant correlation between imaginary coherence in the alpha or beta bands and performance in either group. Alpha and beta bands reflect top-down pathways, while gamma band oscillations reflect a bottom-up influence. Therefore, our results suggest that visual reasoning in AS children is at least partially based on an enhanced reliance on visual perception and increased bottom-up connectivity from the visual areas. PMID:27631982
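Imaginary coherence, the connectivity proxy used here, can be computed from cross- and auto-spectra as |Im(Sxy)|/sqrt(Sxx·Syy). The sketch below estimates it with scipy's Welch/CSD routines and averages within an assumed gamma band; the band limits, sampling rate, and toy sensor signals are illustrative, not the study's parameters.

```python
import numpy as np
from scipy.signal import csd, welch

def imaginary_coherence(x, y, fs, band, nperseg=512):
    """Band-averaged imaginary coherence |Im(Sxy)| / sqrt(Sxx * Syy)."""
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, sxx = welch(x, fs=fs, nperseg=nperseg)
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    icoh = np.abs(np.imag(sxy)) / np.sqrt(sxx * syy)
    sel = (f >= band[0]) & (f <= band[1])
    return icoh[sel].mean()

# Toy occipital and frontal sensor signals sharing a lagged gamma component.
fs = 1000
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(6)
gamma = np.sin(2 * np.pi * 40 * t)
occipital = gamma + rng.standard_normal(t.size)
frontal = np.roll(gamma, 5) + rng.standard_normal(t.size)   # ~5 ms lag

print("gamma-band imaginary coherence:",
      round(imaginary_coherence(occipital, frontal, fs, band=(30, 60)), 3))
```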
Sanfratello, Lori; Aine, Cheryl; Stephen, Julia
2018-05-25
Impairments in auditory and visual processing are common in schizophrenia (SP). In the unisensory realm, visual deficits are primarily noted for the dorsal visual stream. In addition, insensitivity to timing offsets between stimuli is widely reported for SP. The aim of the present study was to test at the physiological level differences in dorsal/ventral stream visual processing and timing sensitivity between SP and healthy controls (HC) using MEG and a simple auditory/visual task utilizing a variety of multisensory conditions. The paradigm included all combinations of synchronous/asynchronous and central/peripheral stimuli, yielding 4 task conditions. Both HC and SP groups showed activation in parietal areas (dorsal visual stream) during all multisensory conditions, with parietal areas showing decreased activation for SP relative to HC, and a significantly delayed peak of activation for SP in intraparietal sulcus (IPS). We also observed a differential effect of stimulus synchrony on HC and SP parietal response. Furthermore, a (negative) correlation was found between SP positive symptoms and activity in IPS. Taken together, our results provide evidence of impairment of the dorsal visual stream in SP during a multisensory task, along with an altered response to timing offsets between presented multisensory stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.
Sadananda, Monika; Bischof, Hans-Joachim
2006-10-16
Two forebrain areas in the hyperpallium apicale and in the lateral nidopallium of isolated male zebra finches are highly active (2-deoxyglucose technique) on exposure to females for the first time, that is, first courtship. These areas also demonstrate enhanced neuronal plasticity when screened with c-fos immunocytochemistry. Both are areas involved in the processing of visual information conveyed by the two major visual pathways in birds, strengthening our hypothesis that courtship in the zebra finch is a visually guided behaviour. First-courtship and chased birds show enhanced c-fos induction in the hyperpallial area, which could represent neuronal activity reflecting changes in the immediate environment. The enhanced expression of fos in first-courtship birds in lateral nidopallial neurons indicates imminent long-lasting changes at the synaptic level that form the substrate for imprinting, a stable form of learning in birds.
Inversion effect in the visual processing of Chinese character: an fMRI study.
Zhao, Jizheng; Liu, Jiangang; Li, Jun; Liang, Jimin; Feng, Lu; Ai, Lin; Tian, Jie
2010-07-05
Chinese people engage in long-term processing of characters. It has been demonstrated that the presented orientation affects the perception of several types of stimuli with which people have expertise, e.g. faces, bodies, and scenes. However, the influence of inversion on the neural mechanism of Chinese character processing has not been sufficiently discussed. In the present study, a functional magnetic resonance imaging (fMRI) experiment was performed to examine the effect of inversion on Chinese character processing, employing Chinese characters, faces and houses as stimuli. The region-of-interest analysis demonstrates that inversion leads to increased neural responses for Chinese characters in the left fusiform character-preferential area, the bilateral fusiform object-preferential area and the bilateral occipital object-preferential area, and that such inversion-caused changes in the response pattern for character processing are highly similar to those for face processing but quite different from those for house processing. Whole-brain analysis reveals that upright characters recruit several language regions for phonological and semantic processing, whereas inverted characters activate extensive regions related to visual information processing. Our findings reveal a shift from the character-preferential processing route to the generic object-processing stream within visual cortex when characters are inverted, and suggest that different mechanisms may underlie the processing of upright and inverted Chinese characters, respectively. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
Modeling a space-variant cortical representation for apparent motion.
Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash
2013-08-06
Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
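The linear eccentricity scaling of Dmax described in this abstract can be written compactly. The following is only an illustrative form consistent with the stated finding; the intercept and slope symbols are placeholders rather than notation or values from the model itself.

```latex
D_{\max}(E) \approx D_{0} + k\,E
```

Here E is retinal eccentricity, D_0 stands for the foveal value of Dmax, and k is the slope of the linear increase.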
Ludwig, Karin; Kathmann, Norbert; Sterzer, Philipp; Hesselmann, Guido
2015-01-01
Recent behavioral and neuroimaging studies using continuous flash suppression (CFS) have suggested that action-related processing in the dorsal visual stream might be independent of perceptual awareness, in line with the "vision-for-perception" versus "vision-for-action" distinction of the influential dual-stream theory. It remains controversial if evidence suggesting exclusive dorsal stream processing of tool stimuli under CFS can be explained by their elongated shape alone or by action-relevant category representations in dorsal visual cortex. To approach this question, we investigated category- and shape-selective functional magnetic resonance imaging-blood-oxygen level-dependent responses in both visual streams using images of faces and tools. Multivariate pattern analysis showed enhanced decoding of elongated relative to non-elongated tools, both in the ventral and dorsal visual stream. The second aim of our study was to investigate whether the depth of interocular suppression might differentially affect processing in dorsal and ventral areas. However, parametric modulation of suppression depth by varying the CFS mask contrast did not yield any evidence for differential modulation of category-selective activity. Together, our data provide evidence for shape-selective processing under CFS in both dorsal and ventral stream areas and, therefore, do not support the notion that dorsal "vision-for-action" processing is exclusively preserved under interocular suppression. © 2014 Wiley Periodicals, Inc.
Visual Neuroscience: Unique Neural System for Flight Stabilization in Hummingbirds.
Ibbotson, M R
2017-01-23
The pretectal visual motion processing area in the hummingbird brain is unlike that in other birds: instead of emphasizing detection of horizontal movements, it codes for motion in all directions through 360°, possibly offering precise visual stability control during hovering. Copyright © 2017 Elsevier Ltd. All rights reserved.
An fMRI Study of Multimodal Semantic and Phonological Processing in Reading Disabled Adolescents
ERIC Educational Resources Information Center
Landi, Nicole; Mencl, W. Einar; Frost, Stephen J.; Sandak, Rebecca; Pugh, Kenneth R.
2010-01-01
Using functional magnetic resonance imaging, we investigated multimodal (visual and auditory) semantic and unimodal (visual only) phonological processing in reading disabled (RD) adolescents and non-impaired (NI) control participants. We found reduced activation for RD relative to NI in a number of left-hemisphere reading-related areas across all…
Visual management system and timber management application
Warren R. Bacon; Asa D. (Bud) Twombly
1979-01-01
This paper includes an illustration of a planning process to guide vegetation management throughout a travel route seen area and over the time period of a total management rotation (100-300 years). The process will produce direction on visual characteristics to be created and maintained within the biological potential and coordinated with associated resource...
The trait of sensory processing sensitivity and neural responses to changes in visual scenes
Xu, Xiaomeng; Aron, Arthur; Aron, Elaine; Cao, Guikang; Feng, Tingyong; Weng, Xuchu
2011-01-01
This exploratory study examined the extent to which individual differences in sensory processing sensitivity (SPS), a temperament/personality trait characterized by social, emotional and physical sensitivity, are associated with neural response in visual areas in response to subtle changes in visual scenes. Sixteen participants completed the Highly Sensitive Person questionnaire, a standard measure of SPS. Subsequently, they were tested on a change detection task while undergoing functional magnetic resonance imaging (fMRI). SPS was associated with significantly greater activation in brain areas involved in high-order visual processing (i.e. right claustrum, left occipitotemporal, bilateral temporal and medial and posterior parietal regions) as well as in the right cerebellum, when detecting minor (vs major) changes in stimuli. These findings remained strong and significant after controlling for neuroticism and introversion, traits that are often correlated with SPS. These results provide the first evidence of neural differences associated with SPS, the first direct support for the sensory aspect of this trait that has been studied primarily for its social and affective implications, and preliminary evidence for heightened sensory processing in individuals high in SPS. PMID:20203139
Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo
2016-01-01
Associative learning is an essential brain process where the contingency of different items increases after training. Associative learning has been found to occur in many brain regions [1-4]. However, there is no clear evidence that associative learning of visual features occurs in early visual areas, although a number of studies have indicated that learning of a single visual feature (perceptual learning) involves early visual areas [5-8]. Here, via decoded functional magnetic resonance imaging (fMRI) neurofeedback, termed “DecNef” [9], we tested whether associative learning of color and orientation can be created in early visual areas. During three days' training, DecNef induced fMRI signal patterns that corresponded to a specific target color (red) mostly in early visual areas while a vertical achromatic grating was physically presented to participants. As a result, participants came to perceive “red” significantly more frequently than “green” in an achromatic vertical grating. This effect was also observed 3 to 5 months after the training. These results suggest that long-term associative learning of the two different visual features such as color and orientation was created most likely in early visual areas. This newly extended technique that induces associative learning is called “A(ssociative)-DecNef” and may be used as an important tool for understanding and modifying brain functions, since associations are fundamental and ubiquitous functions in the brain. PMID:27374335
Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo
2016-07-25
Associative learning is an essential brain process where the contingency of different items increases after training. Associative learning has been found to occur in many brain regions [1-4]. However, there is no clear evidence that associative learning of visual features occurs in early visual areas, although a number of studies have indicated that learning of a single visual feature (perceptual learning) involves early visual areas [5-8]. Here, via decoded fMRI neurofeedback termed "DecNef" [9], we tested whether associative learning of orientation and color can be created in early visual areas. During 3 days of training, DecNef induced fMRI signal patterns that corresponded to a specific target color (red) mostly in early visual areas while a vertical achromatic grating was physically presented to participants. As a result, participants came to perceive "red" significantly more frequently than "green" in an achromatic vertical grating. This effect was also observed 3-5 months after the training. These results suggest that long-term associative learning of two different visual features such as orientation and color was created, most likely in early visual areas. This newly extended technique that induces associative learning is called "A-DecNef," and it may be used as an important tool for understanding and modifying brain functions because associations are fundamental and ubiquitous functions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.
Serial grouping of 2D-image regions with object-based attention in humans.
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-06-13
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.
Goodhew, Stephanie C; Lawrence, Rebecca K; Edwards, Mark
2017-05-01
There are volumes of information available to process in visual scenes. Visual spatial attention is a critically important selection mechanism that prevents these volumes from overwhelming our visual system's limited-capacity processing resources. We were interested in understanding the effect of the size of the attended area on visual perception. The prevailing model of attended-region size across cognition, perception, and neuroscience is the zoom-lens model. This model stipulates that the magnitude of perceptual processing enhancement is inversely related to the size of the attended region, such that a narrow attended-region facilitates greater perceptual enhancement than a wider region. Yet visual processing is subserved by two major visual pathways (magnocellular and parvocellular) that operate with a degree of independence in early visual processing and encode contrasting visual information. Historically, testing of the zoom-lens has used measures of spatial acuity ideally suited to parvocellular processing. This, therefore, raises questions about the generality of the zoom-lens model to different aspects of visual perception. We found that while a narrow attended-region facilitated spatial acuity and the perception of high spatial frequency targets, it had no impact on either temporal acuity or the perception of low spatial frequency targets. This pattern also held up when targets were not presented centrally. This supports the notion that visual attended-region size has dissociable effects on magnocellular versus parvocellular mediated visual processing.
Organization of the Drosophila larval visual circuit
Gendre, Nanae; Neagu-Maier, G Larisa; Fetter, Richard D; Schneider-Mizell, Casey M; Truman, James W; Zlatic, Marta; Cardona, Albert
2017-01-01
Visual systems transduce, process and transmit light-dependent environmental cues. Computation of visual features depends on photoreceptor neuron types (PR) present, organization of the eye and wiring of the underlying neural circuit. Here, we describe the circuit architecture of the visual system of Drosophila larvae by mapping the synaptic wiring diagram and neurotransmitters. By contacting different targets, the two larval PR-subtypes create two converging pathways potentially underlying the computation of ambient light intensity and temporal light changes already within this first visual processing center. Locally processed visual information then signals via dedicated projection interneurons to higher brain areas including the lateral horn and mushroom body. The stratified structure of the larval optic neuropil (LON) suggests common organizational principles with the adult fly and vertebrate visual systems. The complete synaptic wiring diagram of the LON paves the way to understanding how circuits with reduced numerical complexity control wide ranges of behaviors.
Query-Driven Visualization and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver; Bethel, E. Wes; Prabhat, Mr.
2012-11-01
This report focuses on an approach to high performance visualization and analysis, termed query-driven visualization and analysis (QDV). QDV aims to reduce the amount of data that needs to be processed by the visualization, analysis, and rendering pipelines. The goal of the data reduction process is to separate out data that is "scientifically interesting" and to focus visualization, analysis, and rendering on that interesting subset. The premise is that for any given visualization or analysis task, the data subset of interest is much smaller than the larger, complete data set. This strategy, extracting smaller data subsets of interest and focusing the visualization processing on these subsets, is complementary to the approach of increasing the capacity of the visualization, analysis, and rendering pipelines through parallelism. This report discusses the fundamental concepts in QDV, their relationship to different stages in the visualization and analysis pipelines, and presents QDV's application to problems in diverse areas, ranging from forensic cybersecurity to high energy physics.
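As a toy illustration of the data-reduction idea behind QDV, the sketch below filters a large synthetic dataset with a boolean query before any rendering work is done; the dataset, threshold, and variable names are hypothetical and do not reflect the report's actual indexing or query machinery.

```python
import numpy as np

# Hypothetical particle-like dataset: one million records with an energy value
# and two spatial coordinates.
rng = np.random.default_rng(0)
energy = rng.lognormal(mean=1.0, sigma=1.0, size=1_000_000)
x, y = rng.uniform(-1.0, 1.0, size=(2, 1_000_000))

# Query-driven reduction: keep only the "scientifically interesting" subset
# (here an arbitrary high-energy cut) and hand only that subset to the
# visualization, analysis, and rendering stages.
mask = energy > 20.0
subset = np.column_stack((x[mask], y[mask], energy[mask]))

print(f"kept {mask.sum()} of {mask.size} records for downstream visualization")
```

In practice the payoff comes when the query selects a small fraction of the complete data set, so the downstream pipelines touch far fewer records.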
Toschi, Nicola; Kim, Jieun; Sclocco, Roberta; Duggento, Andrea; Barbieri, Riccardo; Kuo, Braden; Napadow, Vitaly
2017-01-01
The brain networks supporting nausea are not yet understood. We previously found that while visual stimulation activated primary (V1) and extrastriate visual cortices (MT+/V5, coding for visual motion), increasing nausea was associated with increasing sustained activation in several brain areas, with significant co-activation for anterior insula (aIns) and mid-cingulate (MCC) cortices. Here, we hypothesized that motion sickness also alters functional connectivity between visual motion and previously identified nausea-processing brain regions. Subjects prone to motion sickness and controls completed a motion sickness provocation task during fMRI/ECG acquisition. We studied changes in connectivity between visual processing areas activated by the stimulus (MT+/V5, V1), right aIns and MCC when comparing rest (BASELINE) to peak nausea state (NAUSEA). Compared to BASELINE, NAUSEA reduced connectivity between right and left V1 and increased connectivity between right MT+/V5 and aIns and between left MT+/V5 and MCC. Additionally, the change in MT+/V5 to insula connectivity was significantly associated with a change in sympathovagal balance, assessed by heart rate variability analysis. No state-related connectivity changes were noted for the control group. Increased connectivity between a visual motion processing region and nausea/salience brain regions may reflect increased transfer of visual/vestibular mismatch information to brain regions supporting nausea perception and autonomic processing. We conclude that vection-induced nausea increases connectivity between nausea-processing regions and those activated by the nauseogenic stimulus. This enhanced low-frequency coupling may support continual, slowly evolving nausea perception and shifts toward sympathetic dominance. Disengaging this coupling may be a target for biobehavioral interventions aimed at reducing motion sickness severity. Copyright © 2016 Elsevier B.V. All rights reserved.
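A minimal sketch of the kind of state-dependent ROI-to-ROI connectivity comparison described above, using Pearson correlation on synthetic time series; the signal lengths, names, and noise levels are invented for illustration and do not reproduce the study's preprocessing or statistics.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vols = 200

# Synthetic ROI time series for two states (BASELINE vs. NAUSEA); a shared
# component is injected only in the NAUSEA state to mimic increased coupling.
shared = rng.standard_normal(n_vols)
mt_baseline, ains_baseline = rng.standard_normal((2, n_vols))
mt_nausea = shared + 0.7 * rng.standard_normal(n_vols)
ains_nausea = shared + 0.7 * rng.standard_normal(n_vols)

def connectivity(a, b):
    """Pearson correlation between two ROI time series."""
    return np.corrcoef(a, b)[0, 1]

delta = connectivity(mt_nausea, ains_nausea) - connectivity(mt_baseline, ains_baseline)
print(f"MT+/V5 to aIns connectivity change (NAUSEA minus BASELINE): {delta:.2f}")
```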
Naming-Speed Processes, Timing, and Reading: A Conceptual Review.
ERIC Educational Resources Information Center
Wolf, Maryanne; Bowers, Patricia Greig; Biddle, Kathleen
2000-01-01
This article reviews evidence for seven central questions about the role of naming-speed deficits in developmental reading disabilities. Cross-sectional, longitudinal, and cross-linguistic research on naming-speed processes, timing processes, and reading is presented. An evolving model of visual naming illustrates areas of difference and areas of…
Central Processing Dysfunctions in Children: A Review of Research.
ERIC Educational Resources Information Center
Chalfant, James C.; Scheffelin, Margaret A.
Research on central processing dysfunctions in children is reviewed in three major areas. The first, dysfunctions in the analysis of sensory information, includes auditory, visual, and haptic processing. The second, dysfunction in the synthesis of sensory information, covers multiple stimulus integration and short-term memory. The third area of…
Functional selectivity for face processing in the temporal voice area of early deaf individuals
van Ackeren, Markus J.; Rabini, Giuseppe; Zonca, Joshua; Foa, Valentina; Baruffaldi, Francesca; Rezk, Mohamed; Pavani, Francesco; Rossion, Bruno; Collignon, Olivier
2017-01-01
Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magneto-encephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions. PMID:28652333
Resilience to the contralateral visual field bias as a window into object representations
Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.
2016-01-01
Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998
Heinen, Klaartje; Jolij, Jacob; Lamme, Victor A F
2005-09-08
Discriminating objects from their surroundings by the visual system is known as figure-ground segregation. This process entails two different subprocesses: boundary detection and subsequent surface segregation or 'filling in'. In this study, we used transcranial magnetic stimulation to test the hypothesis that temporally distinct processes in V1 and related early visual areas such as V2 or V3 are causally related to the process of figure-ground segregation. Our results indicate that correct discrimination between two visual stimuli, which relies on figure-ground segregation, requires two separate periods of information processing in the early visual cortex: one around 130-160 ms and the other around 250-280 ms.
Representation of vestibular and visual cues to self-motion in ventral intraparietal (VIP) cortex
Chen, Aihua; Deangelis, Gregory C.; Angelaki, Dora E.
2011-01-01
Convergence of vestibular and visual motion information is important for self-motion perception. One cortical area that combines vestibular and optic flow signals is the ventral intraparietal area (VIP). We characterized unisensory and multisensory responses of macaque VIP neurons to translations and rotations in three dimensions. Approximately half of VIP cells show significant directional selectivity in response to optic flow, half show tuning to vestibular stimuli, and one-third show multisensory responses. Visual and vestibular direction preferences of multisensory VIP neurons could be congruent or opposite. When visual and vestibular stimuli were combined, VIP responses could be dominated by either input, unlike medial superior temporal area (MSTd) where optic flow tuning typically dominates or the visual posterior sylvian area (VPS) where vestibular tuning dominates. Optic flow selectivity in VIP was weaker than in MSTd but stronger than in VPS. In contrast, vestibular tuning for translation was strongest in VPS, intermediate in VIP, and weakest in MSTd. To characterize response dynamics, direction-time data were fit with a spatiotemporal model in which temporal responses were modeled as weighted sums of velocity, acceleration, and position components. Vestibular responses in VIP reflected balanced contributions of velocity and acceleration, whereas visual responses were dominated by velocity. Timing of vestibular responses in VIP was significantly faster than in MSTd, whereas timing of optic flow responses did not differ significantly among areas. These findings suggest that VIP may be proximal to MSTd in terms of vestibular processing but hierarchically similar to MSTd in terms of optic flow processing. PMID:21849564
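The spatiotemporal model mentioned in this abstract expresses a neuron's temporal response as a weighted sum of kinematic components. Written schematically (the symbols below are generic placeholders rather than the authors' notation, and the baseline term is an added assumption):

```latex
r(t) \approx w_{v}\,v(t) + w_{a}\,a(t) + w_{p}\,p(t) + r_{0}
```

Here v(t), a(t) and p(t) are the stimulus velocity, acceleration and position profiles, the weights w are fitted per neuron, and r_0 is a baseline firing level. Balanced velocity and acceleration weights then correspond to the vestibular responses reported for VIP, while the visual responses are captured mostly by the velocity term.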
Numerosity processing in early visual cortex.
Fornaciai, Michele; Brannon, Elizabeth M; Woldorff, Marty G; Park, Joonkoo
2017-08-15
While parietal cortex is thought to be critical for representing numerical magnitudes, we recently reported an event-related potential (ERP) study demonstrating selective neural sensitivity to numerosity over midline occipital sites very early in the time course, suggesting the involvement of early visual cortex in numerosity processing. However, which specific brain area underlies such early activation is not known. Here, we tested whether numerosity-sensitive neural signatures arise specifically from the initial stages of visual cortex, aiming to localize the generator of these signals by taking advantage of the distinctive folding pattern of early occipital cortices around the calcarine sulcus, which predicts an inversion of polarity of ERPs arising from these areas when stimuli are presented in the upper versus lower visual field. Dot arrays, including 8-32 dots constructed systematically across various numerical and non-numerical visual attributes, were presented randomly in either the upper or lower visual hemifields. Our results show that neural responses at about 90 ms post-stimulus were robustly sensitive to numerosity. Moreover, the peculiar pattern of polarity inversion of numerosity-sensitive activity at this stage suggested its generation primarily in V2 and V3. In contrast, numerosity-sensitive ERP activity at occipito-parietal channels later in the time course (210-230 ms) did not show polarity inversion, indicating a subsequent processing stage in the dorsal stream. Overall, these results demonstrate that numerosity processing begins in one of the earliest stages of the cortical visual stream. Copyright © 2017 Elsevier Inc. All rights reserved.
Sadananda, Monika; Bischof, Hans-Joachim
2006-08-23
The lateral forebrain of zebra finches that comprises parts of the lateral nidopallium and parts of the lateral mesopallium is supposed to be involved in the storage and processing of visual information acquired by an early learning process called sexual imprinting. This information is later used to select an appropriate sexual partner for courtship behavior. Being involved in such a complicated behavioral task, the lateral nidopallium should be an integrative area receiving input from many other regions of the brain. Our experiments indeed show that the lateral nidopallium receives input from a variety of telencephalic regions including the primary and secondary areas of both visual pathways, the globus pallidus, the caudolateral nidopallium functionally comparable to the prefrontal cortex, the caudomedial nidopallium involved in song perception and storage of song-related memories, and some parts of the arcopallium. There are also a number of thalamic, mesencephalic, and brainstem efferents including the catecholaminergic locus coeruleus and the unspecific activating reticular formation. The spatial distribution of afferents suggests a compartmentalization of the lateral nidopallium into several subdivisions. Based on its connections, the lateral nidopallium should be considered as an area of higher order processing of visual information coming from the tectofugal and the thalamofugal visual pathways. Other sensory modalities and also motivational factors from a variety of brain areas are also integrated here. These findings support the idea of an involvement of the lateral nidopallium in imprinting and the control of courtship behavior.
Tschechne, Stephan; Neumann, Heiko
2014-01-01
Visual structures in the environment are segmented into image regions, which are then combined into a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed processing network must be capable of making accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model thus proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy. PMID:25157228
Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2013-02-16
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common to all three tasks within each modality, or to interactions between the processing of task-relevant features and of varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by the auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of the processing of irrelevant speech, which presumably distracts from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.
Normal form from biological motion despite impaired ventral stream function.
Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P
2011-04-01
We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernable structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+ which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies we used point light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms, but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion. Copyright © 2011 Elsevier Ltd. All rights reserved.
Kuniecki, Michał; Wołoszyn, Kinga; Domagalik, Aleksandra; Pilarczyk, Joanna
2018-05-01
Processing of emotional visual information engages cognitive functions and induces arousal. We aimed to examine the modulatory role of emotional valence on brain activations linked to the processing of visual information and those linked to arousal. Participants were scanned and their pupil size was measured while viewing negative and neutral images. Visual noise was added to the images in various proportions to parametrically manipulate the amount of visual information. Pupil size was used as an index of physiological arousal. We show that arousal induced by the negative images, as compared to the neutral ones, is primarily related to greater amygdala activity, whereas increasing visibility of negative content is related to enhanced activity in the lateral occipital complex (LOC). We argue that more intense visual processing of negative scenes can occur irrespective of the level of arousal. This may suggest that higher areas of the visual stream are fine-tuned to process emotionally relevant objects. Both arousal and processing of emotional visual information modulated activity within the ventromedial prefrontal cortex (vmPFC). Overlapping activations within the vmPFC may reflect the integration of these aspects of emotional processing. Additionally, we show that emotionally-evoked pupil dilations are related to activations in the amygdala, vmPFC, and LOC.
Wiegand, Iris; Töllner, Thomas; Habekost, Thomas; Dyrholm, Mads; Müller, Hermann J; Finke, Kathrin
2014-08-01
An individual's visual attentional capacity is characterized by 2 central processing resources, visual perceptual processing speed and visual short-term memory (vSTM) storage capacity. Based on Bundesen's theory of visual attention (TVA), independent estimates of these parameters can be obtained from mathematical modeling of performance in a whole report task. The framework's neural interpretation (NTVA) further suggests distinct brain mechanisms underlying these 2 functions. Using an interindividual difference approach, the present study was designed to establish the respective ERP correlates of both parameters. Participants with higher compared to participants with lower processing speed were found to show significantly reduced visual N1 responses, indicative of higher efficiency in early visual processing. By contrast, for participants with higher relative to lower vSTM storage capacity, contralateral delay activity over visual areas was enhanced while overall nonlateralized delay activity was reduced, indicating that holding (the maximum number of) items in vSTM relies on topographically specific sustained activation within the visual system. Taken together, our findings show that the 2 main aspects of visual attentional capacity are reflected in separable neurophysiological markers, validating a central assumption of NTVA. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
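For readers unfamiliar with TVA-based whole-report modeling, the two parameters above are commonly estimated from an exponential race over exposure duration. The form below follows standard TVA notation as it is usually presented in the literature, not equations quoted from this particular study:

```latex
P_{x}(t) = 1 - e^{-v_{x}\,(t - t_{0})}, \qquad t > t_{0}, \qquad C = \sum_{x} v_{x}
```

Here P_x(t) is the probability that item x is encoded by exposure duration t, v_x is its processing rate, t_0 is the perceptual threshold, C is the overall visual processing speed, and the vSTM storage capacity K caps how many race winners can be retained.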
Attention Increases Spike Count Correlations between Visual Cortical Areas.
Ruff, Douglas A; Cohen, Marlene R
2016-07-13
Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys performed an attention task. We found that attention increased shared variability between neurons in the two areas and that attention increased the effect of microstimulation in V1 on the firing rates of MT neurons. Our results provide support for the hypothesis that attention increases communication between neurons in different brain areas on behaviorally relevant timescales. Copyright © 2016 the authors 0270-6474/16/367523-12$15.00/0.
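The measure at the center of the correlative experiment, the trial-to-trial (spike count or "noise") correlation between simultaneously recorded areas, can be computed directly from per-trial counts. The sketch below uses synthetic Poisson counts with an injected shared gain fluctuation; the numbers and variable names are illustrative and not taken from the recordings described above.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 500

# A shared trial-to-trial gain fluctuation drives counts in both areas.
shared_drive = rng.normal(0.0, 1.0, n_trials)
v1_counts = rng.poisson(np.clip(10.0 + 2.0 * shared_drive, 0.0, None))
mt_counts = rng.poisson(np.clip(15.0 + 2.0 * shared_drive, 0.0, None))

# Spike count correlation: Pearson r of the two areas' counts across repeated
# trials of the same stimulus condition.
r_sc = np.corrcoef(v1_counts, mt_counts)[0, 1]
print(f"V1-MT spike count correlation: {r_sc:.2f}")
```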
Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review
Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi
2015-01-01
Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact with each other in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound could have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning reported that, after association is established between a sound sequence without spatial information and visual motion information, the sound sequence could trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns could be observed in several brain areas, including the motion processing areas, between spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information could mutually interact in spatiotemporal processing in the perception of the external world and that common perceptual and neural underlying mechanisms would exist for spatiotemporal processing. PMID:26733827
Spatiotemporal dynamics underlying object completion in human ventral visual cortex.
Tang, Hanlin; Buia, Calin; Madhavan, Radhika; Crone, Nathan E; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel
2014-08-06
Natural vision often involves recognizing objects from partial information. Recognition of objects from parts presents a significant challenge for theories of vision because it requires spatial integration and extrapolation from prior knowledge. Here we recorded intracranial field potentials of 113 visually selective electrodes from epilepsy patients in response to whole and partial objects. Responses along the ventral visual stream, particularly the inferior occipital and fusiform gyri, remained selective despite showing only 9%-25% of the object areas. However, these visually selective signals emerged ∼100 ms later for partial versus whole objects. These processing delays were particularly pronounced in higher visual areas within the ventral stream. This latency difference persisted when controlling for changes in contrast, signal amplitude, and the strength of selectivity. These results argue against a purely feedforward explanation of recognition from partial information, and provide spatiotemporal constraints on theories of object recognition that involve recurrent processing. Copyright © 2014 Elsevier Inc. All rights reserved.
Brooks, Joseph L.; Gilaie-Dotan, Sharon; Rees, Geraint; Bentin, Shlomo; Driver, Jon
2012-01-01
Visual perception depends not only on local stimulus features but also on their relationship to the surrounding stimulus context, as evident in both local and contextual influences on figure-ground segmentation. Intermediate visual areas may play a role in such contextual influences, as we tested here by examining LG, a rare case of developmental visual agnosia. LG has no evident abnormality of brain structure and functional neuroimaging showed relatively normal V1 function, but his intermediate visual areas (V2/V3) function abnormally. We found that contextual influences on figure-ground organization were selectively disrupted in LG, while local sources of figure-ground influences were preserved. Effects of object knowledge and familiarity on figure-ground organization were also significantly diminished. Our results suggest that the mechanisms mediating contextual and familiarity influences on figure-ground organization are dissociable from those mediating local influences on figure-ground assignment. The disruption of contextual processing in intermediate visual areas may play a role in the substantial object recognition difficulties experienced by LG. PMID:22947116
Epicenters of dynamic connectivity in the adaptation of the ventral visual system.
Prčkovska, Vesna; Huijbers, Willem; Schultz, Aaron; Ortiz-Teran, Laura; Peña-Gomez, Cleofe; Villoslada, Pablo; Johnson, Keith; Sperling, Reisa; Sepulcre, Jorge
2017-04-01
Neuronal responses adapt to familiar and repeated sensory stimuli. Enhanced synchrony across wide brain systems has been postulated as a potential mechanism for this adaptation phenomenon. Here, we used recently developed graph theory methods to investigate hidden connectivity features of dynamic synchrony changes during a visual repetition paradigm. Particularly, we focused on strength connectivity changes occurring at local and distant brain neighborhoods. We found that connectivity reorganization in visual modal cortex, such as local suppressed connectivity in primary visual areas and distant suppressed connectivity in fusiform areas, is accompanied by enhanced local and distant connectivity in higher cognitive processing areas in multimodal and association cortex. Moreover, we found a shift of the dynamic functional connections from primary-visual-fusiform to primary-multimodal/association cortex. These findings suggest that repetition-suppression is made possible by reorganization of functional connectivity that enables communication between low- and high-order areas. Hum Brain Mapp 38:1965-1976, 2017. © 2017 Wiley Periodicals, Inc.
Tal, Zohar; Geva, Ran; Amedi, Amir
2016-01-01
Recent evidence from blind participants suggests that visual areas are task-oriented and sensory modality input independent rather than sensory-specific to vision. Specifically, visual areas are thought to retain their functional selectivity when using non-visual inputs (touch or sound) even without having any visual experience. However, this theory is still controversial since it is not clear whether this also characterizes the sighted brain, and whether the reported results in the sighted reflect basic fundamental a-modal processes or are an epiphenomenon to a large extent. In the current study, we addressed these questions using a series of fMRI experiments aimed at exploring visual cortex responses to passive touch on various body parts and the coupling between the parietal and visual cortices as manifested by functional connectivity. We show that passive touch robustly activated the object-selective parts of the lateral-occipital (LO) cortex while deactivating almost all other occipital retinotopic areas. Furthermore, passive touch responses in the visual cortex were specific to hand and upper trunk stimulations. Psychophysiological interaction (PPI) analysis suggests that LO is functionally connected to the hand area in the primary somatosensory homunculus (S1) during hand and shoulder stimulations, but not during stimulation of any of the other body parts. We suggest that LO is a fundamental hub that serves as a node between visual-object selective areas and S1 hand representation, probably due to the critical evolutionary role of touch in object recognition and manipulation. These results might also point to a more general principle suggesting that recruitment or deactivation of the visual cortex by other sensory input depends on the ecological relevance of the information conveyed by this input to the task/computations carried out by each area or network. This is likely to rely on the unique and differential pattern of connectivity for each visual area with the rest of the brain. PMID:26673114
Grotheer, Mareike; Ambrus, Géza Gergely; Kovács, Gyula
2016-05-15
Recent research suggests the existence of a visual area selectively processing numbers in the human inferior temporal cortex (number form area (NFA); Abboud et al., 2015; Grotheer et al., 2016; Shum et al., 2013). The NFA is thought to be involved in the preferential encoding of numbers over false characters, letters and non-number words (Grotheer et al., 2016; Shum et al., 2013), independently of the sensory modality (Abboud et al., 2015). However, it is not yet clear if this area is mandatory for normal number processing. The present study exploited the fact that high-resolution fMRI can be applied to identify the NFA individually (Grotheer et al., 2016) and tested if transcranial magnetic stimulation (TMS) of this area interferes with stimulus processing in a selective manner. Double-pulse TMS targeted at the right NFA significantly impaired the detection of briefly presented and masked Arabic numbers in comparison to vertex stimulation. This suggests the NFA to be necessary for fluent number processing. Surprisingly, TMS of the NFA also impaired the detection of Roman letters. On the other hand, stimulation of the lateral occipital complex (LO) had neither an effect on the detection of numbers nor on letters. Our results show, for the first time, that the NFA is causally involved in the early visual processing of numbers as well as of letters. Copyright © 2016 Elsevier Inc. All rights reserved.
Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.
Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko
2017-08-15
During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. A concurrent visual stimulus modulated activity in bilateral MTG (speech), the lateral aspect of right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.
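The leave-one-participant-out decoding scheme described above can be illustrated with a much simpler stand-in classifier. The sketch below uses an L1-penalized logistic regression as a generic sparsity-promoting substitute for the Bayesian sparse classifiers in the study, applied to synthetic voxel patterns; all shapes, labels and settings are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(3)
n_participants, trials_per_participant, n_voxels = 16, 40, 200

# Synthetic multi-voxel patterns (X), binary condition labels (y, e.g.
# audiovisual vs. auditory), and a participant index per trial (groups).
X = rng.standard_normal((n_participants * trials_per_participant, n_voxels))
y = rng.integers(0, 2, size=X.shape[0])
groups = np.repeat(np.arange(n_participants), trials_per_participant)

# Train on 15 participants, test on the held-out one, and rotate through all.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups)
print(f"leave-one-participant-out accuracy: {scores.mean():.2f}")
```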
Schurz, Matthias; Sturm, Denise; Richlan, Fabio; Kronbichler, Martin; Ladurner, Gunther; Wimmer, Heinz
2010-01-01
Based on our previous work, we expected the Visual Word Form Area (VWFA) in the left ventral visual pathway to be engaged by both whole-word recognition and by serial sublexical coding of letter strings. To examine this double function, a phonological lexical decision task (i.e., “Does xxx sound like an existing word?”) presented short and long letter strings of words, pseudohomophones, and pseudowords (e.g., Taxi, Taksi and Tazi). Main findings were that the length effect for words was limited to occipital regions and absent in the VWFA. In contrast, a marked length effect for pseudowords was found throughout the ventral visual pathway including the VWFA, as well as in regions presumably engaged by visual attention and silent-articulatory processes. The length by lexicality interaction on brain activation corresponds to well-established behavioral findings of a length by lexicality interaction on naming latencies and speaks for the engagement of the VWFA by both lexical and sublexical processes. PMID:19896538
Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon
2014-11-01
Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased the blood oxygen level-dependent (BOLD) signals in visual areas processing "target feature" but not in "distracter feature"-processing regions. TMS increased BOLD signals in motion-responsive visual cortex (MT+) when motion was attended in a display with moving dots superimposed on face stimuli, but in the face-responsive fusiform face area (FFA) when faces were attended. These TMS effects on BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for a role of the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property. © The Author 2013. Published by Oxford University Press.
Cognitive Task Analysis of the Battalion Level Visualization Process
2007-10-01
Elements of the visualization space are identified using commonly understood doctrinal language and mnemonic devices. Eleven skill areas were identified as potential focal points for future training development, and the findings were used to design and develop exemplar ...
Stephen, Julia M; Ranken, Doug F; Aine, Cheryl J
2006-01-01
The sensitivity of visual areas to different temporal frequencies, as well as the functional connections between these areas, was examined using magnetoencephalography (MEG). Alternating circular sinusoids (0, 3.1, 8.7 and 14 Hz) were presented to foveal and peripheral locations in the visual field to target ventral and dorsal stream structures, respectively. It was hypothesized that higher temporal frequencies would preferentially activate dorsal stream structures. To determine the effect of frequency on the cortical response, we analyzed the late time interval (220-770 ms) using a multi-dipole spatio-temporal analysis approach to provide source locations and timecourses for each condition. As an exploratory aspect, we performed cross-correlation analysis on the source timecourses to determine which sources responded similarly within conditions. Contrary to predictions, dorsal stream areas were not activated more frequently during high temporal frequency stimulation. However, across cortical sources the frequency-following response showed a difference, with significantly higher power at the second harmonic for the 3.1 and 8.7 Hz stimulation and at the first and second harmonics for the 14 Hz stimulation, a pattern seen most robustly in area V1. Cross-correlations of the source timecourses showed that both low- and high-order visual areas, including dorsal and ventral stream areas, were significantly correlated in the late time interval. The results imply that frequency information is transferred to higher-order visual areas without translation. Despite the less complex waveforms seen in the late interval of time, the cross-correlation results show that visual, temporal and parietal cortical areas are intricately involved in late-interval visual processing.
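As an illustration of the exploratory analysis named above, the sketch below computes a normalized cross-correlation between two toy source timecourses over a late window; the sampling rate, signals, and lag interpretation are assumptions, not the authors' data or code.

```python
# Hedged sketch: normalized cross-correlation between two MEG source timecourses.
import numpy as np

fs = 600.0                                   # assumed sampling rate (Hz)
t = np.arange(0.220, 0.770, 1.0 / fs)        # late interval, 220-770 ms
src_a = np.sin(2 * np.pi * 6.2 * t)                          # toy "V1" timecourse
src_b = np.roll(src_a, 12) + 0.3 * np.random.randn(t.size)   # delayed, noisy copy

a = (src_a - src_a.mean()) / src_a.std()
b = (src_b - src_b.mean()) / src_b.std()
xcorr = np.correlate(a, b, mode="full") / a.size   # normalized cross-correlation
lags = np.arange(-a.size + 1, a.size) / fs         # lag axis in seconds
peak = lags[np.argmax(xcorr)]
print(f"peak correlation {xcorr.max():.2f} at lag {peak * 1e3:.1f} ms")
```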
Bressler, David W.; Fortenbaugh, Francesca C.; Robertson, Lynn C.; Silver, Michael A.
2013-01-01
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. PMID:23562388
Heim, Stefan; Weidner, Ralph; von Overheidt, Ann-Christin; Tholen, Nicole; Grande, Marion; Amunts, Katrin
2014-03-01
Phonological and visual dysfunctions may result in reading deficits like those encountered in developmental dyslexia. Here, we use a novel approach to induce similar reading difficulties in normal readers in an event-related fMRI study, thus systematically investigating which brain regions support orthographic-phonological (e.g., grapheme-to-phoneme conversion, GPC) vs. visual processing pathways. Based upon a previous behavioural study (Tholen et al. 2011), the retrieval of phonemes from graphemes was manipulated by varying the identifiability of letters through familiar vs. unfamiliar letter shapes. Visual word and letter processing was impeded by presenting the letters of a word in a moving, non-stationary manner. fMRI revealed that the visual condition activated cytoarchitectonically defined area hOC5 in the magnocellular pathway and area 7A in the right mesial parietal cortex. In contrast, the grapheme manipulation revealed different effects localised predominantly in bilateral inferior frontal gyrus (left cytoarchitectonic area 44; right area 45) and inferior parietal lobule (including areas PF/PFm), regions that have been demonstrated to show abnormal activation in dyslexic as compared to normal readers. This pattern of activation bears close resemblance to recent findings in dyslexic samples, both behaviourally and with respect to the neurofunctional activation patterns. The novel paradigm may thus prove useful in future studies to understand reading problems related to distinct pathways, potentially providing a link also to the understanding of real reading impairments in dyslexia.
Temporal characteristics of audiovisual information processing.
Fuhrmann Alpert, Galit; Hein, Grit; Tsai, Nancy; Naumer, Marcus J; Knight, Robert T
2008-05-14
In complex natural environments, auditory and visual information often have to be processed simultaneously. Previous functional magnetic resonance imaging (fMRI) studies focused on the spatial localization of brain areas involved in audiovisual (AV) information processing, but the temporal characteristics of AV information flow in these regions remained unclear. In this study, we used fMRI and a novel information-theoretic approach to study the flow of AV sensory information. Subjects passively perceived sounds and images of objects presented either alone or simultaneously. Applying the measure of mutual information, we computed for each voxel the latency at which the blood oxygenation level-dependent signal had the highest information content about the preceding stimulus. The results indicate that, after AV stimulation, the earliest informative activity occurs in right Heschl's gyrus, left primary visual cortex, and the posterior portion of the superior temporal gyrus, which is known as a region involved in object-related AV integration. Informative activity in the anterior portion of superior temporal gyrus, middle temporal gyrus, right occipital cortex, and inferior frontal cortex was found at a later latency. Moreover, AV presentation resulted in shorter latencies in multiple cortical areas compared with isolated auditory or visual presentation. The results provide evidence for bottom-up processing from primary sensory areas into higher association areas during AV integration in humans and suggest that AV presentation shortens processing time in early sensory cortices.
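The following toy sketch illustrates the general idea of finding the most informative post-stimulus latency with a simple histogram-based mutual-information estimate; it is not the authors' estimator, and the trial counts, lag grid, and binning are assumptions.

```python
# Hedged sketch: latency at which one voxel's response is most informative about the stimulus.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)
n_trials, n_lags = 200, 8                       # lags on an assumed TR grid
stim = rng.integers(0, 3, n_trials)             # 0 = auditory, 1 = visual, 2 = audiovisual
bold = rng.standard_normal((n_trials, n_lags))  # toy single-voxel responses per lag
bold[:, 3] += stim                              # toy effect peaking at lag 3

mi_per_lag = []
for lag in range(n_lags):
    edges = np.quantile(bold[:, lag], [0.25, 0.5, 0.75])
    binned = np.digitize(bold[:, lag], edges)   # discretize the BOLD values
    mi_per_lag.append(mutual_info_score(stim, binned))
print("most informative lag (TRs):", int(np.argmax(mi_per_lag)))
```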
A normalization model suggests that attention changes the weighting of inputs between visual areas.
Ruff, Douglas A; Cohen, Marlene R
2017-05-16
Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1-MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations.
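For orientation, a schematic form of divisive normalization for an MT neuron driven by V1 inputs is given below. This is an illustrative textbook-style expression, not the exact model fitted in the study; treating attention as a change in the input weights w_i (rather than a gain on the output) is the reading suggested by the abstract's conclusion.

```latex
% Schematic divisive normalization (illustrative form only):
% r_i^{V1} are V1 input rates, w_i attention-dependent input weights,
% sigma a semi-saturation constant.
R_{\mathrm{MT}} \;=\; \frac{\sum_i w_i \, r_i^{\mathrm{V1}}}{\sigma + \sum_j r_j^{\mathrm{V1}}}
```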
Gopalakrishnan, R; Burgess, R C; Plow, E B; Floden, D P; Machado, A G
2015-09-24
Pain anticipation plays a critical role in pain chronification and results in disability due to pain avoidance. It is important to understand how different sensory modalities (auditory, visual or tactile) may influence pain anticipation, as different strategies could be applied to mitigate anticipatory phenomena and chronification. In this study, using a countdown paradigm, we used magnetoencephalography to evaluate the neural networks associated with pain anticipation elicited by different sensory modalities in normal volunteers. When subjects encountered well-established cues that signaled pain, the visual and somatosensory cortices engaged the pain neuromatrix areas early during the countdown process, whereas the auditory cortex displayed delayed processing. In addition, during pain anticipation, the visual cortex displayed independent processing capabilities after learning the contextual meaning of cues from associative and limbic areas. Interestingly, cross-modal activation was also evident and strong when visual and tactile cues signaled upcoming pain. Dorsolateral prefrontal cortex and mid-cingulate cortex showed significant activity during pain anticipation regardless of modality. Our results show that pain anticipation is processed with great time efficiency by a highly specialized and hierarchical network. The highest degree of higher-order processing is modulated by context (pain) rather than content (modality) and rests within the associative limbic regions, corroborating their intrinsic role in chronification. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Schmidt, K; Forkmann, K; Sinke, C; Gratz, M; Bitz, A; Bingel, U
2016-07-01
Compared to peripheral pain, trigeminal pain elicits higher levels of fear, which is assumed to enhance the interruptive effects of pain on concomitant cognitive processes. In this fMRI study we examined the behavioral and neural effects of trigeminal (forehead) and peripheral (hand) pain on visual processing and memory encoding. Cerebral activity was measured in 23 healthy subjects performing a visual categorization task that was immediately followed by a surprise recognition task. During the categorization task subjects received concomitant noxious electrical stimulation on the forehead or hand. Our data show that fear ratings were significantly higher for trigeminal pain. Categorization and recognition performance did not differ between pictures that were presented with trigeminal and peripheral pain. However, object categorization in the presence of trigeminal pain was associated with stronger activity in task-relevant visual areas (lateral occipital complex, LOC), memory encoding areas (hippocampus and parahippocampus) and areas implicated in emotional processing (amygdala) compared to peripheral pain. Further, individual differences in neural activation between the trigeminal and the peripheral condition were positively related to differences in fear ratings between both conditions. Functional connectivity between amygdala and LOC was increased during trigeminal compared to peripheral painful stimulation. Fear-driven compensatory resource activation seems to be enhanced for trigeminal stimuli, presumably due to their exceptional biological relevance. Copyright © 2016 Elsevier Inc. All rights reserved.
Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.
Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming
2018-05-01
The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but they are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, a recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN in all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.
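A minimal sketch of the general architecture idea follows: per-frame convolutional features feeding a recurrent layer so that spatial representations can be accumulated over time. The layer sizes, readout, and use of a GRU are assumptions for illustration; this is not the authors' network.

```python
# Hedged sketch (PyTorch assumed available): a CNN feature extractor with a recurrent layer.
import torch
import torch.nn as nn

class RecurrentCNN(nn.Module):
    def __init__(self, n_outputs=8):
        super().__init__()
        self.features = nn.Sequential(                  # spatial feature extractor
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.readout = nn.Linear(64, n_outputs)         # e.g., predicted ROI responses

    def forward(self, video):                           # video: (batch, time, 3, H, W)
        b, t = video.shape[:2]
        frames = video.flatten(0, 1)                    # (batch*time, 3, H, W)
        feats = self.features(frames).flatten(1)        # (batch*time, 32)
        states, _ = self.rnn(feats.view(b, t, -1))      # memory accumulates across frames
        return self.readout(states)                     # (batch, time, n_outputs)

model = RecurrentCNN()
dummy = torch.randn(2, 10, 3, 64, 64)                   # 2 clips, 10 frames each
print(model(dummy).shape)                               # torch.Size([2, 10, 8])
```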
Brain signal complexity rises with repetition suppression in visual learning.
Lafontaine, Marc Philippe; Lacourse, Karine; Lina, Jean-Marc; McIntosh, Anthony R; Gosselin, Frédéric; Théoret, Hugo; Lippé, Sarah
2016-06-21
Neuronal activity associated with visual processing of an unfamiliar face gradually diminishes when it is viewed repeatedly. This process, known as repetition suppression (RS), is involved in the acquisition of familiarity. Current models suggest that RS results from interactions between visual information processing areas located in the occipito-temporal cortex and higher order areas, such as the dorsolateral prefrontal cortex (DLPFC). Brain signal complexity, which reflects information dynamics of cortical networks, has been shown to increase as unfamiliar faces become familiar. However, the complementarity of RS and increases in brain signal complexity has yet to be demonstrated within the same measurements. We hypothesized that RS and an increase in brain signal complexity occur simultaneously during learning of unfamiliar faces. Further, we expected alteration of DLPFC function by transcranial direct current stimulation (tDCS) to modulate RS and brain signal complexity over the occipito-temporal cortex. Participants underwent three tDCS conditions in random order: right anodal/left cathodal, right cathodal/left anodal and sham. Following tDCS, participants learned unfamiliar faces, while an electroencephalogram (EEG) was recorded. Results revealed RS over occipito-temporal electrode sites during learning, reflected by a decrease in signal energy, a measure of amplitude. Simultaneously, as signal energy decreased, brain signal complexity, as estimated with multiscale entropy (MSE), increased. In addition, prefrontal tDCS modulated brain signal complexity over the right occipito-temporal cortex during the first presentation of faces. These results suggest that although RS may reflect a brain mechanism essential to learning, complementary processes, reflected by increases in brain signal complexity, may be instrumental in the acquisition of novel visual information. Such processes likely involve long-range coordinated activity between prefrontal and lower order visual areas. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
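For readers unfamiliar with the complexity measure, the sketch below computes multiscale entropy as sample entropy of coarse-grained copies of a toy single-channel signal. The parameters m and r follow common defaults and are assumptions; this simplified estimator is for illustration only, not the authors' implementation.

```python
# Hedged sketch: multiscale entropy (sample entropy across coarse-graining scales).
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()                                   # tolerance scaled to signal SD
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist < tol) - 1             # exclude self-match
        return count
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def multiscale_entropy(x, max_scale=5):
    values = []
    for scale in range(1, max_scale + 1):
        n = len(x) // scale
        coarse = np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)
        values.append(sample_entropy(coarse))
    return values

eeg = np.random.randn(2000)                             # toy single-channel signal
print([round(v, 2) for v in multiscale_entropy(eeg)])
```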
Barton, Brian; Brewer, Alyssa A.
2017-01-01
The cortical hierarchy of the human visual system has been shown to be organized around retinal spatial coordinates throughout much of low- and mid-level visual processing. These regions contain visual field maps (VFMs), each of which follows the organization of the retina, with neighboring aspects of the visual field processed in neighboring cortical locations. On a larger, macrostructural scale, groups of such sensory cortical field maps (CFMs) in both the visual and auditory systems are organized into roughly circular cloverleaf clusters. CFMs within clusters tend to share properties such as receptive field distribution, cortical magnification, and processing specialization. Here we use fMRI and population receptive field (pRF) modeling to investigate the extent of VFM and cluster organization with an examination of higher-level visual processing in temporal cortex and compare these measurements to mid-level visual processing in dorsal occipital cortex. In human temporal cortex, the posterior superior temporal sulcus (pSTS) has been implicated in various neuroimaging studies as subserving higher-order vision, including face processing, biological motion perception, and multimodal audiovisual integration. In human dorsal occipital cortex, the transverse occipital sulcus (TOS) contains the V3A/B cluster, which comprises two VFMs subserving mid-level motion perception and visuospatial attention. For the first time, we present the organization of VFMs in pSTS in a cloverleaf cluster. This pSTS cluster contains four VFMs bilaterally: pSTS-1:4. We characterize these pSTS VFMs as relatively small at ∼125 mm2 with relatively large pRF sizes of ∼2–8° of visual angle across the central 10° of the visual field. V3A and V3B are ∼230 mm2 in surface area, with pRF sizes here similarly ∼1–8° of visual angle across the same region. In addition, cortical magnification measurements show that a larger proportion of the pSTS VFM surface area is devoted to the peripheral visual field than in the V3A/B cluster. Reliability measurements of VFMs in pSTS and V3A/B reveal that these cloverleaf clusters are remarkably consistent and functionally differentiable. Our findings add to the growing number of measurements of widespread sensory CFMs organized into cloverleaf clusters, indicating that CFMs and cloverleaf clusters may both be fundamental organizing principles in cortical sensory processing. PMID:28293182
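To make the pRF concept concrete, the following sketch models a voxel's response to a binary stimulus aperture as the overlap with a 2D Gaussian receptive field. The field extent, bar stimulus, and normalization are illustrative assumptions, not the authors' fitting procedure.

```python
# Hedged sketch: a 2D Gaussian population receptive field (pRF) forward model.
import numpy as np

def prf_response(aperture, x0, y0, sigma, extent=10.0):
    """aperture: square binary array covering +/- extent degrees of visual field."""
    n = aperture.shape[0]
    coords = np.linspace(-extent, extent, n)
    xx, yy = np.meshgrid(coords, coords)
    prf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    return np.sum(aperture * prf) / np.sum(prf)      # overlap, normalized to pRF volume

# Toy example: a vertical bar in the right visual field probing a pRF centered at (3, 0) deg.
aperture = np.zeros((101, 101))
aperture[:, 60:70] = 1.0
for sigma in (1.0, 4.0):                              # small vs. large pRF size (deg)
    print(f"sigma={sigma}: response {prf_response(aperture, 3.0, 0.0, sigma):.2f}")
```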
Visualization of the tire-soil interaction area by means of ObjectARX programming interface
NASA Astrophysics Data System (ADS)
Mueller, W.; Gruszczyński, M.; Raba, B.; Lewicki, A.; Przybył, K.; Zaborowicz, M.; Koszela, K.; Boniecki, P.
2014-04-01
The process of data visualization, which is important for data analysis, becomes problematic when large data sets generated by computer simulations must be handled. This problem concerns, among others, the models that describe the geometry of tire-soil interaction. For the graphical representation of this area and the implementation of various geometric calculations, the authors have developed a plug-in application for AutoCAD based on current technologies, including ObjectARX and LINQ, using the Visual Studio platform. The selected programming tools offer a wide variety of IT structures that enable data visualization and data analysis and are important, e.g., in model verification.
Warren, Amy L; Donnon, Tyrone L; Wagg, Catherine R; Priest, Heather; Fernandez, Nicole J
2018-01-18
Visual diagnostic reasoning is the cognitive process by which pathologists reach a diagnosis based on visual stimuli (cytologic, histopathologic, or gross imagery). Currently, there is little to no literature examining visual reasoning in veterinary pathology. The objective of the study was to use eye tracking to establish baseline quantitative and qualitative differences between the visual reasoning processes of novice and expert veterinary pathologists viewing cytology specimens. Novice and expert participants were each shown 10 cytology images and asked to formulate a diagnosis while wearing eye-tracking equipment (10 slides) and while concurrently verbalizing their thought processes using the think-aloud protocol (5 slides). Compared to novices, experts demonstrated significantly higher diagnostic accuracy (p<.017), shorter time to diagnosis (p<.017), and a higher percentage of time spent viewing areas of diagnostic interest (p<.017). Experts reported more key diagnostic features in the think-aloud protocol and had more efficient patterns of eye movement. These findings suggest that experts' fast time to diagnosis, efficient eye-movement patterns, and preference for viewing areas of interest support reliance on system 1 (pattern-recognition) reasoning and script-inductive knowledge structures, with system 2 (analytic) reasoning used to verify the diagnosis.
Visual form-processing deficits: a global clinical classification.
Unzueta-Arce, J; García-García, R; Ladera-Fernández, V; Perea-Bartolomé, M V; Mora-Simón, S; Cacho-Gutiérrez, J
2014-10-01
Patients who have difficulties recognising visual form stimuli are usually labelled as having visual agnosia. However, recent studies allow us to identify different clinical manifestations corresponding to discrete diagnostic entities which reflect a variety of deficits along the continuum of cortical visual processing. We reviewed different clinical cases published in the medical literature as well as proposals for classifying deficits in order to provide a global perspective of the subject. Here, we present the main findings on the neuroanatomical basis of visual form processing and discuss the criteria for evaluating processing that may be abnormal. We also include a comprehensive diagram of visual form processing deficits which represents the different clinical cases described in the literature. Lastly, we propose a boosted decision tree to serve as a guide in the process of diagnosing such cases. Although the medical community largely agrees on which cortical areas and neuronal circuits are involved in visual processing, future studies making use of new functional neuroimaging techniques will provide more in-depth information. A well-structured and exhaustive assessment of the different stages of visual processing, designed with a global view of the deficit in mind, will give a better idea of the prognosis and serve as a basis for planning personalised psychostimulation and rehabilitation strategies. Copyright © 2011 Sociedad Española de Neurología. Published by Elsevier Espana. All rights reserved.
Laskowska-Macios, Karolina; Nys, Julie; Hu, Tjing-Tjing; Zapasnik, Monika; Van der Perren, Anke; Kossut, Malgorzata; Burnat, Kalina; Arckens, Lutgarde
2015-08-14
Binocular pattern deprivation from eye opening (early BD) delays the maturation of the primary visual cortex. This delay is more pronounced for the peripheral than the central visual field representation within area 17, particularly between the age of 2 and 4 months [Laskowska-Macios, Cereb Cortex, 2014]. In this study, we probed for related dynamic changes in the cortical proteome. We introduced age, cortical region and BD as principal variables in a 2-D DIGE screen of area 17. In this way we explored whether BD-related protein expression changes between central and peripheral area 17 of 2- and 4-month-old BD (2BD, 4BD) kittens could serve as a valid parameter for identifying brain maturation-related molecular processes. Consistent with the maturation delay, distinct developmental protein expression changes observed for normal kittens were postponed by BD, especially in the peripheral region. These BD-induced proteomic changes suggest a negative regulation of neurite outgrowth, synaptic transmission and clathrin-mediated endocytosis, thereby implicating these processes in normal experience-induced visual cortex maturation. Verification of the expression of proteins from each of the biological processes via Western analysis disclosed that some of the transient proteomic changes correlate with the distinct behavioral outcome in adult life, depending on timing and duration of the BD period [Neuroscience 2013;255:99-109]. Taken together, the plasticity potential to recover from BD, in relation to ensuing restoration of normal visual input, appears to rely on specific protein expression changes and cellular processes induced by the loss of pattern vision in early life.
Weyand, T G; Gafka, A C
2001-01-01
We studied the visuomotor activity of corticotectal (CT) cells in two visual cortical areas [area 17 and the posteromedial lateral suprasylvian cortex (PMLS)] of the cat. The cats were trained in simple oculomotor tasks, and head position was fixed. Most CT cells in both cortical areas gave a vigorous discharge to a small stimulus used to control gaze when it fell within the retinotopically defined visual field. However, the vigor of the visual response did not predict latency to initiate a saccade, saccade velocity, amplitude, or even whether a saccade would be made, minimizing any potential role these cells might have in premotor or attentional processes. Most CT cells in both areas were selective for direction of stimulus motion, and cells in PMLS showed a direction preference favoring motion away from points of central gaze. CT cells did not discharge with eye movements in the dark. During eye movements in the light, many CT cells in area 17 increased their activity. In contrast, cells in PMLS, including CT cells, were generally unresponsive during saccades. Paradoxically, cells in PMLS responded vigorously to stimuli moving at saccadic velocities, indicating that the oculomotor system suppresses visual activity elicited by moving the retina across an illuminated scene. Nearly all CT cells showed oscillatory activity in the frequency range of 20-90 Hz, especially in response to visual stimuli. However, this activity was capricious; strong oscillations in one trial could disappear in the next despite identical stimulus conditions. Although the CT cells in both of these regions share many characteristics, the direction anisotropy and the suppression of activity during eye movements that characterize the neurons in PMLS suggest that these two areas have different roles in facilitating perceptual/motor processes at the level of the superior colliculus.
A massively asynchronous, parallel brain.
Zeki, Semir
2015-05-19
Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously--with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain.
Faro, Alberto; Giordano, Daniela; Spampinato, Concetto
2008-06-01
This paper proposes a traffic monitoring architecture based on a high-speed communication network whose nodes are equipped with fuzzy processors and cellular neural network (CNN) embedded systems. It implements a real-time mobility information system in which visual perceptions reported by people working in the field and video sequences of traffic taken from webcams are jointly processed to evaluate the fundamental traffic parameters for every street of a metropolitan area. This paper presents the whole methodology for data collection and analysis and compares the accuracy and the processing time of the proposed soft computing techniques with other existing algorithms. Moreover, this paper discusses when and why it is recommended to fuse the visual perceptions of the traffic with the automated measurements taken from the webcams to compute the maximum traveling time likely needed to reach any destination in the traffic network.
Pavan, Andrea; Ghin, Filippo; Donato, Rita; Campana, Gianluca; Mather, George
2017-08-15
A long-held view of the visual system is that form and motion are independently analysed. However, there is physiological and psychophysical evidence of early interaction in the processing of form and motion. In this study, we used a combination of Glass patterns (GPs) and repetitive Transcranial Magnetic Stimulation (rTMS) to investigate in human observers the neural mechanisms underlying form-motion integration. GPs consist of randomly distributed dot pairs (dipoles) that induce the percept of an oriented stimulus. GPs can be either static or dynamic. Dynamic GPs have both a form component (i.e., orientation) and a non-directional motion component along the orientation axis. GPs were presented in two temporal intervals and observers were asked to discriminate the temporal interval containing the most coherent GP. rTMS was delivered over early visual areas (V1/V2) and over area V5/MT shortly after the presentation of the GP in each interval. The results showed that rTMS applied over early visual areas affected the perception of static GPs, but the stimulation of area V5/MT did not affect observers' performance. On the other hand, rTMS delivered over either V1/V2 or V5/MT strongly impaired the perception of dynamic GPs. These results suggest that early visual areas are involved in the processing of the spatial structure of GPs, and interfering with the extraction of the global spatial structure also affects the extraction of the motion component, possibly interfering with early form-motion integration. However, visual area V5/MT is likely to be involved only in the processing of the motion component of dynamic GPs. These results suggest that motion and form cues may interact as early as V1/V2. Copyright © 2017 Elsevier Inc. All rights reserved.
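For illustration, the sketch below generates a concentric Glass pattern as dot dipoles whose orientation follows the local tangent, with a coherence level and optional orientation jitter of the kind manipulated in such studies. The geometry, dipole separation, and parameter values are assumptions, not the stimuli used in this experiment.

```python
# Hedged sketch: generating a concentric Glass pattern with coherence and orientation jitter.
import numpy as np

def glass_pattern(n_dipoles=200, coherence=1.0, jitter_sd=0.0,
                  dipole_sep=0.03, rng=np.random.default_rng(0)):
    anchors = rng.uniform(-1, 1, size=(n_dipoles, 2))                # first dot of each dipole
    theta = np.arctan2(anchors[:, 1], anchors[:, 0]) + np.pi / 2     # tangential (concentric) orientation
    noise = rng.uniform(0, 2 * np.pi, n_dipoles)                     # random orientations
    signal = rng.random(n_dipoles) < coherence                       # which dipoles carry signal
    ori = np.where(signal, theta, noise) + rng.normal(0, jitter_sd, n_dipoles)
    partners = anchors + dipole_sep * np.column_stack((np.cos(ori), np.sin(ori)))
    return np.vstack((anchors, partners))                            # (2*n_dipoles, 2) dot positions

dots = glass_pattern(coherence=0.6, jitter_sd=np.deg2rad(10))
print(dots.shape)
```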
Advanced Computer Image Generation Techniques Exploiting Perceptual Characteristics
1981-08-01
the capabilities/limitations of the human visual perceptual processing system and improve the training effectiveness of visual simulation systems... Myron Braunstein of the University of California at Irvine performed all the work in the perceptual area. Mr. Timothy A. Zimmerlin contributed the ... work. Thus, while some areas are related, each is resolved independently in order to focus on the basic perceptual limitation. In addition, the
Dysfunctional visual word form processing in progressive alexia
Rising, Kindle; Stib, Matthew T.; Rapcsak, Steven Z.; Beeson, Pélagie M.
2013-01-01
Progressive alexia is an acquired reading deficit caused by degeneration of brain regions that are essential for written word processing. Functional imaging studies have shown that early processing of the visual word form depends on a hierarchical posterior-to-anterior processing stream in occipito-temporal cortex, whereby successive areas code increasingly larger and more complex perceptual attributes of the letter string. A region located in the left lateral occipito-temporal sulcus and adjacent fusiform gyrus shows maximal selectivity for words and has been dubbed the ‘visual word form area’. We studied two patients with progressive alexia in order to determine whether their reading deficits were associated with structural and/or functional abnormalities in this visual word form system. Voxel-based morphometry showed left-lateralized occipito-temporal atrophy in both patients, very mild in one, but moderate to severe in the other. The two patients, along with 10 control subjects, were scanned with functional magnetic resonance imaging as they viewed rapidly presented words, false font strings, or a fixation crosshair. This paradigm was optimized to reliably map brain regions involved in orthographic processing in individual subjects. All 10 control subjects showed a posterior-to-anterior gradient of selectivity for words, and all 10 showed a functionally defined visual word form area in the left hemisphere that was activated for words relative to false font strings. In contrast, neither of the two patients with progressive alexia showed any evidence for a selectivity gradient or for word-specific activation of the visual word form area. The patient with mild atrophy showed normal responses to both words and false font strings in the posterior part of the visual word form system, but a failure to develop selectivity for words in the more anterior part of the system. In contrast, the patient with moderate to severe atrophy showed minimal activation of any part of the visual word form system for either words or false font strings. Our results suggest that progressive alexia is associated with a dysfunctional visual word form system, with or without substantial cortical atrophy. Furthermore, these findings demonstrate that functional MRI has the potential to reveal the neural bases of cognitive deficits in neurodegenerative patients at very early stages, in some cases before the development of extensive atrophy. PMID:23471694
Liang, Maojin; Chen, Yuebo; Zhao, Fei; Zhang, Junpeng; Liu, Jiahao; Zhang, Xueyuan; Cai, Yuexin; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing
2017-09-01
Although visual processing recruitment of the auditory cortices has been reported previously in prelingually deaf children who have a rapidly developing brain and no auditory processing, the visual processing recruitment of auditory cortices might be different in processing different visual stimuli and may affect cochlear implant (CI) outcomes. Ten prelingually deaf children, 4 to 6 years old, were recruited for the study. Twenty prelingually deaf subjects, 4 to 6 years old with CIs for 1 year, were also recruited; 10 with well-performing CIs, 10 with poorly performing CIs. Ten age and sex-matched normal-hearing children were recruited as controls. Visual ("sound" photo [photograph with imaginative sound] and "nonsound" photo [photograph without imaginative sound]) evoked potentials were measured in all subjects. P1 at Oz and N1 at the bilateral temporal-frontal areas (FC3 and FC4) were compared. N1 amplitudes were strongest in the deaf children, followed by those with poorly performing CIs, controls and those with well-performing CIs. There was no significant difference between controls and those with well-performing CIs. "Sound" photo stimuli evoked a stronger N1 than "nonsound" photo stimuli. Further analysis showed that only at FC4 in deaf subjects and those with poorly performing CIs were the N1 responses to "sound" photo stimuli stronger than those to "nonsound" photo stimuli. No significant difference was found for the FC3 and FC4 areas. No significant difference was found in N1 latencies and P1 amplitudes or latencies. The results indicate enhanced visual recruitment of the auditory cortices in prelingually deaf children. Additionally, the decrement in visual recruitment of auditory cortices was related to good CI outcomes.
Spatiotemporal Visualization of Tsunami Waves Using Kml on Google Earth
NASA Astrophysics Data System (ADS)
Mohammadi, H.; Delavar, M. R.; Sharifi, M. A.; Pirooz, M. D.
2017-09-01
Disaster risk is a function of hazard and vulnerability. Risk is defined as the expected losses, including lives, personal injuries, property damages, and economic disruptions, due to a particular hazard for a given area and time period. Risk assessment is one of the key elements of a natural disaster management strategy as it allows for better disaster mitigation and preparation. It provides input for informed decision making, and increases risk awareness among decision makers and other stakeholders. Virtual globes such as Google Earth can be used as a visualization tool. Proper spatiotemporal graphical representation of the risk concerned significantly reduces the effort needed to visualize its impact and improves the efficiency of the decision-making process to mitigate that impact. The spatiotemporal visualization of tsunami waves for the disaster management process is an attractive topic in the geosciences, assisting the investigation of areas at tsunami risk. In this paper, a method for coupling virtual globes with tsunami wave arrival time models is presented. In this process we show a 2D+time visualization of tsunami wave propagation and inundation, including coastline deformation and the flooded areas. In addition, the worst-case tsunami scenario for Chabahar port derived from tsunami modelling is also presented using KML on Google Earth.
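As a minimal illustration of time-enabled KML, the sketch below writes a single placemark with a TimeSpan so that a modeled wave-front position can be animated on the Google Earth time slider. The coordinates (near Chabahar) and timestamps are hypothetical, and only the Python standard library is used.

```python
# Hedged sketch: writing a time-stamped KML placemark for animation in Google Earth.
placemark = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>Wave front, t + 10 min</name>
      <TimeSpan>
        <begin>2017-09-01T00:10:00Z</begin>
        <end>2017-09-01T00:15:00Z</end>
      </TimeSpan>
      <Point><coordinates>60.6430,25.2919,0</coordinates></Point>
    </Placemark>
  </Document>
</kml>
"""
with open("tsunami_front.kml", "w", encoding="utf-8") as f:
    f.write(placemark)
```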
Woodhead, Zoe Victoria Joan; Wise, Richard James Surtees; Sereno, Marty; Leech, Robert
2011-10-01
Different cortical regions within the ventral occipitotemporal junction have been reported to show preferential responses to particular objects. Thus, it is argued that there is evidence for a left-lateralized visual word form area and a right-lateralized fusiform face area, but the unique specialization of these areas remains controversial. Words are characterized by greater power in the high spatial frequency (SF) range, whereas faces comprise a broader range of high and low frequencies. We investigated how these high-order visual association areas respond to simple sine-wave gratings that varied in SF. Using functional magnetic resonance imaging, we demonstrated lateralization of activity that was concordant with the low-level visual properties of words and faces; the left occipitotemporal cortex was more strongly activated by high than by low SF gratings, whereas the right occipitotemporal cortex responded more to low than to high spatial frequencies. Therefore, the SF of a visual stimulus may bias the lateralization of processing irrespective of its higher order properties.
Atabaki, A; Marciniak, K; Dicke, P W; Karnath, H-O; Thier, P
2014-03-01
Distinguishing a target from distractors during visual search is crucial for goal-directed behaviour. The more distractors that are presented with the target, the larger is the subject's error rate. This observation defines the set-size effect in visual search. Neurons in areas related to attention and eye movements, like the lateral intraparietal area (LIP) and frontal eye field (FEF), diminish their firing rates when the number of distractors increases, in line with the behavioural set-size effect. Furthermore, human imaging studies that have tried to delineate cortical areas modulating their blood oxygenation level-dependent (BOLD) response with set size have yielded contradictory results. In order to test whether BOLD imaging of the rhesus monkey cortex yields results consistent with the electrophysiological findings and, moreover, to clarify whether additional cortical regions beyond the two hitherto implicated are involved in this process, we studied monkeys while performing a covert visual search task. When varying the number of distractors in the search task, we observed a monotonic increase in error rates when search time was kept constant, as expected if the monkeys resorted to a serial search strategy. Visual search consistently evoked robust BOLD activity in the monkey FEF and a region in the intraparietal sulcus in its lateral and middle part, probably involving area LIP. Whereas the BOLD response in the FEF did not depend on set size, the LIP signal increased in parallel with set size. These results demonstrate the virtue of BOLD imaging in monkeys when trying to delineate cortical areas underlying a cognitive process like visual search. However, they also demonstrate the caution needed when inferring neural activity from BOLD activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
John W. Sinton
1979-01-01
The first purpose of this study was to determine the visual quality of New Jersey Pine Barrens forests according to residents of the area. The goal of the study was to determine how to manage Pine Barrens forests to obtain high visual quality within the framework of residents' preferences, available by the Federal Omnibus Parks Acts of 1978 and proposed New...
Anatomy and physiology of the afferent visual system.
Prasad, Sashank; Galetta, Steven L
2011-01-01
The efficient organization of the human afferent visual system meets enormous computational challenges. Once visual information is received by the eye, the signal is relayed by the retina, optic nerve, chiasm, tracts, lateral geniculate nucleus, and optic radiations to the striate cortex and extrastriate association cortices for final visual processing. At each stage, the functional organization of these circuits is derived from their anatomical and structural relationships. In the retina, photoreceptors convert photons of light to an electrochemical signal that is relayed to retinal ganglion cells. Ganglion cell axons course through the optic nerve, and their partial decussation in the chiasm brings together corresponding inputs from each eye. Some inputs follow pathways to mediate pupil light reflexes and circadian rhythms. However, the majority of inputs arrive at the lateral geniculate nucleus, which relays visual information via second-order neurons that course through the optic radiations to arrive in striate cortex. Feedback mechanisms from higher cortical areas shape the neuronal responses in early visual areas, supporting coherent visual perception. Detailed knowledge of the anatomy of the afferent visual system, in combination with skilled examination, allows precise localization of neuropathological processes and guides effective diagnosis and management of neuro-ophthalmic disorders. Copyright © 2011 Elsevier B.V. All rights reserved.
A Task-Dependent Causal Role for Low-Level Visual Processes in Spoken Word Comprehension
ERIC Educational Resources Information Center
Ostarek, Markus; Huettig, Falk
2017-01-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual…
2014-01-01
Background: Neurofibromatosis type 1 (NF1) affects several areas of cognitive function including visual processing and attention. We investigated the neural mechanisms underlying the visual deficits of children and adolescents with NF1 by studying visual evoked potentials (VEPs) and brain oscillations during visual stimulation and rest periods. Methods: Electroencephalogram/event-related potential (EEG/ERP) responses were measured during visual processing (NF1 n = 17; controls n = 19) and idle periods with eyes closed and eyes open (NF1 n = 12; controls n = 14). Visual stimulation was chosen to bias activation of the three detection mechanisms: achromatic, red-green and blue-yellow. Results: We found significant differences between the groups for late chromatic VEPs and a specific enhancement of the parieto-occipital alpha amplitude both during visual stimulation and idle periods. Alpha modulation and the negative influence of alpha oscillations on visual performance were found in both groups. Conclusions: Our findings suggest abnormal later stages of visual processing and enhanced amplitude of alpha oscillations, supporting the existence of deficits in basic sensory processing in NF1. Given the link between alpha oscillations, visual perception and attention, these results indicate a neural mechanism that might underlie the visual sensitivity deficits and increased lapses of attention observed in individuals with NF1. PMID:24559228
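To illustrate the alpha-amplitude measure in general terms, the sketch below estimates 8-12 Hz amplitude from a toy EEG segment via a Welch power spectrum. The sampling rate, channel, and band edges are assumptions and the estimator is not necessarily the one used in the study.

```python
# Hedged sketch: parieto-occipital alpha-band amplitude from a Welch power spectrum.
import numpy as np
from scipy.signal import welch

fs = 250.0                                            # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # toy "PO" channel

freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
alpha = (freqs >= 8) & (freqs <= 12)
alpha_amplitude = np.sqrt(np.trapz(psd[alpha], freqs[alpha]))       # RMS amplitude in band
print(f"alpha-band amplitude: {alpha_amplitude:.2f} (arbitrary units)")
```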
A theta rhythm in macaque visual cortex and its attentional modulation
Spyropoulos, Georgios; Fries, Pascal
2018-01-01
Theta rhythms govern rodent sniffing and whisking, and human language processing. Human psychophysics suggests a role for theta also in visual attention. However, little is known about theta in visual areas and its attentional modulation. We used electrocorticography (ECoG) to record local field potentials (LFPs) simultaneously from areas V1, V2, V4, and TEO of two macaque monkeys performing a selective visual attention task. We found a ≈4-Hz theta rhythm within both the V1–V2 and the V4–TEO region, and theta synchronization between them, with a predominantly feedforward directed influence. ECoG coverage of large parts of these regions revealed a surprising spatial correspondence between theta and visually induced gamma. Furthermore, gamma power was modulated with theta phase. Selective attention to the respective visual stimulus strongly reduced these theta-rhythmic processes, leading to an unusually strong attention effect for V1. Microsaccades (MSs) were partly locked to theta. However, neuronal theta rhythms tended to be even more pronounced for epochs devoid of MSs. Thus, we find an MS-independent theta rhythm specific to visually driven parts of V1–V2, which rhythmically modulates local gamma and entrains V4–TEO, and which is strongly reduced by attention. We propose that the less theta-rhythmic and thereby more continuous processing of the attended stimulus serves the exploitation of this behaviorally most relevant information. The theta-rhythmic and thereby intermittent processing of the unattended stimulus likely reflects the ecologically important exploration of less relevant sources of information. PMID:29848632
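The statement that gamma power was modulated with theta phase can be illustrated with a simple phase-amplitude coupling sketch: band-pass filter a toy LFP into theta and gamma bands, take Hilbert envelopes, and bin gamma amplitude by theta phase. Filter settings and the synthetic signal are assumptions, not the authors' analysis.

```python
# Hedged sketch: theta-phase binning of gamma amplitude (phase-amplitude coupling).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 20, 1 / fs)
theta = np.sin(2 * np.pi * 4 * t)
lfp = theta + (1 + 0.5 * theta) * 0.2 * np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.randn(t.size)

def bandpass(x, lo, hi):
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(lfp, 3, 5)))        # theta phase
amp = np.abs(hilbert(bandpass(lfp, 50, 70)))          # gamma amplitude envelope
bins = np.linspace(-np.pi, np.pi, 13)
idx = np.digitize(phase, bins) - 1
mean_amp = [amp[idx == k].mean() for k in range(12)]  # mean gamma amplitude per theta-phase bin
print(np.round(mean_amp, 3))
```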
Khalil, Reem; Levitt, Jonathan B
2013-09-01
A critical question in brain development is whether different brain circuits mature concurrently or with different timescales. To characterize the anatomical and functional development of different visual cortical areas, one must be able to distinguish these areas. Here, we show that zinc histochemistry, which reveals a subset of glutamatergic processes, can be used to reliably distinguish visual areas in juvenile and adult ferret cerebral cortex, and that the postnatal decline in levels of synaptic zinc follows a broadly similar developmental trajectory in multiple areas of ferret visual cortex. Zinc staining in all areas examined (17, 18, 19, 21, and Suprasylvian) is greater in the 5-week-old than in the adult. Furthermore, there is less laminar variation in zinc staining in the 5-week-old visual cortex than in the adult. Despite differences in staining intensity, areal boundaries can be discerned in the juvenile as in the adult. By 6 weeks of age, we observe a significant decline in visual cortical synaptic zinc; this decline was most pronounced in layer IV of areas 17 and 18, with much less change in higher-order extrastriate areas during the important period in visual cortical development following eye opening. By 10 weeks of age, the laminar pattern of zinc staining in all visual areas is essentially adultlike. The decline in synaptic zinc in the supra- and infragranular layers in all areas proceeds at the same rate, though the decline in layer IV does not. These results suggest that the timecourse of synaptic zinc decline is lamina specific, and further confirm and extend the notion that at least some aspects of cortical maturation follow a similar developmental timecourse in multiple areas. The postnatal decline in synaptic zinc we observe during the second postnatal month begins after eye opening, consistent with evidence that synaptic zinc is regulated by sensory experience.
fMRI evidence for areas that process surface gloss in the human visual cortex
Sun, Hua-Chun; Ban, Hiroshi; Di Luca, Massimiliano; Welchman, Andrew E.
2015-01-01
Surface gloss is an important cue to the material properties of objects. Recent progress in the study of the macaque brain has increased our understanding of the areas involved in processing information about gloss; however, the homologies with the human brain are not yet fully understood. Here we used human functional magnetic resonance imaging (fMRI) measurements to localize brain areas preferentially responding to glossy objects. We measured cortical activity for thirty-two rendered three-dimensional objects that had either Lambertian or specular surface properties. To control for differences in image structure, we overlaid a grid on the images and scrambled its cells. We found activations related to gloss in the posterior fusiform sulcus (pFs) and in area V3B/KO. Subsequent analysis with Granger causality mapping indicated that V3B/KO processes gloss information differently than pFs. Our results identify a small network of mid-level visual areas whose activity may be important in supporting the perception of surface gloss. PMID:25490434
Serial grouping of 2D-image regions with object-based attention in humans
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-01-01
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188
Saunders, Gabrielle H; Echt, Katharina V
2012-01-01
Combat exposures to blast can result in both peripheral damage to the ears and eyes and central damage to the auditory and visual processing areas in the brain. The functional effects of the latter include visual, auditory, and cognitive processing difficulties that manifest as deficits in attention, memory, and problem solving--symptoms similar to those seen in individuals with visual and auditory processing disorders. Coexisting damage to the auditory and visual system is referred to as dual sensory impairment (DSI). The number of Operation Iraqi Freedom/Operation Enduring Freedom Veterans with DSI is vast; yet currently no established models or guidelines exist for assessment, rehabilitation, or service-delivery practice. In this article, we review the current state of knowledge regarding blast exposure and DSI and outline the many unknowns in this area. Further, we propose a model for clinical assessment and rehabilitation of blast-related DSI that includes development of a coordinated team-based approach to target activity limitations and participation restrictions in order to enhance reintegration, recovery, and quality of life.
The impact of recreational MDMA 'ecstasy' use on global form processing.
White, Claire; Edwards, Mark; Brown, John; Bell, Jason
2014-11-01
The ability to integrate local orientation information into a global form percept was investigated in long-term ecstasy users. Evidence suggests that ecstasy disrupts the serotonin system, with the visual areas of the brain being particularly susceptible. Previous research has found altered orientation processing in the primary visual area (V1) of users, thought to be due to disrupted serotonin-mediated lateral inhibition. The current study aimed to investigate whether orientation deficits extend to higher visual areas involved in global form processing. Forty-five participants completed a psychophysical (Glass pattern) study allowing an investigation into the mechanisms underlying global form processing and sensitivity to changes in the offset of the stimuli (jitter). A subgroup of polydrug-ecstasy users (n=6) with high ecstasy use had significantly higher thresholds for the detection of Glass patterns than controls (n=21, p=0.039) after Bonferroni correction. There was also a significant interaction between jitter level and drug-group, with polydrug-ecstasy users showing reduced sensitivity to alterations in jitter level (p=0.003). These results extend previous research, suggesting disrupted global form processing and reduced sensitivity to orientation jitter with ecstasy use. Further research is needed to investigate this finding in a larger sample of heavy ecstasy users and to differentiate the effects of other drugs. © The Author(s) 2014.
Delle Monache, Sergio; Lacquaniti, Francesco; Bosco, Gianfranco
2017-09-01
The ability to catch objects when transiently occluded from view suggests their motion can be extrapolated. Intraparietal cortex (IPS) plays a major role in this process along with other brain structures, depending on the task. For example, interception of objects under Earth's gravity effects may depend on time-to-contact predictions derived from integration of visual signals processed by hMT/V5+ with a priori knowledge of gravity residing in the temporoparietal junction (TPJ). To investigate this issue further, we disrupted TPJ, hMT/V5+, and IPS activities with transcranial magnetic stimulation (TMS) while subjects intercepted computer-simulated projectile trajectories perturbed randomly with either hypo- or hypergravity effects. In experiment 1, trajectories were occluded either 750 or 1,250 ms before landing. Three subject groups underwent triple-pulse TMS (tpTMS, 3 pulses at 10 Hz) on one target area (TPJ | hMT/V5+ | IPS) and on the vertex (control site), timed at either trajectory perturbation or occlusion. In experiment 2, trajectories were entirely visible and participants received tpTMS on TPJ and hMT/V5+ with the same timing as in experiment 1. tpTMS of TPJ, hMT/V5+, and IPS affected interceptive timing differently. TPJ stimulation preferentially affected responses to 1-g motion, hMT/V5+ all response types, and IPS stimulation induced opposite effects on 0-g and 2-g responses, being ineffective on 1-g responses. Only IPS stimulation was effective when applied after target disappearance, implying this area might elaborate memory representations of occluded target motion. Results are compatible with the idea that IPS, TPJ, and hMT/V5+ contribute to distinct aspects of visual motion extrapolation, perhaps through parallel processing. NEW & NOTEWORTHY Visual extrapolation represents a potential neural solution to afford motor interactions with the environment in the face of missing information. We investigated relative contributions by temporoparietal junction (TPJ), hMT/V5+, and intraparietal cortex (IPS), cortical areas potentially involved in these processes. Parallel organization of visual extrapolation processes emerged with respect to the target's motion causal nature: TPJ was primarily involved for visual motion congruent with gravity effects, IPS for arbitrary visual motion, whereas hMT/V5+ contributed at earlier processing stages. Copyright © 2017 the American Physiological Society.
Bogousslavsky, J; Miklossy, J; Deruaz, J P; Assal, G; Regli, F
1987-01-01
A macular-sparing superior altitudinal hemianopia with no visuo-psychic disturbance, except impaired visual learning, was associated with bilateral ischaemic necrosis of the lingual gyrus and only partial involvement of the fusiform gyrus on the left side. It is suggested that bilateral destruction of the lingual gyrus alone is not sufficient to affect complex visual processing. The fusiform gyrus probably has a critical role in colour integration, visuo-spatial processing, facial recognition and corresponding visual imagery. Involvement of the occipitotemporal projection system deep to the lingual gyri probably explained visual memory dysfunction, by a visuo-limbic disconnection. Impaired verbal memory may have been due to posterior involvement of the parahippocampal gyrus and underlying white matter, which may have disconnected the intact speech areas from the left medial temporal structures. PMID:3585386
Nonretinotopic visual processing in the brain.
Melcher, David; Morrone, Maria Concetta
2015-01-01
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
Touch to see: neuropsychological evidence of a sensory mirror system for touch.
Bolognini, Nadia; Olgiati, Elena; Xaiz, Annalisa; Posteraro, Lucio; Ferraro, Francesco; Maravita, Angelo
2012-09-01
The observation of touch can be grounded in the activation of brain areas underpinning direct tactile experience, namely the somatosensory cortices. What is the behavioral impact of such a mirror sensory activity on visual perception? To address this issue, we investigated the causal interplay between observed and felt touch in right brain-damaged patients, as a function of their underlying damaged visual and/or tactile modalities. Patients and healthy controls underwent a detection task, comprising visual stimuli depicting touches or without a tactile component. Touch and No-touch stimuli were presented in egocentric or allocentric perspectives. Seeing touches, regardless of the viewing perspective, differently affects visual perception depending on which sensory modality is damaged: In patients with a selective visual deficit, but without any tactile defect, the sight of touch improves the visual impairment; this effect is associated with a lesion to the supramarginal gyrus. In patients with a tactile deficit, but intact visual perception, the sight of touch disrupts visual processing, inducing a visual extinction-like phenomenon. This disruptive effect is associated with damage to the postcentral gyrus. Hence, damage to the somatosensory system can lead to dysfunctional visual processing, and intact somatosensory processing can aid visual perception.
D’Angiulli, Amedeo; Griffiths, Gordon; Marmolejo-Ramos, Fernando
2015-01-01
The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array (a target plus three distractors; part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-related potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. ERP time-course during both parts of the task showed that early activity (i.e., <300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e., 300–699 ms) and late (i.e., 700–1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly in centro-parietal areas for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, the post-auditory visualization involved right-hemispheric activity following a postero-anterior pathway sequence: occipital, parietal, and temporal areas; conversely, matching visualization involved left-hemispheric activity following an antero-posterior pathway sequence: frontal, temporal, parietal, and occipital areas. These results suggest that, for concrete and abstract words alike, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodying representations. PMID:26175697
Multimission image processing and science data visualization
NASA Technical Reports Server (NTRS)
Green, William B.
1993-01-01
The Operational Science Analysis (OSA) Functional area supports science instrument data display, analysis, visualization and photo processing in support of flight operations of planetary spacecraft managed by the Jet Propulsion Laboratory (JPL). This paper describes the data products generated by the OSA functional area, and the current computer system used to generate these data products. The objectives of a system upgrade now in progress are described. The design approach to development of the new system is reviewed, including the use of the Unix operating system and X Window display standards to provide platform independence, portability, and modularity within the new system. The new system should provide a modular and scalable capability supporting a variety of future missions at JPL.
Age-related macular degeneration changes the processing of visual scenes in the brain.
Ramanoël, Stephen; Chokron, Sylvie; Hera, Ruxandra; Kauffmann, Louise; Chiquet, Christophe; Krainik, Alexandre; Peyrin, Carole
2018-01-01
In age-related macular degeneration (AMD), the processing of fine details in a visual scene, based on a high spatial frequency processing, is impaired, while the processing of global shapes, based on a low spatial frequency processing, is relatively well preserved. The present fMRI study aimed to investigate the residual abilities and functional brain changes of spatial frequency processing in visual scenes in AMD patients. AMD patients and normally sighted elderly participants performed a categorization task using large black and white photographs of scenes (indoors vs. outdoors) filtered in low and high spatial frequencies, and nonfiltered. The study also explored the effect of luminance contrast on the processing of high spatial frequencies. The contrast across scenes was either unmodified or equalized using a root-mean-square contrast normalization in order to increase contrast in high-pass filtered scenes. Performance was lower for high-pass filtered scenes than for low-pass and nonfiltered scenes, for both AMD patients and controls. The deficit for processing high spatial frequencies was more pronounced in AMD patients than in controls and was associated with lower activity for patients than controls not only in the occipital areas dedicated to central and peripheral visual fields but also in a distant cerebral region specialized for scene perception, the parahippocampal place area. Increasing the contrast improved the processing of high spatial frequency content and spurred activation of the occipital cortex for AMD patients. These findings may lead to new perspectives for rehabilitation procedures for AMD patients.
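The root-mean-square contrast equalization mentioned above can be sketched as follows; the target contrast, mean luminance, and clipping step are assumptions rather than the study's exact procedure.

```python
# Minimal sketch of root-mean-square (RMS) contrast normalization, one common
# way to equalize contrast across filtered scenes (values are illustrative).
import numpy as np

def rms_normalize(image, target_rms=0.2, mean_lum=0.5):
    """Rescale a grayscale image (values in [0, 1]) to a fixed RMS contrast."""
    img = image.astype(float)
    img = img - img.mean()                       # zero-mean luminance deviations
    current_rms = img.std()
    if current_rms > 0:
        img = img * (target_rms / current_rms)   # set the RMS contrast
    return np.clip(img + mean_lum, 0.0, 1.0)     # restore mean luminance, clip to range

scene = np.random.default_rng(1).random((256, 256))   # stand-in for a filtered scene
normalized = rms_normalize(scene)
print(normalized.std())   # ~0.2 (slightly less if clipping occurred)
```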
Top-down alpha oscillatory network interactions during visuospatial attention orienting.
Doesburg, Sam M; Bedo, Nicolas; Ward, Lawrence M
2016-05-15
Neuroimaging and lesion studies indicate that visual attention is controlled by a distributed network of brain areas. The covert control of visuospatial attention has also been associated with retinotopic modulation of alpha-band oscillations within early visual cortex, which are thought to underlie inhibition of ignored areas of visual space. The relation between distributed networks mediating attention control and more focal oscillatory mechanisms, however, remains unclear. The present study evaluated the hypothesis that alpha-band, directed, network interactions within the attention control network are systematically modulated by the locus of visuospatial attention. We localized brain areas involved in visuospatial attention orienting using magnetoencephalographic (MEG) imaging and investigated alpha-band Granger-causal interactions among activated regions using narrow-band transfer entropy. The deployment of attention to one side of visual space was indexed by lateralization of alpha power changes between about 400ms and 700ms post-cue onset. The changes in alpha power were associated, in the same time period, with lateralization of anterior-to-posterior information flow in the alpha-band from various brain areas involved in attention control, including the anterior cingulate cortex, left middle and inferior frontal gyri, left superior temporal gyrus, and right insula, and inferior parietal lobule, to early visual areas. We interpreted these results to indicate that distributed network interactions mediated by alpha oscillations exert top-down influences on early visual cortex to modulate inhibition of processing for ignored areas of visual space. Copyright © 2016. Published by Elsevier Inc.
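A simplified sketch of the directed-interaction measure named here (transfer entropy) is given below for two discretized signals with a one-sample history; the bin count, lag, and plug-in estimator are assumptions, and the study's narrow-band MEG analysis is considerably more elaborate.

```python
# Simplified sketch of lag-1 transfer entropy between two discretized signals.
import numpy as np

def transfer_entropy(source, target, n_bins=8):
    """TE(source -> target) with a one-sample history, in bits."""
    s = np.digitize(source, np.histogram_bin_edges(source, n_bins)[1:-1])
    x = np.digitize(target, np.histogram_bin_edges(target, n_bins)[1:-1])
    x_next, x_past, s_past = x[1:], x[:-1], s[:-1]

    def joint_prob(*arrays):
        counts = {}
        for key in zip(*arrays):
            counts[key] = counts.get(key, 0) + 1
        n = len(arrays[0])
        return {k: v / n for k, v in counts.items()}

    p_xxs = joint_prob(x_next, x_past, s_past)
    p_xs = joint_prob(x_past, s_past)
    p_xx = joint_prob(x_next, x_past)
    p_x = joint_prob(x_past)

    te = 0.0
    for (xn, xp, sp), p in p_xxs.items():
        # p(x_next | x_past, s_past) versus p(x_next | x_past)
        te += p * np.log2((p / p_xs[(xp, sp)]) / (p_xx[(xn, xp)] / p_x[(xp,)]))
    return te

rng = np.random.default_rng(2)
driver = rng.standard_normal(5000)
follower = np.roll(driver, 1) + 0.5 * rng.standard_normal(5000)  # driver leads by one sample
print(transfer_entropy(driver, follower), transfer_entropy(follower, driver))
```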
The neural organization of perception in chess experts.
Krawczyk, Daniel C; Boggan, Amy L; McClelland, M Michelle; Bartlett, James C
2011-07-20
The human visual system responds to expertise, and it has been suggested that regions that process faces also process other objects of expertise including chess boards by experts. We tested whether chess and face processing overlap in brain activity using fMRI. Chess experts and novices exhibited face selective areas, but these regions showed no selectivity to chess configurations relative to other stimuli. We next compared neural responses to chess and to scrambled chess displays to isolate areas relevant to expertise. Areas within the posterior cingulate, orbitofrontal cortex, and right temporal cortex were active in this comparison in experts over novices. We also compared chess and face responses within the posterior cingulate and found this area responsive to chess only in experts. These findings indicate that the configurations in chess are not strongly processed by face-selective regions that are selective for faces in individuals who have expertise in both domains. Further, the area most consistently involved in chess did not show overlap with faces. Overall, these results suggest that expert visual processing may be similar at the level of recognition, but need not show the same neural correlates. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Visual Communications And Image Processing
NASA Astrophysics Data System (ADS)
Hsing, T. Russell; Tzou, Kou-Hu
1989-07-01
This special issue on Visual Communications and Image Processing contains 14 papers that cover a wide spectrum in this fast growing area. For the past few decades, researchers and scientists have devoted their efforts to these fields. Through this long-lasting devotion, we witness today the growing popularity of low-bit-rate video as a convenient tool for visual communication. We also see the integration of high-quality video into broadband digital networks. Today, with more sophisticated processing, clearer and sharper pictures are being restored from blurring and noise. Also, thanks to the advances in digital image processing, even a PC-based system can be built to recognize highly complicated Chinese characters at the speed of 300 characters per minute. This special issue can be viewed as a milestone of visual communications and image processing on its journey to eternity. It presents some overviews on advanced topics as well as some new development in specific subjects.
Bressler, David W; Fortenbaugh, Francesca C; Robertson, Lynn C; Silver, Michael A
2013-06-07
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. Copyright © 2013 Elsevier Ltd. All rights reserved.
Implications on visual apperception: energy, duration, structure and synchronization.
Bókkon, I; Vimal, Ram Lakhan Pandey
2010-07-01
Although primary visual cortex (V1 or striate) activity per se is not sufficient for visual apperception (normal conscious visual experiences and conscious functions such as detection, discrimination, and recognition), the same is also true for extrastriate visual areas (such as V2, V3, V4/V8/VO, V5/M5/MST, IT, and GF). In the absence of V1, visual signals can still reach several extrastriate areas but appear incapable of generating normal conscious visual experiences. It is scarcely emphasized in the scientific literature that conscious perceptions and representations must also meet essential energetic conditions. These energetic conditions are achieved by spatiotemporal networks of dynamic mitochondrial distributions inside neurons. However, the highest density of neurons in neocortex (number of neurons per degree of visual angle) devoted to representing the visual field is found in retinotopic V1. This means that the highest mitochondrial (energetic) activity can be achieved in mitochondrial cytochrome oxidase-rich V1 areas. Thus, V1 bears the highest energy allocation for visual representation. In addition, conscious perception also demands structural conditions, an adequate duration of information representation, and synchronized neural processes and/or 'interactive hierarchical structuralism.' For visual apperception, various visual areas are involved depending on stimulus characteristics such as color, form/shape, motion, and other features. Here, we focus primarily on V1, where specific mitochondria-rich retinotopic structures are found; we also briefly discuss V2, where these structures are sparser. We also point out that residual brain states after visual perception are not fully reflected in active neural patterns: such subliminal residual states are not captured by passive neural recording techniques but require active stimulation to be revealed.
Paneri, Sofia; Gregoriou, Georgia G.
2017-01-01
The ability to select information that is relevant to current behavioral goals is the hallmark of voluntary attention and an essential part of our cognition. Attention tasks are a prime example to study at the neuronal level, how task related information can be selectively processed in the brain while irrelevant information is filtered out. Whereas, numerous studies have focused on elucidating the mechanisms of visual attention at the single neuron and population level in the visual cortices, considerably less work has been devoted to deciphering the distinct contribution of higher-order brain areas, which are known to be critical for the employment of attention. Among these areas, the prefrontal cortex (PFC) has long been considered a source of top-down signals that bias selection in early visual areas in favor of the attended features. Here, we review recent experimental data that support the role of PFC in attention. We examine the existing evidence for functional specialization within PFC and we discuss how long-range interactions between PFC subregions and posterior visual areas may be implemented in the brain and contribute to the attentional modulation of different measures of neural activity in visual cortices. PMID:29033784
Longcamp, Marieke; Anton, Jean-Luc; Roth, Muriel; Velay, Jean-Luc
2005-01-01
In a previous fMRI study on right-handers (Rhrs), we reported that part of the left ventral premotor cortex (BA6) was activated when alphabetical characters were passively observed and that the same region was also involved in handwriting [Longcamp, M., Anton, J. L., Roth, M., & Velay, J. L. (2003). Visual presentation of single letters activates a premotor area involved in writing. NeuroImage, 19, 1492-1500]. We therefore suggested that letter-viewing may induce automatic involvement of handwriting movements. In the present study, in order to confirm this hypothesis, we carried out a similar fMRI experiment on a group of left-handed subjects (Lhrs). We reasoned that if the above assumption was correct, visual perception of letters by Lhrs might automatically activate cortical motor areas coding for left-handed writing movements, i.e., areas located in the right hemisphere. The visual stimuli used here were either single letters, single pseudoletters, or a control stimulus. The subjects were asked to watch these stimuli attentively, and no response was required. The results showed that a ventral premotor cortical area (BA6) in the right hemisphere was specifically activated when Lhrs looked at letters and not at pseudoletters. This right area was symmetrically located with respect to the left one activated under the same circumstances in Rhrs. This finding supports the hypothesis that visual perception of written language evokes covert motor processes. In addition, a bilateral area, also located in the premotor cortex (BA6), but more ventrally and medially, was found to be activated in response to both letters and pseudoletters. This premotor region, which was not activated correspondingly in Rhrs, might be involved in the processing of graphic stimuli, whatever their degree of familiarity.
Pihlajamäki, Maija; Tanila, Heikki; Könönen, Mervi; Hänninen, Tuomo; Aronen, Hannu J; Soininen, Hilkka
2005-10-01
The ventral visual stream processes information about the identity of objects ('what'), whereas the dorsal stream processes the spatial locations of objects ('where'). There is a corresponding, although disputed, distinction for the ventrolateral and dorsolateral prefrontal areas. Furthermore, there seems to be a distinction between the anterior and posterior medial temporal lobe (MTL) structures in the processing of novel items and new spatial arrangements, respectively. Functional differentiation of the intermediary mid-line cortical and temporal neocortical structures that communicate with the occipitotemporal, occipitoparietal, prefrontal, and MTL structures, however, is unclear. Therefore, in the present functional magnetic resonance imaging (fMRI) study, we examined whether the distinction among the MTL structures extends to these closely connected cortical areas. The most striking difference in the fMRI responses during visual presentation of changes in either items or their locations was the bilateral activation of the temporal lobe and ventrolateral prefrontal cortical areas for novel object identification in contrast to wide parietal and dorsolateral prefrontal activation for the novel locations of objects. An anterior-posterior distinction of fMRI responses similar to the MTL was observed in the cingulate/retrosplenial, and superior and middle temporal cortices. In addition to the distinct areas of activation, certain frontal, parietal, and temporo-occipital areas responded to both object and spatial novelty, suggesting a common attentional network for both types of changes in the visual environment. These findings offer new insights to the functional roles and intrinsic specialization of the cingulate/retrosplenial, and lateral temporal cortical areas in visuospatial cognition.
A Multi-Area Stochastic Model for a Covert Visual Search Task.
Schwemmer, Michael A; Feng, Samuel F; Holmes, Philip J; Gottlieb, Jacqueline; Cohen, Jonathan D
2015-01-01
Decisions typically comprise several elements. For example, attention must be directed towards specific objects, their identities recognized, and a choice made among alternatives. Pairs of competing accumulators and drift-diffusion processes provide good models of evidence integration in two-alternative perceptual choices, but more complex tasks requiring the coordination of attention and decision making involve multistage processing and multiple brain areas. Here we consider a task in which a target is located among distractors and its identity reported by lever release. The data comprise reaction times, accuracies, and single unit recordings from two monkeys' lateral intraparietal area (LIP) neurons. LIP firing rates distinguish between targets and distractors, exhibit stimulus set size effects, and show response-hemifield congruence effects. These data motivate our model, which uses coupled sets of leaky competing accumulators to represent processes hypothesized to occur in feature-selective areas and limb motor and pre-motor areas, together with the visual selection process occurring in LIP. Model simulations capture the electrophysiological and behavioral data, and fitted parameters suggest that different connection weights between LIP and the other cortical areas may account for the observed behavioral differences between the animals.
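A single layer of leaky competing accumulators, the building block that such a model couples across visual-selection, feature-selective, and motor stages, can be sketched as below; the parameters and the single-layer simplification are assumptions, not the paper's fitted multi-area model.

```python
# Minimal sketch of one leaky competing accumulator (LCA) layer deciding among
# four alternatives (parameters are illustrative assumptions).
import numpy as np

def lca_trial(inputs, leak=0.2, inhibition=0.3, noise_sd=0.1,
              threshold=1.0, dt=0.01, max_t=3.0, rng=np.random.default_rng(0)):
    """Return (choice index, reaction time in s) for one simulated trial."""
    x = np.zeros(len(inputs))
    for step in range(int(max_t / dt)):
        others = x.sum() - x                          # lateral inhibition from competitors
        dx = (inputs - leak * x - inhibition * others) * dt \
             + noise_sd * np.sqrt(dt) * rng.standard_normal(len(x))
        x = np.maximum(x + dx, 0.0)                   # activations stay non-negative
        if x.max() >= threshold:
            return int(x.argmax()), (step + 1) * dt
    return int(x.argmax()), max_t                     # no threshold crossing: time out

# Target (stronger input) among three distractors, loosely mirroring the covert search task.
choices = [lca_trial(np.array([0.9, 0.6, 0.6, 0.6]),
                     rng=np.random.default_rng(seed))[0] for seed in range(200)]
print("proportion correct:", np.mean(np.array(choices) == 0))
```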
Surfing a spike wave down the ventral stream.
VanRullen, Rufin; Thorpe, Simon J
2002-10-01
Numerous theories of neural processing, often motivated by experimental observations, have explored the computational properties of neural codes based on the absolute or relative timing of spikes in spike trains. Spiking neuron models and theories, however, as well as their experimental counterparts, have generally been limited to the simulation or observation of isolated neurons, isolated spike trains, or reduced neural populations. Such theories would therefore seem inappropriate to capture the properties of a neural code relying on temporal spike patterns distributed across large neuronal populations. Here we report a range of computer simulations and theoretical considerations that were designed to explore the possibilities of one such code and its relevance for visual processing. In a unified framework where the relation between stimulus saliency and spike relative timing plays the central role, we describe how the ventral stream of the visual system could process natural input scenes and extract meaningful information, both rapidly and reliably. The first wave of spikes generated in the retina in response to a visual stimulation carries information explicitly in its spatio-temporal structure: the most salient information is represented by the first spikes over the population. This spike wave, propagating through a hierarchy of visual areas, is regenerated at each processing stage, where its temporal structure can be modified by (i) the selectivity of the cortical neurons, (ii) lateral interactions, and (iii) top-down attentional influences from higher-order cortical areas. The resulting model could account for the remarkable efficiency and rapidity of processing observed in the primate visual system.
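The core idea of the spike-wave code, that the most salient inputs fire first so that the rank order of first spikes alone carries the stimulus, can be sketched as below; the latency mapping and the rank-order decoder are assumed simplifications of the framework rather than the authors' simulations.

```python
# Sketch of a first-spike-wave code: more salient inputs fire earlier, so the
# rank order of first spikes across the population carries the image
# (encoding and decoding rules are illustrative assumptions).
import numpy as np

def encode_latencies(saliency, t_min=5.0, t_max=50.0):
    """Map saliency in [0, 1] to first-spike latency in ms (higher saliency = earlier)."""
    return t_max - saliency * (t_max - t_min)

def decode_rank_order(latencies, decay=0.9):
    """Reconstruct a saliency estimate from spike order alone (rank-order code)."""
    order = np.argsort(latencies)                 # earliest spikes first
    estimate = np.zeros_like(latencies, dtype=float)
    for rank, unit in enumerate(order):
        estimate[unit] = decay ** rank            # earlier spikes get larger weights
    return estimate

saliency = np.random.default_rng(3).random(10)
latencies = encode_latencies(saliency)
estimate = decode_rank_order(latencies)
# Rank-order decoding preserves the ordering of the original saliencies.
print(np.array_equal(np.argsort(-saliency), np.argsort(-estimate)))
```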
A Second Level Pictorial Turn? The Emergence of Digital Ekphrasis from The Visuality of New Media
ERIC Educational Resources Information Center
Shiel, Nina
2013-01-01
The increasing visuality of our culture was observed in 1994 by Mitchell, who coined the term "pictorial turn" to describe the interest in the visual taking place in culture and discourse (Mitchell, 1994). Since then, this process has increased further, particularly in all the areas of digital/new media. This chapter will consider this…
ERIC Educational Resources Information Center
Van Eck, Richard N.; Fu, Hongxia; Drechsel, Paul V. J.
2015-01-01
Air traffic control (ATC) operations are critical to the U.S. aviation infrastructure, making ATC training a critical area of study. Because ATC performance is heavily dependent on visual processing, it is important to understand how to screen for or promote relevant visual processing abilities. While conventional wisdom has maintained that such…
Spering, Miriam; Montagnini, Anna
2011-04-22
Many neurophysiological studies in monkeys have indicated that visual motion information for the guidance of perception and smooth pursuit eye movements is - at an early stage - processed in the same visual pathway in the brain, crucially involving the middle temporal area (MT). However, these studies left some questions unanswered: Are perception and pursuit driven by the same or independent neuronal signals within this pathway? Are the perceptual interpretation of visual motion information and the motor response to visual signals limited by the same source of neuronal noise? Here, we review psychophysical studies that were motivated by these questions and compared perception and pursuit behaviorally in healthy human observers. We further review studies that focused on the interaction between perception and pursuit. The majority of results point to similarities between perception and pursuit, but dissociations were also reported. We discuss recent developments in this research area and conclude with suggestions for common and separate principles for the guidance of perceptual and motor responses to visual motion information. Copyright © 2010 Elsevier Ltd. All rights reserved.
Neurophysiological correlates of relatively enhanced local visual search in autistic adolescents.
Manjaly, Zina M; Bruning, Nicole; Neufang, Susanne; Stephan, Klaas E; Brieber, Sarah; Marshall, John C; Kamp-Becker, Inge; Remschmidt, Helmut; Herpertz-Dahlmann, Beate; Konrad, Kerstin; Fink, Gereon R
2007-03-01
Previous studies found normal or even superior performance of autistic patients on visuospatial tasks requiring local search, like the Embedded Figures Task (EFT). A well-known interpretation of this is "weak central coherence", i.e. autistic patients may show a reduced general ability to process information in its context and may therefore have a tendency to favour local over global aspects of information processing. An alternative view is that the local processing advantage in the EFT may result from a relative amplification of early perceptual processes which boosts processing of local stimulus properties but does not affect processing of global context. This study used functional magnetic resonance imaging (fMRI) in 12 autistic adolescents (9 Asperger and 3 high-functioning autistic patients) and 12 matched controls to help distinguish, on neurophysiological grounds, between these two accounts of EFT performance in autistic patients. Behaviourally, we found autistic individuals to be unimpaired during the EFT while they were significantly worse at performing a closely matched control task with minimal local search requirements. The fMRI results showed that activations specific for the local search aspects of the EFT were left-lateralised in parietal and premotor areas for the control group (as previously demonstrated for adults), whereas for the patients these activations were found in right primary visual cortex and bilateral extrastriate areas. These results suggest that enhanced local processing in early visual areas, as opposed to impaired processing of global context, is characteristic for performance of the EFT by autistic patients.
Language Proficiency Modulates the Recruitment of Non-Classical Language Areas in Bilinguals
Leonard, Matthew K.; Torres, Christina; Travis, Katherine E.; Brown, Timothy T.; Hagler, Donald J.; Dale, Anders M.; Elman, Jeffrey L.; Halgren, Eric
2011-01-01
Bilingualism provides a unique opportunity for understanding the relative roles of proficiency and order of acquisition in determining how the brain represents language. In a previous study, we combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine the spatiotemporal dynamics of word processing in a group of Spanish-English bilinguals who were more proficient in their native language. We found that from the earliest stages of lexical processing, words in the second language evoke greater activity in bilateral posterior visual regions, while activity to the native language is largely confined to classical left hemisphere fronto-temporal areas. In the present study, we sought to examine whether these effects relate to language proficiency or order of language acquisition by testing Spanish-English bilingual subjects who had become dominant in their second language. Additionally, we wanted to determine whether activity in bilateral visual regions was related to the presentation of written words in our previous study, so we presented subjects with both written and auditory words. We found greater activity for the less proficient native language in bilateral posterior visual regions for both the visual and auditory modalities, which started during the earliest word encoding stages and continued through lexico-semantic processing. In classical left fronto-temporal regions, the two languages evoked similar activity. Therefore, it is the lack of proficiency rather than secondary acquisition order that determines the recruitment of non-classical areas for word processing. PMID:21455315
Structural and functional analyses of human cerebral cortex using a surface-based atlas
NASA Technical Reports Server (NTRS)
Van Essen, D. C.; Drury, H. A.
1997-01-01
We have analyzed the geometry, geography, and functional organization of human cerebral cortex using surface reconstructions and cortical flat maps of the left and right hemispheres generated from a digital atlas (the Visible Man). The total surface area of the reconstructed Visible Man neocortex is 1570 cm2 (both hemispheres), approximately 70% of which is buried in sulci. By linking the Visible Man cerebrum to the Talairach stereotaxic coordinate space, the locations of activation foci reported in neuroimaging studies can be readily visualized in relation to the cortical surface. The associated spatial uncertainty was empirically shown to have a radius in three dimensions of approximately 10 mm. Application of this approach to studies of visual cortex reveals the overall patterns of activation associated with different aspects of visual function and the relationship of these patterns to topographically organized visual areas. Our analysis supports a distinction between an anterior region in ventral occipito-temporal cortex that is selectively involved in form processing and a more posterior region (in or near areas VP and V4v) involved in both form and color processing. Foci associated with motion processing are mainly concentrated in a region along the occipito-temporal junction, the ventral portion of which overlaps with foci also implicated in form processing. Comparisons between flat maps of human and macaque monkey cerebral cortex indicate significant differences as well as many similarities in the relative sizes and positions of cortical regions known or suspected to be homologous in the two species.
Processing speed in recurrent visual networks correlates with general intelligence.
Jolij, Jacob; Huisman, Danielle; Scholte, Steven; Hamel, Ronald; Kemner, Chantal; Lamme, Victor A F
2007-01-08
Studies on the neural basis of general fluid intelligence strongly suggest that a smarter brain processes information faster. Different brain areas, however, are interconnected by both feedforward and feedback projections. Whether both types of connections or only one of the two types are faster in smarter brains remains unclear. Here we show, by measuring visual evoked potentials during a texture discrimination task, that general fluid intelligence shows a strong correlation with processing speed in recurrent visual networks, while there is no correlation with speed of feedforward connections. The hypothesis that a smarter brain runs faster may need to be refined: a smarter brain's feedback connections run faster.
Rolfs, Martin; Carrasco, Marisa
2012-01-01
Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ~300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength. PMID:23035086
Hasegawa, Naoya; Kitamura, Hideaki; Murakami, Hiroatsu; Kameyama, Shigeki; Sasagawa, Mutsuo; Egawa, Jun; Tamura, Ryu; Endo, Taro; Someya, Toshiyuki
2013-01-01
Individuals with autistic spectrum disorder (ASD) demonstrate an impaired ability to infer the mental states of others from their gaze. Thus, investigating the relationship between ASD and eye gaze processing is crucial for understanding the neural basis of social impairments seen in individuals with ASD. In addition, characteristics of ASD are observed in more comprehensive visual perception tasks. These visual characteristics of ASD have been well-explained in terms of the atypical relationship between high- and low-level gaze processing in ASD. We studied neural activity during gaze processing in individuals with ASD using magnetoencephalography, with a focus on the relationship between high- and low-level gaze processing both temporally and spatially. Minimum Current Estimate analysis was applied to perform source analysis of magnetic responses to gaze stimuli. The source analysis showed that later activity in the primary visual area (V1) was affected by gaze direction only in the ASD group. Conversely, the right posterior superior temporal sulcus, which is a brain region that processes gaze as a social signal, in the typically developed group showed a tendency toward greater activation during direct compared with averted gaze processing. These results suggest that later activity in V1 relating to gaze processing is altered or possibly enhanced in high-functioning individuals with ASD, which may underpin the social cognitive impairments in these individuals. © 2013 S. Karger AG, Basel.
A Novel Interhemispheric Interaction: Modulation of Neuronal Cooperativity in the Visual Areas
Carmeli, Cristian; Lopez-Aguado, Laura; Schmidt, Kerstin E.; De Feo, Oscar; Innocenti, Giorgio M.
2007-01-01
Background: The cortical representation of the visual field is split along the vertical midline, with the left and the right hemi-fields projecting to separate hemispheres. Connections between the visual areas of the two hemispheres are abundant near the representation of the visual midline. It was suggested that they re-establish the functional continuity of the visual field by controlling the dynamics of the responses in the two hemispheres. Methods/Principal Findings: To understand if and how the interactions between the two hemispheres participate in processing visual stimuli, the synchronization of responses to identical or different moving gratings in the two hemi-fields was studied in anesthetized ferrets. The responses were recorded by multiple electrodes in the primary visual areas and the synchronization of local field potentials across the electrodes was analyzed with a recent method derived from dynamical system theory. Inactivating the visual areas of one hemisphere modulated the synchronization of the stimulus-driven activity in the other hemisphere. The modulation was stimulus-specific and was consistent with the fine morphology of callosal axons, in particular with the spatio-temporal pattern of activity that axonal geometry can generate. Conclusions/Significance: These findings describe a new kind of interaction between the cerebral hemispheres and highlight the role of axonal geometry in modulating aspects of cortical dynamics responsible for stimulus detection and/or categorization. PMID:18074012
2016-01-01
Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor‐preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface‐based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory‐motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory‐motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M‐I. Hum Brain Mapp 37:2784–2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:27061771
Campagne, Aurélie; Fradcourt, Benoit; Pichat, Cédric; Baciu, Monica; Kauffmann, Louise; Peyrin, Carole
2016-01-01
Visual processing of emotional stimuli critically depends on the type of cognitive appraisal involved. The present fMRI pilot study aimed to investigate the cerebral correlates involved in the visual processing of emotional scenes in two tasks, one emotional, based on the appraisal of personal emotional experience, and the other motivational, based on the appraisal of the tendency to action. Given that the use of spatial frequency information is relatively flexible during the visual processing of emotional stimuli depending on the task's demands, we also explored the effect of the type of spatial frequency in visual stimuli in each task by using emotional scenes filtered in low spatial frequency (LSF) and high spatial frequencies (HSF). Activation was observed in the visual areas of the fusiform gyrus for all emotional scenes in both tasks, and in the amygdala for unpleasant scenes only. The motivational task induced additional activation in frontal motor-related areas (e.g. premotor cortex, SMA) and parietal regions (e.g. superior and inferior parietal lobules). Parietal regions were recruited particularly during the motivational appraisal of approach in response to pleasant scenes. These frontal and parietal activations, respectively, suggest that motor and navigation processes play a specific role in the identification of the tendency to action in the motivational task. Furthermore, activity observed in the motivational task, in response to both pleasant and unpleasant scenes, was significantly greater for HSF than for LSF scenes, suggesting that the tendency to action is driven mainly by the detailed information contained in scenes. Results for the emotional task suggest that spatial frequencies play only a small role in the evaluation of unpleasant and pleasant emotions. Our preliminary study revealed a partial distinction between visual processing of emotional scenes during identification of the tendency to action, and during identification of personal emotional experiences. It also illustrates flexible use of the spatial frequencies contained in scenes depending on their emotional valence and on task demands.
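The low- and high-spatial-frequency versions of the scenes can be produced, for example, with Gaussian filters in the Fourier domain, as sketched below; the cutoff values and filter shape are illustrative assumptions, not the study's exact filtering procedure.

```python
# Hedged sketch of producing low- and high-spatial-frequency versions of a
# scene with Gaussian filters in the Fourier domain (cutoffs are illustrative).
import numpy as np

def spatial_frequency_filter(image, cutoff_cpi, kind="low"):
    """Gaussian low- or high-pass filter; cutoff given in cycles per image."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None] * h          # cycles per image, vertical
    fx = np.fft.fftfreq(w)[None, :] * w          # cycles per image, horizontal
    radius = np.sqrt(fx ** 2 + fy ** 2)
    lowpass = np.exp(-(radius ** 2) / (2 * cutoff_cpi ** 2))
    transfer = lowpass if kind == "low" else 1.0 - lowpass
    return np.real(np.fft.ifft2(np.fft.fft2(image) * transfer))

scene = np.random.default_rng(4).random((256, 256))        # stand-in for a photograph
lsf_scene = spatial_frequency_filter(scene, cutoff_cpi=8, kind="low")
hsf_scene = spatial_frequency_filter(scene, cutoff_cpi=24, kind="high")
print(lsf_scene.shape, hsf_scene.shape)
```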
Lee, Kyoung-Min; Ahn, Kyung-Ha; Keller, Edward L
2012-01-01
The frontal eye fields (FEF), originally identified as an oculomotor cortex, have also been implicated in perceptual functions, such as constructing a visual saliency map and shifting visual attention. Further dissecting the area's role in the transformation from visual input to oculomotor command has been difficult because of spatial confounding between stimuli and responses and consequently between intermediate cognitive processes, such as attention shift and saccade preparation. Here we developed two tasks in which the visual stimulus and the saccade response were dissociated in space (the extended memory-guided saccade task), and bottom-up attention shift and saccade target selection were independent (the four-alternative delayed saccade task). Reversible inactivation of the FEF in rhesus monkeys disrupted, as expected, contralateral memory-guided saccades, but visual detection was demonstrated to be intact at the same field. Moreover, saccade behavior was impaired when a bottom-up shift of attention was not a prerequisite for saccade target selection, indicating that the inactivation effect was independent of the previously reported dysfunctions in bottom-up attention control. These findings underscore the motor aspect of the area's functions, especially in situations where saccades are generated by internal cognitive processes, including visual short-term memory and long-term associative memory.
Feature-Specific Organization of Feedback Pathways in Mouse Visual Cortex.
Huh, Carey Y L; Peach, John P; Bennett, Corbett; Vega, Roxana M; Hestrin, Shaul
2018-01-08
Higher and lower cortical areas in the visual hierarchy are reciprocally connected [1]. Although much is known about how feedforward pathways shape receptive field properties of visual neurons, relatively little is known about the role of feedback pathways in visual processing. Feedback pathways are thought to carry top-down signals, including information about context (e.g., figure-ground segmentation and surround suppression) [2-5], and feedback has been demonstrated to sharpen orientation tuning of neurons in the primary visual cortex (V1) [6, 7]. However, the response characteristics of feedback neurons themselves and how feedback shapes V1 neurons' tuning for other features, such as spatial frequency (SF), remain largely unknown. Here, using a retrograde virus, targeted electrophysiological recordings, and optogenetic manipulations, we show that putatively feedback neurons in layer 5 (hereafter "L5 feedback") in higher visual areas, AL (anterolateral area) and PM (posteromedial area), display distinct visual properties in awake head-fixed mice. AL L5 feedback neurons prefer significantly lower SF (mean: 0.04 cycles per degree [cpd]) compared to PM L5 feedback neurons (0.15 cpd). Importantly, silencing AL L5 feedback reduced visual responses of V1 neurons preferring low SF (mean change in firing rate: -8.0%), whereas silencing PM L5 feedback suppressed responses of high-SF-preferring V1 neurons (-20.4%). These findings suggest that feedback connections from higher visual areas convey distinctly tuned visual inputs to V1 that serve to boost V1 neurons' responses to SF. Such like-to-like functional organization may represent an important feature of feedback pathways in sensory systems and in the nervous system in general. Copyright © 2017 Elsevier Ltd. All rights reserved.
A massively asynchronous, parallel brain
Zeki, Semir
2015-01-01
Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously—with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain. PMID:25823871
When apperceptive agnosia is explained by a deficit of primary visual processing.
Serino, Andrea; Cecere, Roberto; Dundon, Neil; Bertini, Caterina; Sanchez-Castaneda, Cristina; Làdavas, Elisabetta
2014-03-01
Visual agnosia is a deficit in shape perception, affecting figure, object, face and letter recognition. Agnosia is usually attributed to lesions to high-order modules of the visual system, which combine visual cues to represent the shape of objects. However, most of the previously reported agnosia cases presented visual field (VF) defects and poor primary visual processing. The present case-study aims to verify whether form agnosia could be explained by a deficit in basic visual functions, rather than by a deficit in high-order shape recognition. Patient SDV suffered a bilateral lesion of the occipital cortex due to anoxia. When tested, he could navigate, interact with others, and was autonomous in daily life activities. However, he could not recognize objects from drawings and figures, read or recognize familiar faces. He was able to recognize objects by touch and people from their voice. Assessments of visual functions showed blindness at the centre of the VF, up to almost 5°, bilaterally, with better stimulus detection in the periphery. Colour and motion perception was preserved. Psychophysical experiments showed that SDV's visual recognition deficits were not explained by poor spatial acuity or by the crowding effect. Rather, a severe deficit in line orientation processing might be a key mechanism explaining SDV's agnosia. Line orientation processing is a basic function of primary visual cortex neurons, necessary for detecting "edges" of visual stimuli to build up a "primal sketch" for object recognition. We propose, therefore, that some forms of visual agnosia may be explained by deficits in basic visual functions due to widespread lesions of the primary visual areas, affecting primary levels of visual processing. Copyright © 2013 Elsevier Ltd. All rights reserved.
Spatial Attention Reduces Burstiness in Macaque Visual Cortical Area MST.
Xue, Cheng; Kaping, Daniel; Ray, Sonia Baloni; Krishna, B Suresh; Treue, Stefan
2017-01-01
Visual attention modulates the firing rate of neurons in many primate cortical areas. In V4, a cortical area in the ventral visual pathway, spatial attention has also been shown to reduce the tendency of neurons to fire closely separated spikes (burstiness). A recent model proposes that a single mechanism accounts for both the firing rate enhancement and the burstiness reduction in V4, but this has not been empirically tested. It is also unclear if the burstiness reduction by spatial attention is found in other visual areas and for other attentional types. We therefore recorded from single neurons in the medial superior temporal area (MST), a key motion-processing area along the dorsal visual pathway, of two rhesus monkeys while they performed a task engaging both spatial and feature-based attention. We show that in MST, spatial attention is associated with a clear reduction in burstiness that is independent of the concurrent enhancement of firing rate. In contrast, feature-based attention enhances firing rate but is not associated with a significant reduction in burstiness. These results establish burstiness reduction as a widespread effect of spatial attention. They also suggest that in contrast to the recently proposed model, the effects of spatial attention on burstiness and firing rate emerge from different mechanisms. © The Author 2016. Published by Oxford University Press.
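One simple way to quantify the burstiness described here is the fraction of interspike intervals below a short threshold, as sketched below; the 5 ms criterion and the synthetic spike trains are assumptions, and the paper's burstiness metric may be defined differently.

```python
# Simple sketch of a burst-fraction measure: the proportion of interspike
# intervals shorter than a burst threshold (criterion is an assumption).
import numpy as np

def burst_fraction(spike_times_ms, burst_isi_ms=5.0):
    isis = np.diff(np.sort(np.asarray(spike_times_ms)))
    if isis.size == 0:
        return 0.0
    return float(np.mean(isis < burst_isi_ms))

rng = np.random.default_rng(5)
# Poisson-like train (~20 spikes/s) versus a train with injected short-ISI doublets.
regular = np.cumsum(rng.exponential(50.0, 200))
bursty = np.sort(np.concatenate([regular, regular[:50] + 2.0]))   # add 2-ms doublets
print(burst_fraction(regular), burst_fraction(bursty))
```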
Wada, Atsushi; Sakano, Yuichi; Ando, Hiroshi
2016-01-01
Vision is important for estimating self-motion, which is thought to involve optic-flow processing. Here, we investigated the fMRI response profiles in visual area V6, the precuneus motion area (PcM), and the cingulate sulcus visual area (CSv)—three medial brain regions recently shown to be sensitive to optic-flow. We used wide-view stereoscopic stimulation to induce robust self-motion processing. Stimuli included static, randomly moving, and coherently moving dots (simulating forward self-motion). We varied the stimulus size and the presence of stereoscopic information. A combination of univariate and multi-voxel pattern analyses (MVPA) revealed that fMRI responses in the three regions differed from each other. The univariate analysis identified optic-flow selectivity and an effect of stimulus size in V6, PcM, and CSv, among which only CSv showed a significantly lower response to random motion stimuli compared with static conditions. Furthermore, MVPA revealed an optic-flow specific multi-voxel pattern in the PcM and CSv, where the discrimination of coherent motion from both random motion and static conditions showed above-chance prediction accuracy, but that of random motion from static conditions did not. Additionally, while area V6 successfully classified different stimulus sizes regardless of motion pattern, this classification was only partial in PcM and was absent in CSv. This may reflect the known retinotopic representation in V6 and the absence of such clear visuospatial representation in CSv. We also found significant correlations between the strength of subjective self-motion and univariate activation in all examined regions except for primary visual cortex (V1). This neuro-perceptual correlation was significantly higher for V6, PcM, and CSv when compared with V1, and higher for CSv when compared with the visual motion area hMT+. Our convergent results suggest the significant involvement of CSv in self-motion processing, which may give rise to its percept. PMID:26973588
McMenamin, Brenton W.; Deason, Rebecca G.; Steele, Vaughn R.; Koutstaal, Wilma; Marsolek, Chad J.
2014-01-01
Previous research indicates that dissociable neural subsystems underlie abstract-category (AC) recognition and priming of objects (e.g., cat, piano) and specific-exemplar (SE) recognition and priming of objects (e.g., a calico cat, a different calico cat, a grand piano, etc.). However, the degree of separability between these subsystems is not known, despite the importance of this issue for assessing relevant theories. Visual object representations are widely distributed in visual cortex; thus, a multivariate pattern analysis (MVPA) approach to analyzing functional magnetic resonance imaging (fMRI) data may be critical for assessing the separability of different kinds of visual object processing. Here we examined the neural representations of visual object categories and visual object exemplars using multi-voxel pattern analyses of brain activity elicited in visual object processing areas during a repetition-priming task. In the encoding phase, participants viewed visual objects and the printed names of other objects. In the subsequent test phase, participants identified objects that were either same-exemplar primed, different-exemplar primed, word-primed, or unprimed. In visual object processing areas, classifiers were trained to distinguish same-exemplar primed objects from word-primed objects. Then, the abilities of these classifiers to discriminate different-exemplar primed objects and word-primed objects (reflecting AC priming) and to discriminate same-exemplar primed objects and different-exemplar primed objects (reflecting SE priming) were assessed. Results indicated that (a) repetition priming in occipital-temporal regions is organized asymmetrically, such that AC priming is more prevalent in the left hemisphere and SE priming is more prevalent in the right hemisphere, and (b) AC and SE subsystems are weakly modular, not strongly modular or unified. PMID:25528436
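A minimal sketch of the cross-classification logic described above, using scikit-learn on synthetic voxel patterns; the effect sizes, the trial counts and the choice of a linear SVM are illustrative assumptions, not the authors' pipeline:

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 200

# Hypothetical single-trial voxel patterns from one object-selective ROI
same_exemplar = rng.normal(0.5, 1.0, (n_trials, n_voxels))
diff_exemplar = rng.normal(0.4, 1.0, (n_trials, n_voxels))
word_primed   = rng.normal(0.0, 1.0, (n_trials, n_voxels))

# Train a classifier to separate same-exemplar-primed from word-primed trials
X_train = np.vstack([same_exemplar, word_primed])
y_train = np.r_[np.ones(n_trials), np.zeros(n_trials)]
clf = LinearSVC(dual=False).fit(X_train, y_train)

# AC priming: can the boundary separate different-exemplar from word-primed trials?
ac_acc = clf.score(np.vstack([diff_exemplar, word_primed]),
                   np.r_[np.ones(n_trials), np.zeros(n_trials)])
# SE priming: can it separate same-exemplar from different-exemplar trials?
se_acc = clf.score(np.vstack([same_exemplar, diff_exemplar]),
                   np.r_[np.ones(n_trials), np.zeros(n_trials)])
print(ac_acc, se_acc)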
Experiences in using DISCUS for visualizing human communication
NASA Astrophysics Data System (ADS)
Groehn, Matti; Nieminen, Marko; Haho, Paeivi; Smeds, Riitta
2000-02-01
In this paper, we present further improvements to the DISCUS software, which can be used to record and analyze the flow and content of business process simulation session discussions. The tool was initially introduced at the 'Visual Data Exploration and Analysis IV' conference. The initial features of the tool enabled the visualization of discussion flow in business process simulation sessions and the creation of SOM analyses. The improvements to the tool consist of additional visualization possibilities that enable quick on-line analyses and improved graphical statistics. We have also created the first interface to audio data and implemented two ways to visualize it. We also outline additional possibilities for using the tool in other application areas: these include usability testing and the possibility of using the tool for capturing design rationale in a product development process. The data gathered with DISCUS may be used in other applications, and further work may be done with data mining techniques.
Abnormal visual scan paths: a psychophysiological marker of delusions in schizophrenia.
Phillips, M L; David, A S
1998-02-09
The role of the visual scan path as a psychophysiological marker of visual attention has been highlighted previously (Phillips and David, 1994). Using visual scan path measurements, we investigated information processing in schizophrenic patients with severe delusions, and again when the delusions were subsiding. We aimed to demonstrate a specific deficit in processing human faces in deluded subjects by relating this to abnormal viewing strategies. Scan paths were measured in six deluded and five non-deluded schizophrenics (matched for medication and negative symptoms), and nine age-matched normal controls. Deluded subjects had abnormal scan paths in a recognition task, fixating non-feature areas significantly more than controls, but were equally accurate. Re-testing after improvement in delusional conviction revealed fewer group differences. The results suggest state-dependent abnormal information processing in schizophrenics when deluded, with reliance on less-salient visual information for decision-making.
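One simple way to quantify the scan-path measure reported above (the proportion of fixations on non-feature areas) is to test each fixation against rectangular areas of interest. A minimal sketch with hypothetical fixation coordinates and AOI boxes; none of these values come from the study:

import numpy as np

def fixation_proportion_in_aois(fixations_xy, aois):
    # Proportion of fixations landing inside any rectangular AOI.
    # fixations_xy: (n, 2) array of x, y in pixels
    # aois: list of (x_min, y_min, x_max, y_max) rectangles (e.g. eyes, mouth)
    x, y = fixations_xy[:, 0], fixations_xy[:, 1]
    inside = np.zeros(len(fixations_xy), dtype=bool)
    for x0, y0, x1, y1 in aois:
        inside |= (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)
    return inside.mean()

# Hypothetical fixations on a 640x480 face image, with two feature AOIs
fix = np.array([[300, 200], [340, 210], [100, 400], [320, 330]])
feature_aois = [(260, 170, 380, 240),   # eye region
                (290, 300, 350, 360)]   # mouth region
print(1 - fixation_proportion_in_aois(fix, feature_aois))   # non-feature proportion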
Inagaki, Mikio; Fujita, Ichiro
2011-07-13
Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.
Rhone, Ariane E; Nourski, Kirill V; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob
In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion). Event-related band power (ERBP) in the high gamma band was content-specific prior to acoustic onset on STG and PreC, and ERBP in the beta band differed in all three areas. Following sound onset, we found no evidence for content-specificity in HG, evidence for visual specificity in PreC, and specificity for both modalities in STG. These results support models of audio-visual processing in which sensory information is integrated in non-primary cortical areas.
A studyforrest extension, retinotopic mapping and localization of higher visual areas
Sengupta, Ayan; Kaule, Falko R.; Guntupalli, J. Swaroop; Hoffmann, Michael B.; Häusler, Christian; Stadler, Jörg; Hanke, Michael
2016-01-01
The studyforrest (http://studyforrest.org) dataset is likely the largest neuroimaging dataset on natural language and story processing publicly available today. In this article, along with a companion publication, we present an update of this dataset that extends its scope to vision and multi-sensory research. 15 participants of the original cohort volunteered for a series of additional studies: a clinical examination of visual function, a standard retinotopic mapping procedure, and a localization of higher visual areas—such as the fusiform face area. The combination of this update, the previous data releases for the dataset, and the companion publication, which includes neuroimaging and eye tracking data from natural stimulation with a motion picture, form an extremely versatile and comprehensive resource for brain imaging research—with almost six hours of functional neuroimaging data across five different stimulation paradigms for each participant. Furthermore, we describe employed paradigms and present results that document the quality of the data for the purpose of characterising major properties of participants’ visual processing stream. PMID:27779618
The neural basis of body form and body action agnosia.
Moro, Valentina; Urgesi, Cosimo; Pernigo, Simone; Lanteri, Paola; Pazzaglia, Mariella; Aglioti, Salvatore Maria
2008-10-23
Visual analysis of faces and nonfacial body stimuli brings about neural activity in different cortical areas. Moreover, processing body form and body action relies on distinct neural substrates. Although brain lesion studies show specific face processing deficits, neuropsychological evidence for defective recognition of nonfacial body parts is lacking. By combining psychophysics studies with lesion-mapping techniques, we found that lesions of ventromedial, occipitotemporal areas induce face and body recognition deficits while lesions involving extrastriate body area seem causatively associated with impaired recognition of body but not of face and object stimuli. We also found that body form and body action recognition deficits can be double dissociated and are causatively associated with lesions to extrastriate body area and ventral premotor cortex, respectively. Our study reports two category-specific visual deficits, called body form and body action agnosia, and highlights their neural underpinnings.
Alterations to global but not local motion processing in long-term ecstasy (MDMA) users.
White, Claire; Brown, John; Edwards, Mark
2014-07-01
Growing evidence indicates that the main psychoactive ingredient in the illegal drug "ecstasy" (methylenedioxymethamphetamine) causes reduced activity in the serotonin and gamma-aminobutyric acid (GABA) systems in humans. On the basis of substantial serotonin input to the occipital lobe, recent research investigated visual processing in long-term users and found a larger magnitude of the tilt aftereffect, interpreted to reflect broadened orientation tuning bandwidths. Further research found higher orientation discrimination thresholds and reduced long-range interactions in the primary visual area of ecstasy users. The aim of the present research was to investigate whether serotonin-mediated V1 visual processing deficits in ecstasy users extend to motion processing mechanisms. Forty-five participants (21 controls, 24 drug users) completed two psychophysical studies: a direction discrimination study directly measured local motion processing in V1, while a motion coherence task tested global motion processing in area V5/MT. "Primary" ecstasy users (n = 18), those without substantial polydrug use, had significantly lower global motion thresholds than controls [p = 0.027, Cohen's d = 0.78 (large)], indicating increased sensitivity to global motion stimuli, but no difference in local motion processing (p = 0.365). These results extend previous research investigating the long-term effects of illicit drugs on visual processing. Two possible explanations are explored: diffuse attentional processes may be facilitating spatial pooling of motion signals in users. Alternatively, it may be that a GABA-mediated disruption to V5/MT processing is reducing spatial suppression and therefore improving global motion perception in ecstasy users.
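As a rough illustration of the group comparison reported above (lower global-motion coherence thresholds in primary ecstasy users, with a large Cohen's d), the sketch below computes an independent-samples t test and a pooled-SD effect size on hypothetical threshold data; the numbers are invented for illustration only:

import numpy as np
from scipy import stats

# Hypothetical global-motion coherence thresholds (% coherence) per participant
controls = np.array([18.2, 21.5, 19.8, 23.0, 20.4, 22.1, 17.9, 21.0])
users = np.array([14.1, 16.3, 15.2, 13.8, 17.0, 15.9, 14.7, 16.5])

t, p = stats.ttest_ind(controls, users)
pooled_sd = np.sqrt(((len(users) - 1) * users.var(ddof=1) +
                     (len(controls) - 1) * controls.var(ddof=1)) /
                    (len(users) + len(controls) - 2))
cohens_d = (controls.mean() - users.mean()) / pooled_sd   # lower thresholds in users
print(f"t = {t:.2f}, p = {p:.4f}, d = {cohens_d:.2f}")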
Lee Masson, Haemy; Bulthé, Jessica; Op de Beeck, Hans P; Wallraven, Christian
2016-08-01
Humans are highly adept at multisensory processing of object shape in both vision and touch. Previous studies have mostly focused on where visually perceived object-shape information can be decoded, with haptic shape processing receiving less attention. Here, we investigate visuo-haptic shape processing in the human brain using multivoxel correlation analyses. Importantly, we use tangible, parametrically defined novel objects as stimuli. Two groups of participants first performed either a visual or haptic similarity-judgment task. The resulting perceptual object-shape spaces were highly similar and matched the physical parameter space. In a subsequent fMRI experiment, objects were first compared within the learned modality and then in the other modality in a one-back task. When correlating neural similarity spaces with perceptual spaces, visually perceived shape was decoded well in the occipital lobe along with the ventral pathway, whereas haptically perceived shape information was mainly found in the parietal lobe, including frontal cortex. Interestingly, ventrolateral occipito-temporal cortex decoded shape in both modalities, highlighting this as an area capable of detailed visuo-haptic shape processing. Finally, we found haptic shape representations in early visual cortex (in the absence of visual input), when participants switched from visual to haptic exploration, suggesting top-down involvement of visual imagery on haptic shape processing. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
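A minimal sketch of the multivoxel correlation logic described above: build a neural representational dissimilarity matrix from ROI patterns and rank-correlate it with a perceptual dissimilarity structure. All data here are synthetic placeholders, and the metric choices (correlation distance, Spearman correlation) are common defaults rather than necessarily the authors' choices:

import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(6)
n_objects, n_voxels = 12, 300

# Hypothetical data: one multi-voxel pattern per object, plus pairwise
# perceptual dissimilarities from a similarity-judgement task
patterns = rng.normal(size=(n_objects, n_voxels))
perceptual_dissim = pdist(rng.normal(size=(n_objects, 2)))   # stand-in shape space

# Neural representational dissimilarity matrix (correlation distance between patterns)
neural_dissim = pdist(patterns, metric='correlation')

# Rank-correlate the neural and perceptual dissimilarity structures
rho, p = spearmanr(neural_dissim, perceptual_dissim)
print(rho, p)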
Visual Analysis of Air Traffic Data
NASA Technical Reports Server (NTRS)
Albrecht, George Hans; Pang, Alex
2012-01-01
In this paper, we present visual analysis tools to help study the impact of policy changes on air traffic congestion. The tools support visualization of time-varying air traffic density over an area of interest using different time granularity. We use this visual analysis platform to investigate how changing the aircraft separation volume can reduce congestion while maintaining key safety requirements. The same platform can also be used as a decision aid for processing requests for unmanned aerial vehicle operations.
Lin, Jo-Fu Lotus; Silva-Pereyra, Juan; Chou, Chih-Che; Lin, Fa-Hsuan
2018-04-11
Variability in neuronal response latency has typically been attributed to random noise. Previous studies of single cells and large neuronal populations have shown that temporal variability tends to increase along the visual pathway. Inspired by these previous studies, we hypothesized that functional areas at later stages in the visual pathway of face processing would have larger variability in response latency. To test this hypothesis, we used magnetoencephalographic data collected while subjects were presented with images of human faces. Faces are known to elicit a sequence of activity from the primary visual cortex to the fusiform gyrus. Our results revealed that the fusiform gyrus showed larger variability in response latency compared to the calcarine fissure. Dynamic and spectral analyses of the latency variability indicated that the response latency in the fusiform gyrus was more variable than in the calcarine fissure between 70 ms and 200 ms after stimulus onset and between 4 Hz and 40 Hz, respectively. The sequential processing of face information from the calcarine fissure to the fusiform gyrus was detected more reliably from the size of the response variability than from the timing of the maximal response peaks. With two areas in the ventral visual pathway, we show that the variability in response latency across brain areas can be used to infer the sequence of cortical activity.
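A minimal sketch of one way to quantify single-trial latency variability for a cortical region, assuming source waveforms have already been extracted per trial; the peak-based latency definition and the 50-300 ms search window are illustrative assumptions, not the authors' exact procedure:

import numpy as np

def single_trial_latency(trials, times_ms, t_min=50, t_max=300):
    # Latency of the largest absolute response per trial within a search window.
    window = (times_ms >= t_min) & (times_ms <= t_max)
    peak_idx = np.argmax(np.abs(trials[:, window]), axis=1)
    return times_ms[window][peak_idx]

# Hypothetical single-trial source waveforms (noise only) for two regions
times = np.arange(-100, 500)                      # ms, 1 kHz sampling
rng = np.random.default_rng(2)
regions = {"calcarine": rng.normal(0, 1, (60, times.size)),
           "fusiform": rng.normal(0, 1, (60, times.size))}

for name, data in regions.items():
    latencies = single_trial_latency(data, times)
    print(name, "latency SD:", latencies.std(ddof=1), "ms")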
Developmental remodeling of corticocortical feedback circuits in ferret visual cortex.
Khalil, Reem; Levitt, Jonathan B
2014-10-01
Visual cortical areas in the mammalian brain are linked through a system of interareal feedforward and feedback connections, which presumably underlie different visual functions. We characterized the refinement of feedback projections to primary visual cortex (V1) from multiple sources in juvenile ferrets ranging in age from 4-10 weeks postnatal. We studied whether the refinement of different aspects of feedback circuitry from multiple visual cortical areas proceeds at a similar rate in all areas. We injected the neuronal tracer cholera toxin B (CTb) into V1 and mapped the areal and laminar distribution of retrogradely labeled cells in extrastriate cortex. Around the time of eye opening at 4 weeks postnatal, the retinotopic arrangement of feedback appears essentially adult-like; however, suprasylvian cortex supplies the greatest proportion of feedback, whereas area 18 supplies the greatest proportion in the adult. The density of feedback cells and the ratio of supragranular/infragranular feedback contribution declined in this period at a similar rate in all cortical areas. We also found significant feedback to V1 from layer IV of all extrastriate areas. The regularity of cell spacing, the proportion of feedback arising from layer IV, and the tangential extent of feedback in each area all remained essentially unchanged during this period, except for the infragranular feedback source in area 18, which expanded. Thus, while much of the basic pattern of cortical feedback to V1 is present before eye opening, there is major synchronous reorganization after eye opening, suggesting a crucial role for visual experience in this remodeling process. © 2014 Wiley Periodicals, Inc.
Dima, Diana C; Perry, Gavin; Singh, Krish D
2018-06-11
In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Weinstein, Joel M; Gilmore, Rick O; Shaikh, Sumera M; Kunselman, Allen R; Trescher, William V; Tashima, Lauren M; Boltz, Marianne E; McAuliffe, Matthew B; Cheung, Albert; Fesi, Jeremy D
2012-07-01
We sought to characterize visual motion processing in children with cerebral visual impairment (CVI) due to periventricular white matter damage caused by either hydrocephalus (eight individuals) or periventricular leukomalacia (PVL) associated with prematurity (11 individuals). Using steady-state visually evoked potentials (ssVEP), we measured cortical activity related to motion processing for two distinct types of visual stimuli: 'local' motion patterns thought to activate mainly primary visual cortex (V1), and 'global' or coherent patterns thought to activate higher cortical visual association areas (V3, V5, etc.). We studied three groups of children: (1) 19 children with CVI (mean age 9y 6mo [SD 3y 8mo]; 9 male; 10 female); (2) 40 neurologically and visually normal comparison children (mean age 9y 6mo [SD 3y 1mo]; 18 male; 22 female); and (3) because strabismus and amblyopia are common in children with CVI, a group of 41 children without neurological problems who had visual deficits due to amblyopia and/or strabismus (mean age 7y 8mo [SD 2y 8mo]; 28 male; 13 female). We found that the processing of global as opposed to local motion was preferentially impaired in individuals with CVI, especially for slower target velocities (p=0.028). Motion processing is impaired in children with CVI. ssVEP may provide useful and objective information about the development of higher visual function in children at risk for CVI. © The Authors. Journal compilation © Mac Keith Press 2011.
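Steady-state visually evoked responses are typically quantified as spectral power at the stimulation (tag) frequency. A minimal sketch with a simulated signal; the sampling rate, tag frequency and Hann windowing are illustrative assumptions, not parameters from the study:

import numpy as np

def power_at_frequency(signal, fs, f_target):
    # Spectral power at the tagged frequency, estimated via the FFT.
    spectrum = np.fft.rfft(signal * np.hanning(signal.size))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - f_target))
    return np.abs(spectrum[idx]) ** 2

fs, f_tag, dur = 500.0, 7.5, 4.0               # Hz, Hz, s (hypothetical values)
t = np.arange(0, dur, 1.0 / fs)
eeg = 0.8 * np.sin(2 * np.pi * f_tag * t) + np.random.default_rng(3).normal(0, 1, t.size)
print(power_at_frequency(eeg, fs, f_tag))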
Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio
2016-05-15
When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of thought content on sensory attenuation are still unknown. The present study aimed to assess whether the representational format of thoughts, such as visual imagery or inner speech, might differentially affect the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three different conditions: executing a task on the visual probe (externally oriented attention) and two conditions involving inward-turned attention, i.e. generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to the sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When the two representational formats were compared, the visual imagery condition showed stronger attenuation of sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in internal thought compared with the external task, with visual imagery showing even more alpha power than the inner speech condition. Our results show, for the first time to our knowledge, that visual attentional processing of external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thoughts. Copyright © 2016 Elsevier Inc. All rights reserved.
Neuro-inspired smart image sensor: analog Hmax implementation
NASA Astrophysics Data System (ADS)
Paindavoine, Michel; Dubois, Jérôme; Musa, Purnawarman
2015-03-01
The neuro-inspired vision approach, based on models from biology, makes it possible to reduce computational complexity. One of these models, the Hmax model, shows that the recognition of an object in the visual cortex mobilizes areas V1, V2 and V4. From the computational point of view, V1 corresponds to the stage of directional filters (for example Sobel, Gabor or wavelet filters). This information is then processed in area V2 in order to obtain local maxima. This new information is then sent to an artificial neural network. This neural processing module corresponds to area V4 of the visual cortex and is intended to categorize objects present in the scene. In order to realize autonomous vision systems (consuming only a few milliwatts) that embed such processing, we studied and realized, in 0.35 μm CMOS technology, prototypes of two image sensors that implement the V1 and V2 processing of the Hmax model.
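A minimal software sketch of the V1/V2 stages described above (directional Gabor filtering followed by local max pooling), to make the processing chain concrete; the filter sizes, orientations and pooling parameters are illustrative assumptions and say nothing about the analog CMOS implementation itself:

import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import maximum_filter

def gabor_kernel(size=11, wavelength=6.0, sigma=3.0, theta=0.0):
    # Oriented Gabor filter: a standard model of V1 directional filtering.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()

image = np.random.default_rng(4).random((64, 64))   # stand-in for one sensor frame

# "V1" stage: directional filtering at four orientations
s1 = [np.abs(convolve2d(image, gabor_kernel(theta=t), mode='same'))
      for t in np.deg2rad([0, 45, 90, 135])]

# "V2" stage (as described above): local maxima, here via max pooling per orientation
c1 = [maximum_filter(m, size=8)[::8, ::8] for m in s1]
print(c1[0].shape)   # pooled feature maps that would feed the neural-network stage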
Processing Stages Underlying Word Recognition in the Anteroventral Temporal Lobe
Halgren, Eric; Wang, Chunmao; Schomer, Donald L.; Knake, Susanne; Marinkovic, Ksenija; Wu, Julian; Ulbert, Istvan
2006-01-01
The anteroventral temporal lobe integrates visual, lexical, semantic and mnestic aspects of word-processing, through its reciprocal connections with the ventral visual stream, language areas, and the hippocampal formation. We used linear microelectrode arrays to probe population synaptic currents and neuronal firing in different cortical layers of the anteroventral temporal lobe, during semantic judgments with implicit priming, and overt word recognition. Since different extrinsic and associative inputs preferentially target different cortical layers, this method can help reveal the sequence and nature of local processing stages at a higher resolution than was previously possible. The initial response in inferotemporal and perirhinal cortices is a brief current sink beginning at ~120ms, and peaking at ~170ms. Localization of this initial sink to middle layers suggests that it represents feedforward input from lower visual areas, and simultaneously increased firing implies that it represents excitatory synaptic currents. Until ~800ms, the main focus of transmembrane current sinks alternates between middle and superficial layers, with the superficial focus becoming increasingly dominant after ~550ms. Since superficial layers are the target of local and feedback associative inputs, this suggests an alternation in predominant synaptic input between feedforward and feedback modes. Word repetition does not affect the initial perirhinal and inferotemporal middle layer sink, but does decrease later activity. Entorhinal activity begins later (~200ms), with greater apparent excitatory postsynaptic currents and multiunit activity in neocortically-projecting than hippocampal-projecting layers. In contrast to perirhinal and inferotemporal responses, entorhinal responses are larger to repeated words during memory retrieval. These results identify a sequence of physiological activation, beginning with a sharp activation from lower level visual areas carrying specific information to middle layers. This is followed by feedback and associative interactions involving upper cortical layers, which are abbreviated to repeated words. Following bottom-up and associative stages, top-down recollective processes may be driven by entorhinal cortex. Word processing involves a systematic sequence of fast feedforward information transfer from visual areas to anteroventral temporal cortex, followed by prolonged interactions of this feedforward information with local associations, and feedback mnestic information from the medial temporal lobe. PMID:16488158
Takeuchi, Tatsuto; Yoshimoto, Sanae; Shimada, Yasuhiro; Kochiyama, Takanori; Kondo, Hirohito M
2017-02-19
Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanisms of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena, motion assimilation and motion contrast, we found that, following the presentation of the same stimulus, some participants perceived motion assimilation, while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with a participant's tendency toward motion assimilation over motion contrast; however, this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that excitatory processes in this suprasensory area are important in determining which of the antagonistically perceived visual motion phenomena an individual tends to experience. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
Inter-area correlations in the ventral visual pathway reflect feature integration
Freeman, Jeremy; Donner, Tobias H.; Heeger, David J.
2011-01-01
During object perception, the brain integrates simple features into representations of complex objects. A perceptual phenomenon known as visual crowding selectively interferes with this process. Here, we use crowding to characterize a neural correlate of feature integration. Cortical activity was measured with functional magnetic resonance imaging, simultaneously in multiple areas of the ventral visual pathway (V1–V4 and the visual word form area, VWFA, which responds preferentially to familiar letters), while human subjects viewed crowded and uncrowded letters. Temporal correlations between cortical areas were lower for crowded letters than for uncrowded letters, especially between V1 and VWFA. These differences in correlation were retinotopically specific, and persisted when attention was diverted from the letters. But correlation differences were not evident when we substituted the letters with grating patches that were not crowded under our stimulus conditions. We conclude that inter-area correlations reflect feature integration and are disrupted by crowding. We propose that crowding may perturb the transformations between neural representations along the ventral pathway that underlie the integration of features into objects. PMID:21521832
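A minimal sketch of the inter-area correlation measure described above, computed on synthetic ROI time series in which the "uncrowded" pair shares more signal than the "crowded" pair; the ROI names, series length and noise levels are illustrative assumptions:

import numpy as np

def roi_correlation(ts_a, ts_b):
    # Pearson correlation between two ROI time series (e.g. V1 and VWFA).
    return np.corrcoef(ts_a, ts_b)[0, 1]

rng = np.random.default_rng(5)
n_trs = 240
shared = rng.normal(size=n_trs)

# Hypothetical BOLD time series: more shared signal in the uncrowded condition
v1_uncrowded = shared + 0.5 * rng.normal(size=n_trs)
vwfa_uncrowded = shared + 0.5 * rng.normal(size=n_trs)
v1_crowded = 0.4 * shared + rng.normal(size=n_trs)
vwfa_crowded = 0.4 * shared + rng.normal(size=n_trs)

print(roi_correlation(v1_uncrowded, vwfa_uncrowded),
      roi_correlation(v1_crowded, vwfa_crowded))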
Integration for navigation on the UMASS mobile perception lab
NASA Technical Reports Server (NTRS)
Draper, Bruce; Fennema, Claude; Rochwerger, Benny; Riseman, Edward; Hanson, Allen
1994-01-01
Integration of real-time visual procedures for use on the Mobile Perception Lab (MPL) was presented. The MPL is an autonomous vehicle designed for testing visually guided behavior. Two critical areas of focus in the system design were data storage/exchange and process control. The Intermediate Symbolic Representation (ISR3) supported data storage and exchange, and the MPL script monitor provided process control. Resource allocation, inter-process communication, and real-time control are difficult problems which must be solved in order to construct strong autonomous systems.
Artificial limb representation in amputees.
van den Heiligenberg, Fiona M Z; Orlov, Tanya; Macdonald, Scott N; Duff, Eugene P; Henderson Slater, David; Beckmann, Christian F; Johansen-Berg, Heidi; Culham, Jody C; Makin, Tamar R
2018-05-01
The human brain contains multiple hand-selective areas, in both the sensorimotor and visual systems. Could our brain repurpose neural resources, originally developed for supporting hand function, to represent and control artificial limbs? We studied individuals with congenital or acquired hand-loss (hereafter one-handers) using functional MRI. We show that the more one-handers use an artificial limb (prosthesis) in their everyday life, the stronger visual hand-selective areas in the lateral occipitotemporal cortex respond to prosthesis images. This was found even when one-handers were presented with images of active prostheses that share the functionality of the hand but not necessarily its visual features (e.g. a 'hook' prosthesis). Further, we show that daily prosthesis usage determines large-scale inter-network communication across hand-selective areas. This was demonstrated by increased resting state functional connectivity between visual and sensorimotor hand-selective areas, proportional to the intensiveness of everyday prosthesis usage. Further analysis revealed a 3-fold coupling between prosthesis activity, visuomotor connectivity and usage, suggesting a possible role for the motor system in shaping use-dependent representation in visual hand-selective areas, and/or vice versa. Moreover, able-bodied control participants who routinely observe prosthesis usage (albeit less intensively than the prosthesis users) showed significantly weaker associations between degree of prosthesis observation and visual cortex activity or connectivity. Together, our findings suggest that altered daily motor behaviour facilitates prosthesis-related visual processing and shapes communication across hand-selective areas. This neurophysiological substrate for prosthesis embodiment may inspire rehabilitation approaches to improve usage of existing substitutionary devices and aid implementation of future assistive and augmentative technologies.
Functional size of human visual area V1: a neural correlate of top-down attention.
Verghese, Ashika; Kolbe, Scott C; Anderson, Andrew J; Egan, Gary F; Vidyasagar, Trichur R
2014-06-01
Heavy demands are placed on the brain's attentional capacity when selecting a target item in a cluttered visual scene, or when reading. It is widely accepted that such attentional selection is mediated by top-down signals from higher cortical areas to early visual areas such as the primary visual cortex (V1). Further, it has also been reported that there is considerable variation in the surface area of V1. This variation may impact on either the number or specificity of attentional feedback signals and, thereby, the efficiency of attentional mechanisms. In this study, we investigated whether individual differences between humans performing attention-demanding tasks can be related to the functional area of V1. We found that those with a larger representation in V1 of the central 12° of the visual field as measured using BOLD signals from fMRI were able to perform a serial search task at a faster rate. In line with recent suggestions of the vital role of visuo-spatial attention in reading, the speed of reading showed a strong positive correlation with the speed of visual search, although it showed little correlation with the size of V1. The results support the idea that the functional size of the primary visual cortex is an important determinant of the efficiency of selective spatial attention for simple tasks, and that the attentional processing required for complex tasks like reading are to a large extent determined by other brain areas and inter-areal connections. Copyright © 2014 Elsevier Inc. All rights reserved.
Beyond the FFA: The Role of the Ventral Anterior Temporal Lobes in Face Processing
Collins, Jessica A.; Olson, Ingrid R.
2014-01-01
Extensive research has supported the existence of a specialized face-processing network that is distinct from the visual processing areas used for general object recognition. The majority of this work has been aimed at characterizing the response properties of the fusiform face area (FFA) and the occipital face area (OFA), which together are thought to constitute the core network of brain areas responsible for facial identification. Although accruing evidence has shown that face-selective patches in the ventral anterior temporal lobes (vATLs) are interconnected with the FFA and OFA, and that they play a role in facial identification, the relative contribution of these brain areas to the core face-processing network has remained unarticulated. Here we review recent research critically implicating the vATLs in face perception and memory. We propose that current models of face processing should be revised such that the ventral anterior temporal lobes serve a centralized role in the visual face-processing network. We speculate that a hierarchically organized system of face processing areas extends bilaterally from the inferior occipital gyri to the vATLs, with facial representations becoming increasingly complex and abstracted from low-level perceptual features as they move forward along this network. The anterior temporal face areas may serve as the apex of this hierarchy, instantiating the final stages of face recognition. We further argue that the anterior temporal face areas are ideally suited to serve as an interface between face perception and face memory, linking perceptual representations of individual identity with person-specific semantic knowledge. PMID:24937188
Haptic perception and body representation in lateral and medial occipito-temporal cortices.
Costantini, Marcello; Urgesi, Cosimo; Galati, Gaspare; Romani, Gian Luca; Aglioti, Salvatore M
2011-04-01
Although vision is the primary sensory modality that humans and other primates use to identify objects in the environment, we can recognize crucial object features (e.g., shape, size) using the somatic modality. Previous studies have shown that the occipito-temporal areas dedicated to the visual processing of object forms, faces and bodies also show category-selective responses when the preferred stimuli are haptically explored out of view. Visual processing of human bodies engages specific areas in lateral (extrastriate body area, EBA) and medial (fusiform body area, FBA) occipito-temporal cortex. This study aimed at exploring the relative involvement of EBA and FBA in the haptic exploration of body parts. During fMRI scanning, participants were asked to haptically explore either real-size fake body parts or objects. We found a selective activation of right and left EBA, but not of right FBA, while participants haptically explored body parts as compared to real objects. This suggests that EBA may integrate visual body representations with somatosensory information regarding body parts and form a multimodal representation of the body. Furthermore, both left and right EBA showed a comparable level of body selectivity during haptic perception and visual imagery. However, right but not left EBA was more activated during haptic exploration than visual imagery of body parts, ruling out that the response to haptic body exploration was entirely due to the use of visual imagery. Overall, the results point to the existence of different multimodal body representations in the occipito-temporal cortex which are activated during perception and imagery of human body parts. Copyright © 2011 Elsevier Ltd. All rights reserved.
Laramée, Marie-Eve; Smolders, Katrien; Hu, Tjing-Tjing; Bronchti, Gilles; Boire, Denis; Arckens, Lutgarde
2016-01-01
In blind individuals, visually deprived occipital areas are activated by non-visual stimuli. The extent of this cross-modal activation depends on the age at onset of blindness. Cross-modal inputs have access to several anatomical pathways to reactivate deprived visual areas. Ectopic cross-modal subcortical connections have been shown in anophthalmic animals but not in animals deprived of sight at a later age. Direct and indirect cross-modal cortical connections toward visual areas could also be involved, yet the number of neurons implicated is similar between blind mice and sighted controls. Changes at the axon terminal, dendritic spine or synaptic level are therefore expected upon loss of visual inputs. Here, the proteome of V1, V2M and V2L from P0-enucleated, anophthalmic and sighted mice, sharing a common genetic background (C57BL/6J x ZRDCT/An), was investigated by 2-D DIGE and Western analyses to identify molecular adaptations to enucleation and/or anophthalmia. Few proteins were differentially expressed in enucleated or anophthalmic mice in comparison to sighted mice. The loss of sight affected three pathways: metabolism, synaptic transmission and morphogenesis. Most changes were detected in V1, followed by V2M. Overall, cross-modal adaptations could be promoted in both models of early blindness but not through the exact same molecular strategy. A lower metabolic activity observed in visual areas of blind mice suggests that even if cross-modal inputs reactivate visual areas, they could remain suboptimally processed.
A New System To Support Knowledge Discovery: Telemakus.
ERIC Educational Resources Information Center
Revere, Debra; Fuller, Sherrilynne S.; Bugni, Paul F.; Martin, George M.
2003-01-01
The Telemakus System builds on the areas of concept representation, schema theory, and information visualization to enhance knowledge discovery from scientific literature. This article describes the underlying theories and an overview of a working implementation designed to enhance the knowledge discovery process through retrieval, visual and…
Goebel, Rainer
2018-01-01
Visual perception includes ventral and dorsal stream processes. However, it is still unclear whether the former is predominantly related to conscious and the latter to nonconscious visual perception as argued in the literature. In this study upright and inverted body postures were rendered either visible or invisible under continuous flash suppression (CFS), while brain activity of human participants was measured with functional MRI (fMRI). Activity in the ventral body-sensitive areas was higher during visible conditions. In comparison, activity in the posterior part of the bilateral intraparietal sulcus (IPS) showed a significant interaction of stimulus orientation and visibility. Our results provide evidence that dorsal stream areas are less associated with visual awareness. PMID:29445766
Müller, Matthias M; Andersen, Søren K; Hindi Attar, Catherine
2011-11-02
A central controversy in the field of attention is how the brain deals with emotional distractors and to what extent they capture attentional processing resources reflexively due to their inherent significance for guidance of adaptive behavior and survival. In particular, the time course of competitive interactions in early visual areas, and whether masking of briefly presented emotional stimuli can inhibit the biasing of processing resources in these areas, is currently unknown. In human subjects, we recorded frequency-tagged potentials evoked by a flickering target detection task presented in the foreground of briefly presented emotional or neutral pictures that were followed by a mask. We observed greater competition for processing resources in early visual cortical areas with briefly presented emotional relative to neutral pictures ~275 ms after picture offset. This was paralleled by a reduction of target detection rates in trials with emotional pictures ~400 ms after picture offset. Our finding that briefly presented emotional distractors are able to bias attention well after their offset provides evidence for a rather slow feedback or reentrant neural competition mechanism for emotional distractors that continues after the offset of the emotional stimulus.
Improved emotional conflict control triggered by the processing priority of negative emotion.
Yang, Qian; Wang, Xiangpeng; Yin, Shouhang; Zhao, Xiaoyue; Tan, Jinfeng; Chen, Antao
2016-04-18
The prefrontal cortex is responsible for emotional conflict resolution, and this control mechanism is affected by the emotional valence of distracting stimuli. In the present study, we investigated the effects of negative and positive stimuli on emotional conflict control using a face-word Stroop task in combination with functional brain imaging. Emotional conflict was absent in the negative face context, in accordance with the absence of activation in areas related to emotional face processing (fusiform face area, middle temporal/occipital gyrus). Importantly, these visual areas negatively coupled with the dorsolateral prefrontal cortex (DLPFC). However, a significant emotional conflict was observed in the positive face context; this effect was accompanied by activation in areas associated with emotional face processing and in the default mode network (DMN), and here the DLPFC negatively coupled mainly with the DMN rather than with visual areas. These results suggest that the conflict control mechanism operates differently for negative and positive faces: it is implemented more efficiently in the negative face condition, whereas it is more devoted to inhibiting internal interference in the positive face condition. This study thus provides a plausible mechanism of emotional conflict resolution, in which the rapid pathway for negative emotion processing efficiently triggers control mechanisms to preventively resolve emotional conflict.
Heading Tuning in Macaque Area V6.
Fan, Reuben H; Liu, Sheng; DeAngelis, Gregory C; Angelaki, Dora E
2015-12-16
Cortical areas, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the ventral intraparietal area (VIP), have been shown to integrate visual and vestibular self-motion signals. Area V6 is interconnected with areas MSTd and VIP, allowing for the possibility that V6 also integrates visual and vestibular self-motion cues. An alternative hypothesis in the literature is that V6 does not use these sensory signals to compute heading but instead discounts self-motion signals to represent object motion. However, the responses of V6 neurons to visual and vestibular self-motion cues have never been studied, thus leaving the functional roles of V6 unclear. We used a virtual reality system to examine the 3D heading tuning of macaque V6 neurons in response to optic flow and inertial motion stimuli. We found that the majority of V6 neurons are selective for heading defined by optic flow. However, unlike areas MSTd and VIP, V6 neurons are almost universally unresponsive to inertial motion in the absence of optic flow. We also explored the spatial reference frames of heading signals in V6 by measuring heading tuning for different eye positions, and we found that the visual heading tuning of most V6 cells was eye-centered. Similar to areas MSTd and VIP, the population of V6 neurons was best able to discriminate small variations in heading around forward and backward headings. Our findings support the idea that V6 is involved primarily in processing visual motion signals and does not appear to play a role in visual-vestibular integration for self-motion perception. To understand how we successfully navigate our world, it is important to understand which parts of the brain process cues used to perceive our direction of self-motion (i.e., heading). Cortical area V6 has been implicated in heading computations based on human neuroimaging data, but direct measurements of heading selectivity in individual V6 neurons have been lacking. We provide the first demonstration that V6 neurons carry 3D visual heading signals, which are represented in an eye-centered reference frame. In contrast, we found almost no evidence for vestibular heading signals in V6, indicating that V6 is unlikely to contribute to multisensory integration of heading signals, unlike other cortical areas. These findings provide important constraints on the roles of V6 in self-motion perception. Copyright © 2015 the authors 0270-6474/15/3516303-12$15.00/0.
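A minimal sketch of one common way to summarize a neuron's heading tuning, taking the response-weighted vector sum across tested azimuths as the preferred heading; the eight headings and firing rates below are invented for illustration, and the authors' full 3D analysis is not reproduced here:

import numpy as np

def preferred_heading(azimuths_deg, responses):
    # Preferred heading (azimuth) as the direction of the response-weighted
    # vector sum across tested headings.
    az = np.deg2rad(azimuths_deg)
    x = np.sum(responses * np.cos(az))
    y = np.sum(responses * np.sin(az))
    return np.rad2deg(np.arctan2(y, x)) % 360

# Hypothetical firing rates of one V6 neuron for eight optic-flow headings
azimuths = np.arange(0, 360, 45)
rates = np.array([12.0, 18.5, 30.2, 22.1, 10.4, 6.3, 5.1, 8.0])   # spikes/s
print(preferred_heading(azimuths, rates))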
Frontal and parietal theta burst TMS impairs working memory for visual-spatial conjunctions
Morgan, Helen M.; Jackson, Margaret C.; van Koningsbruggen, Martijn G.; Shapiro, Kimron L.; Linden, David E.J.
2013-01-01
In tasks that selectively probe visual or spatial working memory (WM), frontal and posterior cortical areas show a segregation, with dorsal areas preferentially involved in spatial (e.g. location) WM and ventral areas in visual (e.g. object identity) WM. In a previous fMRI study [1], we showed that right parietal cortex (PC) was more active during WM for orientation, whereas left inferior frontal gyrus (IFG) was more active during colour WM. During WM for colour-orientation conjunctions, activity in these areas was intermediate between the levels of activity for the single-task preferred and non-preferred information. To examine whether these specialised areas play a critical role in coordinating visual and spatial WM to perform a conjunction task, we used theta burst transcranial magnetic stimulation (TMS) to induce a functional deficit. Compared to sham stimulation, TMS to right PC or left IFG selectively impaired WM for conjunctions but not single features. This is consistent with findings from visual search paradigms, in which frontal and parietal TMS selectively affects search for conjunctions compared to single features, and with combined TMS and functional imaging work suggesting that parietal and frontal regions are functionally coupled in tasks requiring integration of visual and spatial information. Our results thus elucidate mechanisms by which the brain coordinates spatially segregated processing streams and have implications beyond the field of working memory. PMID:22483548
A low-cost and versatile system for projecting wide-field visual stimuli within fMRI scanners
Greco, V.; Frijia, F.; Mikellidou, K.; Montanaro, D.; Farini, A.; D’Uva, M.; Poggi, P.; Pucci, M.; Sordini, A.; Morrone, M. C.; Burr, D. C.
2016-01-01
We have constructed and tested a custom-made magnetic-imaging-compatible visual projection system designed to project on a very wide visual field (~80°). A standard projector was modified with a coupling lens, projecting images into the termination of an image fiber. The other termination of the fiber was placed in the 3-T scanner room with a projection lens, which projected the images relayed by the fiber onto a screen over the head coil, viewed by a participant wearing magnifying goggles. To validate the system, wide-field stimuli were presented in order to identify retinotopic visual areas. The results showed that this low-cost and versatile optical system may be a valuable tool to map visual areas in the brain that process peripheral receptive fields. PMID:26092392
Sleepiness induced by sleep-debt enhanced amygdala activity for subliminal signals of fear.
Motomura, Yuki; Kitamura, Shingo; Oba, Kentaro; Terasawa, Yuri; Enomoto, Minori; Katayose, Yasuko; Hida, Akiko; Moriguchi, Yoshiya; Higuchi, Shigekazu; Mishima, Kazuo
2014-08-19
Emotional information is frequently processed below the level of consciousness, where subcortical regions of the brain are thought to play an important role. In the absence of conscious visual experience, patients with visual cortex damage discriminate the valence of emotional expression. Even in healthy individuals, a subliminal mechanism can be utilized to compensate for a functional decline in visual cognition of various causes such as strong sleepiness. In this study, sleep deprivation was simulated in healthy individuals to investigate functional alterations in the subliminal processing of emotional information caused by reduced conscious visual cognition and attention due to an increase in subjective sleepiness. Fourteen healthy adult men participated in a within-subject crossover study consisting of a 5-day session of sleep debt (SD, 4-h sleep) and a 5-day session of sleep control (SC, 8-h sleep). On the last day of each session, participants performed an emotional face-viewing task that included backward masking of nonconscious presentations during magnetic resonance scanning. Finally, data from eleven participants who were unaware of nonconscious face presentations were analyzed. In fear contrasts, subjective sleepiness was significantly positively correlated with activity in the amygdala, ventromedial prefrontal cortex, hippocampus, and insular cortex, and was significantly negatively correlated with the secondary and tertiary visual areas and the fusiform face area. In fear-neutral contrasts, subjective sleepiness was significantly positively correlated with activity of the bilateral amygdala. Further, changes in subjective sleepiness (the difference between the SC and SD sessions) were correlated with both changes in amygdala activity and functional connectivity between the amygdala and superior colliculus in response to subliminal fearful faces. Sleepiness induced functional decline in the brain areas involved in conscious visual cognition of facial expressions, but also enhanced subliminal emotional processing via superior colliculus as represented by activity in the amygdala. These findings suggest that an evolutionally old and auxiliary subliminal hazard perception system is activated as a compensatory mechanism when conscious visual cognition is impaired. In addition, enhancement of subliminal emotional processing might cause involuntary emotional instability during sleep debt through changes in emotional response to or emotional evaluation of external stimuli.
Sequential then Interactive Processing of Letters and Words in the Left Fusiform Gyrus
Thesen, Thomas; McDonald, Carrie R.; Carlson, Chad; Doyle, Werner; Cash, Syd; Sherfey, Jason; Felsovalyi, Olga; Girard, Holly; Barr, William; Devinsky, Orrin; Kuzniecky, Ruben; Halgren, Eric
2013-01-01
Despite decades of cognitive, neuropsychological, and neuroimaging studies, it is unclear if letters are identified prior to word-form encoding during reading, or if letters and their combinations are encoded simultaneously and interactively. Here, using functional magnetic resonance imaging, we show that a ‘letter-form’ area (responding more to consonant strings than false fonts) can be distinguished from an immediately anterior ‘visual word-form area’ in ventral occipitotemporal cortex (responding more to words than consonant strings). Letter-selective magnetoencephalographic responses begin in the letter-form area ~60ms earlier than word-selective responses in the word-form area. Local field potentials confirm the latency and location of letter-selective responses. This area shows increased high gamma power for ~400ms, and strong phase-locking with more anterior areas supporting lexico-semantic processing. These findings suggest that during reading, visual stimuli are first encoded as letters before their combinations are encoded as words. Activity then rapidly spreads anteriorly, and the entire network is engaged in sustained integrative processing. PMID:23250414
Simultaneous selection by object-based attention in visual and frontal cortex
Pooresmaeili, Arezoo; Poort, Jasper; Roelfsema, Pieter R.
2014-01-01
Models of visual attention hold that top-down signals from frontal cortex influence information processing in visual cortex. It is unknown whether situations exist in which visual cortex actively participates in attentional selection. To investigate this question, we simultaneously recorded neuronal activity in the frontal eye fields (FEF) and primary visual cortex (V1) during a curve-tracing task in which attention shifts are object-based. We found that accurate performance was associated with similar latencies of attentional selection in both areas and that the latency in both areas increased if the task was made more difficult. The amplitude of the attentional signals in V1 saturated early during a trial, whereas these selection signals kept increasing for a longer time in FEF, until the moment of an eye movement, as if FEF integrated attentional signals present in early visual cortex. In erroneous trials, we observed an interareal latency difference because FEF selected the wrong curve before V1 and imposed its erroneous decision onto visual cortex. The neuronal activity in visual and frontal cortices was correlated across trials, and this trial-to-trial coupling was strongest for the attended curve. These results imply that selective attention relies on reciprocal interactions within a large network of areas that includes V1 and FEF. PMID:24711379
A multi-pathway hypothesis for human visual fear signaling
Silverstein, David N.; Ingvar, Martin
2015-01-01
A hypothesis is proposed for five visual fear signaling pathways in humans, based on an analysis of anatomical connectivity from primate studies and human functional connectivity and tractography from brain imaging studies. Earlier work has identified possible subcortical and cortical fear pathways known as the “low road” and “high road,” which arrive at the amygdala independently. In addition to a subcortical pathway, we propose four cortical signaling pathways in humans along the visual ventral stream. All four of these traverse the LGN to the visual cortex (VC) and branch off at the inferior temporal area, with one projecting directly to the amygdala; another traversing the orbitofrontal cortex; and two others passing through the parietal and then prefrontal cortex, one excitatory pathway via the ventral-medial area and one regulatory pathway via the ventral-lateral area. These pathways have progressively longer propagation latencies and may have progressively evolved with brain development to take advantage of higher-level processing. Using the anatomical path lengths and latency estimates for each of these five pathways, predictions are made for the relative processing times at selected ROIs and arrival at the amygdala, based on the presentation of a fear-relevant visual stimulus. Partial verification of the temporal dynamics of this hypothesis might be accomplished using experimental MEG analysis. Possible experimental protocols are suggested. PMID:26379513
Jonkman, L M; Kenemans, J L; Kemner, C; Verbaten, M N; van Engeland, H
2004-07-01
This study was aimed at investigating whether attention-deficit hyperactivity disorder (ADHD) children suffer from specific early selective attention deficits in the visual modality with the aid of event-related brain potentials (ERPs). Furthermore, brain source localization was applied to identify brain areas underlying possible deficits in selective visual processing in ADHD children. A two-channel visual color selection task was administered to 18 ADHD and 18 control subjects in the age range of 7-13 years and ERP activity was derived from 30 electrodes. ADHD children exhibited lower perceptual sensitivity scores resulting in poorer target selection. The ERP data suggested an early selective-attention deficit as manifested in smaller frontal positive activity (frontal selection positivity; FSP) in ADHD children around 200 ms whereas later occipital and fronto-central negative activity (OSN and N2b; 200-400 ms latency) appeared to be unaffected. Source localization explained the FSP by posterior-medial equivalent dipoles in control subjects, which may reflect the contribution of numerous surrounding areas. ADHD children have problems with selective visual processing that might be caused by a specific early filtering deficit (absent FSP) occurring around 200 ms. The neural sources underlying these problems have to be further identified. Source localization also suggested abnormalities in the 200-400 ms time range, pertaining to the distribution of attention-modulated activity in lateral frontal areas.
Early Visual Word Processing Is Flexible: Evidence from Spatiotemporal Brain Dynamics.
Chen, Yuanyuan; Davis, Matthew H; Pulvermüller, Friedemann; Hauk, Olaf
2015-09-01
Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions (in the vicinity of the putative visual word form area) around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.
Changes of Visual Pathway and Brain Connectivity in Glaucoma: A Systematic Review
Nuzzi, Raffaele; Dallorto, Laura; Rolle, Teresa
2018-01-01
Background: Glaucoma is a leading cause of irreversible blindness worldwide. The increasing interest in the involvement of the cortical visual pathway in glaucomatous patients is due to its implications for recent therapies, such as neuroprotection and neuroregeneration. Objective: In this review, we outline the current understanding of brain structural, functional, and metabolic changes detected with the modern techniques of neuroimaging in glaucomatous subjects. Methods: We screened MEDLINE, EMBASE, CINAHL, CENTRAL, LILACS, Trip Database, and NICE for original contributions published until 31 October 2017. Studies with at least six patients affected by any type of glaucoma were considered. We included studies using the following neuroimaging techniques: functional Magnetic Resonance Imaging (fMRI), resting-state fMRI (rs-fMRI), magnetic resonance spectroscopy (MRS), voxel-based morphometry (VBM), surface-based morphometry (SBM), diffusion tensor MRI (DTI). Results: Of a total of 1,901 studies screened, 56 case series with a total of 2,381 patients were included. Evidence of a neurodegenerative process in glaucomatous patients was found both within and beyond the visual system. Structural alterations in visual cortex (mainly reduced cortex thickness and volume) have been demonstrated with SBM and VBM; these changes were not limited to primary visual cortex but also involved association visual areas. Other brain regions, associated with visual function, demonstrated some degree of increased or decreased gray matter volume. Functional and metabolic abnormalities were found within the primary visual cortex in all studies with fMRI and MRS. Studies with rs-fMRI found disrupted connectivity between the primary and higher visual cortex and between visual cortex and associative visual areas in the task-free state of glaucomatous patients. Conclusions: This review contributes to a better understanding of brain abnormalities in glaucoma. It may stimulate further speculation about brain plasticity at a later age and therapeutic strategies, such as the prevention of cortical degeneration in patients with glaucoma. Structural, functional, and metabolic neuroimaging methods provided evidence of changes throughout the visual pathway in glaucomatous patients. Other brain areas, not directly involved in the processing of visual information, also showed alterations. PMID:29896087
Dynamic Stimuli And Active Processing In Human Visual Perception
NASA Astrophysics Data System (ADS)
Haber, Ralph N.
1990-03-01
Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted to newer ones that utilize dynamic definitions of stimulation, and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.
Neural correlates of individual performance differences in resolving perceptual conflict.
Labrenz, Franziska; Themann, Maria; Wascher, Edmund; Beste, Christian; Pfleiderer, Bettina
2012-01-01
Attentional mechanisms are a crucial prerequisite to organize behavior. Most situations may be characterized by a 'competition' between salient, but irrelevant stimuli and less salient, relevant stimuli. In such situations top-down and bottom-up mechanisms interact with each other. In the present fMRI study, we examined how interindividual differences in resolving situations of perceptual conflict are reflected in brain networks mediating attentional selection. To do so, we employed a change detection task in which subjects had to detect luminance changes in the presence and absence of competing distractors. The results show that good performers presented increased activation in the orbitofrontal cortex (BA 11), anterior cingulate (BA 25), inferior parietal lobule (BA 40) and visual areas V2 and V3 but decreased activation in BA 39. This suggests that areas mediating top-down attentional control are more strongly activated in this group. Increased activity in visual areas reflects distinct neuronal enhancement relating to selective attentional mechanisms in order to solve the perceptual conflict. In contrast to good performers, poor performers activated the left inferior parietal lobule (BA 39), while fronto-parietal and visual regions were continuously deactivated, suggesting that poor performers perceive stronger conflict than good performers. Moreover, the suppression of neural activation in visual areas might indicate a strategy of poor performers to inhibit the processing of the irrelevant non-target feature. These results indicate that high sensitivity in perceptual areas and increased attentional control led to less conflict in stimulus processing and consequently to higher performance in competitive attentional selection.
Homman-Ludiye, Jihane; Bourne, James A.
2014-01-01
The integration of the visual stimulus takes place at the level of the neocortex, which is organized into anatomically distinct and functionally unique areas. Primates, including humans, are heavily dependent on vision: approximately 50% of their neocortical surface is dedicated to visual processing, and they possess many more visual areas than any other mammal, making them the model of choice for studying visual cortical arealisation. However, in order to identify the mechanisms responsible for patterning the developing neocortex and specifying area identity, as well as to elucidate the events that have enabled the evolution of the complex primate visual cortex, it is essential to gain access to the cortical maps of alternative species. To this end, species including the mouse have driven the identification of cellular markers with area-specific expression profiles, the development of new tools to label connections, and technological advances in imaging techniques that enable monitoring of cortical activity in behaving animals. In this review we present non-primate species that have contributed to elucidating the evolution and development of the visual cortex. We describe the current understanding of the mechanisms supporting the establishment of areal borders during development, gained mainly in the mouse thanks to the availability of genetically modified lines, as well as the limitations of the mouse model and the need for alternative species. PMID:25071460
Kauffmann, Louise; Chauvin, Alan; Pichat, Cédric; Peyrin, Carole
2015-10-01
According to current models of visual perception, scenes are processed in terms of spatial frequencies following a predominantly coarse-to-fine processing sequence. Low spatial frequencies (LSF) reach high-order areas rapidly in order to activate plausible interpretations of the visual input. This triggers top-down facilitation that guides subsequent processing of high spatial frequencies (HSF) in lower-level areas such as the inferotemporal and occipital cortices. However, dynamic interactions underlying top-down influences on the occipital cortex have never been systematically investigated. The present fMRI study aimed to further explore the neural bases and effective connectivity underlying coarse-to-fine processing of scenes, particularly the role of the occipital cortex. We used sequences of six filtered scenes as stimuli depicting coarse-to-fine or fine-to-coarse processing of scenes. Participants performed a categorization task on these stimuli (indoor vs. outdoor). Firstly, we showed that coarse-to-fine (compared to fine-to-coarse) sequences elicited stronger activation in the inferior frontal gyrus (in the orbitofrontal cortex), the inferotemporal cortex (in the fusiform and parahippocampal gyri), and the occipital cortex (in the cuneus). Dynamic causal modeling (DCM) was then used to infer effective connectivity between these regions. DCM results revealed that coarse-to-fine processing resulted in increased connectivity from the occipital cortex to the inferior frontal gyrus and from the inferior frontal gyrus to the inferotemporal cortex. Critically, we also observed an increase in connectivity strength from the inferior frontal gyrus to the occipital cortex, suggesting that top-down influences from frontal areas may guide processing of incoming signals. The present results support current models of visual perception and refine them by emphasizing the role of the occipital cortex as a cortical site for feedback projections in the neural network underlying coarse-to-fine processing of scenes. Copyright © 2015 Elsevier Inc. All rights reserved.
Campana, Florence; Rebollo, Ignacio; Urai, Anne; Wyart, Valentin; Tallon-Baudry, Catherine
2016-05-11
The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. Is information encoded at different levels of the visual system (local details in low-level areas vs global shapes in high-level areas) equally likely to become conscious? We designed new hierarchical stimuli and provide the first empirical evidence based on behavioral and MEG data that global information encoded at high levels of the visual hierarchy dominates perception. This result held both in the presence and in the absence of task demands. The preferential emergence of percepts at high levels can account for two properties of conscious vision, namely, the dominance of global percepts and the feeling of visual richness reported independently of the perception of local details. Copyright © 2016 the authors 0270-6474/16/365200-14$15.00/0.
Pre-Processes for Urban Areas Detection in SAR Images
NASA Astrophysics Data System (ADS)
Altay Açar, S.; Bayır, Ş.
2017-11-01
In this study, pre-processing steps for urban area detection in synthetic aperture radar (SAR) images are examined. These steps are image smoothing, thresholding, and determination of white-coloured regions. Image smoothing is carried out to remove noise, then thresholding is applied to obtain a binary image. Finally, candidate urban areas are detected by determining the white-coloured regions. All pre-processing steps are applied using the developed software. Two different SAR images acquired by TerraSAR-X are used in the experimental study. The obtained results are presented visually.
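A minimal sketch of the three pre-processing stages described in this abstract (smoothing, thresholding, extraction of white regions), written here with OpenCV as an assumption; the function names, filter choices, and parameter values are illustrative and are not taken from the authors' software.

```python
import cv2
import numpy as np

def candidate_urban_regions(sar_image_path, min_area=500):
    """Sketch of the pre-processing chain: smoothing to suppress speckle,
    thresholding to a binary image, and extraction of bright ("white")
    connected regions as candidate urban areas. Parameter values are
    illustrative assumptions, not the authors' settings."""
    img = cv2.imread(sar_image_path, cv2.IMREAD_GRAYSCALE)

    # 1. Image smoothing: median filtering is a common choice for SAR speckle.
    smoothed = cv2.medianBlur(img, 5)

    # 2. Thresholding: Otsu's method selects a global threshold automatically.
    _, binary = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 3. White-coloured region determination: keep bright connected components
    #    that are large enough to be plausible urban areas.
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    candidates = [
        stats[i, cv2.CC_STAT_LEFT:cv2.CC_STAT_HEIGHT + 1]  # [left, top, width, height]
        for i in range(1, n_labels)
        if stats[i, cv2.CC_STAT_AREA] >= min_area
    ]
    return binary, candidates
```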
The role of primary auditory and visual cortices in temporal processing: A tDCS approach.
Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F
2016-10-15
Many studies have shown that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study is to investigate, with the application of tDCS over V1 and A1, the specific role of primary sensory cortices (either visual or auditory) in temporal processing. Forty-eight university students were included in the study. Twenty-four participants were stimulated over A1 and 24 participants were stimulated over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations lasting 300 ms (short) and 900 ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration, but we observed higher temporal variability under anodic stimulation compared to sham and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an under-estimation of perceived duration and higher variability were observed in the visual compared to the auditory modality. Our results showed more variability of visual temporal processing under tDCS stimulation. These results suggest a modality-independent role of A1 in temporal processing and a modality-specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Ragan, Janet M.; Ragan, Tillman J.
1982-01-01
Briefly summarizes the history of neurolinguistic programming, which set out to model elements and processes of effective communication and to reduce these to formulas that can be taught to others. Potential areas of inquiry for neurolinguistic programmers that should be of concern to visual literacists are discussed. (MBR)
Evolution of crossmodal reorganization of the voice area in cochlear-implanted deaf patients.
Rouger, Julien; Lagleyre, Sébastien; Démonet, Jean-François; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal
2012-08-01
Psychophysical and neuroimaging studies in both animal and human subjects have clearly demonstrated that cortical plasticity following sensory deprivation leads to a brain functional reorganization that favors the spared modalities. In postlingually deaf patients, the use of a cochlear implant (CI) allows a recovery of the auditory function, which will probably counteract the cortical crossmodal reorganization induced by hearing loss. To study the dynamics of such reversed crossmodal plasticity, we designed a longitudinal neuroimaging study involving the follow-up of 10 postlingually deaf adult CI users engaged in a visual speechreading task. While speechreading activates Broca's area in normally hearing subjects (NHS), the activity level elicited in this region in CI patients is abnormally low and increases progressively with post-implantation time. Furthermore, speechreading in CI patients induces abnormal crossmodal activations in right anterior regions of the superior temporal cortex normally devoted to processing human voice stimuli (temporal voice-sensitive areas-TVA). These abnormal activity levels diminish with post-implantation time and tend towards the levels observed in NHS. First, our study revealed that the neuroplasticity after cochlear implantation involves not only auditory but also visual and audiovisual speech processing networks. Second, our results suggest that during deafness, the functional links between cortical regions specialized in face and voice processing are reallocated to support speech-related visual processing through cross-modal reorganization. Such reorganization allows a more efficient audiovisual integration of speech after cochlear implantation. These compensatory sensory strategies are later completed by the progressive restoration of the visuo-audio-motor speech processing loop, including Broca's area. Copyright © 2011 Wiley Periodicals, Inc.
The Role of Pulvinar in the Transmission of Information in the Visual Hierarchy
Cortes, Nelson; van Vreeswijk, Carl
2012-01-01
Visual receptive field (RF) attributes in the visual cortex of primates have been explained mainly in terms of cortical connections: visual RFs progress from simple to complex through cortico-cortical pathways from lower to higher levels in the visual hierarchy. This feedforward flow of information is paired with top-down processes through the feedback pathway. Although the hierarchical organization explains the spatial properties of RFs, it is unclear how a non-linear transmission of activity through the visual hierarchy can yield smooth contrast response functions at all levels of the hierarchy. Depending on the gain, non-linear transfer functions create either a bimodal response to contrast, or no contrast dependence of the response, in the highest level of the hierarchy. One possible mechanism to regulate this transmission of visual contrast information from low to high levels involves an external component that shortcuts the flow of information through the hierarchy. A candidate for this shortcut is the pulvinar nucleus of the thalamus. To investigate the representation of stimulus contrast, a hierarchical model network of ten cortical areas is examined. In each level of the network, the activity from the previous layer is integrated and then non-linearly transmitted to the next level. The arrangement of interactions creates a gradient from simple to complex RFs of increasing size as one moves from lower to higher cortical levels. The visual input is modeled as a Gaussian random input, whose width codes for the contrast. This input is applied to the first area. The output activity ratio among different contrast values is analyzed for the last level to assess sensitivity to contrast and contrast-invariant tuning. For a purely cortical system, the output of the last area can be approximately contrast invariant, but the sensitivity to contrast is poor. To account for an alternative visual processing pathway, non-reciprocal connections from and to a parallel pulvinar-like structure of nine areas are coupled to the system. Compared to the pure feedforward model, the cortico-pulvino-cortical output shows much greater sensitivity to contrast and a similar level of contrast invariance of the tuning. PMID:22654750
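A toy numerical sketch of the kind of feedforward cascade with a pulvinar-like shortcut that this abstract describes; the sigmoid transfer function, gain values, and coupling weight below are assumptions for illustration and are not taken from the authors' model.

```python
import numpy as np

def sigmoid(x, gain=4.0, threshold=0.5):
    """Non-linear transfer function applied at each cortical stage (illustrative)."""
    return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))

def run_hierarchy(contrast, n_areas=10, pulvinar_gain=0.3, rng=None):
    """Propagate a contrast-coded input through a cascade of n_areas stages.
    `contrast` sets the width (standard deviation) of the Gaussian random
    input, as in the abstract. All parameter values are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    # Gaussian random input whose width (std) codes stimulus contrast.
    visual_input = np.abs(rng.normal(0.0, contrast, size=100)).mean()

    activity = visual_input
    for level in range(n_areas):
        # Cortical route: integrate the previous stage and pass it through the
        # non-linearity.
        cortical_drive = activity
        # Pulvinar-like shortcut: a copy of the input re-enters at every level
        # (set pulvinar_gain=0 to recover the purely cortical cascade).
        pulvinar_drive = pulvinar_gain * visual_input
        activity = sigmoid(cortical_drive + pulvinar_drive)
    return activity

# Example: output of the last area for low, medium, and high contrast.
for c in (0.1, 0.4, 0.8):
    print(c, run_hierarchy(c, rng=np.random.default_rng(0)))
```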
Neural Signatures of Stimulus Features in Visual Working Memory—A Spatiotemporal Approach
Jackson, Margaret C.; Klein, Christoph; Mohr, Harald; Shapiro, Kimron L.; Linden, David E. J.
2010-01-01
We examined the neural signatures of stimulus features in visual working memory (WM) by integrating functional magnetic resonance imaging (fMRI) and event-related potential data recorded during mental manipulation of colors, rotation angles, and color–angle conjunctions. The N200, negative slow wave, and P3b were modulated by the information content of WM, and an fMRI-constrained source model revealed a progression in neural activity from posterior visual areas to higher order areas in the ventral and dorsal processing streams. Color processing was associated with activity in inferior frontal gyrus during encoding and retrieval, whereas angle processing involved right parietal regions during the delay interval. WM for color–angle conjunctions did not involve any additional neural processes. The finding that different patterns of brain activity underlie WM for color and spatial information is consistent with ideas that the ventral/dorsal “what/where” segregation of perceptual processing influences WM organization. The absence of characteristic signatures of conjunction-related brain activity, which was generally intermediate between the 2 single conditions, suggests that conjunction judgments are based on the coordinated activity of these 2 streams. PMID:19429863
Neural Mechanisms of Cortical Motion Computation Based on a Neuromorphic Sensory System
Abdul-Kreem, Luma Issa; Neumann, Heiko
2015-01-01
The visual cortex analyzes motion information along hierarchically arranged visual areas that interact through bidirectional interconnections. This work suggests a bio-inspired visual model focusing on the interactions of the cortical areas, in which new mechanisms of feedforward and feedback processing are introduced. The model uses a neuromorphic vision sensor (silicon retina) that simulates the spike-generation functionality of the biological retina. Our model takes into account two main model visual areas, namely V1 and MT, with different feature selectivities. The initial motion is estimated in model area V1 using spatiotemporal filters to locally detect the direction of motion. Here, we adapt the filtering scheme originally suggested by Adelson and Bergen to make it consistent with the spike representation of the dynamic vision sensor (DVS). The responses of area V1 are weighted and pooled by area MT cells which are selective to different velocities, i.e. direction and speed. Such feature selectivity is here derived from compositions of activities in the spatio-temporal domain and by integrating over larger space-time regions (receptive fields). In order to account for the bidirectional coupling of cortical areas, we match properties of the feature selectivity in both areas for feedback processing. For such linkage we integrate the responses over different speeds along a particular preferred direction. Normalization of activities is carried out over the spatial as well as the feature domains to balance the activities of individual neurons in model areas V1 and MT. Our model was tested using different stimuli that moved in different directions. The results reveal that the error margin between the estimated motion and the synthetic ground truth is decreased in area MT compared with the initial estimate in area V1. In addition, the modulated V1 cell activations show an enhancement of the initial motion estimation that is steered by feedback signals from MT cells. PMID:26554589
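A compact sketch of the kind of spatiotemporal (motion-energy) filtering this model builds on, in the spirit of Adelson and Bergen; the filter shapes, parameters, and opponent combination below are illustrative assumptions and do not reproduce the authors' spike-based implementation.

```python
import numpy as np

def gabor_pair(x, sigma=1.0, freq=0.5):
    """Quadrature pair of spatial Gabor filters (even and odd phase)."""
    envelope = np.exp(-x ** 2 / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * x), envelope * np.sin(2 * np.pi * freq * x)

def temporal_pair(t, tau=1.0):
    """Two temporal filters with different latencies ('fast' and 'slow'), simplified."""
    fast = (t / tau) * np.exp(-t / tau)
    slow = 0.5 * (t / tau) ** 2 * np.exp(-t / tau)
    return fast, slow

def opponent_motion_energy(stimulus, x, t):
    """Opponent motion energy of a space-time luminance pattern of shape
    (len(x), len(t)). The sign of the output distinguishes the two opposed
    motion directions; the 'right'/'left' labels below are nominal."""
    even, odd = gabor_pair(x)
    fast, slow = temporal_pair(t)

    def respond(spatial, temporal):
        # Inner product of the stimulus with one separable space-time filter.
        return float(np.sum(stimulus * np.outer(spatial, temporal)))

    # Direction-selective filters as sums/differences of separable components.
    right_1 = respond(even, fast) - respond(odd, slow)
    right_2 = respond(odd, fast) + respond(even, slow)
    left_1 = respond(even, fast) + respond(odd, slow)
    left_2 = respond(odd, fast) - respond(even, slow)

    return (right_1 ** 2 + right_2 ** 2) - (left_1 ** 2 + left_2 ** 2)

# Example: a Gaussian bar drifting across a small space-time patch.
x = np.linspace(-3, 3, 61)
t = np.linspace(0, 6, 61)
stimulus = np.array([[np.exp(-((xi - 0.5 * ti) ** 2)) for ti in t] for xi in x])
print(opponent_motion_energy(stimulus, x, t))
```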
Wu, Jinglong; Chen, Kewei; Imajyo, Satoshi; Ohno, Seiichiro; Kanazawa, Susumu
2013-01-01
In human visual cortex, the primary visual cortex (V1) is considered to be essential for visual information processing; the fusiform face area (FFA) and parahippocampal place area (PPA) are considered to be face-selective and place-selective regions, respectively. Recently, a functional magnetic resonance imaging (fMRI) study showed that the neural activity ratios between V1 and FFA were constant as eccentricity increased in the central visual field. However, in the wide visual field, the neural activity relationships between V1 and FFA or V1 and PPA are still unclear. In this work, using fMRI and a wide-view presentation system, we tried to address this issue by measuring neural activities in V1, FFA and PPA for images of faces and houses presented at 4 eccentricities along 4 meridians. We then calculated the ratio relative to V1 (RRV1) by comparing the neural response amplitudes in FFA or PPA with those in V1. We found that V1, FFA, and PPA showed significantly different neural activities to faces and houses across the 3 dimensions of eccentricity, meridian, and region. Most importantly, the RRV1s in FFA and PPA also exhibited significant differences across these 3 dimensions. In the dimension of eccentricity, both FFA and PPA showed smaller RRV1s at the central position than at peripheral positions. In the meridian dimension, both FFA and PPA showed larger RRV1s at upper vertical positions than at lower vertical positions. In the dimension of region, FFA had larger RRV1s than PPA. We propose that these differential RRV1s indicate that FFA and PPA might have different processing strategies for encoding wide-field visual information from V1. These different processing strategies might depend on the retinal position at which faces or houses are typically observed in daily life. We posit a role of experience in shaping the information processing strategies in the ventral visual cortex. PMID:23991147
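As described, the RRV1 measure is simply the response amplitude in FFA or PPA divided by the V1 response for the same stimulus position; a minimal sketch under that reading (the array names are hypothetical):

```python
import numpy as np

def ratio_relative_to_v1(area_response, v1_response):
    """RRV1 as described in the abstract: response amplitude in FFA or PPA
    divided by the V1 response for the same eccentricity/meridian position.
    Inputs are hypothetical arrays of per-position response amplitudes."""
    area_response = np.asarray(area_response, dtype=float)
    v1_response = np.asarray(v1_response, dtype=float)
    return area_response / v1_response
```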
Sood, Mariam R; Sereno, Martin I
2016-08-01
Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor-preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface-based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory-motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory-motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M-I. Hum Brain Mapp 37:2784-2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Thomson, Eric E.; Zea, Ivan; França, Wendy
2017-01-01
Adult rats equipped with a sensory prosthesis, which transduced infrared (IR) signals into electrical signals delivered to somatosensory cortex (S1), took approximately 4 d to learn a four-choice IR discrimination task. Here, we show that when such IR signals are projected to the primary visual cortex (V1), rats that are pretrained in a visual-discrimination task typically learn the same IR discrimination task on their first day of training. However, without prior training on a visual discrimination task, the learning rates for S1- and V1-implanted animals converged, suggesting there is no intrinsic difference in learning rate between the two areas. We also discovered that animals were able to integrate IR information into the ongoing visual processing stream in V1, performing a visual-IR integration task in which they had to combine IR and visual information. Furthermore, when the IR prosthesis was implanted in S1, rats showed no impairment in their ability to use their whiskers to perform a tactile discrimination task. Instead, in some rats, this ability was actually enhanced. Cumulatively, these findings suggest that cortical sensory neuroprostheses can rapidly augment the representational scope of primary sensory areas, integrating novel sources of information into ongoing processing while incurring minimal loss of native function. PMID:29279860
Manipulation of the extrastriate frontal loop can resolve visual disability in blindsight patients.
Badgaiyan, Rajendra D
2012-12-01
Patients with blindsight are not consciously aware of visual stimuli in the affected field of vision but retain nonconscious perception. This disability can be resolved if nonconsciously perceived information can be brought to their conscious awareness. This can be accomplished by manipulating the neural network of visual awareness. To understand this network, we studied the pattern of cortical activity elicited during processing of visual stimuli with or without conscious awareness. The analysis indicated that a re-entrant signaling loop between area V3A (located in the extrastriate cortex) and the frontal cortex is critical for processing conscious awareness. The loop is activated by visual signals relayed in the primary visual cortex, which is damaged in blindsight patients. Because of the damage, the V3A-frontal loop is not activated and the signals are not processed for conscious awareness. These patients, however, continue to receive visual signals through the lateral geniculate nucleus. Since these signals do not activate the V3A-frontal loop, the stimuli are not consciously perceived. If visual input from the lateral geniculate nucleus is appropriately manipulated and made to activate the V3A-frontal loop, blindsight patients can regain conscious vision. Published by Elsevier Ltd.
Resolving the organization of the third tier visual cortex in primates: a hypothesis-based approach.
Angelucci, Alessandra; Rosa, Marcello G P
2015-01-01
As highlighted by several contributions to this special issue, there is still ongoing debate about the number, exact location, and boundaries of the visual areas located in cortex immediately rostral to the second visual area (V2), i.e., the "third tier" visual cortex, in primates. In this review, we provide a historical overview of the main ideas that have led to four models of third tier cortex organization, which are at the center of today's debate. We formulate specific predictions of these models, and compare these predictions with experimental evidence obtained primarily in New World primates. From this analysis, we conclude that only one of these models (the "multiple-areas" model) can accommodate the breadth of available experimental evidence. According to this model, most of the third tier cortex in New World primates is occupied by two distinct areas, both representing the full contralateral visual quadrant: the dorsomedial area (DM), restricted to the dorsal half of the third visual complex, and the ventrolateral posterior area (VLP), occupying its ventral half and a substantial fraction of its dorsal half. DM belongs to the dorsal stream of visual processing, and overlaps with macaque parietooccipital (PO) area (or V6), whereas VLP belongs to the ventral stream and overlaps considerably with area V3 proposed by others. In contrast, there is substantial evidence that is inconsistent with the concept of a single elongated area V3 lining much of V2. We also review the experimental evidence from macaque monkey and humans, and propose that, once the data are interpreted within an evolutionary-developmental context, these species share a homologous (but not necessarily identical) organization of the third tier cortex as that observed in New World monkeys. Finally, we identify outstanding issues, and propose experiments to resolve them, highlighting in particular the need for more extensive, hypothesis-driven investigations in macaque and humans.
The Simplest Chronoscope V: A Theory of Dual Primary and Secondary Reaction Time Systems.
Montare, Alberto
2016-12-01
Extending work by Montare, visual simple reaction time, choice reaction time, discriminative reaction time, and overall reaction time scores obtained from college students by the simplest chronoscope (a falling meterstick) method were significantly faster as well as significantly less variable than scores of the same individuals from electromechanical reaction timers (machine method). Results supported the existence of dual reaction time systems: an ancient primary reaction time system theoretically activating the V5 parietal area of the dorsal visual stream that evolved to process significantly faster sensory-motor reactions to sudden stimulations arising from environmental objects in motion, and a secondary reaction time system theoretically activating the V4 temporal area of the ventral visual stream that subsequently evolved to process significantly slower sensory-perceptual-motor reactions to sudden stimulations arising from motionless colored objects. © The Author(s) 2016.
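The "simplest chronoscope" converts the distance a released meterstick falls before being caught into a reaction time. Assuming the standard free-fall relation d = ½gt² (an assumption here, not a detail reported in the abstract), the conversion is straightforward:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def reaction_time_from_drop(distance_m):
    """Convert the catch distance of a freely falling meterstick into a
    reaction time, assuming d = 0.5 * g * t**2 (standard free-fall relation)."""
    return math.sqrt(2.0 * distance_m / G)

# Example: catching the stick after it falls 20 cm implies roughly 0.20 s.
print(round(reaction_time_from_drop(0.20), 3))
```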
Hilgetag, C C; O'Neill, M A; Young, M P
2000-01-29
Neuroanatomists have described a large number of connections between the various structures of monkey and cat cortical sensory systems. Because of the complexity of the connection data, analysis is required to unravel what principles of organization they imply. To date, analysis of laminar origin and termination connection data to reveal hierarchical relationships between the cortical areas has been the most widely acknowledged approach. We programmed a network processor that searches for optimal hierarchical orderings of cortical areas given known hierarchical constraints and rules for their interpretation. For all cortical systems and all cost functions, the processor found a multitude of equally low-cost hierarchies. Laminar hierarchical constraints that are presently available in the anatomical literature were therefore insufficient to constrain a unique ordering for any of the sensory systems we analysed. Hierarchical orderings of the monkey visual system that have been widely reported, but which were derived by hand, were not among the optimal orderings. All the cortical systems we studied displayed a significant degree of hierarchical organization, and the anatomical constraints from the monkey visual and somato-motor systems were satisfied with very few constraint violations in the optimal hierarchies. The visual and somato-motor systems in that animal were therefore surprisingly strictly hierarchical. Most inconsistencies between the constraints and the hierarchical relationships in the optimal structures for the visual system were related to connections of area FST (fundus of superior temporal sulcus). We found that the hierarchical solutions could be further improved by assuming that FST consists of two areas, which differ in the nature of their projections. Indeed, we found that perfect hierarchical arrangements of the primate visual system, without any violation of anatomical constraints, could be obtained under two reasonable conditions, namely the subdivision of FST into two distinct areas, whose connectivity we predict, and the abolition of at least one of the less reliable rule constraints. Our analyses showed that the future collection of the same type of laminar constraints, or the inclusion of new hierarchical constraints from thalamocortical connections, will not resolve the problem of multiple optimal hierarchical representations for the primate visual system. Further data, however, may help to specify the relative ordering of some more areas. This indeterminacy of the visual hierarchy is in part due to the reported absence of some connections between cortical areas. These absences are consistent with limited cross-talk between differentiated processing streams in the system. Hence, hierarchical representation of the visual system is affected by, and must take into account, other organizational features, such as processing streams.
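A minimal sketch of the kind of constraint-violation cost and stochastic search over hierarchical orderings that this abstract describes; the cost function, move rule, and toy constraints below are assumptions for illustration, not the authors' network processor.

```python
import random

def violation_cost(order, constraints):
    """Count violated hierarchical constraints. `order` maps each area name to
    its hierarchical level (int); `constraints` is a list of (lower, higher)
    pairs meaning the first area should sit below the second (a simplified
    encoding of laminar origin/termination data, not the authors' rule set)."""
    return sum(1 for low, high in constraints if order[low] >= order[high])

def optimise_hierarchy(areas, constraints, n_levels=10, n_iter=20000, seed=0):
    """Simple stochastic hill-climbing over level assignments."""
    rng = random.Random(seed)
    order = {a: rng.randrange(n_levels) for a in areas}
    cost = violation_cost(order, constraints)
    for _ in range(n_iter):
        area = rng.choice(areas)
        old_level = order[area]
        order[area] = rng.randrange(n_levels)
        new_cost = violation_cost(order, constraints)
        if new_cost <= cost:          # accepting ties lets the search wander over
            cost = new_cost           # the many equally low-cost orderings
        else:
            order[area] = old_level   # reject the move
    return order, cost

# Toy example with a handful of areas and simplified ordering constraints.
areas = ["V1", "V2", "V4", "MT", "FST"]
constraints = [("V1", "V2"), ("V2", "V4"), ("V2", "MT"), ("MT", "FST"), ("V4", "FST")]
print(optimise_hierarchy(areas, constraints))
```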
Nakamura, Hisashi; Hioki, Hiroyuki; Furuta, Takahiro; Kaneko, Takeshi
2015-05-01
The lateral posterior thalamic nucleus (LP) is one of the components of the extrageniculate pathway in the rat visual system, and is cytoarchitecturally divided into three subdivisions: lateral (LPl), rostromedial (LPrm), and caudomedial (LPcm) portions. To clarify the differences in the dendritic fields and axonal arborisations among the three subdivisions, we applied a single-neuron labeling technique with viral vectors to LP neurons. The proximal dendrites of LPl neurons were more numerous than those of LPrm and LPcm neurons, and LPrm neurons tended to have wider dendritic fields than LPl neurons. We then analysed the axonal arborisations of LP neurons by reconstructing the axon fibers in the cortex. The LPl, LPrm and LPcm were different from one another in terms of their projection targets: the main target cortical regions of LPl and LPrm neurons were the secondary and primary visual areas, whereas those of LPcm neurons were the postrhinal and temporal association areas. Furthermore, the principal target cortical layers of LPl neurons in the visual areas were middle layers, but that of LPrm neurons was layer 1. This indicates that LPl and LPrm neurons can be categorised into the core and matrix types of thalamic neurons, respectively, in the visual areas. In addition, LPl neurons formed multiple axonal clusters within the visual areas, whereas the fibers of LPrm neurons were widely and diffusely distributed. It is therefore presumed that these two types of neurons play different roles in visual information processing by dual thalamocortical innervation of the visual areas. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Bressler, David W.; Silver, Michael A.
2010-01-01
Spatial attention improves visual perception and increases the amplitude of neural responses in visual cortex. In addition, spatial attention tasks and fMRI have been used to discover topographic visual field representations in regions outside visual cortex. We therefore hypothesized that requiring subjects to attend to a retinotopic mapping stimulus would facilitate the characterization of visual field representations in a number of cortical areas. In our study, subjects attended either a central fixation point or a wedge-shaped stimulus that rotated about the fixation point. Response reliability was assessed by computing coherence between the fMRI time series and a sinusoid with the same frequency as the rotating wedge stimulus. When subjects attended to the rotating wedge instead of ignoring it, the reliability of retinotopic mapping signals increased by approximately 50% in early visual cortical areas (V1, V2, V3, V3A/B, V4) and ventral occipital cortex (VO1) and by approximately 75% in lateral occipital (LO1, LO2) and posterior parietal (IPS0, IPS1 and IPS2) cortical areas. Additionally, one 5-minute run of retinotopic mapping in the attention-to-wedge condition produced responses as reliable as the average of three to five (early visual cortex) or more than five (lateral occipital, ventral occipital, and posterior parietal cortex) attention-to-fixation runs. These results demonstrate that allocating attention to the retinotopic mapping stimulus substantially reduces the amount of scanning time needed to determine the visual field representations in occipital and parietal topographic cortical areas. Attention significantly increased response reliability in every cortical area we examined and may therefore be a general mechanism for improving the fidelity of neural representations of sensory stimuli at multiple levels of the cortical processing hierarchy. PMID:20600961
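The coherence measure described above has a simple frequency-domain form: the response amplitude at the wedge's rotation frequency divided by the root-sum-square amplitude over all non-DC frequencies. The sketch below illustrates that computation; the TR, run length, and number of wedge cycles are assumed values, and this is a generic version of phase-encoded mapping analysis rather than the authors' exact pipeline.

```python
import numpy as np

def retinotopic_coherence(ts, tr, stim_freq_hz):
    """Coherence of a voxel time series with a sinusoid at the stimulus
    frequency: amplitude at that frequency divided by the root-sum-square
    amplitude over all non-DC frequencies (a common phase-encoded
    retinotopy measure; the authors' exact pipeline may differ)."""
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()                               # remove the DC component
    amps = np.abs(np.fft.rfft(ts))                    # amplitude spectrum
    freqs = np.fft.rfftfreq(ts.size, d=tr)            # frequency axis in Hz
    k = int(np.argmin(np.abs(freqs - stim_freq_hz)))  # bin nearest the wedge frequency
    return amps[k] / np.sqrt(np.sum(amps[1:] ** 2))

# Hypothetical example: 10 wedge cycles in a 300-volume run with TR = 2 s
tr, n_vol, cycles = 2.0, 300, 10
t = np.arange(n_vol) * tr
f0 = cycles / (n_vol * tr)
ts = np.sin(2 * np.pi * f0 * t) + 0.5 * np.random.randn(n_vol)
print(retinotopic_coherence(ts, tr, f0))
```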
Cavina-Pratesi, C; Kentridge, R W; Heywood, C A; Milner, A D
2010-02-01
Real-life visual object recognition requires the processing of more than just geometric (shape, size, and orientation) properties. Surface properties such as color and texture are equally important, particularly for providing information about the material properties of objects. Recent neuroimaging research suggests that geometric and surface properties are dealt with separately within the lateral occipital cortex (LOC) and the collateral sulcus (CoS), respectively. Here we compared objects that differed either in aspect ratio or in surface texture only, keeping all other visual properties constant. Results on brain-intact participants confirmed that surface texture activates an area in the posterior CoS, quite distinct from the area activated by shape within LOC. We also tested 2 patients with visual object agnosia, one of whom (DF) performed well on the texture task but at chance on the shape task, whereas the other (MS) showed the converse pattern. This behavioral double dissociation was matched by a parallel neuroimaging dissociation, with activation in CoS but not LOC in patient DF and activation in LOC but not CoS in patient MS. These data provide presumptive evidence that the areas respectively activated by shape and texture play a causally necessary role in the perceptual discrimination of these features.
The effects of link format and screen location on visual search of web pages.
Ling, Jonathan; Van Schaik, Paul
2004-06-22
Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.
Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks
Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.
2015-01-01
The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies. PMID:26496502
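As a rough illustration of the kind of reward-based learning discussed above, the sketch below implements a generic reward-modulated Hebbian update in which a global reward-prediction error gates the product of pre- and postsynaptic activity. It is only a schematic stand-in: the rule derived in the paper additionally gates plasticity by recurrent feedback activity, and all sizes, rates, and activities here are arbitrary assumptions.

```python
import numpy as np

def reward_modulated_update(W, pre, post, reward, expected_reward, lr=0.01):
    """Generic reward-gated Hebbian update: the weight change is the product
    of presynaptic activity, postsynaptic activity and a global
    reward-prediction error. Schematic only; the actual rule in the paper
    also gates plasticity by recurrent feedback activity."""
    delta = reward - expected_reward           # global reward-prediction error
    W += lr * delta * np.outer(post, pre)      # Hebbian term gated by that error
    return W

# Hypothetical usage with arbitrary sizes and activities
rng = np.random.default_rng(0)
pre, post = rng.random(8), rng.random(4)
W = rng.normal(scale=0.1, size=(4, 8))
W = reward_modulated_update(W, pre, post, reward=1.0, expected_reward=0.4)
```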
Brain correlates of automatic visual change detection.
Cléry, H; Andersson, F; Fonlupt, P; Gomot, M
2013-07-15
A number of studies support the presence of visual automatic detection of change, but little is known about the brain generators involved in such processing and about the modulation of brain activity according to the salience of the stimulus. The study presented here was designed to locate the brain activity elicited by unattended visual deviant and novel stimuli using fMRI. Seventeen adult participants were presented with a passive visual oddball sequence while performing a concurrent visual task. Variations in BOLD signal were observed in the modality-specific sensory cortex, but also in non-specific areas involved in preattentional processing of changing events. A degree-of-deviance effect was observed, since novel stimuli elicited more activity in the sensory occipital regions and at the medial frontal site than small changes. These findings could be compared to those obtained in the auditory modality and might suggest a "general" change detection process operating in several sensory modalities. Copyright © 2013 Elsevier Inc. All rights reserved.
Multisensory connections of monkey auditory cerebral cortex
Smiley, John F.; Falchier, Arnaud
2009-01-01
Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628
Visual function and color vision in adults with Attention-Deficit/Hyperactivity Disorder.
Kim, Soyeon; Chen, Samantha; Tannock, Rosemary
2014-01-01
Color vision and self-reported visual function in everyday life in young adults with Attention-Deficit/Hyperactivity Disorder (ADHD) were investigated. Participants were 30 young adults with ADHD and 30 controls matched for age and gender. They were tested individually and completed the Visual Activities Questionnaire (VAQ), Farnsworth-Munsell 100 Hue Test (FMT) and A Quick Test of Cognitive Speed (AQT). The ADHD group reported significantly more problems in 4 of 8 areas on the VAQ: depth perception, peripheral vision, visual search and visual processing speed. Further analyses of VAQ items revealed that the ADHD group endorsed more visual problems associated with driving than controls. Color perception difficulties on the FMT were restricted to the blue spectrum in the ADHD group. FMT and AQT results revealed slower processing of visual stimuli in the ADHD group. A comprehensive investigation of mechanisms underlying visual function and color vision in adults with ADHD is warranted, along with the potential impact of these visual problems on driving performance. Copyright © 2013 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.
Activity in early visual areas predicts interindividual differences in binocular rivalry dynamics
Yamashiro, Hiroyuki; Mano, Hiroaki; Umeda, Masahiro; Higuchi, Toshihiro; Saiki, Jun
2013-01-01
When dissimilar images are presented to the two eyes, binocular rivalry (BR) occurs, and perception alternates spontaneously between the images. Although neural correlates of the oscillating perception during BR have been found in multiple sites along the visual pathway, the source of BR dynamics is unclear. Psychophysical and modeling studies suggest that both low- and high-level cortical processes underlie BR dynamics. Previous neuroimaging studies have demonstrated the involvement of high-level regions by showing that frontal and parietal cortices responded time locked to spontaneous perceptual alternation in BR. However, a potential contribution of early visual areas to BR dynamics has been overlooked, because these areas also responded to the physical stimulus alternation mimicking BR. In the present study, instead of focusing on activity during perceptual switches, we highlighted brain activity during suppression periods to investigate a potential link between activity in human early visual areas and BR dynamics. We used a strong interocular suppression paradigm called continuous flash suppression to suppress and fluctuate the visibility of a probe stimulus and measured retinotopic responses to the onset of the invisible probe using functional MRI. There were ∼130-fold differences in the median suppression durations across 12 subjects. The individual differences in suppression durations could be predicted by the amplitudes of the retinotopic activity in extrastriate visual areas (V3 and V4v) evoked by the invisible probe. Weaker responses were associated with longer suppression durations. These results demonstrate that retinotopic representations in early visual areas play a role in the dynamics of perceptual alternations during BR. PMID:24353304
Cognitive and psychological science insights to improve climate change data visualization
NASA Astrophysics Data System (ADS)
Harold, Jordan; Lorenzoni, Irene; Shipley, Thomas F.; Coventry, Kenny R.
2016-12-01
Visualization of climate data plays an integral role in the communication of climate change findings to both expert and non-expert audiences. The cognitive and psychological sciences can provide valuable insights into how to improve visualization of climate data based on knowledge of how the human brain processes visual and linguistic information. We review four key research areas to demonstrate their potential to make data more accessible to diverse audiences: directing visual attention, visual complexity, making inferences from visuals, and the mapping between visuals and language. We present evidence-informed guidelines to help climate scientists increase the accessibility of graphics to non-experts, and illustrate how the guidelines can work in practice in the context of Intergovernmental Panel on Climate Change graphics.
Klink, P Christiaan; Dagnino, Bruno; Gariel-Mathis, Marie-Alice; Roelfsema, Pieter R
2017-07-05
The visual cortex is hierarchically organized, with low-level areas coding for simple features and higher areas for complex ones. Feedforward and feedback connections propagate information between areas in opposite directions, but their functional roles are only partially understood. We used electrical microstimulation to perturb the propagation of neuronal activity between areas V1 and V4 in monkeys performing a texture-segregation task. In both areas, microstimulation locally caused a brief phase of excitation, followed by inhibition. Both these effects propagated faithfully in the feedforward direction from V1 to V4. Stimulation of V4, however, caused little V1 excitation, but it did yield a delayed suppression during the late phase of visually driven activity. This suppression was pronounced for the V1 figure representation and weaker for background representations. Our results reveal functional differences between feedforward and feedback processing in texture segregation and suggest a specific modulating role for feedback connections in perceptual organization. Copyright © 2017 Elsevier Inc. All rights reserved.
Campagne, Aurélie; Fradcourt, Benoit; Pichat, Cédric; Baciu, Monica; Kauffmann, Louise; Peyrin, Carole
2016-01-01
Visual processing of emotional stimuli critically depends on the type of cognitive appraisal involved. The present fMRI pilot study aimed to investigate the cerebral correlates involved in the visual processing of emotional scenes in two tasks, one emotional, based on the appraisal of personal emotional experience, and the other motivational, based on the appraisal of the tendency to action. Given that the use of spatial frequency information is relatively flexible during the visual processing of emotional stimuli depending on the task’s demands, we also explored the effect of the type of spatial frequency in visual stimuli in each task by using emotional scenes filtered in low spatial frequency (LSF) and high spatial frequencies (HSF). Activation was observed in the visual areas of the fusiform gyrus for all emotional scenes in both tasks, and in the amygdala for unpleasant scenes only. The motivational task induced additional activation in frontal motor-related areas (e.g. premotor cortex, SMA) and parietal regions (e.g. superior and inferior parietal lobules). Parietal regions were recruited particularly during the motivational appraisal of approach in response to pleasant scenes. These frontal and parietal activations, respectively, suggest that motor and navigation processes play a specific role in the identification of the tendency to action in the motivational task. Furthermore, activity observed in the motivational task, in response to both pleasant and unpleasant scenes, was significantly greater for HSF than for LSF scenes, suggesting that the tendency to action is driven mainly by the detailed information contained in scenes. Results for the emotional task suggest that spatial frequencies play only a small role in the evaluation of unpleasant and pleasant emotions. Our preliminary study revealed a partial distinction between visual processing of emotional scenes during identification of the tendency to action, and during identification of personal emotional experiences. It also illustrates flexible use of the spatial frequencies contained in scenes depending on their emotional valence and on task demands. PMID:26757433
System and method for image mapping and visual attention
NASA Technical Reports Server (NTRS)
Peters, II, Richard A. (Inventor)
2010-01-01
A method is described for mapping dense sensory data to a Sensory Ego Sphere (SES). Methods are also described for finding and ranking areas of interest in the images that form a complete visual scene on an SES. Further, attentional processing of image data is best done by performing attentional processing on individual full-size images from the image sequence, mapping each attentional location to the nearest node, and then summing attentional locations at each node.
System and method for image mapping and visual attention
NASA Technical Reports Server (NTRS)
Peters, II, Richard A. (Inventor)
2011-01-01
A method is described for mapping dense sensory data to a Sensory Ego Sphere (SES). Methods are also described for finding and ranking areas of interest in the images that form a complete visual scene on an SES. Further, attentional processing of image data is best done by performing attentional processing on individual full-size images from the image sequence, mapping each attentional location to the nearest node, and then summing all attentional locations at each node.
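The final step described in both patent abstracts, mapping each attentional location to the nearest SES node and summing the locations at each node, can be sketched as follows. The node layout, the use of unit direction vectors, and the optional saliency weights are illustrative assumptions; the patents' actual data structures are not specified here.

```python
import numpy as np

def accumulate_attention(node_dirs, attn_dirs, weights=None):
    """Assign each attentional location to its nearest SES node (largest
    dot product between unit direction vectors) and sum the locations,
    optionally weighted by saliency, at each node. Node layout and
    saliency weighting are illustrative assumptions."""
    if weights is None:
        weights = np.ones(len(attn_dirs))
    nearest = np.argmax(attn_dirs @ node_dirs.T, axis=1)  # nearest node per location
    sums = np.zeros(len(node_dirs))
    np.add.at(sums, nearest, weights)                     # sum per node
    return sums

# Hypothetical usage: three nodes on the sphere, two attentional locations
nodes = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
attn = np.array([[0.9, 0.1, 0.0], [0.0, 0.2, 0.98]])
print(accumulate_attention(nodes, attn))   # -> [1. 0. 1.]
```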
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1990-01-01
Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
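Of the low-level vision techniques named above, the Laplacian pyramid is the most standard and can be sketched briefly: each level keeps the band-pass detail left after blurring, and the blurred image is downsampled to seed the next level. This is a generic software sketch under assumed parameters, not the authors' focal-plane implementation, and the intensity-dependent spatial summation stage is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(image, levels=4, sigma=1.0):
    """Build a Laplacian pyramid: each level stores the band-pass detail
    (image minus its blurred copy), and the blurred copy is downsampled
    to seed the next level; the final low-pass residual is appended.
    A generic software sketch, not the focal-plane implementation."""
    pyramid, current = [], np.asarray(image, dtype=float)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma)
        pyramid.append(current - blurred)      # band-pass detail at this scale
        current = zoom(blurred, 0.5, order=1)  # downsample for the next octave
    pyramid.append(current)                    # low-pass residual
    return pyramid

print([lvl.shape for lvl in laplacian_pyramid(np.random.rand(64, 64))])
```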
Image and emotion: from outcomes to brain behavior.
Nanda, Upali; Zhu, Xi; Jansen, Ben H
2012-01-01
This article presents a systematic review of neuroscience articles on the emotional states of fear, anxiety, and pain, undertaken to understand how emotional response is linked to the visual characteristics of an image at the level of brain behavior. A number of outcome studies link exposure to visual images (with nature content) to improvements in stress, anxiety, and pain perception. However, an understanding of the underlying perceptual mechanisms has been lacking. In this article, neuroscience studies that use visual images to induce fear, anxiety, or pain are reviewed to gain an understanding of how the brain processes visual images in this context and to explore whether this processing can be linked to specific visual characteristics. The amygdala was identified as one of the key regions of the brain involved in the processing of fear, anxiety, and pain (induced by visual images). Other key areas included the thalamus, insula, and hippocampus. Characteristics of visual images such as the emotional dimension (valence/arousal), subject matter (familiarity, ambiguity, novelty, realism, and facial expressions), and form (sharp and curved contours) were identified as key factors influencing emotional processing. The broad structural properties of an image and its overall content were found to play a more pivotal role in the emotional response than the specific details of the image. Insights on specific visual properties were translated into recommendations for what should be incorporated, and what should be avoided, in healthcare environments.
Krajcovicova, Lenka; Barton, Marek; Elfmarkova-Nemcova, Nela; Mikl, Michal; Marecek, Radek; Rektorova, Irena
2017-12-01
Visual processing difficulties are often present in Alzheimer's disease (AD), even in its pre-dementia phase (i.e. in mild cognitive impairment, MCI). The default mode network (DMN) modulates brain connectivity depending on the specific cognitive demand, including visual processes. The aim of the present study was to analyze specific changes in connectivity of the posterior DMN node (i.e. the posterior cingulate cortex and precuneus, PCC/P) associated with visual processing in 17 MCI patients and 15 AD patients as compared to 18 healthy controls (HC) using functional magnetic resonance imaging. We used psychophysiological interaction (PPI) analysis to detect specific alterations in PCC connectivity associated with visual processing while controlling for brain atrophy. In the HC group, we observed physiological changes in PCC connectivity in ventral visual stream areas and with PCC/P during the visual task, reflecting the successful involvement of these regions in visual processing. In the MCI group, the PCC connectivity changes were disturbed and remained significant only with the anterior precuneus. In the between-group comparison, we observed significant PPI effects in the right superior temporal gyrus in both MCI and AD as compared to HC. This change in connectivity may reflect an ineffective "compensatory" mechanism present in the early pre-dementia stages of AD or abnormal modulation of brain connectivity due to the disease pathology. With disease progression, these changes become more evident but less efficient in terms of compensation. This approach separated the MCI patients from HC with 77% sensitivity and 89% specificity.
Neural substrates of interpreting actions and emotions from body postures.
Kana, Rajesh K; Travers, Brittany G
2012-04-01
Accurately reading the body language of others may be vital for navigating the social world, and this ability may be influenced by factors, such as our gender, personality characteristics and neurocognitive processes. This fMRI study examined the brain activation of 26 healthy individuals (14 women and 12 men) while they judged the action performed or the emotion felt by stick figure characters appearing in different postures. In both tasks, participants activated areas associated with visual representation of the body, motion processing and emotion recognition. Behaviorally, participants demonstrated greater ease in judging the physical actions of the characters compared to judging their emotional states, and participants showed more activation in areas associated with emotion processing in the emotion detection task, whereas they showed more activation in visual, spatial and action-related areas in the physical action task. Gender differences emerged in brain responses, such that men showed greater activation than women in the left dorsal premotor cortex in both tasks. Finally, participants higher in self-reported empathy demonstrated greater activation in areas associated with self-referential processing and emotion interpretation. These results suggest that empathy levels and sex of the participant may affect neural responses to emotional body language.
Neural substrates of interpreting actions and emotions from body postures
Travers, Brittany G.
2012-01-01
Accurately reading the body language of others may be vital for navigating the social world, and this ability may be influenced by factors, such as our gender, personality characteristics and neurocognitive processes. This fMRI study examined the brain activation of 26 healthy individuals (14 women and 12 men) while they judged the action performed or the emotion felt by stick figure characters appearing in different postures. In both tasks, participants activated areas associated with visual representation of the body, motion processing and emotion recognition. Behaviorally, participants demonstrated greater ease in judging the physical actions of the characters compared to judging their emotional states, and participants showed more activation in areas associated with emotion processing in the emotion detection task, whereas they showed more activation in visual, spatial and action-related areas in the physical action task. Gender differences emerged in brain responses, such that men showed greater activation than women in the left dorsal premotor cortex in both tasks. Finally, participants higher in self-reported empathy demonstrated greater activation in areas associated with self-referential processing and emotion interpretation. These results suggest that empathy levels and sex of the participant may affect neural responses to emotional body language. PMID:21504992
Knowledge is power: how conceptual knowledge transforms visual cognition.
Collins, Jessica A; Olson, Ingrid R
2014-08-01
In this review, we synthesize the existing literature demonstrating the dynamic interplay between conceptual knowledge and visual perceptual processing. We consider two theoretical frameworks that demonstrate interactions between processes and brain areas traditionally considered perceptual or conceptual. Specifically, we discuss categorical perception, in which visual objects are represented according to category membership, and highlight studies showing that category knowledge can penetrate early stages of visual analysis. We next discuss the embodied account of conceptual knowledge, which holds that concepts are instantiated in the same neural regions required for specific types of perception and action, and discuss the limitations of this framework. We additionally consider studies showing that gaining abstract semantic knowledge about objects and faces leads to behavioral and electrophysiological changes that are indicative of more efficient stimulus processing. Finally, we consider the role that perceiver goals and motivation may play in shaping the interaction between conceptual and perceptual processing. We hope to demonstrate how pervasive such interactions between motivation, conceptual knowledge, and perceptual processing are in our understanding of the visual environment, and to demonstrate the need for future research aimed at understanding how such interactions arise in the brain.
Knowledge is Power: How Conceptual Knowledge Transforms Visual Cognition
Collins, Jessica A.; Olson, Ingrid R.
2014-01-01
In this review we synthesize the existing literature demonstrating the dynamic interplay between conceptual knowledge and visual perceptual processing. We consider two theoretical frameworks demonstrating interactions between processes and brain areas traditionally considered perceptual or conceptual. Specifically, we discuss categorical perception, in which visual objects are represented according to category membership, and highlight studies showing that category knowledge can penetrate early stages of visual analysis. We next discuss the embodied account of conceptual knowledge, which holds that concepts are instantiated in the same neural regions required for specific types of perception and action, and discuss the limitations of this framework. We additionally consider studies showing that gaining abstract semantic knowledge about objects and faces leads to behavioral and electrophysiological changes that are indicative of more efficient stimulus processing. Finally, we consider the role that perceiver goals and motivation may play in shaping the interaction between conceptual and perceptual processing. We hope to demonstrate how pervasive such interactions between motivation, conceptual knowledge, and perceptual processing are to our understanding of the visual environment, and demonstrate the need for future research aimed at understanding how such interactions arise in the brain. PMID:24402731
The influence of spontaneous activity on stimulus processing in primary visual cortex.
Schölvinck, M L; Friston, K J; Rees, G
2012-02-01
Spontaneous activity in the resting human brain has been studied extensively; however, how such activity affects the local processing of a sensory stimulus is relatively unknown. Here, we examined the impact of spontaneous activity in primary visual cortex on neuronal and behavioural responses to a simple visual stimulus, using functional MRI. Stimulus-evoked responses remained essentially unchanged by spontaneous fluctuations, combining with them in a largely linear fashion (i.e., with little evidence for an interaction). However, interactions between spontaneous fluctuations and stimulus-evoked responses were evident behaviourally; high levels of spontaneous activity tended to be associated with increased stimulus detection at perceptual threshold. Our results extend those found in studies of spontaneous fluctuations in motor cortex and higher order visual areas, and suggest a fundamental role for spontaneous activity in stimulus processing. Copyright © 2011. Published by Elsevier Inc.
Good Food, Bad Food, and White Rice: Understanding Child Feeding Using Visual-Narrative Elicitation.
Wentworth, Chelsea
2017-01-01
Visual-narrative elicitation, a process combining photo elicitation and pile sorting in applied medical anthropology, sheds light on food consumption patterns in urban areas of Vanuatu where childhood malnutrition is a persistent problem. Groups of participants took photographs of the foods they feed their children, and the resources and barriers they encounter in accessing foodstuffs. This revealed how imported and local foods are assigned value as "good" or "bad" foods when contributing to dietary diversity and creating appropriate meals for children, particularly in the context of consuming white rice. The process of gathering and working with photographs illuminated the complex negotiations in which caregivers engaged when making food and nutritional choices for their children. At the nexus of visual and medical anthropology, the visual-narrative elicitation process yielded nuanced, comprehensive understandings of how caregivers value the various foods they feed their children.
High-level, but not low-level, motion perception is impaired in patients with schizophrenia.
Kandil, Farid I; Pedersen, Anya; Wehnes, Jana; Ohrmann, Patricia
2013-01-01
Smooth pursuit eye movements are compromised in patients with schizophrenia and their first-degree relatives. Although research has demonstrated that the motor components of smooth pursuit eye movements are intact, motion perception has been shown to be impaired. In particular, studies have consistently revealed deficits in performance on tasks specific to the high-order motion area V5 (middle temporal area, MT) in patients with schizophrenia. In contrast, data from low-level motion detectors in the primary visual cortex (V1) have been inconsistent. To differentiate between low-level and high-level visual motion processing, we applied a temporal-order judgment task for motion events and a motion-defined figure-ground segregation task in patients with schizophrenia and healthy controls. Successful judgments in both tasks rely on the same low-level motion detectors in V1; however, the first task is further processed in the higher-order motion area MT in the magnocellular (dorsal) pathway, whereas the second task requires subsequent computations in the parvocellular (ventral) pathway in visual area V4 and the inferotemporal cortex (IT). These latter structures are thought to be intact in schizophrenia. Patients with schizophrenia showed significantly impaired temporal resolution on the motion-based temporal-order judgment task but only mild impairment in the motion-based segregation task. These results imply that low-level motion detection in V1 is not, or is only slightly, compromised; furthermore, our data constrain the locus of the well-known deficit in motion detection to areas beyond the primary visual cortex.
Bender, Stephan; Behringer, Stephanie; Freitag, Christine M; Resch, Franz; Weisbrod, Matthias
2010-12-01
Our aim was to elucidate the contributions of modality-dependent post-processing in auditory, motor and visual cortical areas to short-term memory. We compared late negative waves (N700) during the post-processing of single lateralized stimuli which were separated by long intertrial intervals across the auditory, motor and visual modalities. Tasks either required or competed with attention to post-processing of preceding events, i.e. active short-term memory maintenance. The N700 indicated that cortical post-processing outlasted short movements as well as short auditory or visual stimuli for over half a second without intentional short-term memory maintenance. Modality-specific topographies pointed towards sensory (respectively motor) generators with comparable time-courses across the different modalities. Lateralization and amplitude of the auditory/motor/visual N700 were enhanced by active short-term memory maintenance compared to attention to current perceptions or passive stimulation. The memory-related N700 increase followed the characteristic time-course and modality-specific topography of the N700 without intentional memory maintenance. Memory-maintenance-related lateralized negative potentials may be related to a less lateralized modality-dependent post-processing N700 component which also occurs without intentional memory maintenance (automatic memory trace or effortless attraction of attention). Encoding to short-term memory may involve controlled attention to modality-dependent post-processing. Similar short-term memory processes may exist in the auditory, motor and visual systems. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Kometer, Michael; Cahn, B Rael; Andel, David; Carter, Olivia L; Vollenweider, Franz X
2011-03-01
Recent findings suggest that the serotonergic system and particularly the 5-HT2A/1A receptors are implicated in visual processing and possibly in the pathophysiology of visual disturbances, including hallucinations in schizophrenia and Parkinson's disease. To investigate the role of 5-HT2A/1A receptors in visual processing, the effect of the hallucinogenic 5-HT2A/1A agonist psilocybin (125 and 250 μg/kg vs. placebo) on the spatiotemporal dynamics of modal object completion was assessed in normal volunteers (n = 17) using visual evoked potential recordings in conjunction with topographic mapping and source analysis. These effects were then considered in relation to the subjective intensity of psilocybin-induced visual hallucinations quantified by psychometric measurement. Psilocybin dose-dependently decreased the N170 and, in contrast, slightly enhanced the P1 component selectively over occipital electrode sites. The decrease of the N170 was most apparent during the processing of incomplete object figures. Moreover, during the time period of the N170, the overall reduction of activation in the right extrastriate and posterior parietal areas correlated positively with the intensity of visual hallucinations. These results suggest a central role of the 5-HT2A/1A receptors in the modulation of visual processing. Specifically, the reduced N170 component was identified as potentially reflecting a key process underlying 5-HT2A/1A receptor-mediated visual hallucinations and aberrant modal object completion. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Alvarez, George A.; Cavanagh, Patrick
2014-01-01
It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. PMID:25164651
Contextual modulation and stimulus selectivity in extrastriate cortex.
Krause, Matthew R; Pack, Christopher C
2014-11-01
Contextual modulation is observed throughout the visual system, using techniques ranging from single-neuron recordings to behavioral experiments. Its role in generating feature selectivity within the retina and primary visual cortex has been extensively described in the literature. Here, we describe how similar computations can also elaborate feature selectivity in the extrastriate areas of both the dorsal and ventral streams of the primate visual system. We discuss recent work that makes use of normalization models to test specific roles for contextual modulation in visual cortex function. We suggest that contextual modulation renders neuronal populations more selective for naturalistic stimuli. Specifically, we discuss contextual modulation's role in processing optic flow in areas MT and MST and for representing naturally occurring curvature and contours in areas V4 and IT. We also describe how the circuitry that supports contextual modulation is robust to variations in overall input levels. Finally, we describe how this theory relates to other hypothesized roles for contextual modulation. Copyright © 2014 Elsevier Ltd. All rights reserved.
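The normalization models referred to above share a common core: each neuron's driven response is divided by a pooled signal from the surrounding population, so contextual input suppresses responses multiplicatively. The sketch below shows this textbook divisive-normalization form with assumed parameter values; it is not any specific model from the cited work.

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, n=2.0, gain=1.0):
    """Textbook divisive normalization: each unit's driven response is
    divided by a pooled signal from the population, so stronger context
    suppresses responses multiplicatively. Parameter values are assumed."""
    d = np.asarray(drive, dtype=float) ** n
    pool = d.mean()                            # normalization pool (whole population here)
    return gain * d / (sigma ** n + pool)

# A stronger surround raises the pool and suppresses the same central drive
print(divisive_normalization([4.0, 1.0, 1.0]))   # weak context
print(divisive_normalization([4.0, 4.0, 4.0]))   # strong context
```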
Vermaercke, Ben; Van den Bergh, Gert; Gerich, Florian; Op de Beeck, Hans
2015-01-01
Recent studies have revealed a surprising degree of functional specialization in rodent visual cortex. It is unknown to what degree this functional organization is related to the well-known hierarchical organization of the visual system in primates. We designed a study in rats that targets one of the hallmarks of the hierarchical object vision pathway in primates: selectivity for behaviorally relevant dimensions. We compared behavioral performance in a visual water maze with neural discriminability in five visual cortical areas. We tested behavioral discrimination in two independent batches of six rats using six pairs of shapes used previously to probe shape selectivity in monkey cortex (Lehky and Sereno, 2007). The relative difficulty (error rate) of shape pairs was strongly correlated between the two batches, indicating that some shape pairs were more difficult to discriminate than others. We then recorded in naive rats from five visual areas, from primary visual cortex (V1) through areas LM, LI, and LL, up to lateral occipito-temporal cortex (TO). Shape selectivity in the upper layers of V1, where the information enters the cortex, correlated mostly with physical stimulus dissimilarity and not with behavioral performance. In contrast, neural discriminability in the lower layers of all areas was strongly correlated with behavioral performance. These findings, in combination with the results from Vermaercke et al. (2014b), suggest that the functional specialization in rodent lateral visual cortex reflects a processing hierarchy resulting in the emergence of complex selectivity that is related to behaviorally relevant stimulus differences.
Wu, Xiang; He, Sheng; Bushara, Khalaf; Zeng, Feiyan; Liu, Ying; Zhang, Daren
2012-10-01
Object recognition occurs even when environmental information is incomplete. Illusory contours (ICs), in which a contour is perceived even though the contour edges are incomplete, have been extensively studied as an example of such a visual completion phenomenon. Despite the neural activity in response to ICs in visual cortical areas from low (V1 and V2) to high (LOC: the lateral occipital cortex) levels, the details of the neural processing underlying IC perception remain largely unclear. For example, how do the visual areas function in IC perception, and how do they interact to achieve coherent contour perception? IC perception involves the process of completing the local discrete contour edges (contour completion) and the process of representing the global completed contour information (contour representation). Here, functional magnetic resonance imaging was used to dissociate contour completion and contour representation by varying each in opposite directions. The results show that neural activity was stronger for stimuli with more contour completion than for stimuli with more contour representation in V1 and V2, which was the reverse of the pattern in the LOC. When inspecting the change in neural activity across the visual pathway, activation remained high for the stimuli with more contour completion and increased for the stimuli with more contour representation. These results suggest distinct neural correlates of contour completion and contour representation, and a possible collaboration between the two processes during IC perception, indicating a neural connection between the discrete retinal input and the coherent visual percept. Copyright © 2011 Wiley Periodicals, Inc.
Brain response to visual sexual stimuli in homosexual pedophiles
Schiffer, Boris; Krueger, Tillmann; Paul, Thomas; de Greiff, Armin; Forsting, Michael; Leygraf, Norbert; Schedlowski, Manfred; Gizewski, Elke
2008-01-01
Objective: The neurobiological mechanisms of deviant sexual preferences such as pedophilia are largely unknown. The objective of this study was to analyze whether brain activation patterns of homosexual pedophiles differed from those of a nonpedophile homosexual control group during visual sexual stimulation. Method: A consecutive sample of 11 pedophile forensic inpatients exclusively attracted to boys and 12 age-matched homosexual control participants from a comparable socioeconomic stratum underwent functional magnetic resonance imaging during a visual sexual stimulation procedure that used sexually stimulating and emotionally neutral photographs. Sexual arousal was assessed according to a subjective rating scale. Results: In contrast to sexually neutral pictures, in both groups sexually arousing pictures having both homosexual and pedophile content activated brain areas known to be involved in processing visual stimuli containing emotional content, including the occipitotemporal and prefrontal cortices. However, during presentation of the respective sexual stimuli, the thalamus, globus pallidus and striatum, which correspond to the key areas of the brain involved in sexual arousal and behaviour, showed significant activation in pedophiles, but not in control subjects. Conclusions: Central processing of visual sexual stimuli in homosexual pedophiles seems to be comparable to that in nonpedophile control subjects. However, compared with homosexual control subjects, activation patterns in pedophiles refer more strongly to subcortical regions, which have previously been discussed in the context of processing reward signals and also play an important role in addictive and stimulus-controlled behaviour. Thus future studies should further elucidate the specificity of these brain regions for the processing of sexual stimuli in pedophilia and should address the generally weaker activation pattern in homosexual men. PMID:18197269
Brain response to visual sexual stimuli in homosexual pedophiles.
Schiffer, Boris; Krueger, Tillmann; Paul, Thomas; de Greiff, Armin; Forsting, Michael; Leygraf, Norbert; Schedlowski, Manfred; Gizewski, Elke
2008-01-01
The neurobiological mechanisms of deviant sexual preferences such as pedophilia are largely unknown. The objective of this study was to analyze whether brain activation patterns of homosexual pedophiles differed from those of a nonpedophile homosexual control group during visual sexual stimulation. A consecutive sample of 11 pedophile forensic inpatients exclusively attracted to boys and 12 age-matched homosexual control participants from a comparable socioeconomic stratum underwent functional magnetic resonance imaging during a visual sexual stimulation procedure that used sexually stimulating and emotionally neutral photographs. Sexual arousal was assessed according to a subjective rating scale. In contrast to sexually neutral pictures, in both groups sexually arousing pictures having both homosexual and pedophile content activated brain areas known to be involved in processing visual stimuli containing emotional content, including the occipitotemporal and prefrontal cortices. However, during presentation of the respective sexual stimuli, the thalamus, globus pallidus and striatum, which correspond to the key areas of the brain involved in sexual arousal and behaviour, showed significant activation in pedophiles, but not in control subjects. Central processing of visual sexual stimuli in homosexual pedophiles seems to be comparable to that in nonpedophile control subjects. However, compared with homosexual control subjects, activation patterns in pedophiles refer more strongly to subcortical regions, which have previously been discussed in the context of processing reward signals and also play an important role in addictive and stimulus-controlled behaviour. Thus future studies should further elucidate the specificity of these brain regions for the processing of sexual stimuli in pedophilia and should address the generally weaker activation pattern in homosexual men.
Area 21a of cat visual cortex strongly modulates neuronal activities in the superior colliculus
Hashemi-Nezhad, M; Wang, C; Burke, W; Dreher, B
2003-01-01
We have examined the influence of cortico-tectal projections from one of the pattern-processing extrastriate visual cortical areas, area 21a, on the responses to visual stimuli of single neurones in the superior colliculi of adult cats. For this purpose area 21a was briefly inactivated by cooling to 10 °C using a Peltier device. Responses to visual stimuli before and during cooling as well as after rewarming ipsilateral area 21a were compared. In addition, in a subpopulation of collicular neurones we have studied the effects of reversible inactivation of ipsilateral striate cortex (area 17, area V1). When area 21a was cooled, the temperature of area 17 was kept at 36 °C and vice versa. In the majority of cases (41/65; 63 %), irrespective of the velocity response profiles of collicular neurones, inactivation of area 21a resulted in a significant decrease in magnitude of responses of neurones in the ipsilateral colliculus and only in a small proportion of cells (2/65; 3.1 %) was there a significant increase in the magnitude of responses. Inactivation of area 21a resulted in significant changes in the magnitude of responses of collicular cells located not only in the retino-recipient layers but also in the stratum griseum intermediale. In most cases, reversible inactivation of area 17 resulted in a greater reduction in the magnitude of responses of collicular cells than inactivation of area 21a. Reversible inactivation of area 21a also affected the direction selectivity indices and length tuning of most collicular cells tested. PMID:12794178
Wide field-of-view, multi-region two-photon imaging of neuronal activity in the mammalian brain
Stirman, Jeffrey N.; Smith, Ikuko T.; Kudenov, Michael W.; Smith, Spencer L.
2016-01-01
Two-photon calcium imaging provides an optical readout of neuronal activity in populations of neurons with subcellular resolution. However, conventional two-photon imaging systems are limited in their field of view to ~1 mm², precluding the visualization of multiple cortical areas simultaneously. Here, we demonstrate a two-photon microscope with an expanded field of view (>9.5 mm²) for rapidly reconfigurable simultaneous scanning of widely separated populations of neurons. We custom designed and assembled an optimized scan engine, objective, and two independently positionable, temporally multiplexed excitation pathways. We used this new microscope to measure activity correlations between two cortical visual areas in mice during visual processing. PMID:27347754
Pandey, Anil Kumar; Saroha, Kartik; Sharma, Param Dev; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
In this study, we have developed a simple image processing application in MATLAB that uses suprathreshold stochastic resonance (SSR) and helps the user to visualize abdominopelvic tumors on exported prediuretic positron emission tomography/computed tomography (PET/CT) images. A brainstorming session was conducted for requirement analysis for the program. It was decided that the program should load the screen-captured PET/CT images and then produce output images in a window with a slider control that enables the user to view the image that best visualizes the tumor, if present. The program was implemented on a personal computer using Microsoft Windows and MATLAB R2013b. The program has an option for the user to select the input image. For the selected image, it displays output images generated using SSR in a separate window with a slider control. The slider control enables the user to view the images and select the one that seems to provide the best visualization of the area(s) of interest. The developed application enables the user to select, process, and view output images in the process of utilizing SSR to detect the presence of an abdominopelvic tumor on a prediuretic PET/CT image.
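The application itself is written in MATLAB; a generic form of the SSR operation it relies on can be sketched as follows (shown here in Python): independent noise is added to many copies of the image, each copy is binarized at a fixed threshold, and the binary outputs are averaged. The threshold and noise level stand in for what the slider would control; the specific values and image sizes below are assumptions.

```python
import numpy as np

def ssr_enhance(image, threshold, noise_sigma, n_replicas=64, seed=0):
    """Generic suprathreshold stochastic resonance: add independent
    Gaussian noise to many copies of the image, binarize each copy at a
    fixed threshold, and average the binary outputs. The threshold and
    noise level stand in for what the slider controls; values are assumed."""
    rng = np.random.default_rng(seed)
    img = np.asarray(image, dtype=float)
    acc = np.zeros_like(img)
    for _ in range(n_replicas):
        noisy = img + rng.normal(scale=noise_sigma, size=img.shape)
        acc += noisy > threshold               # 1-bit quantizer output
    return acc / n_replicas                    # average across noisy replicas

# Hypothetical usage on a synthetic low-contrast image with a faint hot region
img = np.full((32, 32), 100.0)
img[8:24, 8:24] += 5.0
enhanced = ssr_enhance(img, threshold=102.0, noise_sigma=4.0)
```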
Calderone, Daniel J.; Hoptman, Matthew J.; Martínez, Antígona; Nair-Collins, Sangeeta; Mauro, Cristina J.; Bar, Moshe; Javitt, Daniel C.; Butler, Pamela D.
2013-01-01
Patients with schizophrenia exhibit cognitive and sensory impairment, and object recognition deficits have been linked to sensory deficits. The “frame and fill” model of object recognition posits that low spatial frequency (LSF) information rapidly reaches the prefrontal cortex (PFC) and creates a general shape of an object that feeds back to the ventral temporal cortex to assist object recognition. Visual dysfunction findings in schizophrenia suggest a preferential loss of LSF information. This study used functional magnetic resonance imaging (fMRI) and resting state functional connectivity (RSFC) to investigate the contribution of visual deficits to impaired object “framing” circuitry in schizophrenia. Participants were shown object stimuli that were intact or contained only LSF or high spatial frequency (HSF) information. For controls, fMRI revealed preferential activation to LSF information in precuneus, superior temporal, and medial and dorsolateral PFC areas, whereas patients showed a preference for HSF information or no preference. RSFC revealed a lack of connectivity between early visual areas and PFC for patients. These results demonstrate impaired processing of LSF information during object recognition in schizophrenia, with patients instead displaying increased processing of HSF information. This is consistent with findings of a preference for local over global visual information in schizophrenia. PMID:22735157
Beyond perceptual expertise: revisiting the neural substrates of expert object recognition
Harel, Assaf; Kravitz, Dwight; Baker, Chris I.
2013-01-01
Real-world expertise provides a valuable opportunity to understand how experience shapes human behavior and neural function. In the visual domain, the study of expert object recognition, such as in car enthusiasts or bird watchers, has produced a large, growing, and often-controversial literature. Here, we synthesize this literature, focusing primarily on results from functional brain imaging, and propose an interactive framework that incorporates the impact of high-level factors, such as attention and conceptual knowledge, in supporting expertise. This framework contrasts with the perceptual view of object expertise that has concentrated largely on stimulus-driven processing in visual cortex. One prominent version of this perceptual account has almost exclusively focused on the relation of expertise to face processing and, in terms of the neural substrates, has centered on face-selective cortical regions such as the Fusiform Face Area (FFA). We discuss the limitations of this face-centric approach as well as the more general perceptual view, and highlight that expert related activity is: (i) found throughout visual cortex, not just FFA, with a strong relationship between neural response and behavioral expertise even in the earliest stages of visual processing, (ii) found outside visual cortex in areas such as parietal and prefrontal cortices, and (iii) modulated by the attentional engagement of the observer suggesting that it is neither automatic nor driven solely by stimulus properties. These findings strongly support a framework in which object expertise emerges from extensive interactions within and between the visual system and other cognitive systems, resulting in widespread, distributed patterns of expertise-related activity across the entire cortex. PMID:24409134
Neural organization and visual processing in the anterior optic tubercle of the honeybee brain.
Mota, Theo; Yamagata, Nobuhiro; Giurfa, Martin; Gronenberg, Wulfila; Sandoz, Jean-Christophe
2011-08-10
The honeybee Apis mellifera represents a valuable model for studying the neural segregation and integration of visual information. Vision in honeybees has been extensively studied at the behavioral level and, to a lesser degree, at the physiological level using intracellular electrophysiological recordings of single neurons. However, our knowledge of visual processing in honeybees is still limited by the lack of functional studies of visual processing at the circuit level. Here we contribute to filling this gap by providing a neuroanatomical and neurophysiological characterization at the circuit level of a practically unstudied visual area of the bee brain, the anterior optic tubercle (AOTu). First, we analyzed the internal organization and neuronal connections of the AOTu. Second, we established a novel protocol for performing optophysiological recordings of visual circuit activity in the honeybee brain and studied the responses of AOTu interneurons during stimulation of distinct eye regions. Our neuroanatomical data show an intricate compartmentalization and connectivity of the AOTu, revealing a dorsoventral segregation of the visual input to the AOTu. Light stimuli presented in different parts of the visual field (dorsal, lateral, or ventral) induce distinct patterns of activation in AOTu output interneurons, retaining to some extent the dorsoventral input segregation revealed by our neuroanatomical data. In particular, activity patterns evoked by dorsal and ventral eye stimulation are clearly segregated into distinct AOTu subunits. Our results therefore suggest an involvement of the AOTu in the processing of dorsoventrally segregated visual information in the honeybee brain.
Striem-Amit, Ella; Cohen, Laurent; Dehaene, Stanislas; Amedi, Amir
2012-11-08
Using a visual-to-auditory sensory-substitution algorithm, congenitally fully blind adults were taught to read and recognize complex images using "soundscapes"--sounds topographically representing images. fMRI was used to examine key questions regarding the visual word form area (VWFA): its selectivity for letters over other visual categories without visual experience, its feature tolerance for reading in a novel sensory modality, and its plasticity for scripts learned in adulthood. The blind activated the VWFA specifically and selectively during the processing of letter soundscapes relative to both textures and visually complex object categories and relative to mental imagery and semantic-content controls. Further, VWFA recruitment for reading soundscapes emerged after 2 hr of training in a blind adult on a novel script. Therefore, the VWFA shows category selectivity regardless of input sensory modality, visual experience, and long-term familiarity or expertise with the script. The VWFA may perform a flexible task-specific rather than sensory-specific computation, possibly linking letter shapes to phonology. Copyright © 2012 Elsevier Inc. All rights reserved.
Ball, Keira; Lane, Alison R; Smith, Daniel T; Ellison, Amanda
2013-11-01
The right posterior parietal cortex (rPPC) and the right frontal eye field (rFEF) form part of a network of brain areas involved in orienting spatial attention. Previous studies using transcranial magnetic stimulation (TMS) have demonstrated that both areas are critically involved in the processing of conjunction visual search tasks, since stimulation of these sites disrupts performance. This study investigated the effects of long-term neuronal modulation of rPPC and rFEF using transcranial direct current stimulation (tDCS), with the aim of uncovering how these resources are shared in the processing of conjunction visual search tasks. Participants completed four blocks of conjunction search trials over the course of 45 min. Following the first block, they received 15 min of either cathodal or anodal stimulation to rPPC or rFEF, or sham stimulation. A significant interaction between block and stimulation condition was found, indicating that tDCS caused different effects according to the site (rPPC or rFEF) and type of stimulation (cathodal, anodal, or sham). Practice resulted in a significant reduction in reaction time across the four blocks in all conditions except when cathodal tDCS was applied to rPPC. The effects of cathodal tDCS over rPPC are subtler than those seen with TMS, and no effect of tDCS was evident at rFEF. This suggests that rFEF has a more transient role than rPPC in the processing of conjunction visual search and is robust to longer-term methods of neuro-disruption. Our results may be explained within the framework of functional connectivity between these, and other, areas. Copyright © 2013 Elsevier Inc. All rights reserved.
Visual Modeling for Aqua Ventus I off Monhegan Island, ME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanna, Luke A.; Whiting, Jonathan M.; Copping, Andrea E.
2013-11-27
To assist the University of Maine in demonstrating a clear pathway to project completion, PNNL has developed visualization models of the Aqua Ventus I project that accurately depict the Aqua Ventus I turbines from various points on Monhegan Island, ME and the surrounding area. With a hub height of 100 meters, the Aqua Ventus I turbines are large and may be seen from many areas on Monhegan Island, potentially disrupting important viewsheds. By developing these visualization models, which consist of actual photographs taken from Monhegan Island and the surrounding area with the Aqua Ventus I turbines superimposed within each photograph, PNNL intends to support the project’s siting and permitting process by providing the Monhegan Island community and various other stakeholders with a probable glimpse of how the Aqua Ventus I project will appear.
Neural representations of contextual guidance in visual search of real-world scenes.
Preston, Tim J; Guo, Fei; Das, Koel; Giesbrecht, Barry; Eckstein, Miguel P
2013-05-01
Exploiting scene context and object-object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes.
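The multivariate pattern analysis described in the preceding abstract is, at its core, a cross-validated classifier applied to region-of-interest voxel patterns. The following is a minimal, self-contained sketch of that general approach, not the authors' pipeline; the array shapes, the four-quadrant label scheme, and the use of a linear SVM are illustrative assumptions.

```python
# Minimal MVPA sketch (not the authors' pipeline): decode a coarse
# "expected target location" label from multi-voxel activity patterns.
# X stands for an (n_trials, n_voxels) array extracted from an ROI
# (e.g., LOC); y holds one coarse location label per trial.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))      # placeholder voxel patterns
y = rng.integers(0, 4, size=120)     # placeholder quadrant labels

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```

With real ROI data, above-chance cross-validated accuracy is what licenses the claim that the region carries information about the contextual location.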
Deng, Yanjia; Shi, Lin; Lei, Yi; Liang, Peipeng; Li, Kuncheng; Chu, Winnie C. W.; Wang, Defeng
2016-01-01
The human cortical regions for processing high-level visual (HLV) functions of different categories remain ambiguous, especially in terms of their conjunctions and specifications. Moreover, the neurobiology of declined HLV functions in patients with Alzheimer's disease (AD) has not been fully investigated. This study provides a functionally sorted overview of HLV cortices for processing “what” and “where” visual perceptions and it investigates their atrophy in AD and MCI patients. Based upon activation likelihood estimation (ALE), brain regions responsible for processing five categories of visual perceptions included in “what” and “where” visions (i.e., object, face, word, motion, and spatial visions) were analyzed, and subsequent contrast analyses were performed to show regions with conjunctive and specific activations for processing these visual functions. Next, based on the resulting ALE maps, the atrophy of HLV cortices in AD and MCI patients was evaluated using voxel-based morphometry. Our ALE results showed brain regions for processing visual perception across the five categories, as well as areas of conjunction and specification. Our comparisons of gray matter (GM) volume demonstrated atrophy of three “where” visual cortices in late MCI group and extensive atrophy of HLV cortices (25 regions in both “what” and “where” visual cortices) in AD group. In addition, the GM volume of atrophied visual cortices in AD and MCI subjects was found to be correlated to the deterioration of overall cognitive status and to the cognitive performances related to memory, execution, and object recognition functions. In summary, these findings may add to our understanding of HLV network organization and of the evolution of visual perceptual dysfunction in AD as the disease progresses. PMID:27445770
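Activation likelihood estimation, as used in the preceding study, treats each reported activation focus as a spatial probability distribution and combines these across experiments. A toy sketch of that core idea follows; the grid size, smoothing kernel, and foci coordinates are placeholders, and this is not the GingerALE or NiMARE implementation typically used in practice.

```python
# Simplified illustration of the ALE idea: each reported focus is blurred
# with a Gaussian to form a "modeled activation" (MA) map, and the per-voxel
# ALE value is the probabilistic union of the MA maps across experiments.
import numpy as np
from scipy.ndimage import gaussian_filter

shape = (40, 48, 40)                          # toy voxel grid (placeholder)
foci_per_experiment = [
    [(20, 24, 20)],                           # experiment 1 focus
    [(21, 25, 19), (10, 10, 10)],             # experiment 2 foci
]

ma_maps = []
for foci in foci_per_experiment:
    ma = np.zeros(shape)
    for x, y, z in foci:
        ma[x, y, z] = 1.0
    ma = gaussian_filter(ma, sigma=2.0)       # Gaussian spatial uncertainty
    ma_maps.append(np.clip(ma / ma.max(), 0.0, 1.0))

ale = 1.0 - np.prod([1.0 - ma for ma in ma_maps], axis=0)  # union across studies
print("peak ALE value:", float(ale.max()))
```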
Hayashi, Yutaka; Kinoshita, Masashi; Nakada, Mitsutoshi; Hamada, Jun-ichiro
2012-11-01
Disturbance of the arcuate fasciculus in the dominant hemisphere is thought to be associated with language-processing disorders, including conduction aphasia. Although the arcuate fasciculus can be visualized in vivo with diffusion tensor imaging (DTI) tractography, its involvement in functional processes associated with language has not been shown dynamically using DTI tractography. In the present study, to clarify the participation of the arcuate fasciculus in language functions, postoperative changes in the arcuate fasciculus detected by DTI tractography were evaluated chronologically in relation to postoperative changes in language function after brain tumor surgery. Preoperative and postoperative arcuate fasciculus area and language function were examined in 7 right-handed patients with a brain tumor in the left hemisphere located in proximity to part of the arcuate fasciculus. The arcuate fasciculus was depicted, and its area was calculated using DTI tractography. Language functions were measured using the Western Aphasia Battery (WAB). After tumor resection, visualization of the arcuate fasciculus was increased in 5 of the 7 patients, and the total WAB score improved in 6 of the 7 patients. The relative ratio of postoperative visualized area of the arcuate fasciculus to preoperative visualized area of the arcuate fasciculus was increased in association with an improvement in postoperative language function (p = 0.0039). The role of the left arcuate fasciculus in language functions can be evaluated chronologically in vivo by DTI tractography after brain tumor surgery. Because increased postoperative visualization of the fasciculus was significantly associated with postoperative improvement in language functions, the arcuate fasciculus may play an important role in language function, as previously thought. In addition, postoperative changes in the arcuate fasciculus detected by DTI tractography could represent a predicting factor for postoperative language-dependent functional outcomes in patients with brain tumor.
Perceptual learning modifies the functional specializations of visual cortical areas.
Chen, Nihong; Cai, Peng; Zhou, Tiangang; Thompson, Benjamin; Fang, Fang
2016-05-17
Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.
Visual circuits of the avian telencephalon: evolutionary implications
NASA Technical Reports Server (NTRS)
Shimizu, T.; Bowers, A. N.
1999-01-01
Birds and primates are vertebrates that possess the most advanced, efficient visual systems. Although lineages leading to these two classes were separated about 300 million years ago, there are striking similarities in their underlying neural mechanisms for visual processing. This paper discusses such similarities with special emphasis on the visual circuits in the avian telencephalon. These similarities include: (1) the existence of two parallel visual pathways and their distinct telencephalic targets, (2) anatomical and functional segregation within the visual pathways, (3) laminar organization of the telencephalic targets of the pathways (e.g. striate cortex in primates), and (4) possible interactions between multiple visual areas. Additional extensive analyses are necessary to determine whether these similarities are due to inheritance from a common ancestral stock or the consequences of convergent evolution based on adaptive response to similar selective pressures. Nevertheless, such a comparison is important to identify the general and specific principles of visual processing in amniotes (reptiles, birds, and mammals). Furthermore, these principles in turn will provide a critical foundation for understanding the evolution of the brain in amniotes.
ERIC Educational Resources Information Center
Reeder, Kevin
2005-01-01
The movie industry heavily relies on storyboards as an effective way to visually describe the process of a movie. The storyboard visually describes how the movie flows from beginning to end, how the characters are interacting, and where transitions and/or gaps exist in the storyline. The storyboard is an effective tool in industrial design as…
Lim, Seung-Lark; O'Doherty, John P; Rangel, Antonio
2013-05-15
We often have to make choices among multiattribute stimuli (e.g., a food that differs on its taste and health). Behavioral data suggest that choices are made by computing the value of the different attributes and then integrating them into an overall stimulus value signal. However, it is not known whether this theory describes the way the brain computes the stimulus value signals, or how the underlying computations might be implemented. We investigated these questions using a human fMRI task in which individuals had to evaluate T-shirts that varied in their visual esthetic (e.g., color) and semantic (e.g., meaning of logo printed in T-shirt) components. We found that activity in the fusiform gyrus, an area associated with the processing of visual features, correlated with the value of the visual esthetic attributes, but not with the value of the semantic attributes. In contrast, activity in posterior superior temporal gyrus, an area associated with the processing of semantic meaning, exhibited the opposite pattern. Furthermore, both areas exhibited functional connectivity with an area of ventromedial prefrontal cortex that reflects the computation of overall stimulus values at the time of decision. The results provide supporting evidence for the hypothesis that some attribute values are computed in cortical areas specialized in the processing of such features, and that those attribute-specific values are then passed to the vmPFC to be integrated into an overall stimulus value signal to guide the decision.
Papera, Massimiliano; Richards, Anne
2016-05-01
Exogenous allocation of attentional resources allows the visual system to encode and maintain representations of stimuli in visual working memory (VWM). However, limits in the processing capacity to allocate resources can prevent unexpected visual stimuli from gaining access to VWM and thereby to consciousness. Using a novel approach to create unbiased stimuli of increasing saliency, we investigated visual processing during a visual search task in individuals who show a high or low propensity to neglect unexpected stimuli. When propensity to inattention is high, ERP recordings show a diminished amplification concomitantly with a decrease in theta band power during the N1 latency, followed by poor target enhancement during the N2 latency. Furthermore, a later modulation in the P3 latency was also found in individuals showing a propensity to visual neglect, suggesting that more effort is required for conscious maintenance of visual information in VWM. Effects during early stages of processing (N80 and P1) were also observed, suggesting that sensitivity to contrasts and medium-to-high spatial frequencies may be modulated by low-level saliency (albeit no statistical group differences were found). In accordance with the Global Workspace Model, our data indicate that a lack of resources in low-level processors and visual attention may be responsible for the failure to "ignite" a state of high-level activity spread across several brain areas that is necessary for stimuli to access awareness. These findings may aid in the development of diagnostic tests and interventions to detect or reduce the propensity to visual neglect of unexpected stimuli. © 2016 Society for Psychophysiological Research.
Age-Related Visual Changes and Their Implications for the Motor Skill Performance of Older Adults.
ERIC Educational Resources Information Center
Haywood, Kathleen M.; Trick, Linda R.
Physical changes in and conditions of the eye associated with the normal aging process are discussed with reference to their impact on performance in physical and recreational activities. Descriptions are given of characteristic changes in visual acuity in the areas of: (1) presbyopia (inability to clearly focus near images); (2) sensitivity to…
Language Networks in Anophthalmia: Maintained Hierarchy of Processing in "Visual" Cortex
ERIC Educational Resources Information Center
Watkins, Kate E.; Cowey, Alan; Alexander, Iona; Filippini, Nicola; Kennedy, James M.; Smith, Stephen M.; Ragge, Nicola; Bridge, Holly
2012-01-01
Imaging studies in blind subjects have consistently shown that sensory and cognitive tasks evoke activity in the occipital cortex, which is normally visual. The precise areas involved and degree of activation are dependent upon the cause and age of onset of blindness. Here, we investigated the cortical language network at rest and during an…
Philip A. Marcus; Ethan T. Smith
1979-01-01
Five petroleum-related facilities often sited in the coastal zone during development of Outer Continental Shelf oil and gas can change the visual appearance of coastal areas. These facilities are service bases, platform fabrication yards, marine terminals and associated storage facilities, oil and gas processing facilities, and liquefied natural gas terminals. Examples of...
Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.
Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D
2013-10-01
Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range. Copyright © 2013 Elsevier Inc. All rights reserved.
Emotion processing in the visual brain: a MEG analysis.
Peyk, Peter; Schupp, Harald T; Elbert, Thomas; Junghöfer, Markus
2008-06-01
Recent functional magnetic resonance imaging (fMRI) and event-related brain potential (ERP) studies provide empirical support for the notion that emotional cues guide selective attention. Extending this line of research, whole head magneto-encephalogram (MEG) was measured while participants viewed in separate experimental blocks a continuous stream of either pleasant and neutral or unpleasant and neutral pictures, presented for 330 ms each. Event-related magnetic fields (ERF) were analyzed after intersubject sensor coregistration, complemented by minimum norm estimates (MNE) to explore neural generator sources. Both streams of analysis converge by demonstrating the selective emotion processing in an early (120-170 ms) and a late time interval (220-310 ms). ERF analysis revealed that the polarity of the emotion difference fields was reversed across early and late intervals suggesting distinct patterns of activation in the visual processing stream. Source analysis revealed the amplified processing of emotional pictures in visual processing areas with more pronounced occipito-parieto-temporal activation in the early time interval, and a stronger engagement of more anterior, temporal, regions in the later interval. Confirming previous ERP studies showing facilitated emotion processing, the present data suggest that MEG provides a complementary look at the spread of activation in the visual processing stream.
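Minimum norm estimation of MEG generator sources, as mentioned in the preceding abstract, is available in open-source tools such as MNE-Python. A rough sketch of such an analysis is shown below; the file names and parameter values are placeholder assumptions, not the study's actual data or settings.

```python
# Rough sketch of a minimum-norm source estimate for an evoked MEG response
# using MNE-Python. File names are placeholders for a subject's averaged
# response, forward model, and noise covariance.
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

evoked = mne.read_evokeds("emotion_vs_neutral-ave.fif", condition=0)
fwd = mne.read_forward_solution("subject-fwd.fif")
noise_cov = mne.read_cov("subject-cov.fif")

inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")

# Inspect source amplitudes in an early window of interest (e.g., 120-170 ms).
early = stc.copy().crop(tmin=0.12, tmax=0.17)
print("peak source amplitude in early window:", float(early.data.max()))
```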
Visually cued motor synchronization: modulation of fMRI activation patterns by baseline condition.
Cerasa, Antonio; Hagberg, Gisela E; Bianciardi, Marta; Sabatini, Umberto
2005-01-03
A well-known issue in functional neuroimaging studies of motor synchronization is the design of suitable control tasks able to discriminate between the brain structures involved in primary time-keeper functions and those related to other processes such as attentional effort. The aim of this work was to investigate how the predictability of stimulus onsets in the baseline condition modulates the activity in brain structures related to processes involved in time-keeper functions during the performance of a visually cued motor synchronization task (VM). The rationale behind this choice derives from the notion that varying stimulus predictability can alter the subject's attention and consequently the neural activity. For this purpose, baseline levels of BOLD activity were obtained from 12 subjects during a conventional-baseline condition: maintained fixation of the visual rhythmic stimuli presented in the VM task, and a random-baseline condition: maintained fixation of visual stimuli occurring randomly. fMRI analysis demonstrated that while brain areas with a documented role in basic time processing are detected independent of the baseline condition (right cerebellum, bilateral putamen, left thalamus, left superior temporal gyrus, left sensorimotor cortex, left dorsal premotor cortex and supplementary motor area), the ventral premotor cortex, caudate nucleus, insula and inferior frontal gyrus exhibited baseline-dependent activation. We conclude that maintained fixation of unpredictable visual stimuli can be employed in order to reduce or eliminate neural activity related to attentional components present in the synchronization task.
Visual processing of words in a patient with visual form agnosia: a behavioural and fMRI study.
Cavina-Pratesi, Cristiana; Large, Mary-Ellen; Milner, A David
2015-03-01
Patient D.F. has a profound and enduring visual form agnosia due to a carbon monoxide poisoning episode suffered in 1988. Her inability to distinguish simple geometric shapes or single alphanumeric characters can be attributed to a bilateral loss of cortical area LO, a loss that has been well established through structural and functional fMRI. Yet despite this severe perceptual deficit, D.F. is able to "guess" remarkably well the identity of whole words. This paradoxical finding, which we were able to replicate more than 20 years following her initial testing, raises the question as to whether D.F. has retained specialized brain circuitry for word recognition that is able to function to some degree without the benefit of inputs from area LO. We used fMRI to investigate this, and found regions in the left fusiform gyrus, left inferior frontal gyrus, and left middle temporal cortex that responded selectively to words. A group of healthy control subjects showed similar activations. The left fusiform activations appear to coincide with the area commonly named the visual word form area (VWFA) in studies of healthy individuals, and appear to be quite separate from the fusiform face area (FFA). We hypothesize that there is a route to this area that lies outside area LO, and which remains relatively unscathed in D.F. Copyright © 2014 Elsevier Ltd. All rights reserved.
Steady-state visually evoked potential correlates of human body perception.
Giabbiconi, Claire-Marie; Jurilj, Verena; Gruber, Thomas; Vocks, Silja
2016-11-01
In cognitive neuroscience, interest in the neuronal basis underlying the processing of human bodies is steadily increasing. Based on functional magnetic resonance imaging studies, it is assumed that the processing of pictures of human bodies is anchored in a network of specialized brain areas comprising the extrastriate and the fusiform body area (EBA, FBA). An alternative way to examine the dynamics within these networks is electroencephalography, more specifically so-called steady-state visually evoked potentials (SSVEPs). In SSVEP tasks, a visual stimulus is presented repetitively at a predefined flickering rate and typically elicits a continuous oscillatory brain response at this frequency. This brain response is characterized by an excellent signal-to-noise ratio, a major advantage for source reconstructions. The main goal of the present study was to demonstrate the feasibility of this method for studying human body perception. To that end, we presented pictures of bodies and contrasted the resulting SSVEPs to two control conditions, i.e., non-objects and pictures of everyday objects (chairs). We found specific SSVEP amplitude differences between bodies and both control conditions. Source reconstructions localized the SSVEP generators to a network of temporal, occipital and parietal areas. Interestingly, only body perception resulted in activity differences in middle temporal and lateral occipitotemporal areas, most likely reflecting the EBA/FBA.
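The frequency-tagging logic behind SSVEPs, described in the preceding abstract, can be illustrated in a few lines: a stimulus flickering at a fixed rate drives an oscillation at that rate, which is quantified as the spectral amplitude at the tagging frequency. The sketch below uses synthetic data; the sampling rate and the 7.5 Hz flicker frequency are arbitrary assumptions.

```python
# Minimal SSVEP sketch: recover the response amplitude at the tagging
# frequency of a flickering stimulus from a noisy recording.
import numpy as np

fs = 500.0                      # sampling rate in Hz (assumed)
tag_freq = 7.5                  # flicker frequency in Hz (assumed)
t = np.arange(0, 10, 1 / fs)    # 10 s of data
# toy EEG/MEG trace: an SSVEP at the tag frequency buried in noise
signal = 0.5 * np.sin(2 * np.pi * tag_freq * t) + np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
tag_bin = np.argmin(np.abs(freqs - tag_freq))   # spectral bin of the tag
print(f"amplitude at {tag_freq} Hz: {spectrum[tag_bin]:.3f}")
```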
NASA Astrophysics Data System (ADS)
Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo
2009-04-01
In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although it is known that the right posterior parietal cortex (PPC) has a role in certain visual search tasks, there is little knowledge about the temporal aspect of this area. Three visual search tasks differing in difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of the PPC involved in visual search, we applied various stimulus onset asynchronies (SOAs) and measured the reaction time of the visual search. The magnetic stimulation was applied to the right PPC or the left PPC with a figure-eight coil. The results show that the reaction times of the hard feature task are longer than those of the easy feature task. At SOA = 150 ms, compared with the no-TMS condition, there was a significant increase in target-present reaction time when TMS pulses were applied. We conclude that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation. Magnetic stimulation of the right PPC disturbed the processing of the visual search, whereas magnetic stimulation of the left PPC had no effect on it.
A proposed intracortical visual prosthesis image processing system.
Srivastava, N R; Troyk, P
2005-01-01
It has been a goal of neuroprosthesis researchers to develop a system which could provide artificial vision to a large population of individuals with blindness. Earlier research has demonstrated that stimulating the visual cortex electrically can evoke spatial visual percepts, i.e. phosphenes. The goal of a visual cortex prosthesis is to stimulate the visual cortex and generate visual perception in real time to restore vision. Even though the normal working of the visual system is not completely understood, the existing knowledge has inspired research groups to develop strategies for visual cortex prostheses that can help blind patients in their daily activities. A major limitation in this work is the development of an image processing system for converting an electronic image, as captured by a camera, into a real-time data stream for stimulation of the implanted electrodes. This paper proposes a system which will capture the image using a camera and use dedicated real-time image processing hardware to deliver electrical pulses to intracortical electrodes. This system has to be flexible enough to adapt to individual patients and to various strategies of image reconstruction. Here we consider a preliminary architecture for this system.
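One conceptual piece of the image processing system proposed in the preceding abstract is the reduction of a camera frame to a small grid of per-electrode stimulation levels. The sketch below illustrates that step only; the 10 x 10 electrode layout, the current ceiling, and the use of OpenCV are illustrative assumptions, not the authors' hardware design.

```python
# Conceptual sketch (not the authors' pipeline): map a camera frame to a
# coarse grid of stimulation amplitudes, one value per intracortical electrode.
import numpy as np
import cv2  # OpenCV, assumed available for capture and resizing

ELECTRODE_GRID = (10, 10)        # assumed array layout (rows, cols)
MAX_CURRENT_UA = 60.0            # assumed per-electrode current ceiling (uA)

def frame_to_stimulation(frame_bgr: np.ndarray) -> np.ndarray:
    """Convert a camera frame to a grid of stimulation amplitudes in uA."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, ELECTRODE_GRID[::-1], interpolation=cv2.INTER_AREA)
    return small.astype(np.float32) / 255.0 * MAX_CURRENT_UA

cap = cv2.VideoCapture(0)        # default camera
ok, frame = cap.read()
if ok:
    amplitudes = frame_to_stimulation(frame)
    print(amplitudes.shape, float(amplitudes.max()))
cap.release()
```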
fMRI-activation during drawing a naturalistic or sketchy portrait.
Schaer, K; Jahn, G; Lotze, M
2012-07-15
Neural processes for naturalistic drawing may be divided into object recognition and analysis, attention processes guiding eye-hand interaction, encoding of visual features in an allocentric reference frame, transfer into the motor command, and precise motor guidance with tight sensorimotor feedback. Cerebral representations during naturalistic drawing in a real-life paradigm have rarely been investigated. Using a functional Magnetic Resonance Imaging (fMRI) paradigm, we measured 20 naive subjects while they drew a portrait from a frontal face presented as a photograph. Participants were asked to draw the portrait in either a naturalistic or a sketchy characteristic way. Tracing the contours of the face with a pencil or passive viewing of the face served as control conditions. Compared to passive viewing, naturalistic and sketchy drawing recruited predominantly the dorsal visual pathway, somatosensory and motor areas, and bilateral BA 44. The right occipital lobe, middle temporal (MT) area and the fusiform face area were also increasingly active during drawing compared to passive viewing. Compared to tracing with a pencil, both drawing tasks increasingly involved the bilateral precuneus together with the cuneus and right inferior temporal lobe. Overall, our study identified cerebral areas characteristic of previously proposed aspects of drawing: face perception and analysis (fusiform gyrus and higher visual areas), encoding and retrieval of locations in an allocentric reference frame (precuneus), and continuous feedback processes during motor output (parietal sulcus, cerebellar hemisphere). Copyright © 2012 Elsevier B.V. All rights reserved.
Representational dynamics of object recognition: Feedforward and feedback information flows.
Goddard, Erin; Carlson, Thomas A; Dermody, Nadene; Woolgar, Alexandra
2016-03-01
Object perception involves a range of visual and cognitive processes, and is known to include both a feedforward flow of information from early visual cortical areas to higher cortical areas and feedback from areas such as prefrontal cortex. Previous studies have found that low and high spatial frequency information regarding object identity may be processed over different timescales. Here we used the high temporal resolution of magnetoencephalography (MEG) combined with multivariate pattern analysis to measure information specifically related to object identity in peri-frontal and peri-occipital areas. Using stimuli closely matched in their low-level visual content, we found that activity in peri-occipital cortex could be used to decode object identity from ~80 ms post-stimulus onset, and activity in peri-frontal cortex could also be used to decode object identity from a later time (~265 ms post-stimulus onset). Low spatial frequency information related to object identity was present in the MEG signal at an earlier time than high spatial frequency information for peri-occipital cortex, but not for peri-frontal cortex. We additionally used Granger causality analysis to compare feedforward and feedback influences on representational content, and found evidence of both an early feedforward flow and a later feedback flow of information related to object identity. We discuss our findings in relation to existing theories of object processing and propose how the methods we use here could be used to address further questions of the neural substrates underlying object perception. Copyright © 2016 Elsevier Inc. All rights reserved.
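Granger causality, used in the preceding study to separate feedforward from feedback influences, asks whether one region's past activity improves prediction of another region's present activity. The sketch below applies the standard statsmodels test to synthetic "occipital" and "frontal" time series; the lag range, coupling strength, and noise level are arbitrary assumptions, not the study's MEG parameters.

```python
# Illustrative Granger-causality sketch: does occipital activity help predict
# later frontal activity (feedforward), and vice versa (feedback)?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 600
occipital = rng.normal(size=n)
# frontal lags occipital by 5 samples plus noise (toy feedforward coupling)
frontal = 0.6 * np.roll(occipital, 5) + rng.normal(scale=0.8, size=n)

# Column order is [effect, putative cause]; lags 1..5 samples are tested.
feedforward = grangercausalitytests(np.column_stack([frontal, occipital]), maxlag=5)
feedback = grangercausalitytests(np.column_stack([occipital, frontal]), maxlag=5)
print("occipital -> frontal (F, p):", feedforward[5][0]["ssr_ftest"][:2])
print("frontal -> occipital (F, p):", feedback[5][0]["ssr_ftest"][:2])
```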
Dabek, Filip; Caban, Jesus J
2017-01-01
Despite the recent popularity of visual analytics focusing on big data, little is known about how to support users who use visualization techniques to explore multi-dimensional datasets and accomplish specific tasks. Our lack of models that can assist end-users during the data exploration process has made it challenging to learn from the user's interactive and analytical process. The ability to model how a user interacts with a specific visualization technique and what difficulties they face is paramount in supporting individuals with discovering new patterns within their complex datasets. This paper introduces the notion of visualization systems that understand and model user interactions with the intent of guiding a user through a task, thereby enhancing visual data exploration. The challenges faced and the necessary future steps are discussed, and to provide a working example, a grammar-based model is presented that can learn from user interactions, determine the common patterns among a number of subjects using a K-Reversible algorithm, build a set of rules, and apply those rules in the form of suggestions to new users with the goal of guiding them along their visual analytic process. A formal evaluation study with 300 subjects was performed, showing that our grammar-based model is effective at capturing the interactive process followed by users and that further research in this area has the potential to positively impact how users interact with a visualization system.
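To make the idea of learning suggestion rules from logged interactions concrete, the sketch below uses a deliberately simplified first-order transition model; it stands in for, but does not implement, the K-Reversible grammar-induction algorithm referenced in the abstract, and the session logs are invented.

```python
# Simplified stand-in for an interaction-suggestion model: learn common
# action-to-action transitions from logged sessions and propose a likely
# next action for a new user.
from collections import Counter, defaultdict

def learn_transitions(sessions):
    """Count observed next-actions for each action across user sessions."""
    model = defaultdict(Counter)
    for actions in sessions:
        for current, nxt in zip(actions, actions[1:]):
            model[current][nxt] += 1
    return model

def suggest(model, current_action):
    """Return the most frequently observed follow-up action, if any."""
    followers = model.get(current_action)
    return followers.most_common(1)[0][0] if followers else None

sessions = [
    ["load_data", "filter", "scatterplot", "zoom"],
    ["load_data", "filter", "scatterplot", "export"],
    ["load_data", "filter", "histogram"],
]
model = learn_transitions(sessions)
print(suggest(model, "filter"))   # -> 'scatterplot' in this toy log
```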
NASA Astrophysics Data System (ADS)
Marcum, Richard A.; Davis, Curt H.; Scott, Grant J.; Nivin, Tyler W.
2017-10-01
We evaluated how deep convolutional neural networks (DCNN) could assist in the labor-intensive process of human visual searches for objects of interest in high-resolution imagery over large areas of the Earth's surface. Various DCNN were trained and tested using fewer than 100 positive training examples (China only) from a worldwide surface-to-air-missile (SAM) site dataset. A ResNet-101 DCNN achieved a 98.2% average accuracy for the China SAM site data. The ResNet-101 DCNN was used to process ~19.6 M image chips over a large study area in southeastern China. DCNN chip detections (~9300) were postprocessed with a spatial clustering algorithm to produce a ranked list of ~2100 candidate SAM site locations. The combination of DCNN processing and spatial clustering effectively reduced the search area by ~660X (0.15% of the DCNN-processed land area). An efficient web interface was used to facilitate a rapid serial human review of the candidate SAM sites in the China study area. Four novice imagery analysts with no prior imagery analysis experience were able to complete a DCNN-assisted SAM site search in an average time of ~42 min. This search was ~81X faster than a traditional visual search over an equivalent land area of ~88,640 km2 while achieving nearly identical statistical accuracy (~90% F1).
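The postprocessing step described above, grouping individual DCNN chip detections into a ranked list of candidate sites, can be sketched with an off-the-shelf density-based clustering routine. DBSCAN is used here as a stand-in for the paper's unspecified spatial clustering algorithm, and the coordinates, radius, and minimum cluster size are placeholders.

```python
# Sketch of spatial clustering of per-chip detections into ranked candidate
# sites; not the authors' algorithm or parameters.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
# toy detection centers (x, y) in meters within a study area
detections = np.vstack([
    rng.normal(loc=(1000, 2000), scale=50, size=(30, 2)),   # dense cluster
    rng.normal(loc=(9000, 500), scale=50, size=(12, 2)),    # smaller cluster
    rng.uniform(0, 10000, size=(20, 2)),                    # scattered false alarms
])

labels = DBSCAN(eps=150, min_samples=5).fit_predict(detections)
ranked = sorted(
    ((lab, int(np.sum(labels == lab))) for lab in set(labels) if lab != -1),
    key=lambda item: item[1], reverse=True)
print("candidate sites ranked by detection count:", ranked)
```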
A Visual Analytics Paradigm Enabling Trillion-Edge Graph Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak C.; Haglin, David J.; Gillen, David S.
We present a visual analytics paradigm and a system prototype for exploring web-scale graphs. A web-scale graph is described as a graph with ~one trillion edges and ~50 billion vertices. While there is an aggressive R&D effort in processing and exploring web-scale graphs among internet vendors such as Facebook and Google, visualizing a graph of that scale still remains an underexplored R&D area. The paper describes a nontraditional peek-and-filter strategy that facilitates the exploration of a graph database of unprecedented size for visualization and analytics. We demonstrate that our system prototype can 1) preprocess a graph with ~25 billion edges in less than two hours and 2) support database query and visualization on the processed graph database afterward. Based on our computational performance results, we argue that we most likely will achieve the one trillion edge mark (a computational performance improvement of 40 times) for graph visual analytics in the near future.
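One reading of the peek-and-filter strategy mentioned above is that the system first takes a cheap look at a summary of the graph, then narrows the data to a subgraph small enough to draw. The sketch below illustrates that interaction pattern on a toy edge list; it is an interpretation for illustration, not the prototype's actual implementation.

```python
# Illustrative peek-and-filter sketch: estimate degrees from a sample
# ("peek"), then keep only edges among apparent hubs ("filter") for drawing.
from collections import Counter
from itertools import islice

def peek(edge_stream, sample_size=100_000):
    """Estimate vertex degrees from a prefix sample of the edge stream."""
    degree = Counter()
    for u, v in islice(edge_stream, sample_size):
        degree[u] += 1
        degree[v] += 1
    return degree

def filter_edges(edges, keep_vertices):
    """Keep only edges whose endpoints are both in the peeked vertex set."""
    return [(u, v) for u, v in edges if u in keep_vertices and v in keep_vertices]

# toy edge list; a real deployment would stream edges from disk or a database
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (4, 5), (0, 4)]
hubs = {v for v, _ in peek(iter(edges)).most_common(3)}
print(filter_edges(edges, hubs))
```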
Humphries, Colin; Desai, Rutvik H.; Seidenberg, Mark S.; Osmon, David C.; Stengel, Ben C.; Binder, Jeffrey R.
2013-01-01
Although the left posterior occipitotemporal sulcus (pOTS) has been called a visual word form area, debate persists over the selectivity of this region for reading relative to general nonorthographic visual object processing. We used high-resolution functional magnetic resonance imaging to study left pOTS responses to combinatorial orthographic and object shape information. Participants performed naming and visual discrimination tasks designed to encourage or suppress phonological encoding. During the naming task, all participants showed subregions within left pOTS that were more sensitive to combinatorial orthographic information than to object information. This difference disappeared, however, when phonological processing demands were removed. Responses were stronger to pseudowords than to words, but this effect also disappeared when phonological processing demands were removed. Subregions within the left pOTS are preferentially activated when visual input must be mapped to a phonological representation (i.e., a name) and particularly when component parts of the visual input must be mapped to corresponding phonological elements (consonant or vowel phonemes). Results indicate a specialized role for subregions within the left pOTS in the isomorphic mapping of familiar combinatorial visual patterns to phonological forms. This process distinguishes reading from picture naming and accounts for a wide range of previously reported stimulus and task effects in left pOTS. PMID:22505661
Sneve, Markus H; Magnussen, Svein; Alnæs, Dag; Endestad, Tor; D'Esposito, Mark
2013-11-01
Visual STM of simple features is achieved through interactions between retinotopic visual cortex and a set of frontal and parietal regions. In the present fMRI study, we investigated effective connectivity between central nodes in this network during the different task epochs of a modified delayed orientation discrimination task. Our univariate analyses demonstrate that the inferior frontal junction (IFJ) is preferentially involved in memory encoding, whereas activity in the putative FEFs and anterior intraparietal sulcus (aIPS) remains elevated throughout periods of memory maintenance. We have earlier reported, using the same task, that areas in visual cortex sustain information about task-relevant stimulus properties during delay intervals [Sneve, M. H., Alnæs, D., Endestad, T., Greenlee, M. W., & Magnussen, S. Visual short-term memory: Activity supporting encoding and maintenance in retinotopic visual cortex. Neuroimage, 63, 166-178, 2012]. To elucidate the temporal dynamics of the IFJ-FEF-aIPS-visual cortex network during memory operations, we estimated Granger causality effects between these regions with fMRI data representing memory encoding/maintenance as well as during memory retrieval. We also investigated a set of control conditions involving active processing of stimuli not associated with a memory task and passive viewing. In line with the developing understanding of IFJ as a region critical for control processes with a possible initiating role in visual STM operations, we observed influence from IFJ to FEF and aIPS during memory encoding. Furthermore, FEF predicted activity in a set of higher-order visual areas during memory retrieval, a finding consistent with its suggested role in top-down biasing of sensory cortex.
Brain activation associated with practiced left hand mirror writing.
Kushnir, T; Arzouan, Y; Karni, A; Manor, D
2013-04-01
Mirror writing occurs in healthy children, in various pathologies, and occasionally in healthy adults. There are only scant experimental data on the underlying brain processes. Eight right-handed, healthy young adults were scanned (BOLD-fMRI) before and after practicing left-hand mirror-writing (lh-MW) over seven sessions. They wrote dictated words, using either the right hand with regularly oriented writing or lh-MW. An MRI-compatible stylus-point recording system was used and online visual feedback was provided. Practice resulted in increased speed and readability of lh-MW, but the number of movement segments was unchanged. Post-training signal increases occurred in visual, right lateral and medial premotor areas, and in right anterior and posterior peri-sylvian areas corresponding to language areas. These results suggest that lh-MW may constitute a latent ability that can be reinstated by a relatively brief practice experience. Concurrently, right hemisphere language processing areas may emerge, reflecting perhaps a reduction in trans-hemispheric suppression. Copyright © 2013 Elsevier Inc. All rights reserved.
A deep (learning) dive into visual search behaviour of breast radiologists
NASA Astrophysics Data System (ADS)
Mall, Suneeta; Brennan, Patrick C.; Mello-Thoms, Claudia
2018-03-01
Visual search, the process of detecting and identifying objects using the eye movements (saccades) and the foveal vision, has been studied for identification of root causes of errors in the interpretation of mammography. The aim of this study is to model visual search behaviour of radiologists and their interpretation of mammograms using deep machine learning approaches. Our model is based on a deep convolutional neural network, a biologically-inspired multilayer perceptron that simulates the visual cortex, and is reinforced with transfer learning techniques. Eye tracking data obtained from 8 radiologists (of varying experience levels in reading mammograms) reviewing 120 two-view digital mammography cases (59 cancers) have been used to train the model, which was pre-trained with the ImageNet dataset for transfer learning. Areas of the mammogram that received direct (foveally fixated), indirect (peripherally fixated) or no (never fixated) visual attention were extracted from radiologists' visual search maps (obtained by a head mounted eye tracking device). These areas, along with the radiologists' assessment (including confidence of the assessment) of suspected malignancy were used to model: 1) Radiologists' decision; 2) Radiologists' confidence on such decision; and 3) The attentional level (i.e. foveal, peripheral or none) obtained by an area of the mammogram. Our results indicate high accuracy and low misclassification in modelling such behaviours.
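The transfer-learning setup described in the preceding abstract, a convolutional network pre-trained on ImageNet and adapted to a new labelling task, can be sketched as follows. The three-class attentional labels, the choice of ResNet-18, and the dummy batch are assumptions for illustration; this is not the authors' architecture or training regime.

```python
# Minimal transfer-learning sketch: reuse ImageNet features and retrain only
# the final layer to predict an attentional label for an image patch.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # foveal, peripheral, never fixated (assumed labels)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                 # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# one illustrative training step on a dummy batch of 224x224 patches
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```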
De Weerd, Peter; Reithler, Joel; van de Ven, Vincent; Been, Marin; Jacobs, Christianne; Sack, Alexander T
2012-02-08
Practice-induced improvements in skilled performance reflect "offline " consolidation processes extending beyond daily training sessions. According to visual learning theories, an early, fast learning phase driven by high-level areas is followed by a late, asymptotic learning phase driven by low-level, retinotopic areas when higher resolution is required. Thus, low-level areas would not contribute to learning and offline consolidation until late learning. Recent studies have challenged this notion, demonstrating modified responses to trained stimuli in primary visual cortex (V1) and offline activity after very limited training. However, the behavioral relevance of modified V1 activity for offline consolidation of visual skill memory in V1 after early training sessions remains unclear. Here, we used neuronavigated transcranial magnetic stimulation (TMS) directed to a trained retinotopic V1 location to test for behaviorally relevant consolidation in human low-level visual cortex. Applying TMS to the trained V1 location within 45 min of the first or second training session strongly interfered with learning, as measured by impaired performance the next day. The interference was conditional on task context and occurred only when training in the location targeted by TMS was followed by training in a second location before TMS. In this condition, high-level areas may become coupled to the second location and uncoupled from the previously trained low-level representation, thereby rendering consolidation vulnerable to interference. Our data show that, during the earliest phases of skill learning in the lowest-level visual areas, a behaviorally relevant form of consolidation exists of which the robustness is controlled by high-level, contextual factors.
Representation of visual symbols in the visual word processing network.
Muayqil, Taim; Davies-Thompson, Jodie; Barton, Jason J S
2015-03-01
Previous studies have shown that word processing involves a predominantly left-sided occipitotemporal network. Words are a form of symbolic representation, in that they are arbitrary perceptual stimuli that represent other objects, actions or concepts. Lesions of parts of the visual word processing network can cause alexia, which can be associated with difficulty processing other types of symbols such as musical notation or road signs. We investigated whether components of the visual word processing network were also activated by other types of symbols. In 16 music-literate subjects, we defined the visual word network using fMRI and examined responses to four symbolic categories: visual words, musical notation, instructive symbols (e.g. traffic signs), and flags and logos. For each category we compared responses not only to scrambled stimuli, but also to similar stimuli that lacked symbolic meaning. The left visual word form area and a homologous right fusiform region responded similarly to all four categories, but equally to both symbolic and non-symbolic equivalents. Greater response to symbolic than non-symbolic stimuli occurred only in the left inferior frontal and middle temporal gyri, but only for words, and in the case of the left inferior frontal gyri, also for musical notation. A whole-brain analysis comparing symbolic versus non-symbolic stimuli revealed a distributed network of inferior temporooccipital and parietal regions that differed for different symbols. The fusiform gyri are involved in processing the form of many symbolic stimuli, but not specifically for stimuli with symbolic content. Selectivity for stimuli with symbolic content only emerges in the visual word network at the level of the middle temporal and inferior frontal gyri, but is specific for words and musical notation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Katzner, Steffen; Busse, Laura; Treue, Stefan
2009-01-01
Directing visual attention to spatial locations or to non-spatial stimulus features can strongly modulate responses of individual cortical sensory neurons. Effects of attention typically vary in magnitude, not only between visual cortical areas but also between individual neurons from the same area. Here, we investigate whether the size of attentional effects depends on the match between the tuning properties of the recorded neuron and the perceptual task at hand. We recorded extracellular responses from individual direction-selective neurons in the middle temporal area (MT) of rhesus monkeys trained to attend either to the color or the motion signal of a moving stimulus. We found that effects of spatial and feature-based attention in MT, which are typically observed in tasks allocating attention to motion, were very similar even when attention was directed to the color of the stimulus. We conclude that attentional modulation can occur in extrastriate cortex, even under conditions without a match between the tuning properties of the recorded neuron and the perceptual task at hand. Our data are consistent with theories of object-based attention describing a transfer of attention from relevant to irrelevant features, within the attended object and across the visual field. These results argue for a unified attentional system that modulates responses to a stimulus across cortical areas, even if a given area is specialized for processing task-irrelevant aspects of that stimulus.
How task demands shape brain responses to visual food cues.
Pohl, Tanja Maria; Tempelmann, Claus; Noesselt, Toemme
2017-06-01
Several previous imaging studies have aimed at identifying the neural basis of visual food cue processing in humans. However, there is little consistency of the functional magnetic resonance imaging (fMRI) results across studies. Here, we tested the hypothesis that this variability across studies might - at least in part - be caused by the different tasks employed. In particular, we assessed directly the influence of task set on brain responses to food stimuli with fMRI using two tasks (colour vs. edibility judgement, between-subjects design). When participants judged colour, the left insula, the left inferior parietal lobule, occipital areas, the left orbitofrontal cortex and other frontal areas expressed enhanced fMRI responses to food relative to non-food pictures. However, when judging edibility, enhanced fMRI responses to food pictures were observed in the superior and middle frontal gyrus and in medial frontal areas including the pregenual anterior cingulate cortex and ventromedial prefrontal cortex. This pattern of results indicates that task sets can significantly alter the neural underpinnings of food cue processing. We propose that judging low-level visual stimulus characteristics - such as colour - triggers stimulus-related representations in the visual and even in gustatory cortex (insula), whereas discriminating abstract stimulus categories activates higher order representations in both the anterior cingulate and prefrontal cortex. Hum Brain Mapp 38:2897-2912, 2017. © 2017 Wiley Periodicals, Inc.
Störmer, Viola S; Alvarez, George A; Cavanagh, Patrick
2014-08-27
It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. Copyright © 2014 the authors 0270-6474/14/3311526-08$15.00/0.
Visual and auditory perception in preschool children at risk for dyslexia.
Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina
2014-11-01
Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read, thus, these findings support the theory of temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.
Resting state neural networks for visual Chinese word processing in Chinese adults and children.
Li, Ling; Liu, Jiangang; Chen, Feiyan; Feng, Lu; Li, Hong; Tian, Jie; Lee, Kang
2013-07-01
This study examined the resting state neural networks for visual Chinese word processing in Chinese children and adults. Both the functional connectivity (FC) and amplitude of low frequency fluctuation (ALFF) approaches were used to analyze the fMRI data collected when Chinese participants were not engaged in any specific explicit tasks. We correlated time series extracted from the visual word form area (VWFA) with those in other regions in the brain. We also performed ALFF analysis in the resting state FC networks. The FC results revealed that, regarding the functionally connected brain regions, there exist similar intrinsically organized resting state networks for visual Chinese word processing in adults and children, suggesting that such networks may already be functional after 3-4 years of informal exposure to reading plus 3-4 years of formal schooling. The ALFF results revealed that children appear to recruit more neural resources than adults in generally reading-irrelevant brain regions. Differences between child and adult ALFF results suggest that children's intrinsic word processing network during the resting state, though similar in functional connectivity, is still undergoing development. Further exposure to visual words and experience with reading are needed for children to develop a mature intrinsic network for word processing. The developmental course of the intrinsically organized word processing network may parallel that of the explicit word processing network. Copyright © 2013 Elsevier Ltd. All rights reserved.
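The two resting-state measures named above can be illustrated with a short, hypothetical sketch: seed-based functional connectivity as the correlation between a seed (e.g., VWFA) time series and every other voxel, and ALFF as spectral amplitude in a low-frequency band. The 0.01-0.08 Hz band, the TR, and the toy data below are assumptions for illustration, not the study's parameters.

```python
import numpy as np
from scipy.signal import welch

def alff(ts, tr, band=(0.01, 0.08)):
    """Amplitude of low-frequency fluctuation: mean sqrt(power) in a low-frequency band."""
    freqs, psd = welch(ts, fs=1.0 / tr, nperseg=min(len(ts), 128))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.sqrt(psd[mask]).mean()

def seed_fc(seed_ts, voxel_ts):
    """Pearson correlation between a seed time series and each voxel's time series."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    vox = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    return (vox * seed[:, None]).mean(axis=0)

# Toy data: 200 volumes at TR = 2 s, 1000 voxels, one voxel coupled to the seed.
rng = np.random.default_rng(1)
n_vol, tr = 200, 2.0
seed = rng.normal(size=n_vol)
voxels = rng.normal(size=(n_vol, 1000))
voxels[:, 0] += 0.8 * seed                       # a voxel functionally coupled to the seed
print("ALFF of seed:", alff(seed, tr))
print("FC with first two voxels:", seed_fc(seed, voxels)[:2])
```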
Sequential sensory and decision processing in posterior parietal cortex
Ibos, Guilhem; Freedman, David J
2017-01-01
Decisions about the behavioral significance of sensory stimuli often require comparing sensory inference of what we are looking at to internal models of what we are looking for. Here, we test how neuronal selectivity for visual features is transformed into decision-related signals in posterior parietal cortex (area LIP). Monkeys performed a visual matching task that required them to detect target stimuli composed of conjunctions of color and motion-direction. Neuronal recordings from area LIP revealed two main findings. First, the sequential processing of visual features and the selection of target-stimuli suggest that LIP is involved in transforming sensory information into decision-related signals. Second, the patterns of color and motion selectivity and their impact on decision-related encoding suggest that LIP plays a role in detecting target stimuli by comparing bottom-up sensory inputs (what the monkeys were looking at) and top-down cognitive encoding inputs (what the monkeys were looking for). DOI: http://dx.doi.org/10.7554/eLife.23743.001 PMID:28418332
Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.
Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun
2016-01-01
Humans can easily classify different kinds of objects, whereas this remains quite difficult for computers. As a challenging open problem, object classification has been receiving extensive interest, with broad prospects. Inspired by neuroscience, the concept of deep learning was proposed. The convolutional neural network (CNN), as one deep learning method, can be used to solve classification problems. However, most deep learning methods, including CNN, ignore the human visual information processing mechanism that operates when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we propose a new classification method that combines a visual attention model and a CNN. First, we use the visual attention model to simulate the human visual selection mechanism. Second, we use the CNN to simulate how humans select features, extracting the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. Our classification method has clear advantages in terms of biological plausibility. Experimental results demonstrate that our method significantly improves classification efficiency.
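To make the pipeline concrete, here is a hypothetical sketch of the general idea rather than the authors' model: a bottom-up saliency map (the spectral-residual method is used here purely as a stand-in for their visual attention model) selects the most conspicuous region, and a small CNN classifies the crop. The network architecture, crop size, and data are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d
import torch
import torch.nn as nn

def spectral_residual_saliency(img):
    """Bottom-up saliency map of a grayscale image (spectral-residual method)."""
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Spectral residual = log amplitude minus its local average.
    residual = log_amp - convolve2d(log_amp, np.ones((3, 3)) / 9.0, mode="same", boundary="wrap")
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

class TinyCNN(nn.Module):
    """A stand-in classifier for the salient crop (not the network used in the paper)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Toy usage: find the most salient 32x32 crop in a random image and classify it.
rng = np.random.default_rng(2)
img = rng.random((128, 128))
img[40:72, 60:92] += 2.0                                  # a conspicuous patch
sal = spectral_residual_saliency(img)
y, x = np.unravel_index(np.argmax(sal), sal.shape)
y0, x0 = int(np.clip(y - 16, 0, 96)), int(np.clip(x - 16, 0, 96))
crop = img[y0:y0 + 32, x0:x0 + 32]
logits = TinyCNN()(torch.tensor(crop, dtype=torch.float32)[None, None])
print("predicted class (untrained network):", logits.argmax(dim=1).item())
```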
Procedures for precap visual inspection
NASA Technical Reports Server (NTRS)
1984-01-01
Screening procedures for the final precap visual inspection of microcircuits used in electronic system components are described as an aid in training personnel unfamiliar with microcircuits. Processing techniques used in industry for the manufacture of monolithic and hybrid components are presented and imperfections that may be encountered during this inspection are discussed. Problem areas such as scratches, voids, adhesions, and wire bonding are illustrated by photomicrographs. This guide can serve as an effective tool in training personnel to perform precap visual inspections efficiently and reliably.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown-VanHoozer, S.A.
Most designers are not schooled in the area of human-interaction psychology and therefore tend to rely on the traditional ergonomic aspects of human factors when designing complex human-interactive workstations related to reactor operations. They do not take into account the differences in user information processing behavior and how these behaviors may affect individual and team performance when accessing visual displays or utilizing system models in process and control room areas. Unfortunately, by ignoring the importance of integrating the user interface at the information processing level, the result can be sub-optimization and inherently error- and failure-prone systems. Therefore, to minimize or eliminate failures in human-interactive systems, it is essential that designers understand how each user's processing characteristics affect how the user gathers information and how the user communicates that information to the designer and other users. A different type of approach to achieving this understanding is Neuro Linguistic Programming (NLP). The material presented in this paper is based on two studies involving the design of visual displays, NLP, and the user's perspective model of a reactor system. The studies involve the methodology known as NLP and its use in expanding design choices from the user's "model of the world" in the areas of virtual reality, workstation design, team structure, decision and learning style patterns, safety operations, pattern recognition, and more.
The threshold for conscious report: Signal loss and response bias in visual and frontal cortex.
van Vugt, Bram; Dagnino, Bruno; Vartak, Devavrat; Safaai, Houman; Panzeri, Stefano; Dehaene, Stanislas; Roelfsema, Pieter R
2018-05-04
Why are some visual stimuli consciously detected, whereas others remain subliminal? We investigated the fate of weak visual stimuli in the visual and frontal cortex of awake monkeys trained to report stimulus presence. Reported stimuli were associated with strong sustained activity in the frontal cortex, and frontal activity was weaker and quickly decayed for unreported stimuli. Information about weak stimuli could be lost at successive stages en route from the visual to the frontal cortex, and these propagation failures were confirmed through microstimulation of area V1. Fluctuations in response bias and sensitivity during perception of identical stimuli were traced back to prestimulus brain-state markers. A model in which stimuli become consciously reportable when they elicit a nonlinear ignition process in higher cortical areas explained our results. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
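The fluctuations in response bias and sensitivity mentioned above are usually quantified with standard signal-detection estimates. The sketch below computes d' and criterion from hit and false-alarm counts with a simple correction for extreme rates; it is a generic illustration of the measures, not the paper's exact procedure.

```python
from scipy.stats import norm

def dprime_criterion(hits, misses, fas, crs):
    """Signal-detection sensitivity (d') and criterion from trial counts.

    A log-linear correction (add 0.5 to each cell) avoids infinite z-scores
    when hit or false-alarm rates are 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    return d_prime, criterion

# Example: 80 hits / 20 misses on stimulus-present trials,
# 10 false alarms / 90 correct rejections on stimulus-absent trials.
print(dprime_criterion(80, 20, 10, 90))
```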
Le, Thang M; Borghi, John A; Kujawa, Autumn J; Klein, Daniel N; Leung, Hoi-Chung
2017-01-01
The present study examined the impacts of major depressive disorder (MDD) on visual and prefrontal cortical activity as well as their connectivity during visual working memory updating and related them to the core clinical features of the disorder. Impairment in working memory updating is typically associated with the retention of irrelevant negative information which can lead to persistent depressive mood and abnormal affect. However, performance deficits have been observed in MDD on tasks involving little or no demand on emotion processing, suggesting dysfunctions may also occur at the more basic level of information processing. Yet, it is unclear how various regions in the visual working memory circuit contribute to behavioral changes in MDD. We acquired functional magnetic resonance imaging data from 18 unmedicated participants with MDD and 21 age-matched healthy controls (CTL) while they performed a visual delayed recognition task with neutral faces and scenes as task stimuli. Selective working memory updating was manipulated by inserting a cue in the delay period to indicate which one or both of the two memorized stimuli (a face and a scene) would remain relevant for the recognition test. Our results revealed several key findings. Relative to the CTL group, the MDD group showed weaker postcue activations in visual association areas during selective maintenance of face and scene working memory. Across the MDD subjects, greater rumination and depressive symptoms were associated with more persistent activation and connectivity related to no-longer-relevant task information. Classification of postcue spatial activation patterns of the scene-related areas was also less consistent in the MDD subjects compared to the healthy controls. Such abnormalities appeared to result from a lack of updating effects in postcue functional connectivity between prefrontal and scene-related areas in the MDD group. In sum, disrupted working memory updating in MDD was revealed by alterations in activity patterns of the visual association areas, their connectivity with the prefrontal cortex, and their relationship with core clinical characteristics. These results highlight the role of information updating deficits in the cognitive control and symptomatology of depression.
Drawing and writing: An ALE meta-analysis of sensorimotor activations.
Yuan, Ye; Brown, Steven
2015-08-01
Drawing and writing are the two major means of creating what are referred to as "images", namely visual patterns on flat surfaces. They share many sensorimotor processes related to visual guidance of hand movement, resulting in the formation of visual shapes associated with pictures and words. However, while the human capacity to draw is tens of thousands of years old, the capacity for writing is only a few thousand years old, and widespread literacy is quite recent. In order to compare the neural activations for drawing and writing, we conducted two activation likelihood estimation (ALE) meta-analyses for these two bodies of neuroimaging literature. The results showed strong overlap in the activation profiles, especially in motor areas (motor cortex, frontal eye fields, supplementary motor area, cerebellum, putamen) and several parts of the posterior parietal cortex. A distinction was found in the left posterior parietal cortex, with drawing showing a preference for a ventral region and writing a dorsal region. These results demonstrate that drawing and writing employ the same basic sensorimotor networks but that some differences exist in parietal areas involved in spatial processing. Copyright © 2015 Elsevier Inc. All rights reserved.
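As a rough illustration of the ALE approach used above: each reported activation focus is modeled as a Gaussian blob, foci within one experiment are combined into a modeled activation map, and maps are merged across experiments as a probabilistic union. The fixed kernel width and toy coordinates below are assumptions; real ALE uses sample-size-dependent kernels and permutation-based thresholding.

```python
import numpy as np

def modeled_activation(shape, foci, fwhm_vox):
    """Per-experiment modeled activation map: max over Gaussian blobs centred on foci."""
    sigma = fwhm_vox / 2.355
    grid = np.indices(shape)                                # (3, X, Y, Z) voxel coordinates
    ma = np.zeros(shape)
    for focus in foci:
        d2 = sum((g - c) ** 2 for g, c in zip(grid, focus))
        ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
    return ma

def ale_map(experiments, shape, fwhm_vox=3.0):
    """Activation likelihood estimate: probabilistic union of modeled activation maps."""
    ale = np.zeros(shape)
    for foci in experiments:
        ma = modeled_activation(shape, foci, fwhm_vox)
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)                # union of independent probabilities
    return ale

# Toy example: two "experiments" reporting nearby foci in a 20x20x20 voxel grid.
experiments = [[(10, 10, 10), (5, 5, 5)], [(11, 10, 9)]]
ale = ale_map(experiments, (20, 20, 20))
print("peak ALE value:", ale.max(), "at voxel", np.unravel_index(ale.argmax(), ale.shape))
```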
Jiang, Fang; Stecker, G. Christopher; Boynton, Geoffrey M.; Fine, Ione
2016-01-01
Early blind subjects exhibit superior abilities for processing auditory motion, which are accompanied by enhanced BOLD responses to auditory motion within hMT+ and reduced responses within right planum temporale (rPT). Here, by comparing BOLD responses to auditory motion in hMT+ and rPT within sighted controls, early blind, late blind, and sight-recovery individuals, we were able to separately examine the effects of developmental and adult visual deprivation on cortical plasticity within these two areas. We find that both the enhanced auditory motion responses in hMT+ and the reduced functionality in rPT are driven by the absence of visual experience early in life; neither loss nor recovery of vision later in life had a discernible influence on plasticity within these areas. Cortical plasticity as a result of blindness has generally been presumed to be mediated by competition across modalities within a given cortical region. The reduced functionality within rPT as a result of early visual loss implicates an additional mechanism for cross-modal plasticity as a result of early blindness: competition across different cortical areas for functional role. PMID:27458357
Shichinohe, Natsuko; Akao, Teppei; Kurkin, Sergei; Fukushima, Junko; Kaneko, Chris R S; Fukushima, Kikuro
2009-06-11
Cortical motor areas are thought to contribute "higher-order processing," but what that processing might include is unknown. Previous studies of the smooth pursuit-related discharge of supplementary eye field (SEF) neurons have not distinguished activity associated with the preparation for pursuit from discharge related to processing or memory of the target motion signals. Using a memory-based task designed to separate these components, we show that the SEF contains signals coding retinal image-slip-velocity, memory, and assessment of visual motion direction, the decision of whether to pursue, and the preparation for pursuit eye movements. Bilateral muscimol injection into SEF resulted in directional errors in smooth pursuit, errors of whether to pursue, and impairment of initial correct eye movements. These results suggest an important role for the SEF in memory and assessment of visual motion direction and the programming of appropriate pursuit eye movements.
The visual analysis of emotional actions.
Chouchourelou, Arieta; Matsuka, Toshihiko; Harber, Kent; Shiffrar, Maggie
2006-01-01
Is the visual analysis of human actions modulated by the emotional content of those actions? This question is motivated by a consideration of the neuroanatomical connections between visual and emotional areas. Specifically, the superior temporal sulcus (STS), known to play a critical role in the visual detection of action, is extensively interconnected with the amygdala, a center for emotion processing. To the extent that amygdala activity influences STS activity, one would expect to find systematic differences in the visual detection of emotional actions. A series of psychophysical studies tested this prediction. Experiment 1 identified point-light walker movies that convincingly depicted five different emotional states: happiness, sadness, neutral, anger, and fear. In Experiment 2, participants performed a walker detection task with these movies. Detection performance was systematically modulated by the emotional content of the gaits. Participants demonstrated the greatest visual sensitivity to angry walkers. The results of Experiment 3 suggest that local velocity cues to anger may account for high false alarm rates to the presence of angry gaits. These results support the hypothesis that the visual analysis of human action depends upon emotion processes.
Structural reorganization of the early visual cortex following Braille training in sighted adults.
Bola, Łukasz; Siuda-Krzywicka, Katarzyna; Paplińska, Małgorzata; Sumera, Ewa; Zimmermann, Maria; Jednoróg, Katarzyna; Marchewka, Artur; Szwed, Marcin
2017-12-12
Training can induce cross-modal plasticity in the human cortex. A well-known example of this phenomenon is the recruitment of visual areas for tactile and auditory processing. It remains unclear to what extent such plasticity is associated with changes in anatomy. Here we enrolled 29 sighted adults into a nine-month tactile Braille-reading training, and used voxel-based morphometry and diffusion tensor imaging to describe the resulting anatomical changes. In addition, we collected resting-state fMRI data to relate these changes to functional connectivity between visual and somatosensory-motor cortices. Following Braille-training, we observed substantial grey and white matter reorganization in the anterior part of early visual cortex (peripheral visual field). Moreover, relative to its posterior, foveal part, the peripheral representation of early visual cortex had stronger functional connections to somatosensory and motor cortices even before the onset of training. Previous studies show that the early visual cortex can be functionally recruited for tactile discrimination, including recognition of Braille characters. Our results demonstrate that reorganization in this region induced by tactile training can also be anatomical. This change most likely reflects a strengthening of existing connectivity between the peripheral visual cortex and somatosensory cortices, which suggests a putative mechanism for cross-modal recruitment of visual areas.
Lahnakoski, Juha M; Salmi, Juha; Jääskeläinen, Iiro P; Lampinen, Jouko; Glerean, Enrico; Tikka, Pia; Sams, Mikko
2012-01-01
Understanding how the brain processes stimuli in a rich natural environment is a fundamental goal of neuroscience. Here, we showed a feature film to 10 healthy volunteers during functional magnetic resonance imaging (fMRI) of hemodynamic brain activity. We then annotated auditory and visual features of the motion picture to inform analysis of the hemodynamic data. The annotations were fitted to both voxel-wise data and brain network time courses extracted by independent component analysis (ICA). Auditory annotations correlated with two independent components (ICs) disclosing two functional networks, one responding to a variety of auditory stimulation and another responding preferentially to speech, with parts of the network also responding to non-verbal communication. Visual feature annotations correlated with four ICs delineating visual areas according to their sensitivity to different visual stimulus features. In comparison, a separate voxel-wise general linear model-based analysis disclosed brain areas preferentially responding to sound energy, speech, music, visual contrast edges, body motion, and hand motion, which largely overlapped the results revealed by ICA. Differences between the results of IC- and voxel-based analyses demonstrate that thorough analysis of voxel time courses is important for understanding the activity of specific sub-areas of the functional networks, while ICA is a valuable tool for revealing novel information about functional connectivity which need not be explained by the predefined model. Our results encourage the use of naturalistic stimuli and tasks in cognitive neuroimaging to study how the brain processes stimuli in rich natural environments.
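The IC-versus-annotation analysis described above can be sketched in a few lines: decompose the voxel-by-time data with ICA and correlate each component time course with an annotated stimulus-feature regressor. The dimensions, the "speech" regressor, and the number of components below are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_vol, n_vox = 300, 2000

# Toy data: one hidden "network" time course driven by an annotated feature regressor.
speech_annotation = (rng.random(n_vol) > 0.7).astype(float)      # e.g., speech present / absent
network_ts = speech_annotation + 0.3 * rng.normal(size=n_vol)
spatial_map = rng.normal(size=n_vox) * (rng.random(n_vox) > 0.9)
data = np.outer(network_ts, spatial_map) + rng.normal(size=(n_vol, n_vox))

# Decompose voxel time series into independent component time courses.
ica = FastICA(n_components=10, random_state=0, max_iter=1000)
ic_timecourses = ica.fit_transform(data)                         # shape (n_vol, n_components)

# Correlate each IC time course with the annotation regressor.
z = (speech_annotation - speech_annotation.mean()) / speech_annotation.std()
ic_z = (ic_timecourses - ic_timecourses.mean(0)) / ic_timecourses.std(0)
corrs = (ic_z * z[:, None]).mean(0)
best = np.abs(corrs).argmax()
print("best-matching IC:", best, "r =", corrs[best])
```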
A Cortical Network for the Encoding of Object Change
Hindy, Nicholas C.; Solomon, Sarah H.; Altmann, Gerry T.M.; Thompson-Schill, Sharon L.
2015-01-01
Understanding events often requires recognizing unique stimuli as alternative, mutually exclusive states of the same persisting object. Using fMRI, we examined the neural mechanisms underlying the representation of object states and object-state changes. We found that subjective ratings of visual dissimilarity between a depicted object and an unseen alternative state of that object predicted the corresponding multivoxel pattern dissimilarity in early visual cortex during an imagery task, while late visual cortex patterns tracked dissimilarity among distinct objects. Early visual cortex pattern dissimilarity for object states in turn predicted the level of activation in an area of left posterior ventrolateral prefrontal cortex (pVLPFC) most responsive to conflict in a separate Stroop color-word interference task, and an area of left ventral posterior parietal cortex (vPPC) implicated in the relational binding of semantic features. We suggest that when visualizing object states, representational content instantiated across early and late visual cortex is modulated by processes in left pVLPFC and left vPPC that support selection and binding, and ultimately event comprehension. PMID:24127425
Lightness computation by the human visual system
NASA Astrophysics Data System (ADS)
Rudd, Michael E.
2017-05-01
A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and spatial integration attentional windowing. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2) while spatial integration windowing occurs in cortical area V4 or beyond.
Decoding visual object categories in early somatosensory cortex.
Smith, Fraser W; Goodale, Melvyn A
2015-04-01
Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. © The Author 2013. Published by Oxford University Press.
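The multivariate pattern analysis step reported above is, at its core, cross-validated classification of object category from ROI voxel patterns. The sketch below uses simulated patterns and a linear SVM as a generic stand-in for the data and classifier actually used; trial counts and voxel numbers are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials_per_cat, n_voxels, n_categories = 40, 120, 4

# Simulated single-trial voxel patterns: each category has a weak, consistent pattern.
category_patterns = rng.normal(0, 0.5, size=(n_categories, n_voxels))
X = np.vstack([category_patterns[c] + rng.normal(size=(n_trials_per_cat, n_voxels))
               for c in range(n_categories)])
y = np.repeat(np.arange(n_categories), n_trials_per_cat)

# Cross-validated decoding accuracy; chance level is 1 / n_categories.
clf = LinearSVC(max_iter=10000)
scores = cross_val_score(clf, X, y, cv=5)
print("mean decoding accuracy:", scores.mean(), "(chance = 0.25)")
```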
ERIC Educational Resources Information Center
Eidels, Ami; Townsend, James T.; Pomerantz, James R.
2008-01-01
People are especially efficient in processing certain visual stimuli such as human faces or good configurations. It has been suggested that topology and geometry play important roles in configural perception. Visual search is one area in which configurality seems to matter. When either of 2 target features leads to a correct response and the…
On the Functional Neuroanatomy of Visual Word Processing: Effects of Case and Letter Deviance
ERIC Educational Resources Information Center
Kronbichler, Martin; Klackl, Johannes; Richlan, Fabio; Schurz, Matthias; Staffen, Wolfgang; Ladurner, Gunther; Wimmer, Heinz
2009-01-01
This functional magnetic resonance imaging study contrasted case-deviant and letter-deviant forms with familiar forms of the same phonological words (e.g., "TaXi" and "Taksi" vs. "Taxi") and found that both types of deviance led to increased activation in a left occipito-temporal region, corresponding to the visual word form area (VWFA). The…
Exploring associations between gaze patterns and putative human mirror neuron system activity.
Donaldson, Peter H; Gurvich, Caroline; Fielding, Joanne; Enticott, Peter G
2015-01-01
The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18-40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern.
Attention, Intention, and Priority in the Parietal Lobe
Bisley, James W.; Goldberg, Michael E.
2013-01-01
For many years there has been a debate about the role of the parietal lobe in the generation of behavior. Does it generate movement plans (intention) or choose objects in the environment for further processing? To answer this, we focus on the lateral intraparietal area (LIP), an area that has been shown to play independent roles in target selection for saccades and the generation of visual attention. Based on results from a variety of tasks, we propose that LIP acts as a priority map in which objects are represented by activity proportional to their behavioral priority. We present evidence to show that the priority map combines bottom-up inputs like a rapid visual response with an array of top-down signals like a saccade plan. The spatial location representing the peak of the map is used by the oculomotor system to target saccades and by the visual system to guide visual attention. PMID:20192813
Manginelli, Angela A; Baumgartner, Florian; Pollmann, Stefan
2013-02-15
Behavioral evidence suggests that the use of implicitly learned spatial contexts for improved visual search may depend on visual working memory resources. Working memory may be involved in contextual cueing in different ways: (1) for keeping implicitly learned working memory contents available during search or (2) for the capture of attention by contexts retrieved from memory. We mapped brain areas that were modulated by working memory capacity. Within these areas, activation was modulated by contextual cueing along the descending segment of the intraparietal sulcus, an area that has previously been related to maintenance of explicit memories. Increased activation for learned displays, but not modulated by the size of contextual cueing, was observed in the temporo-parietal junction area, previously associated with the capture of attention by explicitly retrieved memory items, and in the ventral visual cortex. This pattern of activation extends previous research on dorsal versus ventral stream functions in memory guidance of attention to the realm of attentional guidance by implicit memory. Copyright © 2012 Elsevier Inc. All rights reserved.
Spatiotemporal Dynamics of Bilingual Word Processing
Leonard, Matthew K.; Brown, Timothy T.; Travis, Katherine E.; Gharapetian, Lusineh; Hagler, Donald J.; Dale, Anders M.; Elman, Jeffrey L.; Halgren, Eric
2009-01-01
Studies with monolingual adults have identified successive stages occurring in different brain regions for processing single written words. We combined magnetoencephalography and magnetic resonance imaging to compare these stages between the first (L1) and second (L2) languages in bilingual adults. L1 words in a size judgment task evoked a typical left-lateralized sequence of activity first in ventral occipitotemporal cortex (VOT: previously associated with visual word-form encoding), and then ventral frontotemporal regions (associated with lexico-semantic processing). Compared to L1, words in L2 activated right VOT more strongly from ~135 ms; this activation was attenuated when words became highly familiar with repetition. At ~400ms, L2 responses were generally later than L1, more bilateral, and included the same lateral occipitotemporal areas as were activated by pictures. We propose that acquiring a language involves the recruitment of right hemisphere and posterior visual areas that are not necessary once fluency is achieved. PMID:20004256
Jerath, Ravinder; Cearley, Shannon M; Barnes, Vernon A; Jensen, Mike
2018-01-01
A fundamental function of the visual system is detecting motion, yet visual perception is poorly understood. Current research has determined that the retina and ganglion cells elicit responses for motion detection; however, the underlying mechanism for this is incompletely understood. Previously, we proposed that retinogeniculo-cortical oscillations and photoreceptors work in parallel to process vision. Here we propose that motion could also be processed within the retina, and not in the brain as current theory suggests. In this paper, we discuss: 1) internal neural space formation; 2) primary, secondary, and tertiary roles of vision; 3) gamma as the secondary role; and 4) synchronization and coherence. Movement within the external field is instantly detected by primary processing within the space formed by the retina, providing a unified view of the world from an internal point of view. Our new theory begins to answer questions about: 1) perception of space, erect images, and motion; 2) the purpose of lateral inhibition; 3) the speed of visual perception; and 4) how peripheral color vision occurs without a large population of cones located peripherally in the retina. We explain that strong oscillatory activity influences brain activity and is necessary for: 1) visual processing, and 2) formation of the internal visuospatial area necessary for visual consciousness, which could allow rods to receive precise visual and visuospatial information, while retinal waves could link the lateral geniculate body with the cortex to form a neural space created by membrane potential-based oscillations and photoreceptors. We propose that vision is tripartite, with three components that allow a person to make sense of the world, terming them the "primary, secondary, and tertiary roles" of vision. Finally, we propose that gamma waves that are higher in strength and volume allow communication among the retina, thalamus, and various areas of the cortex; synchronization brings cortical faculties to the retina, while the thalamus is the link that couples the retina to the rest of the brain through gamma oscillatory activity. This novel theory lays the groundwork for further research by providing a theoretical understanding that expands upon the functions of the retina, photoreceptors, and retinal plexus to include the parallel processing needed to form the internal visual space that we perceive as the external world. Copyright © 2017 Elsevier Ltd. All rights reserved.
Visual and proprioceptive interaction in patients with bilateral vestibular loss
Cutfield, Nicholas J.; Scott, Gregory; Waldman, Adam D.; Sharp, David J.; Bronstein, Adolfo M.
2014-01-01
Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients. PMID:25061564
Snyder, Adam C.; Foxe, John J.
2010-01-01
Retinotopically specific increases in alpha-band (~10 Hz) oscillatory power have been strongly implicated in the suppression of processing for irrelevant parts of the visual field during the deployment of visuospatial attention. Here, we asked whether this alpha suppression mechanism also plays a role in the nonspatial anticipatory biasing of feature-based attention. Visual word cues informed subjects what the task-relevant feature of an upcoming visual stimulus (S2) was, while high-density electroencephalographic recordings were acquired. We examined anticipatory oscillatory activity in the Cue-to-S2 interval (~2 s). Subjects were cued on a trial-by-trial basis to attend to either the color or direction of motion of an upcoming dot field array, and to respond when they detected that a subset of the dots differed from the majority along the target feature dimension. We used the features of color and motion, expressly because they have well known, spatially separated cortical processing areas, to distinguish shifts in alpha power over areas processing each feature. Alpha power from dorsal regions increased when motion was the irrelevant feature (i.e., color was cued), and alpha power from ventral regions increased when color was irrelevant. Thus, alpha-suppression mechanisms appear to operate during feature-based selection in much the same manner as has been shown for space-based attention. PMID:20237273
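The anticipatory alpha-power comparison described above reduces to estimating band power over channel groups during the cue-to-S2 interval. Below is a hypothetical sketch using Welch spectra and an assumed 8-12 Hz alpha band; the channel groupings, epoch length, and simulated data are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np
from scipy.signal import welch

def band_power(data, fs, band=(8.0, 12.0)):
    """Mean PSD in a frequency band, averaged over channels.

    data: array (n_channels, n_samples) for one anticipatory (cue-to-S2) epoch.
    """
    freqs, psd = welch(data, fs=fs, nperseg=min(data.shape[-1], 256), axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean()

# Toy epoch: 2 s at 500 Hz, with a 10 Hz rhythm stronger over "dorsal" channels.
rng = np.random.default_rng(5)
fs = 500
t = np.arange(0, 2.0, 1.0 / fs)
dorsal = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=(8, t.size))
ventral = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=(8, t.size))
print("dorsal alpha power:", band_power(dorsal, fs))
print("ventral alpha power:", band_power(ventral, fs))
```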
Top-down processing of symbolic meanings modulates the visual word form area.
Song, Yiying; Tian, Moqian; Liu, Jia
2012-08-29
Functional magnetic resonance imaging (fMRI) studies in humans have identified a region in the left middle fusiform gyrus consistently activated by written words. This region is called the visual word form area (VWFA). Recently, a hypothesis called the interactive account has been proposed: to effectively analyze the bottom-up visual properties of words, the VWFA receives predictive feedback from higher-order regions engaged in processing sounds, meanings, or actions associated with words. Further, this top-down influence on the VWFA is independent of stimulus format. To test this hypothesis, we used fMRI to examine whether a symbolic nonword object (e.g., the Eiffel Tower) intended to represent something other than itself (i.e., Paris) could activate the VWFA. We found that scenes associated with symbolic meanings elicited a higher VWFA response than those not associated with symbolic meanings, and that such top-down modulation of the VWFA can be established through short-term associative learning, even across modalities. In addition, the magnitude of the symbolic effect observed in the VWFA was positively correlated across individuals with the subjective experience of the strength of the symbol-referent association. Therefore, the VWFA is likely a neural substrate for the interaction of the top-down processing of symbolic meanings with the analysis of bottom-up visual properties of sensory inputs, making the VWFA the location where the symbolic meaning of both words and nonword objects is represented.
Visual imagery without visual perception: lessons from blind subjects
NASA Astrophysics Data System (ADS)
Bértolo, Helder
2014-08-01
The question regarding the relationship between visual imagery and visual perception remains open. Many studies have tried to understand whether the two processes share the same mechanisms or whether they are independent, relying on different neural substrates. Most research has been directed towards whether activation of primary visual areas is needed during imagery. Here we review some of the work providing evidence for both claims. It seems that studying visual imagery in blind subjects can be used as a way of answering some of these questions, namely whether it is possible to have visual imagery without visual perception. We present results from the work of our group using visual activation in dreams and its relation with the EEG's spectral components, showing that congenitally blind subjects have visual content in their dreams and are able to draw it; furthermore, their Visual Activation Index is negatively correlated with EEG alpha power. This study supports the hypothesis that it is possible to have visual imagery without visual experience.
Amsel, Ben D; Kutas, Marta; Coulson, Seana
2017-10-01
In grapheme-color synesthesia, seeing particular letters or numbers evokes the experience of specific colors. We investigate the brain's real-time processing of words in this population by recording event-related brain potentials (ERPs) from 15 grapheme-color synesthetes and 15 controls as they judged the validity of word pairs ('yellow banana' vs. 'blue banana') presented under high and low visual contrast. Low contrast words elicited delayed P1/N170 visual ERP components in both groups, relative to high contrast. When color concepts were conveyed to synesthetes by individually tailored achromatic grapheme strings ('55555 banana'), visual contrast effects were like those in color words: P1/N170 components were delayed but unchanged in amplitude. When controls saw equivalent colored grapheme strings, visual contrast modulated P1/N170 amplitude but not latency. Color induction in synesthetes thus differs from color perception in controls. Independent from experimental effects, all orthographic stimuli elicited larger N170 and P2 in synesthetes than controls. While P2 (150-250ms) enhancement was similar in all synesthetes, N170 (130-210ms) amplitude varied with individual differences in synesthesia and visual imagery. Results suggest immediate cross-activation in visual areas processing color and shape is most pronounced in so-called projector synesthetes whose concurrent colors are experienced as originating in external space.
Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina
2017-11-22
Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind ( n = 10, 9 female, 1 male) and sighted control ( n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible. Copyright © 2017 the authors 0270-6474/17/3711495-10$15.00/0.
Al-Marri, Faraj; Reza, Faruque; Begum, Tahamina; Hitam, Wan Hazabbah Wan; Jin, Goh Khean; Xiang, Jing
2017-10-25
Visual cognitive function is important to build up executive function in daily life. Perception of visual Number form (e.g., Arabic digits) and numerosity (the magnitude of the Number) is of interest to cognitive neuroscientists. Neural correlates and functional measurement of Number representations become complex when their semantic categories are assimilated with other concepts such as shape and colour. Colour perception can be processed further to modulate visual cognition. The Ishihara pseudoisochromatic plates are one of the best and most common screening tools for basic red-green colour vision testing. However, there is a lack of studies assessing visual cognitive function using these pseudoisochromatic plates. We recruited 25 healthy normal trichromat volunteers and extended these studies using a 128-sensor net to record event-related EEG. Subjects were asked to respond by pressing Numbered buttons when they saw the Number and Non-number plates of the Ishihara colour vision test. Amplitudes and latencies of N100 and P300 event-related potential (ERP) components were analysed from 19 electrode sites in the international 10-20 system. A brain topographic map, cortical activation patterns, and Granger causality (effective connectivity) were analysed from 128 electrode sites. The absence of major differences in the N100 ERP components between stimuli indicates that early selective attention processing was similar for Number and Non-number plate stimuli, but Non-number plate stimuli evoked significantly higher amplitudes and longer latencies of the P300 ERP component, with slower reaction times, than Number plate stimuli, implying that a greater attentional load was allocated to Non-number plate processing. A different pattern of asymmetric scalp voltage maps was noticed for P300 components, with higher intensity in the left hemisphere for Number plate tasks and higher intensity in the right hemisphere for Non-number plate tasks. Asymmetric cortical activation and connectivity patterns revealed that Number recognition engaged the occipital and left frontal areas, whereas activation was limited to the occipital area during Non-number plate processing. Finally, the results showed that the visual recognition of Numbers dissociates from the recognition of Non-numbers at the level of defined neural networks. Number recognition was not only a process of visual perception and attention, but it was also related to a higher level of cognitive function, that of language.
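P300 amplitude and latency of the kind analysed above are typically read off an averaged waveform as the peak within a post-stimulus window. The sketch below uses an assumed 250-500 ms window, an assumed sampling rate, and simulated data; the study's actual windows, electrodes, and parameters may differ.

```python
import numpy as np

def p300_peak(erp, fs, t0=-0.2, window=(0.25, 0.50)):
    """Peak amplitude and latency (in seconds) of an averaged ERP within a time window.

    erp: 1-D averaged waveform for one electrode; t0: epoch start time relative to stimulus.
    """
    times = t0 + np.arange(erp.size) / fs
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmax(erp[mask])
    return erp[mask][idx], times[mask][idx]

# Toy ERP: a positive deflection peaking near 350 ms on top of noise.
rng = np.random.default_rng(6)
fs, t0 = 250, -0.2
times = t0 + np.arange(int(1.0 * fs)) / fs
erp = 5.0 * np.exp(-((times - 0.35) ** 2) / (2 * 0.05 ** 2)) + rng.normal(0, 0.3, times.size)
print("P300 amplitude, latency:", p300_peak(erp, fs, t0))
```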
Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures
Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru
2012-01-01
Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314
Visual feature-tolerance in the reading network.
Rauschecker, Andreas M; Bowen, Reno F; Perry, Lee M; Kevan, Alison M; Dougherty, Robert F; Wandell, Brian A
2011-09-08
A century of neurology and neuroscience shows that seeing words depends on ventral occipital-temporal (VOT) circuitry. Typically, reading is learned using high-contrast line-contour words. We explored whether a specific VOT region, the visual word form area (VWFA), learns to see only these words or recognizes words independent of the specific shape-defining visual features. Word forms were created using atypical features (motion-dots, luminance-dots) whose statistical properties control word-visibility. We measured fMRI responses as word form visibility varied, and we used TMS to interfere with neural processing in specific cortical circuits, while subjects performed a lexical decision task. For all features, VWFA responses increased with word-visibility and correlated with performance. TMS applied to motion-specialized area hMT+ disrupted reading performance for motion-dots, but not line-contours or luminance-dots. A quantitative model describes feature-convergence in the VWFA and relates VWFA responses to behavioral performance. These findings suggest how visual feature-tolerance in the reading network arises through signal convergence from feature-specialized cortical areas. Copyright © 2011 Elsevier Inc. All rights reserved.
Bernstein, Lynne E.; Jiang, Jintao; Pantazis, Dimitrios; Lu, Zhong-Lin; Joshi, Anand
2011-01-01
The talking face affords multiple types of information. To isolate cortical sites with responsibility for integrating linguistically relevant visual speech cues, speech and non-speech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to non-speech and control stimuli. Distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus to speech, independent of media, was obtained in group analyses. Individual analyses showed that speech and non-speech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study. PMID:20853377
Fujimaki, Norio; Hayakawa, Tomoe; Ihara, Aya; Matani, Ayumu; Wei, Qiang; Terazono, Yasushi; Murata, Tsutomu
2010-10-01
A masked priming paradigm has been used to measure unconscious and automatic context effects on the processing of words. However, its spatiotemporal neural basis has not yet been clarified. To test the hypothesis that masked repetition priming causes enhancement of neural activation, we conducted a magnetoencephalography experiment in which a prime was visually presented for a short duration (50 ms), preceded by a mask pattern, and followed by a target word that was represented by a Japanese katakana syllabogram. The prime, which was identical to the target, was represented by another hiragana syllabogram in the "Repeated" condition, whereas it was a string of unreadable pseudocharacters in the "Unrepeated" condition. Subjects executed a categorical decision task on the target. Activation was significantly larger for the Repeated condition than for the Unrepeated condition at a time window of 150-250 ms in the right occipital area, 200-250 ms in the bilateral ventral occipitotemporal areas, and 200-250 ms and 200-300 ms in the left and right anterior temporal areas, respectively. These areas have been reported to be related to processing of visual-form/orthography and lexico-semantics, and the enhanced activation supports the hypothesis. However, the absence of the priming effect in the areas related to phonological processing implies that automatic phonological priming effect depends on task requirements. 2010 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
The role of attention in figure-ground segregation in areas V1 and V4 of the visual cortex.
Poort, Jasper; Raudies, Florian; Wannig, Aurel; Lamme, Victor A F; Neumann, Heiko; Roelfsema, Pieter R
2012-07-12
Our visual system segments images into objects and background. Figure-ground segregation relies on the detection of feature discontinuities that signal boundaries between the figures and the background and on a complementary region-filling process that groups together image regions with similar features. The neuronal mechanisms for these processes are not well understood and it is unknown how they depend on visual attention. We measured neuronal activity in V1 and V4 in a task where monkeys either made an eye movement to texture-defined figures or ignored them. V1 activity predicted the timing and the direction of the saccade if the figures were task relevant. We found that boundary detection is an early process that depends little on attention, whereas region filling occurs later and is facilitated by visual attention, which acts in an object-based manner. Our findings are explained by a model with local, bottom-up computations for boundary detection and feedback processing for region filling. Copyright © 2012 Elsevier Inc. All rights reserved.
Mapping language to visual referents: Does the degree of image realism matter?
Saryazdi, Raheleh; Chambers, Craig G
2018-01-01
Studies of real-time spoken language comprehension have shown that listeners rapidly map unfolding speech to available referents in the immediate visual environment. This has been explored using various kinds of 2-dimensional (2D) stimuli, with convenience or availability typically motivating the choice of a particular image type. However, work in other areas has suggested that certain cognitive processes are sensitive to the level of realism in 2D representations. The present study examined the process of mapping language to depictions of objects that are more or less realistic, namely photographs versus clipart images. A custom stimulus set was first created by generating clipart images directly from photographs of real objects. Two visual world experiments were then conducted, varying whether referent identification was driven by noun or verb information. A modest benefit for clipart stimuli was observed during real-time processing, but only for noun-driven mappings. The results are discussed in terms of their implications for studies of visually situated language processing. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Top-down beta oscillatory signaling conveys behavioral context in early visual cortex.
Richter, Craig G; Coppola, Richard; Bressler, Steven L
2018-05-03
Top-down modulation of sensory processing is a critical neural mechanism subserving numerous important cognitive roles, one of which may be to inform lower-order sensory systems of the current 'task at hand' by conveying behavioral context to these systems. Accumulating evidence indicates that top-down cortical influences are carried by directed interareal synchronization of oscillatory neuronal populations, with recent results pointing to beta-frequency oscillations as particularly important for top-down processing. However, it remains to be determined if top-down beta-frequency oscillations indeed convey behavioral context. We measured spectral Granger Causality (sGC) using local field potentials recorded from microelectrodes chronically implanted in visual areas V1/V2, V4, and TEO of two rhesus macaque monkeys, and applied multivariate pattern analysis to the spatial patterns of top-down sGC. We decoded behavioral context by discriminating patterns of top-down (V4/TEO-to-V1/V2) beta-peak sGC for two different task rules governing correct responses to identical visual stimuli. The results indicate that top-down directed influences are carried to visual cortex by beta oscillations, and differentiate task demands even before visual stimulus processing. They suggest that top-down beta-frequency oscillatory processes coordinate processing of sensory information by conveying global knowledge states to early levels of the sensory cortical hierarchy independently of bottom-up stimulus-driven processing.
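One way to picture the multivariate decoding step described above is to train a cross-validated classifier on trial-wise patterns of top-down beta-peak Granger causality. The sketch below assumes a precomputed feature matrix (trials by interareal site pairs) and uses placeholder data; it is a generic illustration of pattern decoding, not the authors' exact analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# sgc_beta: hypothetical (n_trials, n_pairs) matrix of beta-peak spectral Granger
# causality values for top-down (V4/TEO -> V1/V2) site pairs; rule: task label per trial.
rng = np.random.default_rng(0)
sgc_beta = rng.random((200, 30))          # placeholder data standing in for real sGC estimates
rule = rng.integers(0, 2, size=200)       # two task rules governing responses

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, sgc_beta, rule, cv=5)   # above-chance accuracy would indicate
print(acc.mean())                                  # that the sGC pattern carries the task rule
```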
Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans
2017-03-20
From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.
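To make the distinction between stimulus-specific adaptation and surprise-based enhancement concrete, one common approach contrasts responses to the same stimulus shown as a repeated standard, as a rare deviant, and in a many-standards control sequence. The index definitions below are illustrative conventions from the adaptation literature, not necessarily the exact metrics used in this study.

```python
import numpy as np

def oddball_indices(r_deviant, r_standard, r_control):
    """Illustrative indices from mean spike counts to the same stimulus shown
    as a rare deviant, as a repeated standard, and in a many-standards control.

    adaptation : suppression of the repeated standard relative to control
    surprise   : enhancement of the rare deviant relative to control
    """
    adaptation = (r_control - r_standard) / (r_control + r_standard)
    surprise = (r_deviant - r_control) / (r_deviant + r_control)
    return adaptation, surprise

# Hypothetical numbers: V1 might show adaptation > 0 with surprise near 0,
# whereas a higher area such as LI shows both indices > 0.
print(oddball_indices(r_deviant=12.0, r_standard=6.0, r_control=9.0))
```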
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude
2016-01-01
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
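A common way to relate DNN-layer and brain representations is representational similarity analysis, in which condition-by-condition dissimilarity structures are compared across measurement spaces. The sketch below uses synthetic activation patterns and assumed data shapes purely for illustration; it is not the authors' exact pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed representational dissimilarity matrix from a
    (conditions x features) activation array, using correlation distance."""
    return pdist(patterns, metric="correlation")

# Hypothetical activations for the same set of stimuli in a DNN layer and a brain ROI.
rng = np.random.default_rng(1)
dnn_layer = rng.random((92, 4096))     # e.g., unit activations in one DNN layer
brain_roi = rng.random((92, 500))      # e.g., fMRI voxel responses in an early visual ROI

rho, p = spearmanr(rdm(dnn_layer), rdm(brain_roi))   # layer-to-ROI representational similarity
print(rho, p)
```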
Davies-Thompson, Jodie; Johnston, Samantha; Tashakkor, Yashar; Pancaroglu, Raika; Barton, Jason J S
2016-08-01
Visual words and faces activate similar networks but with complementary hemispheric asymmetries, faces being lateralized to the right and words to the left. A recent theory proposes that this reflects developmental competition between visual word and face processing. We investigated whether this results in an inverse correlation between the degree of lateralization of visual word and face activation in the fusiform gyri. 26 literate right-handed healthy adults underwent functional MRI with face and word localizers. We derived lateralization indices for cluster size and peak responses for word and face activity in left and right fusiform gyri, and correlated these across subjects. A secondary analysis examined all face- and word-selective voxels in the inferior occipitotemporal cortex. No negative correlations were found. There were positive correlations for the peak MR response between word and face activity within the left hemisphere, and between word activity in the left visual word form area and face activity in the right fusiform face area. The face lateralization index was positively rather than negatively correlated with the word index. In summary, we do not find a complementary relationship between visual word and face lateralization across subjects. The significance of the positive correlations is unclear: some may reflect the influences of general factors such as attention, but others may point to other factors that influence lateralization of function. Copyright © 2016 Elsevier B.V. All rights reserved.
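The lateralization index mentioned above is conventionally computed from matched left- and right-hemisphere measures; the exact formula is an assumption here, but the (L - R)/(L + R) form is the one most commonly used.

```python
def lateralization_index(left, right):
    """LI in [-1, 1]: positive = left-lateralized, negative = right-lateralized.
    'left' and 'right' could be cluster sizes or peak responses in the fusiform ROIs."""
    return (left - right) / (left + right)

# Hypothetical example: word activity strongly left-lateralized, face activity mildly right.
li_words = lateralization_index(left=220.0, right=60.0)    # about +0.57
li_faces = lateralization_index(left=90.0, right=150.0)    # about -0.25
# The study then correlates word and face indices across subjects to test for a trade-off.
```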
Age-equivalent top-down modulation during cross-modal selective attention.
Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam
2014-12-01
Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas
2017-01-01
Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.
Melzer, P; Morgan, V L; Pickens, D R; Price, R R; Wall, R S; Ebner, F F
2001-11-01
Functional magnetic resonance imaging was performed on blind adults resting and reading Braille. The strongest activation was found in primary somatic sensory/motor cortex on both cortical hemispheres. Additional foci of activation were situated in the parietal, temporal, and occipital lobes where visual information is processed in sighted persons. The regions were differentiated most in the correlation of their time courses of activation with resting and reading. Differences in magnitude and expanse of activation were substantially less significant. Among the traditionally visual areas, the strength of correlation was greatest in posterior parietal cortex and moderate in occipitotemporal, lateral occipital, and primary visual cortex. It was low in secondary visual cortex as well as in dorsal and ventral inferior temporal cortex and posterior middle temporal cortex. Visual experience increased the strength of correlation in all regions except dorsal inferior temporal and posterior parietal cortex. The greatest statistically significant increase, i.e., approximately 30%, was in ventral inferior temporal and posterior middle temporal cortex. In these regions, words are analyzed semantically, which may be facilitated by visual experience. In contrast, visual experience resulted in a slight, insignificant diminution of the strength of correlation in dorsal inferior temporal cortex where language is analyzed phonetically. These findings affirm that posterior temporal regions are engaged in the processing of written language. Moreover, they suggest that this function is modified by early visual experience. Furthermore, visual experience significantly strengthened the correlation of activation and Braille reading in occipital regions traditionally involved in the processing of visual features and object recognition suggesting a role for visual imagery. Copyright 2001 Wiley-Liss, Inc.
Di Nota, Paula M; Levkov, Gabriella; Bar, Rachel; DeSouza, Joseph F X
2016-07-01
The lateral occipitotemporal cortex (LOTC) is comprised of subregions selectively activated by images of human bodies (extrastriate body area, EBA), objects (lateral occipital complex, LO), and motion (MT+). However, their role in motor imagery and movement processing is unclear, as are the influences of learning and expertise on its recruitment. The purpose of our study was to examine putative changes in LOTC activation during action processing following motor learning of novel choreography in professional ballet dancers. Subjects were scanned with functional magnetic resonance imaging up to four times over 34 weeks and performed four tasks: viewing and visualizing a newly learned ballet dance, visualizing a dance that was not being learned, and movement of the foot. EBA, LO, and MT+ were activated most while viewing dance compared to visualization and movement. Significant increases in activation were observed over time in left LO only during visualization of the unlearned dance, and all subregions were activated bilaterally during the viewing task after 34 weeks of performance, suggesting learning-induced plasticity. Finally, we provide novel evidence for modulation of EBA with dance experience during the motor task, with significant activation elicited in a comparison group of novice dancers only. These results provide a composite of LOTC activation during action processing of newly learned ballet choreography and movement of the foot. The role of these areas is confirmed as primarily subserving observation of complex sequences of whole-body movement, with new evidence for modification by experience and over the course of real world ballet learning.
Aging effect in pattern, motion and cognitive visual evoked potentials.
Kuba, Miroslav; Kremláček, Jan; Langrová, Jana; Kubová, Zuzana; Szanyi, Jana; Vít, František
2012-06-01
An electrophysiological study on the effect of aging on the visual pathway and various levels of visual information processing (primary cortex, associate visual motion processing cortex and cognitive cortical areas) was performed. We examined visual evoked potentials (VEPs) to pattern-reversal, motion-onset (translation and radial motion) and visual stimuli with a cognitive task (cognitive VEPs - P300 wave) at luminance of 17 cd/m(2). The most significant age-related change in a group of 150 healthy volunteers (15-85 years of age) was the increase in the P300 wave latency (2 ms per 1 year of age). Delays of the motion-onset VEPs (0.47 ms/year in translation and 0.46 ms/year in radial motion) and the pattern-reversal VEPs (0.26 ms/year) and the reductions of their amplitudes with increasing subject age (primarily in P300) were also found to be significant. The amplitude of the motion-onset VEPs to radial motion remained the most constant parameter with increasing age. Age-related changes were stronger in males. Our results indicate that cognitive VEPs, despite larger variability of their parameters, could be a useful criterion for an objective evaluation of the aging processes within the CNS. Possible differences in aging between the motion-processing system and the form-processing system within the visual pathway might be indicated by the more pronounced delay in the motion-onset VEPs and by their preserved size for radial motion (a biologically significant variant of motion) compared to the changes in pattern-reversal VEPs. Copyright © 2012 Elsevier Ltd. All rights reserved.
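The reported age effects (e.g., roughly 2 ms of P300 latency per year of age) amount to the slope of a linear fit of component latency against subject age. The sketch below uses synthetic data with a built-in 2 ms/year trend to show the computation; the numbers are illustrative only.

```python
import numpy as np

# Hypothetical data: subject ages (years) and P300 latencies (ms) with a 2 ms/year trend.
rng = np.random.default_rng(2)
age = rng.uniform(15, 85, size=150)
p300_latency = 300 + 2.0 * (age - 15) + rng.normal(0, 25, size=150)

slope, intercept = np.polyfit(age, p300_latency, deg=1)   # slope in ms per year of age
print(f"P300 latency increases by ~{slope:.2f} ms/year")
```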
Orthographic Coding: Brain Activation for Letters, Symbols, and Digits.
Carreiras, Manuel; Quiñones, Ileana; Hernández-Cabrera, Juan Andrés; Duñabeitia, Jon Andoni
2015-12-01
The present experiment investigates the input coding mechanisms of 3 common printed characters: letters, numbers, and symbols. Despite research in this area, it is yet unclear whether the identity of these 3 elements is processed through the same or different brain pathways. In addition, some computational models propose that the position-in-string coding of these elements responds to general flexible mechanisms of the visual system that are not character-specific, whereas others suggest that the position coding of letters responds to specific processes that are different from those that guide the position-in-string assignment of other types of visual objects. Here, in an fMRI study, we manipulated character position and character identity through the transposition or substitution of 2 internal elements within strings of 4 elements. Participants were presented with 2 consecutive visual strings and asked to decide whether they were the same or different. The results showed: 1) that some brain areas responded more to letters than to numbers and vice versa, suggesting that processing may follow different brain pathways; 2) that the left parietal cortex is involved in letter identity, and critically in letter position coding, specifically contributing to the early stages of the reading process; and that 3) a stimulus-specific mechanism for letter position coding is operating during orthographic processing. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Sellers, Kristin K.; Bennett, Davis V.; Hutt, Axel; Williams, James H.
2015-01-01
During general anesthesia, global brain activity and behavioral state are profoundly altered. Yet it remains mostly unknown how anesthetics alter sensory processing across cortical layers and modulate functional cortico-cortical connectivity. To address this gap in knowledge of the micro- and mesoscale effects of anesthetics on sensory processing in the cortical microcircuit, we recorded multiunit activity and local field potential in awake and anesthetized ferrets (Mustela putoris furo) during sensory stimulation. To understand how anesthetics alter sensory processing in a primary sensory area and the representation of sensory input in higher-order association areas, we studied the local sensory responses and long-range functional connectivity of primary visual cortex (V1) and prefrontal cortex (PFC). Isoflurane combined with xylazine provided general anesthesia for all anesthetized recordings. We found that anesthetics altered the duration of sensory-evoked responses, disrupted the response dynamics across cortical layers, suppressed both multimodal interactions in V1 and sensory responses in PFC, and reduced functional cortico-cortical connectivity between V1 and PFC. Together, the present findings demonstrate altered sensory responses and impaired functional network connectivity during anesthesia at the level of multiunit activity and local field potential across cortical layers. PMID:25833839
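Functional connectivity between field potentials from two areas is often summarized by spectral coherence; the sketch below shows that generic computation on synthetic V1 and PFC traces. Coherence is one common connectivity measure and is used here only as an illustration, not as the study's specific method.

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical LFP traces from V1 and PFC, sampled at 1 kHz, sharing a 10 Hz component.
fs = 1000.0
rng = np.random.default_rng(3)
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 10 * t)
lfp_v1 = shared + rng.normal(0, 1, t.size)
lfp_pfc = 0.5 * shared + rng.normal(0, 1, t.size)

f, cxy = coherence(lfp_v1, lfp_pfc, fs=fs, nperseg=1024)
print(f[np.argmax(cxy)], cxy.max())   # frequency and magnitude of peak V1-PFC coherence
```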
Rosa, Marcello G.P; Tweedale, Rowan
2005-01-01
In this paper, we review evidence from comparative studies of primate cortical organization, highlighting recent findings and hypotheses that may help us to understand the rules governing evolutionary changes of the cortical map and the process of formation of areas during development. We argue that clear unequivocal views of cortical areas and their homologies are more likely to emerge for ‘core’ fields, including the primary sensory areas, which are specified early in development by precise molecular identification steps. In primates, the middle temporal area is probably one of these primordial cortical fields. Areas that form at progressively later stages of development correspond to progressively more recent evolutionary events, their development being less firmly anchored in molecular specification. The certainty with which areal boundaries can be delimited, and likely homologies can be assigned, becomes increasingly blurred in parallel with this evolutionary/developmental sequence. For example, while current concepts for the definition of cortical areas have been vindicated in allowing a clarification of the organization of the New World monkey ‘third tier’ visual cortex (the third and dorsomedial areas, V3 and DM), our analyses suggest that more flexible mapping criteria may be needed to unravel the organization of higher-order visual association and polysensory areas. PMID:15937007
Michael, Neethu; Löwel, Siegrid; Bischof, Hans-Joachim
2015-01-01
The visual wulst of the zebra finch comprises at least two retinotopic maps of the contralateral eye. As yet, it is not known how much of the visual field is represented in the wulst neuronal maps, how the organization of the maps is related to the retinal architecture, and how information from the ipsilateral eye is involved in the activation of the wulst. Here, we have used autofluorescent flavoprotein imaging and classical anatomical methods to investigate such characteristics of the most posterior map of the multiple retinotopic representations. We found that the visual wulst can be activated by visual stimuli from a large part of the visual field of the contralateral eye. Horizontally, the visual field representation extended from -5° beyond the beak tip up to +125° laterally. Vertically, a small strip from -10° below to about +25° above the horizon activated the visual wulst. Although retinal ganglion cells had a much higher density around the fovea and along a strip extending from the fovea towards the beak tip, these areas were not overrepresented in the wulst map. The wulst area activated from the foveal region of the ipsilateral eye, overlapped substantially with the middle of the three contralaterally activated regions in the visual wulst, and partially with the other two. Visual wulst activity evoked by stimulation of the frontal visual field was stronger with contralateral than with binocular stimulation. This confirms earlier electrophysiological studies indicating an inhibitory influence of the activation of the ipsilateral eye on wulst activity elicited by stimulating the contralateral eye. The lack of a foveal overrepresentation suggests that identification of objects may not be the primary task of the zebra finch visual wulst. Instead, this brain area may be involved in the processing of visual information necessary for spatial orientation. PMID:25853253
Functional specialization and generalization for grouping of stimuli based on colour and motion
Zeki, Semir; Stutters, Jonathan
2013-01-01
This study was undertaken to learn whether the principle of functional specialization that is evident at the level of the prestriate visual cortex extends to areas that are involved in grouping visual stimuli according to attribute, and specifically according to colour and motion. Subjects viewed, in an fMRI scanner, visual stimuli composed of moving dots, which could be either coloured or achromatic; in some stimuli the moving coloured dots were randomly distributed or moved in random directions; in others, some of the moving dots were grouped together according to colour or to direction of motion, with the number of groupings varying from 1 to 3. Increased activation was observed in area V4 in response to colour grouping and in V5 in response to motion grouping while both groupings led to activity in separate though contiguous compartments within the intraparietal cortex. The activity in all the above areas was parametrically related to the number of groupings, as was the prominent activity in Crus I of the cerebellum where the activity resulting from the two types of grouping overlapped. This suggests (a) that, the specialized visual areas of the prestriate cortex have functions beyond the processing of visual signals according to attribute, namely that of grouping signals according to colour (V4) or motion (V5); (b) that the functional separation evident in visual cortical areas devoted to motion and colour, respectively, is maintained at the level of parietal cortex, at least as far as grouping according to attribute is concerned; and (c) that, by contrast, this grouping-related functional segregation is not maintained at the level of the cerebellum. PMID:23415950
Differential responses in dorsal visual cortex to motion and disparity depth cues
Arnoldussen, David M.; Goossens, Jeroen; van den Berg, Albert V.
2013-01-01
We investigated how interactions between monocular motion parallax and binocular cues to depth vary in human motion areas for wide-field visual motion stimuli (110 × 100°). We used fMRI with an extensive 2 × 3 × 2 factorial blocked design in which we combined two types of self-motion (translational motion and translational + rotational motion), with three categories of motion inflicted by the degree of noise (self-motion, distorted self-motion, and multiple object-motion), and two different view modes of the flow patterns (stereo and synoptic viewing). Interactions between disparity and motion category revealed distinct contributions to self- and object-motion processing in 3D. For cortical areas V6 and CSv, but not the anterior part of MT+ with bilateral visual responsiveness (MT+/b), we found a disparity-dependent effect of rotational flow and noise: When self-motion perception was degraded by adding rotational flow and moderate levels of noise, the BOLD responses were reduced compared with translational self-motion alone, but this reduction was cancelled by adding stereo information which also rescued the subject's self-motion percept. At high noise levels, when the self-motion percept gave way to a swarm of moving objects, the BOLD signal strongly increased compared to self-motion in areas MT+/b and V6, but only for stereo in the latter. BOLD response did not increase for either view mode in CSv. These different response patterns indicate different contributions of areas V6, MT+/b, and CSv to the processing of self-motion perception and the processing of multiple independent motions. PMID:24339808
Premotor cortex is sensitive to auditory-visual congruence for biological motion.
Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F
2012-03-01
The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.
Symbol processing in the left angular gyrus: evidence from passive perception of digits.
Price, Gavin R; Ansari, Daniel
2011-08-01
Arabic digits are one of the most ubiquitous symbol sets in the world. While there have been many investigations into the neural processing of the semantic information digits represent (e.g. through numerical comparison tasks), little is known about the neural mechanisms which support the processing of digits as visual symbols. To characterise the component neurocognitive mechanisms which underlie numerical cognition, it is essential to understand the processing of digits as a visual category, independent of numerical magnitude processing. The 'Triple Code Model' (Dehaene, 1992; Dehaene and Cohen, 1995) posits an asemantic visual code for processing Arabic digits in the ventral visual stream, yet there is currently little empirical evidence in support of this code. This outstanding question was addressed in the current functional Magnetic Resonance (fMRI) study by contrasting brain responses during the passive viewing of digits versus letters and novel symbols at short (50 ms) and long (500 ms) presentation times. The results of this study reveal increased activation for familiar symbols (digits and letters) relative to unfamiliar symbols (scrambled digits and letters) at long presentation durations in the left dorsal Angular gyrus (dAG). Furthermore, increased activation for Arabic digits was observed in the left ventral Angular gyrus (vAG) in comparison to letters, scrambled digits and scrambled letters at long presentation durations, but no digit specific activation in any region at short presentation durations. These results suggest an absence of a digit specific 'Visual Number Form Area' (VNFA) in the ventral visual cortex, and provide evidence for the role of the left ventral AG during the processing of digits in the absence of any explicit processing demands. We conclude that Arabic digit processing depends specifically on the left AG rather than a ventral visual stream VNFA. Copyright © 2011 Elsevier Inc. All rights reserved.
Thinking Maps in Writing Project in English for Taiwanese Elementary School Students
Fan, Yu Shu
2016-01-01
Thinking Maps is a language of eight visual patterns, each based on a fundamental thought process, designed by Dr. David N. Hyerle. The visual patterns are based on cognitive skills and applied in all content areas. Not only are they used in different combinations for depth and complexity, but are also used by all members in the school community.…
Matsuzaki, Naoyuki; Schwarzlose, Rebecca F.; Nishida, Masaaki; Ofen, Noa; Asano, Eishi
2015-01-01
Behavioral studies demonstrate that a face presented in the upright orientation attracts attention more rapidly than an inverted face. Saccades toward an upright face take place in 100-140 ms following presentation. The present study using electrocorticography determined whether upright face-preferential neural activation, as reflected by augmentation of high-gamma activity at 80-150 Hz, involved the lower-order visual cortex within the first 100 ms post-stimulus presentation. Sampled lower-order visual areas were verified by the induction of phosphenes upon electrical stimulation. These areas resided in the lateral-occipital, lingual, and cuneus gyri along the calcarine sulcus, roughly corresponding to V1 and V2. Measurement of high-gamma augmentation during central (circular) and peripheral (annular) checkerboard reversal pattern stimulation indicated that central-field stimuli were processed by the more polar surface whereas peripheral-field stimuli by the more anterior medial surface. Upright face stimuli, compared to inverted ones, elicited up to 23% larger augmentation of high-gamma activity in the lower-order visual regions at 40-90 ms. Upright face-preferential high-gamma augmentation was more highly correlated with high-gamma augmentation for central than peripheral stimuli. Our observations are consistent with the hypothesis that lower-order visual regions, especially those for the central field, are involved in visual cues for rapid detection of upright face stimuli. PMID:25579446
Grotheer, Mareike; Jeska, Brianna; Grill-Spector, Kalanit
2018-03-28
A region in the posterior inferior temporal gyrus (ITG), referred to as the number form area (NFA, here ITG-numbers) has been implicated in the visual processing of Arabic numbers. However, it is unknown if this region is specifically involved in the visual encoding of Arabic numbers per se or in mathematical processing more broadly. Using functional magnetic resonance imaging (fMRI) during experiments that systematically vary tasks and stimuli, we find that mathematical processing, not preference to Arabic numbers, consistently drives both mean and distributed responses in the posterior ITG. While we replicated findings of higher responses in ITG-numbers to numbers than other visual stimuli during a 1-back task, this preference to numbers was abolished when participants engaged in mathematical processing. In contrast, an ITG region (ITG-math) that showed higher responses during an adding task vs. other tasks maintained this preference for mathematical processing across a wide range of stimuli including numbers, number/letter morphs, hands, and dice. Analysis of distributed responses across an anatomically-defined posterior ITG expanse further revealed that mathematical task but not Arabic number form can be successfully and consistently decoded from these distributed responses. Together, our findings suggest that the function of neuronal regions in the posterior ITG goes beyond the specific visual processing of Arabic numbers. We hypothesize that they ascribe numerical content to the visual input, irrespective of the format of the stimulus. Copyright © 2018 Elsevier Inc. All rights reserved.
Baron, S; Kaufmann Alves, I; Schmitt, T G; Schöffel, S; Schwank, J
2015-01-01
Predicted demographic, climatic and socio-economic changes will require adaptations of existing water supply and wastewater disposal systems. Especially in rural areas, these new challenges will affect the functionality of the present systems. This paper presents a joint interdisciplinary research project with the objective of developing an innovative software-based optimization and decision support system for the implementation of long-term transformations of existing infrastructures of water supply, wastewater and energy. The concept of the decision support and optimization tool is described and visualization methods for the presentation of results are illustrated. The model is tested in a rural case study region in the Southwest of Germany. A transformation strategy for a decentralized wastewater treatment concept and its visualization are presented for a model village.
Clay, Olivio J.; Edwards, Jerri D.; Ross, Lesley A.; Okonkwo, Ozioma; Wadley, Virginia G.; Roth, David L.; Ball, Karlene K.
2010-01-01
Objectives: To evaluate the relationship between sensory and cognitive decline, particularly with respect to speed of processing, memory span, and fluid intelligence. Additionally, the common cause, sensory degradation and speed of processing hypotheses were compared. Methods: Structural equation modeling was used to investigate the complex relationships among age-related decrements in these areas. Results: Cross-sectional data analyses included 842 older adult participants (M = 73 years). After accounting for age-related declines in vision and processing speed, the direct associations between age and memory span and between age and fluid intelligence were nonsignificant. Older age was associated with visual decline, which was associated with slower speed of processing, which in turn was associated with greater cognitive deficits. Discussion: The findings support both the sensory degradation and speed of processing accounts of age-related cognitive decline. Further, the findings highlight positive aspects of normal cognitive aging in that older age may not be associated with a loss of fluid intelligence if visual sensory functioning and processing speed can be maintained. PMID:19436063
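The mediated pathway described above (age to vision to processing speed to cognition) can be approximated, far more simply than with full structural equation modeling, by checking whether the direct age effect shrinks once the intermediate variables are included as predictors. The sketch below uses synthetic standardized variables and ordinary least squares as a simplified stand-in for the authors' SEM analysis.

```python
import numpy as np

def ols_beta(X, y):
    """Standardized regression coefficients via ordinary least squares."""
    Xz = (X - X.mean(0)) / X.std(0)
    yz = (y - y.mean()) / y.std()
    Xz = np.column_stack([np.ones(len(yz)), Xz])
    return np.linalg.lstsq(Xz, yz, rcond=None)[0][1:]   # drop the intercept

# Hypothetical standardized variables for n older adults (signs chosen so that
# older age -> poorer vision -> slower processing speed -> lower fluid ability).
rng = np.random.default_rng(4)
n = 842
age = rng.normal(size=n)
vision = -0.4 * age + rng.normal(size=n)
speed = 0.5 * vision + rng.normal(size=n)
fluid_iq = 0.6 * speed + rng.normal(size=n)

# Path-style check: the beta for age should be near zero once vision and speed are included.
print(ols_beta(np.column_stack([age, vision, speed]), fluid_iq))
```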
Visual attention modulates brain activation to angry voices.
Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas
2011-06-29
In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.
Abnormal Visual Motion Processing is not a Cause of Dyslexia
Olulade, Olumide A.; Napoliello, Eileen M.; Eden, Guinevere F.
2013-01-01
Developmental dyslexia is a reading disorder, yet deficits also manifest in the magnocellular-dominated dorsal visual system. Uncertainty about whether visual deficits are causal or consequential to reading disability encumbers accurate identification and appropriate treatment of this common learning disability. Using fMRI, we demonstrate in typical readers a relationship between reading ability and activity in area V5/MT during visual motion processing and, as expected, also found lower V5/MT activity for dyslexic children compared to age-matched controls. However, when dyslexics were matched to younger controls on reading ability, no differences emerged, suggesting that weakness in V5/MT may not be causal to dyslexia. To further test for causality, dyslexics underwent a phonological-based reading intervention. Surprisingly, V5/MT activity increased along with intervention-driven reading gains, demonstrating that activity here is mobilized through reading. Our results provide strong evidence that visual magnocellular dysfunction is not causal to dyslexia, but may instead be consequential to impoverished reading. PMID:23746630
Global motion perception deficits in autism are reflected as early as primary visual cortex
Thomas, Cibu; Kravitz, Dwight J.; Wallace, Gregory L.; Baron-Cohen, Simon; Martin, Alex; Baker, Chris I.
2014-01-01
Individuals with autism are often characterized as ‘seeing the trees, but not the forest’—attuned to individual details in the visual world at the expense of the global percept they compose. Here, we tested the extent to which global processing deficits in autism reflect impairments in (i) primary visual processing; or (ii) decision-formation, using an archetypal example of global perception, coherent motion perception. In an event-related functional MRI experiment, 43 intelligence quotient and age-matched male participants (21 with autism, age range 15–27 years) performed a series of coherent motion perception judgements in which the amount of local motion signals available to be integrated into a global percept was varied by controlling stimulus viewing duration (0.2 or 0.6 s) and the proportion of dots moving in the correct direction (coherence: 4%, 15%, 30%, 50%, or 75%). Both typical participants and those with autism evidenced the same basic pattern of accuracy in judging the direction of motion, with performance decreasing with reduced coherence and shorter viewing durations. Critically, these effects were exaggerated in autism: despite equal performance at the long duration, performance was more strongly reduced by shortening viewing duration in autism (P < 0.015) and decreasing stimulus coherence (P < 0.008). To assess the neural correlates of these effects we focused on the responses of primary visual cortex and the middle temporal area, critical in the early visual processing of motion signals, as well as a region in the intraparietal sulcus thought to be involved in perceptual decision-making. The behavioural results were mirrored in both primary visual cortex and the middle temporal area, with a greater reduction in response at short, compared with long, viewing durations in autism compared with controls (both P < 0.018). In contrast, there was no difference between the groups in the intraparietal sulcus (P > 0.574). These findings suggest that reduced global motion perception in autism is driven by an atypical response early in visual processing and may reflect a fundamental perturbation in neural circuitry. PMID:25060095
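Coherent motion stimuli of the kind described above are random-dot displays in which only a controlled proportion of dots carries the signal direction. The sketch below shows the per-frame direction assignment for one such display; dot counts and coherence values are illustrative, not the study's exact stimulus parameters.

```python
import numpy as np

def rdk_directions(n_dots, coherence, signal_direction_deg, rng):
    """Assign per-dot motion directions for one frame of a random-dot stimulus:
    a `coherence` fraction of dots moves in the signal direction, the rest move randomly."""
    n_signal = int(round(coherence * n_dots))
    directions = rng.uniform(0, 360, size=n_dots)             # noise dots: random directions
    signal_idx = rng.choice(n_dots, size=n_signal, replace=False)
    directions[signal_idx] = signal_direction_deg             # signal dots: common direction
    return directions

rng = np.random.default_rng(5)
dirs = rdk_directions(n_dots=200, coherence=0.15, signal_direction_deg=0.0, rng=rng)
print((dirs == 0.0).mean())   # about 0.15 of the dots carry the coherent signal
```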
Bueichekú, Elisenda; Ventura-Campos, Noelia; Palomar-García, María-Ángeles; Miró-Padilla, Anna; Parcet, María-Antonia; Ávila, César
2015-10-01
Spatiotemporal activity that emerges spontaneously "at rest" has been proposed to reflect individual a priori biases in cognitive processing. This research focused on testing neurocognitive models of visual attention by studying the functional connectivity (FC) of the superior parietal lobule (SPL), given its central role in establishing priority maps during visual search tasks. Twenty-three human participants completed a functional magnetic resonance imaging session that featured a resting-state scan, followed by a visual search task based on the alphanumeric category effect. As expected, the behavioral results showed longer reaction times and more errors for the within-category (i.e., searching a target letter among letters) than the between-category search (i.e., searching a target letter among numbers). The within-category condition was related to greater activation of the superior and inferior parietal lobules, occipital cortex, inferior frontal cortex, dorsal anterior cingulate cortex, and the superior colliculus than the between-category search. The resting-state FC analysis of the SPL revealed a broad network that included connections with the inferotemporal cortex, dorsolateral prefrontal cortex, and dorsal frontal areas like the supplementary motor area and frontal eye field. Noteworthy, the regression analysis revealed that the more efficient participants in the visual search showed stronger FC between the SPL and areas of primary visual cortex (V1) related to the search task. We shed some light on how the SPL establishes a priority map of the environment during visual attention tasks and how FC is a valuable tool for assessing individual differences while performing cognitive tasks.
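Seed-based resting-state functional connectivity of the kind reported here reduces, in its simplest form, to a correlation between the seed (SPL) time series and a target (e.g., V1) time series, which can then be related to behavior across participants. The sketch below uses synthetic data and a simple Pearson correlation as a stand-in for the study's regression analysis.

```python
import numpy as np
from scipy.stats import pearsonr

def seed_fc(seed_ts, target_ts):
    """Resting-state functional connectivity as the Pearson correlation between
    a seed time series (e.g., SPL) and a target time series (e.g., a V1 region)."""
    return pearsonr(seed_ts, target_ts)[0]

# Hypothetical per-subject data: resting time series (240 volumes) and a search-efficiency score.
rng = np.random.default_rng(6)
fc = [seed_fc(rng.normal(size=240), rng.normal(size=240)) for _ in range(23)]
efficiency = rng.normal(size=23)

r, p = pearsonr(fc, efficiency)   # brain-behaviour association across subjects
print(r, p)
```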
Shifting Attention within Memory Representations Involves Early Visual Areas
Munneke, Jaap; Belopolsky, Artem V.; Theeuwes, Jan
2012-01-01
Prior studies have shown that spatial attention modulates early visual cortex retinotopically, resulting in enhanced processing of external perceptual representations. However, it is not clear whether the same visual areas are modulated when attention is focused on, and shifted within a working memory representation. In the current fMRI study participants were asked to memorize an array containing four stimuli. After a delay, participants were presented with a verbal cue instructing them to actively maintain the location of one of the stimuli in working memory. Additionally, on a number of trials a second verbal cue instructed participants to switch attention to the location of another stimulus within the memorized representation. Results of the study showed that changes in the BOLD pattern closely followed the locus of attention within the working memory representation. A decrease in BOLD-activity (V1–V3) was observed at ROIs coding a memory location when participants switched away from this location, whereas an increase was observed when participants switched towards this location. Continuous increased activity was obtained at the memorized location when participants did not switch. This study shows that shifting attention within memory representations activates the earliest parts of visual cortex (including V1) in a retinotopic fashion. We conclude that even in the absence of visual stimulation, early visual areas support shifting of attention within memorized representations, similar to when attention is shifted in the outside world. The relationship between visual working memory and visual mental imagery is discussed in light of the current findings. PMID:22558165
Engel, Stephen A.; Harley, Erin M.; Pope, Whitney B.; Villablanca, J. Pablo; Mazziotta, John C.; Enzmann, Dieter
2009-02-01
Training in radiology dramatically changes observers' ability to process images, but the neural bases of this visual expertise remain unexplored. Prior imaging work has suggested that the fusiform face area (FFA), normally selectively responsive to faces, becomes responsive to images in observers' area of expertise. The FFA has been hypothesized to be important for "holistic" processing that integrates information across the entire image. Here, we report a cross-sectional study of radiologists that used functional magnetic resonance imaging to measure neural activity in first-year radiology residents, fourth-year radiology residents, and practicing radiologists as they detected abnormalities in chest radiographs. Across subjects, activity in the FFA correlated with visual expertise, measured as behavioral performance during scanning. To test whether processing in the FFA was holistic, we measured its responses both to intact radiographs and radiographs that had been divided into 25 square pieces whose locations were scrambled. Activity in the FFA was equal in magnitude for intact and scrambled images, and responses to both kinds of stimuli correlated reliably with expertise. These results suggest that the FFA is one of the cortical regions that provides the basis of expertise in radiology, but that its contribution is not holistic processing of images.
Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan
2016-01-15
Post-perceptual cues can enhance visual short term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered and the mechanisms by which post-perceptual selective attention enhances short term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100 ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300 ms or 850 ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e. sensory stimulus characteristics were available for up to 850 ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short term memory encoding. While N700 could reflect visual post processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.
Neuronal basis of covert spatial attention in the frontal eye field.
Thompson, Kirk G; Biscoe, Keri L; Sato, Takashi R
2005-10-12
The influential "premotor theory of attention" proposes that developing oculomotor commands mediate covert visual spatial attention. A likely source of this attentional bias is the frontal eye field (FEF), an area of the frontal cortex involved in converting visual information into saccade commands. We investigated the link between FEF activity and covert spatial attention by recording from FEF visual and saccade-related neurons in monkeys performing covert visual search tasks without eye movements. Here we show that the source of attention signals in the FEF is enhanced activity of visually responsive neurons. At the time attention is allocated to the visual search target, nonvisually responsive saccade-related movement neurons are inhibited. Therefore, in the FEF, spatial attention signals are independent of explicit saccade command signals. We propose that spatially selective activity in FEF visually responsive neurons corresponds to the mental spotlight of attention via modulation of ongoing visual processing.
Garrido, Lucia; Driver, Jon; Dolan, Raymond J.; Duchaine, Bradley C.; Furl, Nicholas
2016-01-01
Face processing is mediated by interactions between functional areas in the occipital and temporal lobe, and the fusiform face area (FFA) and anterior temporal lobe play key roles in the recognition of facial identity. Individuals with developmental prosopagnosia (DP), a lifelong face recognition impairment, have been shown to have structural and functional neuronal alterations in these areas. The present study investigated how face selectivity is generated in participants with normal face processing, and how functional abnormalities associated with DP, arise as a function of network connectivity. Using functional magnetic resonance imaging and dynamic causal modeling, we examined effective connectivity in normal participants by assessing network models that include early visual cortex (EVC) and face-selective areas and then investigated the integrity of this connectivity in participants with DP. Results showed that a feedforward architecture from EVC to the occipital face area, EVC to FFA, and EVC to posterior superior temporal sulcus (pSTS) best explained how face selectivity arises in both controls and participants with DP. In this architecture, the DP group showed reduced connection strengths on feedforward connections carrying face information from EVC to FFA and EVC to pSTS. These altered network dynamics in DP contribute to the diminished face selectivity in the posterior occipitotemporal areas affected in DP. These findings suggest a novel view on the relevance of feedforward projection from EVC to posterior occipitotemporal face areas in generating cortical face selectivity and differences in face recognition ability. SIGNIFICANCE STATEMENT Areas of the human brain showing enhanced activation to faces compared to other objects or places have been extensively studied. However, the factors leading to this face selectively have remained mostly unknown. We show that effective connectivity from early visual cortex to posterior occipitotemporal face areas gives rise to face selectivity. Furthermore, people with developmental prosopagnosia, a lifelong face recognition impairment, have reduced face selectivity in the posterior occipitotemporal face areas and left anterior temporal lobe. We show that this reduced face selectivity can be predicted by effective connectivity from early visual cortex to posterior occipitotemporal face areas. This study presents the first network-based account of how face selectivity arises in the human brain. PMID:27030766
Common neural substrates for visual working memory and attention.
Mayer, Jutta S; Bittner, Robert A; Nikolić, Danko; Bledowski, Christoph; Goebel, Rainer; Linden, David E J
2007-06-01
Humans are severely limited in their ability to memorize visual information over short periods of time. Selective attention has been implicated as a limiting factor. Here we used functional magnetic resonance imaging to test the hypothesis that this limitation is due to common neural resources shared by visual working memory (WM) and selective attention. We combined visual search and delayed discrimination of complex objects and independently modulated the demands on selective attention and WM encoding. Participants were presented with a search array and performed easy or difficult visual search in order to encode one or three complex objects into visual WM. Overlapping activation for attention-demanding visual search and WM encoding was observed in distributed posterior and frontal regions. In the right prefrontal cortex and bilateral insula, blood oxygen level-dependent activation increased additively with increased WM load and attentional demand. Conversely, several visual, parietal and premotor areas showed overlapping activation for the two task components and were severely reduced in their WM load response under the condition with high attentional demand. Regions in the left prefrontal cortex were selectively responsive to WM load. Areas selectively responsive to high attentional demand were found within the right prefrontal and bilateral occipital cortex. These results indicate that encoding into visual WM and visual selective attention rely to a high degree on access to common neural resources. We propose that competition for resources shared by visual attention and WM encoding can limit processing capabilities in distributed posterior brain regions.
Interdigitated Color- and Disparity-Selective Columns within Human Visual Cortical Areas V2 and V3
Polimeni, Jonathan R.; Tootell, Roger B.H.
2016-01-01
In nonhuman primates (NHPs), secondary visual cortex (V2) is composed of repeating columnar stripes, which are evident in histological variations of cytochrome oxidase (CO) levels. Distinctive “thin” and “thick” stripes of dark CO staining reportedly respond selectively to stimulus variations in color and binocular disparity, respectively. Here, we first tested whether similar color-selective or disparity-selective stripes exist in human V2. If so, available evidence predicts that such stripes should (1) radiate “outward” from the V1–V2 border, (2) interdigitate, (3) differ from each other in both thickness and length, (4) be spaced ∼3.5–4 mm apart (center-to-center), and, perhaps, (5) have segregated functional connections. Second, we tested whether analogous segregated columns exist in a “next-higher” tier area, V3. To answer these questions, we used high-resolution fMRI (1 × 1 × 1 mm³) at high field (7 T), presenting color-selective or disparity-selective stimuli, plus extensive signal averaging across multiple scan sessions and cortical surface-based analysis. All hypotheses were confirmed. V2 stripes and V3 columns were reliably localized in all subjects. The two stripe/column types were largely interdigitated (i.e., nonoverlapping) in both V2 and V3. Color-selective stripes differed from disparity-selective stripes in both width (thickness) and length. Analysis of resting-state functional connections (eyes closed) showed a stronger correlation between functionally alike (compared with functionally unlike) stripes/columns in V2 and V3. These results revealed a fine-scale segregation of color-selective or disparity-selective streams within human areas V2 and V3. Together with prior evidence from NHPs, this suggests that two parallel processing streams extend from visual subcortical regions through V1, V2, and V3. SIGNIFICANCE STATEMENT In current textbooks and reviews, diagrams of cortical visual processing highlight two distinct neural-processing streams within the first and second cortical areas in monkeys. Two major streams consist of segregated cortical columns that are selectively activated by either color or ocular interactions. Because such cortical columns are so small, they were not revealed previously by conventional imaging techniques in humans. Here we demonstrate that such segregated columnar systems exist in humans. We find that, in humans, color versus binocular disparity columns extend one full area further, into the third visual area. Our approach can be extended to reveal and study additional types of columns in human cortex, perhaps including columns underlying more cognitive functions. PMID:26865609
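The resting-state comparison in the final step (functionally alike versus unlike stripes/columns) comes down to comparing correlations between ROI-averaged time series. Below is a minimal sketch of that logic with simulated, already-preprocessed BOLD traces and made-up ROI labels; it illustrates the comparison, not the authors' pipeline.

```python
# Hedged sketch: correlate ROI time series and compare functionally
# alike vs. unlike pairs. The ROI labels and data are hypothetical.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_t = 240  # assumed number of time points
ts = {
    "V2_color_1": rng.standard_normal(n_t),
    "V2_color_2": rng.standard_normal(n_t),
    "V3_disparity_1": rng.standard_normal(n_t),
    "V3_disparity_2": rng.standard_normal(n_t),
}

alike, unlike = [], []
for (n1, x1), (n2, x2) in itertools.combinations(ts.items(), 2):
    r = np.corrcoef(x1, x2)[0, 1]
    same_type = n1.split("_")[1] == n2.split("_")[1]
    (alike if same_type else unlike).append(r)

# With real data, the alike pairs are expected to show the higher mean r.
print("mean r (alike):", np.mean(alike))
print("mean r (unlike):", np.mean(unlike))
```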
Brain white matter microstructure is associated with susceptibility to motion-induced nausea.
Napadow, V; Sheehan, J; Kim, J; Dassatti, A; Thurler, A H; Surjanhata, B; Vangel, M; Makris, N; Schaechter, J D; Kuo, B
2013-05-01
Nausea is associated with significant morbidity, and there is a wide range in the propensity of individuals to experience nausea. The neural basis of the heterogeneity in nausea susceptibility is poorly understood. Our previous functional magnetic resonance imaging (fMRI) study in healthy adults showed that a visual motion stimulus caused activation in the right MT+/V5 area, and that increased sensation of nausea due to this stimulus was associated with increased activation in the right anterior insula. For the current study, we hypothesized that individual differences in visual motion-induced nausea are due to microstructural differences in the inferior fronto-occipital fasciculus (IFOF), the white matter tract connecting the right visual motion processing area (MT+/V5) and right anterior insula. To test this hypothesis, we acquired diffusion tensor imaging data from 30 healthy adults who were subsequently dichotomized into high and low nausea susceptibility groups based on the Motion Sickness Susceptibility Scale. We quantified diffusion along the IFOF for each subject based on axial diffusivity (AD), radial diffusivity (RD), mean diffusivity (MD), and fractional anisotropy (FA), and evaluated between-group differences in these diffusion metrics. Subjects with high susceptibility to nausea rated significantly (P < 0.001) higher nausea intensity to visual motion stimuli and had significantly (P < 0.05) lower AD and MD along the right IFOF compared to subjects with low susceptibility to nausea. This result suggests that differences in white matter microstructure within tracts connecting visual motion and nausea-processing brain areas may contribute to nausea susceptibility or may have resulted from an increased history of nausea episodes. © 2013 Blackwell Publishing Ltd.
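The four diffusion metrics named above have standard closed-form definitions in terms of the diffusion-tensor eigenvalues. The sketch below computes them for a single illustrative voxel; the eigenvalues and the tract-sampling step along the IFOF are assumed, not taken from the study.

```python
# Hedged sketch: textbook DTI metrics from tensor eigenvalues
# (lambda1 >= lambda2 >= lambda3).
import numpy as np

def dti_metrics(l1, l2, l3):
    l1, l2, l3 = map(np.asarray, (l1, l2, l3))
    md = (l1 + l2 + l3) / 3.0          # mean diffusivity
    ad = l1                            # axial diffusivity
    rd = (l2 + l3) / 2.0               # radial diffusivity
    num = np.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = np.sqrt(1.5) * num / den      # fractional anisotropy
    return {"AD": ad, "RD": rd, "MD": md, "FA": fa}

# Example voxel (units mm^2/s); values are illustrative only.
print(dti_metrics(1.7e-3, 0.4e-3, 0.3e-3))
```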
Using Proton Magnetic Resonance Imaging and Spectroscopy to Understand Brain "Activation"
ERIC Educational Resources Information Center
Baslow, Morris H.; Guilfoyle, David N.
2007-01-01
Upon stimulation, areas of the brain associated with specific cognitive processing tasks may undergo observable physiological changes, and measures of such changes have been used to create brain maps for visualization of stimulated areas in task-related brain "activation" studies. These perturbations usually continue throughout the period of the…
Low-cost solar array project and Proceedings of the 14th Project Integration Meeting
NASA Technical Reports Server (NTRS)
Mcdonald, R. R.
1980-01-01
Activities are reported in the following areas: project analysis and integration; technology development in silicon material, large area sheet silicon, and encapsulation; production process and equipment development; and engineering and operations, and the steps taken to integrate these efforts. Visual materials presented at the Project Integration Meeting are included.
Krajcovicova, Lenka; Mikl, Michal; Marecek, Radek; Rektorova, Irena
2014-01-01
Changes in connectivity of the posterior node of the default mode network (DMN) were studied when switching from baseline to a cognitive task using functional magnetic resonance imaging. In all, 15 patients with mild to moderate Alzheimer's disease (AD) and 18 age-, gender-, and education-matched healthy controls (HC) participated in the study. Psychophysiological interactions analysis was used to assess the specific alterations in the DMN connectivity (deactivation-based) due to psychological effects from the complex visual scene encoding task. In HC, we observed task-induced connectivity decreases between the posterior cingulate and middle temporal and occipital visual cortices. These findings imply successful involvement of the ventral visual pathway during the visual processing in our HC cohort. In AD, involvement of the areas engaged in the ventral visual pathway was observed only in a small volume of the right middle temporal gyrus. Additional connectivity changes (decreases) in AD were present between the posterior cingulate and superior temporal gyrus when switching from baseline to task condition. These changes are probably related to both disturbed visual processing and the DMN connectivity in AD and reflect deficits and compensatory mechanisms within the large scale brain networks in this patient population. Studying the DMN connectivity using psychophysiological interactions analysis may provide a sensitive tool for exploring early changes in AD and their dynamics during the disease progression.
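Psychophysiological interaction (PPI) analysis, as used above, asks whether seed-to-target coupling changes with the psychological context. Below is a minimal sketch of that regression on simulated time series; the seed label, task vector, and target are hypothetical, and the deconvolution to the neural level used in standard PPI pipelines is omitted for brevity.

```python
# Hedged sketch of the core PPI regression (not the authors' pipeline).
import numpy as np

rng = np.random.default_rng(1)
n_t = 200
seed = rng.standard_normal(n_t)                          # assumed PCC time course
psych = np.where(np.arange(n_t) % 40 < 20, 1.0, -1.0)    # task (+1) vs baseline (-1)
ppi = seed * psych                                       # interaction regressor

# Simulated target region whose coupling with the seed drops during task.
target = 0.8 * seed - 0.3 * ppi + rng.standard_normal(n_t)

X = np.column_stack([np.ones(n_t), psych, seed, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print("estimated connectivity change (PPI beta):", beta[3])
```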
Facultative Lagoons. Student Manual. Biological Treatment Process Control.
ERIC Educational Resources Information Center
Andersen, Lorri
The textual material for a unit on facultative lagoons is presented in this student manual. Topic areas discussed include: (1) loading; (2) microbial theory; (3) structure and design; (4) process control; (5) lagoon start-up; (6) data handling and analysis; (7) lagoon maintenance (considering visual observations, pond structure, safety, odor,…
A neural model of the temporal dynamics of figure-ground segregation in motion perception.
Raudies, Florian; Neumann, Heiko
2010-03-01
How does the visual system manage to segment a visual scene into surfaces and objects and manage to attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene is achieved by processing at different levels of the visual cortical hierarchy. According to this view, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes which bind together simple features into fragments of increasingly complex configurations at different levels in the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal epochs have been observed in the activation patterns of neurons at levels as low as area V1. Here, we present a neural network model of motion detection, figure-ground segregation and attentive selection which explains these response patterns in a unifying framework. Based on known principles of the functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different and hierarchically organized stages in the dorsal pathway. Visual shapes that are defined by boundaries, which were generated from juxtaposed opponent motions, are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback, while mutual interactions enable the communication between motion and form representations. Selective attention is devoted to shape representations by sending modulating feedback signals from higher levels (working memory) to intermediate levels to enhance their responses. Areas in the motion and form pathways are coupled through top-down feedback with V1 cells at the bottom end of the hierarchy. We propose that the different temporal episodes in the response pattern of V1 cells, as recorded in recent experiments, reflect the strength of modulating feedback signals. This feedback results from the consolidated shape representations from coherent motion patterns and the attentive modulation of responses along the cortical hierarchy. The model makes testable predictions concerning the duration and delay of the temporal episodes of V1 cell responses, as well as their response variations caused by modulating feedback signals. Copyright 2009 Elsevier Ltd. All rights reserved.
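The central idea, that later response epochs in V1 reflect delayed modulatory feedback, can be illustrated with a toy rate model. The sketch below is a deliberately simplified stand-in, not the published network: a single unit driven by a feedforward input whose gain is multiplied by a delayed feedback factor, so the late epoch scales with feedback strength.

```python
# Toy rate-model sketch of feedforward drive plus delayed multiplicative
# feedback; all parameters are assumptions chosen for illustration.
import numpy as np

dt, T = 1.0, 300.0                  # ms
t = np.arange(0, T, dt)
tau = 20.0                          # integration time constant (ms)

def v1_response(feedback_gain, feedback_delay=100.0):
    r = np.zeros_like(t)
    drive = (t > 40.0).astype(float)                   # feedforward drive after latency
    fb = 1.0 + feedback_gain * (t > feedback_delay)    # delayed modulatory gain
    for i in range(1, len(t)):
        r[i] = r[i - 1] + dt / tau * (-r[i - 1] + drive[i] * fb[i])
    return r

weak, strong = v1_response(0.2), v1_response(0.8)
print("late-epoch (200-300 ms) mean rate, weak vs strong feedback:",
      weak[t > 200].mean(), strong[t > 200].mean())
```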
3-D vision and figure-ground separation by visual cortex.
Grossberg, S
1994-01-01
A neural network theory of three-dimensional (3-D) vision, called FACADE theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a boundary contour system (BCS) and a feature contour system (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded object parts are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, Da Vinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analyzed. The BCS and FCS subsystems model aspects of how the two parvocellular cortical processing streams that join the lateral geniculate nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-DEpth, or FACADE, within area V4. Area V4 is suggested to support figure-ground separation and to interact with cortical mechanisms of spatial attention, attentive object learning, and visual search. Adaptive resonance theory (ART) mechanisms model aspects of how prestriate visual cortex interacts reciprocally with a visual object recognition system in inferotemporal (IT) cortex for purposes of attentive object learning and categorization. Object attention mechanisms of the What cortical processing stream through IT cortex are distinguished from spatial attention mechanisms of the Where cortical processing stream through parietal cortex. Parvocellular BCS and FCS signals interact with the model What stream. Parvocellular FCS and magnocellular motion BCS signals interact with the model Where stream.(ABSTRACT TRUNCATED AT 400 WORDS)
Maternal Scaffolding and Preterm Toddlers’ Visual-Spatial Processing and Emerging Working Memory
Poehlmann, Julie; Hilgendorf, Amy E; Miller, Kyle; Lambert, Heather
2010-01-01
Objective We examined longitudinal associations among neonatal and socioeconomic risks, maternal scaffolding behaviors, and 24-month visual-spatial processing and working memory in a sample of 73 toddlers born preterm or low birthweight (PT LBW). Methods Risk data were collected at hospital discharge and dyadic play interactions were observed at 16 months postterm. Abbreviated IQ scores, verbal/nonverbal working memory, and verbal/nonverbal visual-spatial processing data were collected at 24 months postterm. Results Higher attention scaffolding and lower emotion scaffolding during 16-month play were associated with 24-month verbal working memory scores. A joint significance test revealed that maternal attention and emotion scaffolding during 16-month play mediated the relationship between socioeconomic risk and 24-month verbal working memory. Conclusions These findings suggest areas for future research and intervention with children born PT LBW who also experience high socioeconomic risk. PMID:19505998
Visual analytics as a translational cognitive science.
Fisher, Brian; Green, Tera Marie; Arias-Hernández, Richard
2011-07-01
Visual analytics is a new interdisciplinary field of study that calls for a more structured scientific approach to understanding the effects of interaction with complex graphical displays on human cognitive processes. Its primary goal is to support the design and evaluation of graphical information systems that better support cognitive processes in areas as diverse as scientific research and emergency management. The methodologies that make up this new field are as yet ill defined. This paper proposes a pathway for development of visual analytics as a translational cognitive science that bridges fundamental research in human/computer cognitive systems and design and evaluation of information systems in situ. Achieving this goal will require the development of enhanced field methods for conceptual decomposition of human/computer cognitive systems that map onto laboratory studies, and improved methods for conducting laboratory investigations that might better map onto real-world cognitive processes in technology-rich environments. Copyright © 2011 Cognitive Science Society, Inc.
Real-Time Visualization of Tissue Ischemia
NASA Technical Reports Server (NTRS)
Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)
2000-01-01
A real-time display of tissue ischemia comprising three CCD video cameras, each with a narrow-bandwidth filter at the correct wavelength, is discussed. The cameras simultaneously view an area of tissue suspected of having ischemic areas through beamsplitters. The output from each camera is adjusted to give the correct signal intensity for combining with the others into an image for display. If necessary, a digital signal processor (DSP) can implement algorithms for image enhancement prior to display. Current DSP engines are fast enough to give real-time display. Measurement at three wavelengths, combined into a real-time Red-Green-Blue (RGB) video display with a digital signal processing (DSP) board to implement image algorithms, provides direct visualization of ischemic areas.
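The display principle described here, three gain-adjusted narrow-band images merged into a single RGB frame, can be sketched in a few lines. The code below is an illustration under assumed inputs and gains, not the device's actual processing chain.

```python
# Hedged sketch: stack three narrow-band images into an RGB frame.
import numpy as np

def compose_rgb(im_a, im_b, im_c, gains=(1.0, 1.0, 1.0)):
    """im_* are 2-D arrays from the three filtered cameras (same size, 0..1)."""
    chans = [np.clip(g * im, 0.0, 1.0) for g, im in zip(gains, (im_a, im_b, im_c))]
    return np.stack(chans, axis=-1)    # H x W x 3 frame ready for display

rng = np.random.default_rng(2)
frame = compose_rgb(rng.random((480, 640)), rng.random((480, 640)),
                    rng.random((480, 640)), gains=(0.9, 1.1, 1.0))
print(frame.shape)                     # (480, 640, 3)
```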
Phonological processing of ignored distractor pictures, an fMRI investigation.
Bles, Mart; Jansma, Bernadette M
2008-02-11
Neuroimaging studies of attention often focus on interactions between stimulus representations and top-down selection mechanisms in visual cortex. Less is known about the neural representation of distractor stimuli beyond visual areas, and the interactions between stimuli in linguistic processing areas. In the present study, participants viewed simultaneously presented line drawings at peripheral locations while in the MRI scanner. The names of the objects depicted in these pictures were either phonologically related (i.e. shared the same consonant-vowel onset construction), or unrelated. Attention was directed either at the linguistic properties of one of these pictures, or at the fixation point (i.e. away from the pictures). Phonological representations of unattended pictures could be detected in the posterior superior temporal gyrus, the inferior frontal gyrus, and the insula. Under some circumstances, the names of ignored distractor pictures are retrieved by linguistic areas. This implies that selective attention to a specific location does not completely filter out the representations of distractor stimuli at early perceptual stages.
Prefrontal cortex modulates posterior alpha oscillations during top-down guided visual perception
Helfrich, Randolph F.; Huang, Melody; Wilson, Guy; Knight, Robert T.
2017-01-01
Conscious visual perception is proposed to arise from the selective synchronization of functionally specialized but widely distributed cortical areas. It has been suggested that different frequency bands index distinct canonical computations. Here, we probed visual perception on a fine-grained temporal scale to study the oscillatory dynamics supporting prefrontal-dependent sensory processing. We tested whether a predictive context that was embedded in a rapid visual stream modulated the perception of a subsequent near-threshold target. The rapid stream was presented either rhythmically at 10 Hz, to entrain parietooccipital alpha oscillations, or arrhythmically. We identified a 2- to 4-Hz delta signature that modulated posterior alpha activity and behavior during predictive trials. Importantly, delta-mediated top-down control diminished the behavioral effects of bottom-up alpha entrainment. Simultaneous source-reconstructed EEG and cross-frequency directionality analyses revealed that this delta activity originated from prefrontal areas and modulated posterior alpha power. Taken together, this study presents converging behavioral and electrophysiological evidence for frontal delta-mediated top-down control of posterior alpha activity, selectively facilitating visual perception. PMID:28808023
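The reported interplay between a slow delta signature and posterior alpha power is the kind of relation typically quantified with band-pass filtering, Hilbert transforms, and phase binning. The sketch below illustrates that style of analysis on a simulated signal; the filter bands, signal parameters, and binning scheme are assumptions, not the authors' code.

```python
# Hedged sketch: alpha power binned by delta phase (cross-frequency coupling).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(3)
# Toy signal: a 10 Hz alpha rhythm whose amplitude waxes with a 3 Hz delta rhythm.
delta = np.sin(2 * np.pi * 3 * t)
sig = (1 + 0.5 * delta) * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

alpha_power = np.abs(hilbert(bandpass(sig, 8, 12))) ** 2
delta_phase = np.angle(hilbert(bandpass(sig, 2, 4)))

bins = np.linspace(-np.pi, np.pi, 13)
idx = np.digitize(delta_phase, bins) - 1
profile = [alpha_power[idx == k].mean() for k in range(12)]
print("alpha power by delta phase bin:", np.round(profile, 2))
```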
Post-determined emotion: motor action retrospectively modulates emotional valence of visual images
Sasaki, Kyoshiro; Yamada, Yuki; Miura, Kayo
2015-01-01
Upward and downward motor actions influence subsequent and ongoing emotional processing in accordance with a space–valence metaphor: positive is up/negative is down. In this study, we examined whether upward and downward motor actions could also affect previous emotional processing. Participants were shown an emotional image on a touch screen. After the image disappeared, they were required to drag a centrally located dot towards a cued area, which was either in the upper or lower portion of the screen. They were then asked to rate the emotional valence of the image using a 7-point scale. We found that the emotional valence of the image was more positive when the cued area was located in the upper portion of the screen. However, this was the case only when the dragging action was required immediately after the image had disappeared. Our findings suggest that when somatic information that is metaphorically associated with an emotion is linked temporally with a visual event, retrospective emotional integration between the visual and somatic events occurs. PMID:25808884
van Kerkoerle, Timo; Self, Matthew W.; Dagnino, Bruno; Gariel-Mathis, Marie-Alice; Poort, Jasper; van der Togt, Chris; Roelfsema, Pieter R.
2014-01-01
Cognitive functions rely on the coordinated activity of neurons in many brain regions, but the interactions between cortical areas are not yet well understood. Here we investigated whether low-frequency (α) and high-frequency (γ) oscillations characterize different directions of information flow in monkey visual cortex. We recorded from all layers of the primary visual cortex (V1) and found that γ-waves are initiated in input layer 4 and propagate to the deep and superficial layers of cortex, whereas α-waves propagate in the opposite direction. Simultaneous recordings from V1 and downstream area V4 confirmed that γ- and α-waves propagate in the feedforward and feedback direction, respectively. Microstimulation in V1 elicited γ-oscillations in V4, whereas microstimulation in V4 elicited α-oscillations in V1, thus providing causal evidence for the opposite propagation of these rhythms. Furthermore, blocking NMDA receptors, thought to be involved in feedback processing, suppressed α while boosting γ. These results provide new insights into the relation between brain rhythms and cognition. PMID:25205811
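One simple way to ask which direction a band-limited rhythm travels between two recording sites is to look at the sign of the peak lag of their cross-correlation. The sketch below illustrates this on simulated signals; it is a stand-in for, not a reproduction of, the laminar and inter-areal analyses reported above.

```python
# Hedged sketch: estimate lead/lag between two band-limited signals.
import numpy as np
from scipy.signal import butter, filtfilt, correlate

fs = 1000.0
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(4)

base = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
lag_ms = 15
site_a = base                                     # e.g., one cortical site
site_b = np.roll(base, int(lag_ms * fs / 1000))   # same rhythm, delayed copy

def peak_lag_ms(x, y, lo=8, hi=12):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)
    cc = correlate(yf - yf.mean(), xf - xf.mean(), mode="full")
    lags = np.arange(-len(xf) + 1, len(xf))
    return lags[np.argmax(cc)] / fs * 1000.0

print("estimated lag (ms), positive = site_a leads:", peak_lag_ms(site_a, site_b))
```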
Moore, Michelle W; Durisko, Corrine; Perfetti, Charles A; Fiez, Julie A
2014-04-01
Numerous functional neuroimaging studies have shown that most orthographic stimuli, such as printed English words, produce a left-lateralized response within the fusiform gyrus (FG) at a characteristic location termed the visual word form area (VWFA). We developed an experimental alphabet (FaceFont) comprising 35 face-phoneme pairs to disentangle phonological and perceptual influences on the lateralization of orthographic processing within the FG. Using functional imaging, we found that a region in the vicinity of the VWFA responded to FaceFont words more strongly in trained versus untrained participants, whereas no differences were observed in the right FG. The trained response magnitudes in the left FG region correlated with behavioral reading performance, providing strong evidence that the neural tissue recruited by training supported the newly acquired reading skill. These results indicate that the left lateralization of orthographic processing is not restricted to stimuli with particular visual-perceptual features. Instead, lateralization may occur because the anatomical projections in the vicinity of the VWFA provide a unique interconnection between the visual system and left-lateralized language areas involved in the representation of speech.
Alterations in the trapezius muscle in young patients with migraine--a pilot case series with MRI.
Landgraf, M N; Ertl-Wagner, B; Koerte, I K; Thienel, J; Langhagen, T; Straube, A; von Kries, R; Reilich, P; Pomschar, A; Heinen, F
2015-05-01
Migraine is frequent in young adults and adolescents and often associated with neck muscle tension and pain. Common pathophysiological pathways, such as reciprocal cervico-trigeminal activation, are assumed. Tense areas within the neck muscles can be clinically observed in many patients with migraine. The aim of this pilot case study was to visualize these tense areas via magnetic resonance imaging (MRI). Three young patients with migraine were examined by an experienced investigator. In all three patients, tense areas in the trapezius muscles were palpated. These areas were marked by nitroglycerin capsules on the adjacent skin surface. The MRI showed focal signal alterations at the marked locations within the trapezius muscles. Visualization of palpable tense areas by MRI may be usefully applied in the future to help elucidate the underlying pathophysiological processes of migraine. Copyright © 2015 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.
Anders, Silke; Eippert, Falk; Wiens, Stefan; Birbaumer, Niels; Lotze, Martin; Wildgruber, Dirk
2009-11-01
Affective neuroscience has been strongly influenced by the view that a 'feeling' is the perception of somatic changes and has consequently often neglected the neural mechanisms that underlie the integration of somatic and other information in affective experience. Here, we investigate affective processing by means of functional magnetic resonance imaging in nine cortically blind patients. In these patients, unilateral postgeniculate lesions prevent primary cortical visual processing in part of the visual field which, as a result, becomes subjectively blind. Residual subcortical processing of visual information, however, is assumed to occur in the entire visual field. As we have reported earlier, these patients show significant startle reflex potentiation when a threat-related visual stimulus is shown in their blind visual field. Critically, this was associated with an increase of brain activity in somatosensory-related areas, and an increase in experienced negative affect. Here, we investigated the patients' response when the visual stimulus was shown in the sighted visual field, that is, when it was visible and cortically processed. Despite the fact that startle reflex potentiation was similar in the blind and sighted visual field, patients reported significantly less negative affect during stimulation of the sighted visual field. In other words, when the visual stimulus was visible and received full cortical processing, the patients' phenomenal experience of affect did not closely reflect somatic changes. This decoupling of phenomenal affective experience and somatic changes was associated with an increase of activity in the left ventrolateral prefrontal cortex and a decrease of affect-related somatosensory activity. Moreover, patients who showed stronger left ventrolateral prefrontal cortex activity tended to show a stronger decrease of affect-related somatosensory activity. Our findings show that similar affective somatic changes can be associated with different phenomenal experiences of affect, depending on the depth of cortical processing. They are in line with a model in which the left ventrolateral prefrontal cortex is a relay station that integrates information about subcortically triggered somatic responses and information resulting from in-depth cortical stimulus processing. Tentatively, we suggest that the observed decoupling of somatic responses and experienced affect, and the reduction of negative phenomenal experience, can be explained by a left ventrolateral prefrontal cortex-mediated inhibition of affect-related somatosensory activity.
NASA Astrophysics Data System (ADS)
Mori, Toshio; Kai, Shoichi
2003-05-01
We present the first observation of stochastic resonance (SR) in the human brain's visual processing area. The novel experimental protocol is to stimulate the right eye with a sub-threshold periodic optical signal and the left eye with a noisy one. The mixing of signal and noise thus bypasses the sensory organs and occurs in the visual cortex. With many noise sources present in the brain, higher brain functions, e.g. perception and cognition, may exploit SR.
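Stochastic resonance itself is easy to demonstrate numerically: a sub-threshold periodic signal becomes detectable at the output of a threshold device only when a moderate amount of noise is added, and detectability falls off again at high noise levels. The sketch below illustrates the phenomenon with a hard-threshold detector; all parameters are illustrative and unrelated to the psychophysical protocol above.

```python
# Hedged sketch: non-monotonic dependence of stimulus-locked output on noise level.
import numpy as np

fs, f_sig = 1000.0, 5.0
t = np.arange(0, 10, 1 / fs)
signal = 0.8 * np.sin(2 * np.pi * f_sig * t)      # sub-threshold (threshold = 1.0)
rng = np.random.default_rng(5)

def locked_fraction(noise_sd):
    """Fraction of the detector-output variance locked to f_sig."""
    x = signal + noise_sd * rng.standard_normal(t.size)
    out = (x > 1.0).astype(float)                 # hard-threshold detector
    out -= out.mean()
    if out.std() == 0:
        return 0.0
    coef = np.abs(np.exp(-2j * np.pi * f_sig * t) @ out) / t.size
    return 2 * coef ** 2 / out.var()

for sd in (0.05, 0.3, 2.0):
    print(f"noise sd {sd}: stimulus-locked fraction = {locked_fraction(sd):.3f}")
```

With too little noise the threshold is never crossed, with moderate noise the crossings lock to the signal, and with strong noise the locking washes out, which is the SR signature.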
From genes to brain oscillations: is the visual pathway the epigenetic clue to schizophrenia?
González-Hernández, J A; Pita-Alcorta, C; Cedeño, I R
2006-01-01
Molecular data, gene expression data and, more recently, mitochondrial genes and possible epigenetic regulation by non-coding genes are revolutionizing our views on schizophrenia. Genes and epigenetic mechanisms are triggered by cell-cell interaction and by external stimuli. A number of recent clinical and molecular observations indicate that epigenetic factors may be operational in the origin of the illness. Based on these molecular insights, gene expression profiles and the epigenetic regulation of genes, we went back to the neurophysiology (brain oscillations) and found a putative role of visual experience (i.e. visual stimuli) as an epigenetic factor. The functional evidence provided here establishes a direct link between the striate and extrastriate unimodal visual cortex and the neurobiology of schizophrenia. This result supports the hypothesis that 'visual experience' has a potential role as an epigenetic factor and contributes to triggering and/or maintaining the progression of schizophrenia. In this case, candidate genes sensitive to the visual 'insult' may be located within the visual cortex, including associative areas, while the integrity of the visual pathway before it reaches the primary visual cortex is preserved. The same effect can be expected if target genes are localised within the visual pathway, which is actually more sensitive to 'insult' during early life than the cortex per se. If this process affects gene expression at these sites, a stable, sensory-specific 'insult', i.e. distorted visual information, enters the visual system and is propagated to fronto-temporo-parietal multimodal areas even from early maturation periods. The difference in the timing of postnatal neuroanatomical events between such areas and the primary visual cortex in humans (with the former reaching the same developmental landmarks later in life than the latter) is 'optimal' for establishing an abnormal 'cell communication' mediated by the visual system that may further interfere with local physiology. In this context, the strategy for searching for target genes needs to be rearranged and redirected towards vision-related genes. Furthermore, psychophysical studies combining functional neuroimaging and electrophysiology are strongly recommended in the search for epigenetic clues that would allow gene-association studies to be carried out in schizophrenia.
Visual cortical areas of the mouse: comparison of parcellation and network structure with primates
Laramée, Marie-Eve; Boire, Denis
2015-01-01
Brains have evolved to optimize sensory processing. In primates, complex cognitive tasks must be executed and evolution led to the development of large brains with many cortical areas. Rodents do not accomplish cognitive tasks of the same level of complexity as primates and remain with small brains both in relative and absolute terms. But is a small brain necessarily a simple brain? In this review, several aspects of the visual cortical networks have been compared between rodents and primates. The visual system has been used as a model to evaluate the level of complexity of the cortical circuits at the anatomical and functional levels. The evolutionary constraints are first presented in order to appreciate the rules for the development of the brain and its underlying circuits. The organization of sensory pathways, with their parallel and cross-modal circuits, is also examined. Other features of brain networks, often considered as imposing constraints on the development of underlying circuitry, are also discussed and their effect on the complexity of the mouse and primate brain are inspected. In this review, we discuss the common features of cortical circuits in mice and primates and see how these can be useful in understanding visual processing in these animals. PMID:25620914
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, E. Wes; Frank, Randy; Fulcomer, Sam
Scientific visualization is the transformation of abstract information into images, and it plays an integral role in the scientific process by facilitating insight into observed or simulated phenomena. Visualization as a discipline spans many research areas, from computer science to cognitive psychology and even art. Yet the most successful visualization applications are created when close synergistic interactions with domain scientists are part of the algorithmic design and implementation process, leading to visual representations with clear scientific meaning. Visualization is used to explore, to debug, to gain understanding, and as an analysis tool. Visualization is literally everywhere--images are present in this report, on television, on the web, in books and magazines--the common theme is the ability to present information visually that is rapidly assimilated by human observers and transformed into understanding or insight. As an indispensable part of a modern science laboratory, visualization is akin to the biologist's microscope or the electrical engineer's oscilloscope. Whereas the microscope is limited to small specimens or the use of optics to focus light, the power of scientific visualization is virtually limitless: visualization provides the means to examine data at galactic or atomic scales, or at any size in between. Unlike the traditional scientific tools for visual inspection, visualization offers the means to "see the unseeable." Trends in demographics or changes in levels of atmospheric CO2 as a function of greenhouse gas emissions are familiar examples of such unseeable phenomena. Over time, visualization techniques evolve in response to scientific need. Each scientific discipline has its "own language," verbal and visual, used for communication. The visual language for depicting electrical circuits is much different than the visual language for depicting theoretical molecules or trends in the stock market. There is no "one visualization tool" that can serve as a panacea for all science disciplines. Instead, visualization researchers work hand in hand with domain scientists as part of the scientific research process to define, create, adapt and refine software that "speaks the visual language" of each scientific domain.
Fort, Alexandra; Delpuech, Claude; Pernier, Jacques; Giard, Marie-Hélène
2002-10-01
Very recently, a number of neuroimaging studies in humans have begun to investigate the question of how the brain integrates information from different sensory modalities to form unified percepts. Already, intermodal neural processing appears to depend on the modalities of the inputs or the nature (speech/non-speech) of the information to be combined. Yet the variety of paradigms, stimuli and techniques used makes it difficult to understand the relationships between the factors operating at the perceptual level and the underlying physiological processes. In a previous experiment, we used event-related potentials to describe the spatio-temporal organization of audio-visual interactions during a bimodal object recognition task. Here we examined the network of cross-modal interactions involved in simple detection of the same objects. The objects were defined either by unimodal auditory or visual features alone, or by the combination of the two features. As expected, subjects detected bimodal stimuli more rapidly than either unimodal stimulus alone. Combined analysis of potentials, scalp current densities and dipole modeling revealed several interaction patterns within the first 200 ms post-stimulus: in occipito-parietal visual areas (45-85 ms), in deep brain structures, possibly the superior colliculus (105-140 ms), and in right temporo-frontal regions (170-185 ms). These interactions differed from those found during object identification in sensory-specific areas and possibly in the superior colliculus, indicating that the neural operations governing multisensory integration depend crucially on the nature of the perceptual processes involved.
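Audio-visual interactions in ERP data are commonly isolated with the additive-model comparison: the bimodal response is compared with the sum of the two unimodal responses, and any residual is treated as an interaction term. The sketch below illustrates that logic on simulated waveforms; the additive-model framing and all parameters are assumptions for illustration, not details quoted from this study.

```python
# Hedged sketch: interaction term = AV - (A + V) on simulated ERPs.
import numpy as np

fs = 1000.0
t = np.arange(-0.1, 0.4, 1 / fs)                 # epoch in seconds
rng = np.random.default_rng(6)

def erp(latency, amp):
    wave = amp * np.exp(-0.5 * ((t - latency) / 0.02) ** 2)
    return wave + 0.2 * rng.standard_normal(t.size)

erp_A, erp_V = erp(0.10, 2.0), erp(0.12, 3.0)
# Bimodal response with an extra early component standing in for an interaction.
erp_AV = erp(0.10, 2.0) + erp(0.12, 3.0) + erp(0.07, 0.8)

interaction = erp_AV - (erp_A + erp_V)
win = (t > 0.045) & (t < 0.085)                  # e.g., an early 45-85 ms window
print("mean interaction amplitude, 45-85 ms:", interaction[win].mean())
```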
Rudolph, G; Bechmann, M; Berninger, T; Kutschbach, E; Held, U; Tornow, R P; Kalpadakis, P; Zol'nikova, I V; Shamshinova, A M
2001-01-01
A new method of multifocal electroretinography making use of a scanning laser ophthalmoscope with a wavelength of 630 nm (SLO-m-ERG), evoking short spatial visual stimuli on the retina, is proposed. The algorithm for presenting the visual stimuli and for analyzing the distribution of local electroretinograms across the surface of the retina is based on short m-sequences. Cross-correlation analysis yields a three-dimensional distribution of the bioelectrical activity of the retina in the central visual field. In normal subjects, cone bioelectrical activity is maximal in the macular area (corresponding to the density of cone distribution) and absent in the blind spot. The method detects the slightest pathological changes in the retina while controlling for the site of stimulation and the ophthalmoscopic picture of the fundus oculi. In diseases of the macular area and in retinitis pigmentosa detectable by ophthalmoscopy, the site of the pathological process correlates with the topography of changes in the bioelectrical activity of the examined retinal area.
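The m-sequence principle can be illustrated compactly: each retinal patch is driven by a distinct cyclic shift of one binary maximal-length sequence, and because such shifts are nearly orthogonal, cross-correlating the single recorded trace with each patch's sequence recovers the local response. The sketch below demonstrates that principle on simulated data; it is not the SLO system described, and patch layout, amplitudes, and noise are assumptions.

```python
# Hedged sketch: recover per-patch responses by cross-correlation with
# cyclic shifts of an m-sequence.
import numpy as np

def m_sequence(n_bits=7, taps=(7, 6)):
    """Maximal-length +/-1 sequence from a Fibonacci LFSR (x^7 + x^6 + 1)."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(seq) * 2 - 1

m = m_sequence()                        # length-127 base sequence
shifts = [0, 20, 40, 60]                # one distinct cyclic shift per patch
stims = [np.roll(m, s) for s in shifts]

rng = np.random.default_rng(7)
true_amp = np.array([1.0, 0.0, 0.5, 2.0])   # e.g., patch 2 silent ("blind spot")
record = sum(a * s for a, s in zip(true_amp, stims)) + 0.5 * rng.standard_normal(len(m))

# First-order kernel at lag 0: cross-correlate the trace with each stimulus.
est = [float(np.dot(record, s)) / len(m) for s in stims]
print("recovered local response amplitudes:", np.round(est, 2))
```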
Unraveling the principles of auditory cortical processing: can we learn from the visual system?
King, Andrew J; Nelken, Israel
2013-01-01
Studies of auditory cortex are often driven by the assumption, derived from our better understanding of visual cortex, that basic physical properties of sounds are represented there before being used by higher-level areas for determining sound-source identity and location. However, we only have a limited appreciation of what the cortex adds to the extensive subcortical processing of auditory information, which can account for many perceptual abilities. This is partly because of the approaches that have dominated the study of auditory cortical processing to date, and future progress will unquestionably profit from the adoption of methods that have provided valuable insights into the neural basis of visual perception. At the same time, we propose that there are unique operating principles employed by the auditory cortex that relate largely to the simultaneous and sequential processing of previously derived features and that therefore need to be studied and understood in their own right. PMID:19471268
Jung, Minju; Hwang, Jungsik; Tani, Jun
2015-01-01
It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887
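The multiple-timescale idea this model builds on, lower layers with fast dynamics and higher layers with slow dynamics, can be sketched with stacked leaky-integrator units. The code below is a deliberately simplified illustration (all sizes, time constants, and weights are assumptions), not the published network or its training procedure.

```python
# Hedged sketch: stacked leaky units with increasing time constants,
# so higher layers change slowly and retain temporal context.
import numpy as np

rng = np.random.default_rng(8)
T, n_in, n_units = 200, 8, 16
taus = [2.0, 8.0, 32.0]                      # fast -> slow layers (in time steps)
weights = [rng.standard_normal((n_units, n_in if i == 0 else n_units)) * 0.3
           for i in range(len(taus))]

x = rng.standard_normal((T, n_in))           # stand-in for a visual feature stream
states = [np.zeros(n_units) for _ in taus]
slow_trace = []

for t in range(T):
    inp = x[t]
    for i, (tau, W) in enumerate(zip(taus, weights)):
        drive = np.tanh(W @ inp)
        states[i] = (1 - 1 / tau) * states[i] + (1 / tau) * drive
        inp = states[i]                      # feed the next (slower) layer
    slow_trace.append(states[-1].copy())

print("slow-layer mean |change| per step:",
      np.mean(np.abs(np.diff(np.array(slow_trace), axis=0))))
```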
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas
2017-01-01
Background Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. Methods A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Results Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. Conclusions The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures. PMID:28046076